New Scientific Features in 2010
Contents

New Scientific Features in 2010
 Improved computational features in Scilab 5.3.0
 16th February 2010 : Make Matrix
 16th February 2010 : Quapro - Linear and Linear Quadratic Programming
 14th April 2010 : Cobyla Optimization Toolbox
 21st April 2010 : Floating Point toolbox in ATOMS
 1st of June 2010 : Optkelley
 1st June 2010 : Unconstrained Optimization Problem Toolbox
 17th June 2010 : Low Discrepancy Sequences
 24th of June 2010: Scilab Image and Video Processing toolbox v0.5.3
 July 2010: Mmodd for Partial Differential Equations
 15th July 2010 - New module : A Toolbox for Unconstrained Global Optimization of Polynomial functions
 30th of August 2010: Scilab Wavelet Toolbox v0.1.11
 7th of September 2010: ANN Toolbox
 14th of September 2010: Factorization of Structured Matrices Toolbox
 7th of November 2010: Linear System Inversion Toolbox v1.0.2
 3rd of November 2010: Identification Toolbox v1.0
 30th of October 2009: HYDROGR
 25th of September 2010: Financial Module
The goal of this page is to summarize the new or updated scientific features of Scilab and its environment during the year 2010.
During the year 2010, the following versions of Scilab were released:
 18 February 2010 : Scilab v5.2.1
 21 April 2010 : Scilab v5.2.2
 16 December 2010 : Scilab v5.3.0
In brief, the following is a list of the new or updated scientific features in 2010 :
 16th February 2010 - New module : Make Matrix
 16th February 2010 - Updated module : Quapro - Linear and Linear Quadratic Programming
 14th April 2010 - New module : Cobyla Optimization Toolbox
 21st April 2010 - New module : Floating Point Toolbox
 1st June 2010 - New module : Unconstrained Optimization Problem Toolbox
 22nd March 2010 - New module : Optkelley
 17th June 2010 - New module : Low Discrepancy Sequences
 24th of June 2010: Scilab Image and Video Processing toolbox v0.5.3
 July 2010: Mmodd for Partial Differential Equations
 15th July 2010 - New module : A Toolbox for Unconstrained Global Optimization of Polynomial functions.
 30th of August 2010: Scilab Wavelet Toolbox v0.1.11
 7th of September 2010: ANN Toolbox
 14th of September 2010: Factorization of Structured Matrices Toolbox
 7th of November 2010: Linear System Inversion Toolbox v1.0.2
 3rd of November 2010: Identification Toolbox v1.0
 30th of October 2009: HYDROGR
Improved computational features in Scilab 5.3.0
The following information is extracted from the CHANGES file.
In the Optimization module, here are the changes:
 Simulated annealing: added documentation for accept_func_default and accept_func_vfsa.
 fminsearch: updated printing of neldermead, optimbase and optimsimplex objects.
 fminsearch: added a demo for the dimensionality effect of the Nelder-Mead algorithm.
Other changes are related to statistics:
 Bug #7569 fixed - The number of accurate digits during inversion of cdfbet, cdfgam, cdfbin, cdfchi, cdfchn, cdff, cdffnc, cdfnbn, cdfpoi was only 8. Changed to 13.
 Bug #7756 fixed - sprand did not produce normal numbers.
 Bug #7766 fixed - The cdff and cdffnc functions did not display %inf in error messages.
 Bug #7768 fixed - For cdfgam, the Scale parameter was, in fact, the Rate.
 Bug #7727 fixed - The help page of sp2adj was not correct. Improved the help page of adj2sp. Added unit tests for sp2adj and adj2sp. Improved the implementation by checking the input arguments.
 Bug #8032 fixed - cdfnor was able to fail silently.
The following script computes the event x associated with the probability 0.5.
format("v",25)
p = 0.5
q = 1 - p
a = 1
b = 2
x = cdfbet("XY",a,b,p,q)
The results are the following, with 17 significant digits:
 Scilab 5.2.2: 0.29289321750687374
 Scilab 5.3.0: 0.29289321881343810
 Exact value: 0.292893218813453
This shows that the accuracy of the cdfbet function is now close to the maximum.
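For a=1 and b=2, the Beta CDF has the closed form F(x) = 1 - (1-x)^2 on [0,1], so the exact event can be cross-checked by hand. The following short Python sketch (an independent cross-check, not part of Scilab) solves F(x) = 1/2 in closed form.

```python
import math

# For a=1, b=2 the Beta CDF is F(x) = 1 - (1 - x)**2 on [0, 1].
# Solving F(x) = p gives x = 1 - sqrt(1 - p).
p = 0.5
x = 1.0 - math.sqrt(1.0 - p)

print(x)  # about 0.2928932188134524, in line with the Scilab 5.3.0 result
```

The value agrees with the cdfbet result of Scilab 5.3.0 to about 13 digits, consistent with the accuracy improvement described above.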
16th February 2010 : Make Matrix
A collection of test matrices.
The goal of this toolbox is to provide a collection of test matrices. These matrices might be used to test algorithms for the computation of the solution of linear systems of equations, eigenvalues, matrix norms and other dense linear algebra problems.
The current toolbox is able to generate the following matrices: dingdong, Hadamard, inverse Hilbert, Hilbert, magic, Toeplitz, Vandermonde, Pascal, border, Cauchy, circul, diagonali, Frank, Hankel, identity, Moler, Rosser, urandom, Wilkinson-, Wilkinson+.
This module provides the following functions.
 gallery
 hadamard
 magic
 rosser
 wilkinson
 makematrix_dingdong
 makematrix_hadamard
 makematrix_invhilbert
 makematrix_magic
 makematrix_toeplitz
 makematrix_vandermonde
 makematrix_pascal
 makematrix_border
 makematrix_cauchy
 makematrix_circul
 makematrix_diagonali
 makematrix_frank
 makematrix_frankmin
 makematrix_hankel
 makematrix_hilbert
 makematrix_identity
 makematrix_moler
 makematrix_normal
 makematrix_ones
 makematrix_rosser
 makematrix_urandom
 makematrix_wilkinsonm
 makematrix_wilkinsonp
 makematrix_zeros
This module is available in ATOMS:
To install it, type:
atomsInstall('makematrix')
In the following example, we create a Cauchy matrix of size 5.
-->A = gallery("cauchy", 1:5)
 A  =
    0.5          0.3333333    0.25         0.2          0.1666667
    0.3333333    0.25         0.2          0.1666667    0.1428571
    0.25         0.2          0.1666667    0.1428571    0.125
    0.2          0.1666667    0.1428571    0.125        0.1111111
    0.1666667    0.1428571    0.125        0.1111111    0.1
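The entries above follow the standard Cauchy construction A(i,j) = 1/(x_i + x_j) with x = 1..5 (an assumption inferred from the displayed output, not a statement about the toolbox's internals). A quick cross-check in Python:

```python
from fractions import Fraction

x = [1, 2, 3, 4, 5]
# Standard Cauchy construction: A[i][j] = 1 / (x[i] + x[j]).
A = [[Fraction(1, xi + xj) for xj in x] for xi in x]

print(float(A[0][0]), float(A[1][2]))  # 0.5 0.2, as in the Scilab session
```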
In the following example, we compare the estimated condition number of Hilbert's matrix for increasing values of n with its theoretical value.
nv = 1:30;
for n = nv
    c(n) = log(cond(makematrix_hilbert(n)));
    e(n) = log(exp(3.5*n));
end
scf();
plot ( nv , c , "bo" );
plot ( nv , e , "r" );
legend(["cond" "exact"]);
xtitle("Condition number of the Hilbert matrix","N","Condition");
This produces the following figure.
16th February 2010 : Quapro - Linear and Linear Quadratic Programming
This toolbox defines linear quadratic programming solvers. The matrices defining the cost and constraints must be full, but the quadratic term matrix is not required to be full rank.
This toolbox already existed in previous versions of Scilab. The novelty is that it is now available as an ATOMS module.
Features:
 linpro : linear programming solver
 quapro : linear quadratic programming solver
 mps2linpro : converts a linear programming problem given in MPS format to linpro format
http://atoms.scilab.org/toolboxes/quapro
To install it, type:
atomsInstall('quapro')
The linpro function can solve linear programs in general form:
Minimize c'*x
A*x <= b
Aeq*x = beq
lb <= x <= ub
The following example is extracted from "Operations Research: Applications and Algorithms" by Wayne L. Winston, Section 5.2, "The Computer and Sensitivity Analysis", in the "Degeneracy and Sensitivity Analysis" subsection. We consider the problem:
Min -6*x1 - 4*x2 - 3*x3 - 2*x4
such that:
2*x1 + 3*x2 + x3 + 2*x4 <= 400
x1 + x2 + 2*x3 + x4 <= 150
2*x1 + x2 + x3 + 0.5*x4 <= 200
3*x1 + x2 + x4 <= 250
x >= 0
The following script solves the problem.
c = [-6 -4 -3 -2]';
A = [
2 3 1 2
1 1 2 1
2 1 1 0.5
3 1 0 1
];
b = [400 150 200 250]';
ci = [0 0 0 0]';
cs = [%inf %inf %inf %inf]';
[xopt,lagr,fopt] = linpro(c,A,b,ci,cs)
This produces :
xopt = [50, 100, 2.842D-14, 0]
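The reported solution can be verified independently: at xopt = (50, 100, 0, 0) every inequality constraint is active, which is consistent with the degeneracy discussed by Winston. A quick feasibility check in Python (an independent cross-check, not part of the toolbox):

```python
# Verify feasibility and the objective value of the linpro solution.
c = [-6, -4, -3, -2]
A = [[2, 3, 1, 2],
     [1, 1, 2, 1],
     [2, 1, 1, 0.5],
     [3, 1, 0, 1]]
b = [400, 150, 200, 250]
x = [50, 100, 0, 0]  # xopt, with the 2.842e-14 component rounded to 0

lhs = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
objective = sum(ci * xi for ci, xi in zip(c, x))

print(lhs)        # each row equals its bound b[i]: all constraints are active
print(objective)  # -700
```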
14th April 2010 : Cobyla Optimization Toolbox
COBYLA is a derivative-free nonlinear constrained optimization method.
The author of this Scilab module is Yann Collette.
This software is a Scilab interface to COBYLA2, a constrained optimization by linear approximation package developed by Michael J. D. Powell in Fortran. The original source code can be found at:
http://plato.la.asu.edu/topics/problems/nlores.html
The toolbox is managed under ATOMS:
http://atoms.scilab.org/toolboxes/scicobyla
The sources are available at:
http://forge.scilab.org/index.php/p/scicobyla
The calling sequence of the cobyla function is:
[x_opt, status, eval_func] = cobyla(x0, func, nb_constr, rhobeg, rhoend, message, eval_func_max)
The following is an example from the help page.
nb_constr_test_1 = 0;
xopt_test_1 = [-1 0]';
function [f, con, info] = test_1(x)
    d__1 = x(1) + 1.;
    d__2 = x(2);
    f = d__1 * d__1 * 10. + d__2 * d__2;
    con = 0;
    info = 0;
endfunction
rhobeg = 1;
rhoend = 1e-3;
message_in = 0;
eval_func_max = 200;
x0 = ones(xopt_test_1);
[x_opt, status, eval_func] = cobyla(x0, test_1, nb_constr_test_1, rhobeg, rhoend, message_in, eval_func_max);
The previous script produces the following output.
-->[x_opt, status, eval_func] = cobyla(x0, test_1, nb_constr_test_1, rhobeg, rhoend, message_in, eval_func_max)
size_x = 2
res = 0
x_opt[0] = -1.000314
x_opt[1] = -0.000438
 eval_func  =
    59.
 status  =
    0.
 x_opt  =
  - 1.0003138
  - 0.0004382
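The objective of test_1 is f(x) = 10*(x1 + 1)^2 + x2^2, whose unconstrained minimum is at (-1, 0) with f = 0, so the point returned by cobyla is indeed very close to the optimum. A quick check in Python (an independent cross-check, not part of the toolbox):

```python
def f(x1, x2):
    # Objective of the test_1 problem: 10*(x1 + 1)^2 + x2^2.
    return 10.0 * (x1 + 1.0) ** 2 + x2 ** 2

print(f(-1.0, 0.0))               # 0.0, the exact minimum
print(f(-1.0003138, -0.0004382))  # tiny residual at the cobyla iterate
```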
21st April 2010 : Floating Point toolbox in ATOMS
The goal of this toolbox is to provide a collection of algorithms for floating point number management. It is more a learning tool than an operational component, although it might complement some features which are not provided by Scilab. The flps_systemgui function displays the distribution of floating point numbers graphically. By varying the rounding mode, the precision and the log scale, we can actually see the distribution of a toy floating point system. Adding or suppressing the denormals can be instructive too.
The functions can automatically compute the properties of the current Scilab system with respect to doubles. This is similar in spirit to the number_properties function, except that our functions are based on macros and that the returned values are made consistent with the references cited in the bibliography. Moreover, the flps_radix function returns the current radix and checks that the current rounding mode is round-to-nearest (which is IEEE's default).
The functions can also create virtual floating point systems, which lets us see their discrete nature on simplified examples. The rounding mode of such a virtual floating point system can be configured to one of the four modes from the IEEE standard. This feature shows the effect of the rounding mode on the distribution of floating point numbers. This is not easy to see with a straightforward Scilab, since the rounding mode is round-to-nearest most of the time.
The following is a list of the current functions:
 Convert
 flps_Me2sm : Returns the (s,m) representation given (M,e).
 flps_double2hex : Converts a double into a hexadecimal string.
 flps_frombary : Returns the floating point number given its b-ary decomposition.
 flps_hex2double : Converts a hexadecimal string into its double.
 flps_sme2M : Returns the integral significand from (s,m,e).
 flps_tobary : Returns the digits of the b-ary decomposition.
 Functions
 flps_chop : Round matrix elements to t significant binary places.
 flps_frexp : Returns the exponent and fraction.
 flps_minimumdecimalstr : Returns the minimum string for equality.
 flps_signbit : Returns the sign bit of x.
 Number
 flps_number2hex : Converts a floating point number into a hexadecimal string.
 flps_numbereval : Returns the value of the current floating point number.
 flps_numbergetclass : Returns the class of the number.
 flps_numberisfinite : Returns true if the number is finite.
 flps_numberisinf : Returns true if the number is an infinity.
 flps_numberisnan : Returns true if the number is a nan.
 flps_numberisnormal : Returns true if the number is normal.
 flps_numberissubnormal : Returns true if the number is subnormal.
 flps_numberiszero : Returns true if the number is zero.
 flps_numbernew : Returns a new floating point number.
 Properties
 flps_emax : Returns the maximum exponent and value before overflow.
 flps_emin : Returns the minimum exponent and value before underflow.
 flps_eps : Returns the machine epsilon and the precision for Scilab doubles.
 flps_isIEEE : Returns true if the current system satisfies basic IEEE requirements.
 flps_radix : Compute the radix used for Scilab doubles.
 System
 flps_systemall : Returns the list of floating point numbers of the given floating point system.
 flps_systemgui : Plots all the numbers in the current floating point system
 flps_systemnew : Returns a new floating point system.
This toolbox is available in ATOMS:
http://atoms.scilab.org/toolboxes/floatingpoint
and is managed in the Scilab Forge:
http://forge.scilab.org/index.php/p/floatingpoint/
In order to install it, type:
atomsInstall('floatingpoint')
The flps_radix function returns the radix and the rounding mode of the current Scilab system.
[ radix , rounding ] = flps_radix ()
On typical systems, we get the following session, meaning that we have a base-2 machine with round-to-nearest rounding mode. In other rounding modes, we would get rounding=%f, but we have never observed this in practice.
-->[ radix , rounding ] = flps_radix ()
 rounding  =
  T
 radix  =
    2.
The flps_IEEEsingle function creates a virtual floating point system based on the IEEE standard single precision. The flps_numberformat function returns the floating point number corresponding to the given double and the given floating point system. The following script produces the single precision floating point number associated with 1/3.
flps = flps_IEEEsingle ( ); flpn = flps_numberformat ( flps , 1/3 )
The previous script produces the following output. As we can see, IEEE single precision floating point numbers are associated with radix 2, precision p=24 and exponent range from -126 to 127.
-->flps = flps_IEEEsingle ( )
 flps  =
Floating Point System:
======================
radix= 2
p=     24
emin=  -126
vmin=  1.175D-38
emax=  127
vmax=  1.701D+38
eps=   0.0000001
r=     1
gu=    T
alpha= 1.401D-45
-->flpn = flps_numberformat ( flps , 1/3 )
 flpn  =
Floating Point Number:
======================
s=    0
M=    11184811
m=    1.3333334
e=    -2
d=    [1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 1]
flps= floating point system
======================
x= (-1)^0 * 1.3333334 * 2^-2
x= 11184811 * 2^(-2-24+1)
x= (-1)^0 * (1.[0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 1])_2 * 2^-2
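The decomposition above can be cross-checked: the nearest single precision number to 1/3 is the integral significand 11184811 scaled by 2^(-2-24+1) = 2^(-25). A short Python check (independent of the toolbox, using the IEEE 754 single precision rounding provided by the struct module):

```python
import struct

# Round 1/3 to IEEE 754 single precision and back to a double.
single = struct.unpack('f', struct.pack('f', 1.0 / 3.0))[0]

# The decomposition reported by flps_numberformat: M * 2^(e - p + 1).
reconstructed = 11184811 * 2.0 ** (-2 - 24 + 1)

print(single == reconstructed)  # True
```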
The following example displays a toy system, including denormals and negative numbers.
radix = 2;
p = 3;
e = 3;
flps = flps_systemnew ( "format" , radix , p , e );
h = flps_systemgui ( flps );
This produces the following figure.
1st of June 2010 : Optkelley
These Scilab files are implementations of the algorithms from the book 'Iterative Methods for Optimization', published by SIAM, by C. T. Kelley. The book, which describes the algorithms, is available from SIAM (service@siam.org).
The optimization codes have the calling convention [f,g] = objective(x), which returns both the objective function value f and the gradient vector g. The gradient g is expected to be a column vector. The Nelder-Mead, Hooke-Jeeves, Implicit Filtering, and MDS codes do not ask for a gradient.
This module was already available in previous versions of Scilab. An important update of the module has been performed. The help pages have been entirely updated. Several bugs have been fixed and unit tests were created, along with more demonstration scripts.
This toolbox is available in ATOMS:
http://atoms.scilab.org/toolboxes/optkelley
To install it, type:
atomsInstall('optkelley')
This toolbox provides the following algorithms:
 optkelley_bfgswopt: Steepest descent/BFGS with polynomial line search.
 optkelley_cgtrust: Steihaug Newton-CG-Trust region algorithm.
 optkelley_diffhess: Compute a forward difference Hessian.
 optkelley_dirdero: Finite difference directional derivative.
 optkelley_gaussn: Damped Gauss-Newton with Armijo rule.
 optkelley_gradproj: Gradient projection with Armijo rule, simple line search.
 optkelley_hooke: Hooke-Jeeves optimization.
 optkelley_imfil: Unconstrained implicit filtering.
 optkelley_levmar: Levenberg-Marquardt.
 optkelley_mds: Multidirectional search.
 optkelley_nelder: Nelder-Mead optimizer, no tie-breaking rule other than Scilab's gsort.
 optkelley_ntrust: Dogleg trust region.
 optkelley_polyline: Polynomial line search.
 optkelley_polymod: Cubic/quadratic polynomial line search.
 optkelley_projbfgs: Projected BFGS with Armijo rule, simple line search.
 optkelley_simpgrad: Simplex gradient.
 optkelley_steep: Steepest descent with Armijo rule.
The following is a sample session.
-->function y = quad ( x )
-->  x1 = [1 1]'
-->  x2 = [2 2]'
-->  y = max ( norm(x-x1) , norm(x-x2) )
-->endfunction
-->x0 = [-1.2 1]';
-->v1 = x0 + [0.1 0]';
-->v2 = x0 + [0 0.1]';
-->v0 = [x0 v1 v2];
-->[x,lhist,histout] = optkelley_mds(v0,quad,1.e-4,100,100)
 histout  =
    3.     3.2572995    0.0953114    0.1414214
    7.     3.0675723    0.1897272    0.2
    11.    2.6925824    0.3749899    0.5656854
    15.    1.9723083    0.7202741    0.8
    19.    1.0049876    0.9673207    2.2627417
    22.    0.9219544    0.4234080    1.1313708
    26.    0.7810250    0.3006404    0.5656854
    30.    0.7810250    0.1409295    0.2828427
    34.    0.7810250    0.0675032    0.1414214
    37.    0.7211103    0.0851155    0.1
    41.    0.7211103    0.0421066    0.05
    45.    0.7211103    0.0209308    0.025
    49.    0.7211103    0.0104335    0.0125
    53.    0.7211103    0.0052086    0.00625
    57.    0.7211103    0.0026022    0.003125
    61.    0.7211103    0.0013006    0.0015625
    65.    0.7211103    0.0006502    0.0007812
    69.    0.7211103    0.0003251    0.0003906
    73.    0.7211103    0.0001625    0.0001953
    77.    0.7211103    0.0000813    0.0000977
 lhist  =
    20.
 x  =
    1.4          1.4          1.3999023
    1.6          1.5999023    1.6
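At the reported solution x is close to (1.4, 1.6), where the two distances are equal, as expected for this minimax problem: sqrt(0.4^2 + 0.6^2) is approximately 0.7211103, exactly the final function value in histout. A quick check in Python (an independent cross-check, not part of the toolbox):

```python
import math

def quad(x):
    # max of the distances to (1, 1) and (2, 2), as in the Scilab session.
    d1 = math.hypot(x[0] - 1.0, x[1] - 1.0)
    d2 = math.hypot(x[0] - 2.0, x[1] - 2.0)
    return max(d1, d2)

print(round(quad((1.4, 1.6)), 7))  # 0.7211103
```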
1st June 2010 : Unconstrained Optimization Problem Toolbox
The goal of this toolbox is to provide unconstrained optimization problems in order to test optimization algorithms.
The More, Garbow and Hillstrom collection of test functions is widely used in testing unconstrained optimization software. The code for these problems is available in Fortran from the netlib software archives.
 Provides 35 unconstrained optimization problems.
 Provides the function value, the gradient, the function vector and the Jacobian.
 Provides the Hessian matrix for 18 problems.
 Provides the starting point for each problem.
 Provides the optimum function value and the optimum point x for many problems.
 Provides finite difference routines for the gradient, the Jacobian and the Hessian matrix.
 Macro-based functions: no compiler required.
 All function values, gradients, Jacobians and Hessians are tested.
Features
 uncprb_getclass — Returns the class of the problem.
 uncprb_getfunc — Returns the function vector, the Jacobian and, if available, the Hessian.
 uncprb_getgrdfcn — Returns the gradient.
 uncprb_getgrdfd — Computes the gradient by finite differences.
 uncprb_gethesfcn — Returns the Hessian matrix.
 uncprb_gethesfd — Computes the Hessian by finite differences.
 uncprb_getinitf — Returns the problem dimensions and the starting point.
 uncprb_getinitpt — Returns the starting point.
 uncprb_getname — Returns the name of the problem.
 uncprb_getobjfcn — Returns the function value.
 uncprb_getopt — Returns the optimum function value and point.
 uncprb_getproblems — Lists the problems.
 uncprb_getvecfcn — Returns the function vector and the Jacobian.
 uncprb_getvecjac — Returns the Jacobian.
 uncprb_getvecjacfd — Computes the Jacobian by finite differences.
This toolbox is available in ATOMS:
http://atoms.scilab.org/toolboxes/uncprb
and is managed on Scilab's Forge:
http://forge.scilab.org/index.php/p/uncprb/
To install it, type:
atomsInstall('uncprb')
In the following session, we compute the function and Jacobian matrix of Rosenbrock's test case.
-->nprob = 1
 nprob  =
    1.
-->[n,m,x0] = uncprb_getinitf(nprob)
 x0  =
  - 1.2
    1.
 m  =
    2.
 n  =
    2.
-->option = 3
 option  =
    3.
-->[fvec,J] = uncprb_getfunc(n,m,x0,nprob,option)
 J  =
    24.    10.
  - 1.     0.
 fvec  =
  - 4.4
    2.2
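The returned values can be reproduced from the definition of the Rosenbrock residuals f1 = 10*(x2 - x1^2), f2 = 1 - x1 and their Jacobian. A quick check in Python (an independent cross-check, not part of the toolbox):

```python
def rosenbrock_residuals(x1, x2):
    # Residual vector and Jacobian of the Rosenbrock test problem.
    fvec = [10.0 * (x2 - x1 ** 2), 1.0 - x1]
    J = [[-20.0 * x1, 10.0],
         [-1.0, 0.0]]
    return fvec, J

fvec, J = rosenbrock_residuals(-1.2, 1.0)
print([round(v, 10) for v in fvec])  # [-4.4, 2.2]
print(J[0])                          # [24.0, 10.0]
```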
17th June 2010 : Low Discrepancy Sequences
The goal of this toolbox is to provide a collection of low discrepancy sequences. These quasi-random numbers are designed to be used in a Monte-Carlo simulation. For example, low discrepancy sequences provide a higher convergence rate to the Monte-Carlo method when used in numerical integration. The toolbox takes into account the dimension of the problem, i.e. it generates vectors with arbitrary size.
The current prototype has the following features:
 manages an arbitrary number of dimensions,
 skips a given number of elements in the sequence,
 leaps over (i.e. ignores) a given number of elements from call to call,
 fast sequences based on compiled source code,
 suggests optimal settings to get the best of the sequences,
 object oriented programming.
Overview of sequences
 The Halton sequence,
 The Sobol sequence,
 The Faure sequence,
 The Reverse Halton sequence of Vandewoestyne and Cools,
 The Niederreiter base 2 and arbitrary base sequence.
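To give an idea of how such sequences are built, here is a minimal one-dimensional Halton (van der Corput) generator in Python (a pedagogical sketch, not the toolbox implementation): reflecting the base-b digits of the index around the radix point fills [0, 1) far more evenly than pseudo-random draws.

```python
def halton(index, base=2):
    # Radical inverse: reflect the base-b digits of index around the point.
    x, f = 0.0, 1.0
    while index > 0:
        f /= base
        x += f * (index % base)
        index //= base
    return x

# First elements in base 2: 1/2, 1/4, 3/4, 1/8, ...
print([halton(i) for i in range(1, 5)])
```

Higher dimensions use one coprime base per coordinate, which is why the module ships prime tables (lowdisc_primes100 and friends) to extend the maximum dimension.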
This module currently provides the following functions:
 lowdisc_cget : Returns the value associated with the given key for the given object.
 lowdisc_configure : Updates one option of the current object and returns the updated object.
 lowdisc_destroy : Destroys the current object and returns the updated object.
 lowdisc_new : Creates and returns a new sequence.
 lowdisc_next : Returns the next vector in the sequence.
 lowdisc_startup : Starts up a random number object.
It provides the following functions to extend the maximum dimension of the Halton and Faure sequences:
 lowdisc_primes100 : Returns a matrix containing the 100 first primes.
 lowdisc_primes1000 : Returns a matrix containing the 1000 first primes.
 lowdisc_primes10000 : Returns a matrix containing the 10000 first primes.
It provides the following functions to suggest expert settings for the sequences:
 lowdisc_fauresuggest : Returns favorable parameters for Faure sequences.
 lowdisc_haltonsuggest : Returns favorable parameters for Halton sequence.
 lowdisc_niederbase : Returns optimal base for Niederreiter sequence.
 lowdisc_niedersuggest : Returns favorable parameters for Niederreiter sequence.
 lowdisc_sobolsuggest : Returns favorable parameters for Sobol sequences.
 lowdisc_soboltau : Returns favorable starting seeds for Sobol sequences.
This component currently provides the following sequences:
 "slow" sequences based on macros : Halton, Sobol, Faure, Reverse Halton, Niederreiter base 2,
 "fast" sequences based on C source code : Halton, Sobol, Faure, Reverse Halton, Niederreiter in arbitrary base.
This toolbox is available in ATOMS at:
http://atoms.scilab.org/toolboxes/lowdisc
and is managed in the Scilab Forge:
http://forge.scilab.org/index.php/p/lowdisc/
To install it, type:
atomsInstall('lowdisc')
The following example plots the 2D Faure sequence.
lds = lowdisc_new("fauref");
lds = lowdisc_configure(lds,"dimension",2);
lds = lowdisc_startup (lds);
[lds,computed] = lowdisc_next (lds,100);
lds = lowdisc_destroy(lds);
plot(computed(:,1),computed(:,2),"bo");
xtitle("Faure sequence","X1","X2");
This produces the following figure.
24th of June 2010: Scilab Image and Video Processing toolbox v0.5.3
SIVP is intended for image processing and video processing tasks. SIVP is meant to be a useful, efficient, and free image and video processing toolbox for Scilab.
The authors of this module are Shiqi Yu, Jia Wu, Shulin Shang and Vincent Etienne.
This module provides the following functions.
 addframe — Add a frame to the video file. (experimental)
 aviclose — Close a video file. (experimental)
 avicloseall — Close all opened video files/cameras. (experimental)
 avifile — Create a new video file to write. (experimental)
 aviinfo — Get the information about video files. (experimental)
 avilistopened — Show all opened video files. (experimental)
 aviopen — Open a video file. (experimental)
 avireadframe — Grabs and returns a frame from an opened video file or camera. (experimental)
 camopen — Open a camera. (experimental)
 camshift — Track an object by color. It gets the object position, size and orientation.
 corr2 — 2D correlation coefficient
 detectlefteyes — Find left eyes in the image.
 detectrighteyes — Find right eyes in the image.
 detectfaces — Find faces in the image.
 detectforeground — Background modeling and foreground mask extraction.
 edge — Find edges in a single channel image.
 filter2 — 2D digital filtering
 fspecial — Create some 2D special filters
 hsv2rgb — Convert a HSV image to the equivalent RGB image.
 im2bw — Convert image to binary
 im2double — Convert image to double precision
 im2int16 — Convert image to 16-bit signed integers
 im2int32 — Convert image to 32-bit signed integers
 im2int8 — Convert image to 8-bit signed integers
 im2uint16 — Convert image to 16-bit unsigned integers
 im2uint8 — Convert image to 8-bit unsigned integers
 imabsdiff — Calculate absolute difference of two images
 imadd — Add two images or add a constant to an image
 imcomplement — Complement image
 imcrop — Crop image
 imdivide — Divide two images or divide an image by a constant.
 imfilter — Image filtering
 imfinfo — Get the information about image file
 imhist — get the histogram of an image
 imlincomb — Linear combination of images
 immultiply — Multiply two images or multiply an image by a constant.
 imnoise — Add noise (gaussian, etc.) to an image
 impyramid — Image pyramid reduction and expansion
 imread — Reads image file
 imresize — Resizes image
 imshow — Displays images in graphic window
 imsubtract — Subtract two images or subtract a constant from an image
 imwrite — Write image to file
 ind2rgb — convert indexed image to true color image
 mat2gray — Convert matrix to grayscale image
 mean2 — Average/mean of matrix elements
 meanshift — Track an object by color.
 ntsc2rgb — Convert a NTSC image to the equivalent RGB image.
 rectangle — Draw a rectangle on image
 rgb2gray — Convert RGB images to gray images
 rgb2hsv — Convert a RGB image to the equivalent HSV image.
 rgb2ntsc — Convert a RGB image to the equivalent NTSC image YIQ.
 rgb2ycbcr — Convert a RGB image to the equivalent YCbCr image.
 std2 — Standard deviation of 2D matrix elements
 xs2im — Convert graphics to an image matrix.
 ycbcr2rgb — Convert a YCbCr image to the equivalent RGB image.
To install this module, type:
atomsInstall('SIVP')
The following script is the example from the help page of the detectlefteyes function.
SIVP_PATH = getSIVPpath();
im = imread(SIVP_PATH + 'images/lena.png');
face = detectfaces(im);
rect = face*diag([1,1,1,0.7]);
subface = imcrop(im, rect);
leyes = detectlefteyes(subface);
[m,n] = size(leyes);
for i = 1:m,
    im = rectangle(im, leyes(i,:)+rect*diag([1,1,0,0]), [0,255,0]);
end;
imshow(im);
This produces the following figure.
July 2010: Mmodd for Partial Differential Equations
The MmodD project contains tools for the study and solution of partial differential equations (PDEs) in 2D and 3D. A set of command-line functions and a graphical user interface let you preprocess, solve, and postprocess generic PDEs for a broad range of engineering and science applications.
http://forgesitn.univlyon1.fr/projects/mmodd
The project is led by Thierry Clopeau, with the help of the following developers:
 Sofian Smatti
 Marcel Ndeffo
 David Delanoue
 Wu Yiwen
 Kevin Vervier
 Amandine Berger
 Hocéane Kadio
 Laurence Siao
 Mbiengop Alex
 Yvon Goudron
 Feriel Ben Cheikh
 Simon Géraud
 Laurène Soyeux
 Yannick Meyapin
 Marion Neyroud
 Karine Mari
 Antoine Landra
 Ahmed Radji
The module provides the help pages of the following functions:
 line2d — Type declaration
 line3d — Type declaration
 p1_1d — Type declaration
 p1_2d — Type declaration
 square2d — Create a mesh on a square
 tri2d — Type declaration
The following functions are available:
 base
 base/assemble
 base/BiGradConj
 base/complement
 base/det2d
 base/GradConj
 base/GradConjPre
 base/interpol
 base/lsolve
 base/name
 base/p1
 base/spdiag
 devel
 devel/xmesh
 edp
 edp/assemble_edp
 edp/assemble_edp_df1d
 edp/assemble_edp_df2d
 edp/assemble_edp_df3d
 edp/assemble_edp_p1_1d
 edp/assemble_edp_p1_2d
 edp/assemble_edp_p1_3d
 edp/assemble_edp_p1nc3d
 edp/assemble_edp_q1p2d
 edp/assemble_edp_q1p3d
 edp/ConvDx
 edp/ConvDy
 edp/ConvDz
 edp/ConvGrad
 edp/D2x
 edp/D2y
 edp/D2z
 edp/Dir
 edp/Dirichlet
 edp/Dn
 edp/Dx
 edp/Dy
 edp/Dz
 edp/edp
 edp/Grad
 edp/Id
 edp/kId
 edp/kLaplace
 edp/Laplace
 edp/lsolve_edp
 edp/Neumann
 edp/upDx
 edp/upDy
 edp/upDz
 edp/xedp
 export
 export/Cell_gmv
 export/Cell_vtk
 export/CellRT_vtk
 export/dcomp3d_TETGEN
 export/dcomp3d_vtk
 export/exportGMV
 export/exportNETGEN
 export/exportNETGEN2
 export/exportSMESH
 export/exportSMESH2
 export/exportTETGEN
 export/exportVTK
 export/grid2d_gmv
 export/grid2d_vtk
 export/grid3d_gmv
 export/grid3d_vtk
 export/hex3d_gmv
 export/hex3d_vtk
 export/Node_gmv
 export/Node_vtk
 export/quad2d_gmv
 export/quad2d_vtk
 export/quad3d_vtk
 export/tet3d_gmv
 export/tet3d_vtk
 export/tri2d_gmv
 export/tri2d_vtk
 export/tri3d_gmv
 export/tri3d_vtk
 export/VCell_vtk
 export/vector_gmv
 export/VNode_vtk
 import
 import/importBAMG
 import/importGRUMMP
 import/importMESH
 import/importNETGEN
 import/importNETGEN_NC
 import/importTETGEN
 import/importTetra
 import/importTetraNC
 import/importTetraNC2
 import/importVMESH
 line2d
 line2d/line2d
 line2d/x_line2d_Cell
 line2d/x_line2d_Node
 line2d/y_line2d_Cell
 line2d/y_line2d_Node
 line3d
 line3d/line3d
 line3d/x_line3d_Cell
 line3d/x_line3d_Node
 line3d/y_line3d_Cell
 line3d/y_line3d_Node
 line3d/z_line3d_Cell
 line3d/z_line3d_Node
 meshtool
 meshtool/bamg_poly
 meshtool/menu_mesh_disp
 meshtool/mesh_disp
 meshtool/meshtool
 meshtool/tri2d_plot
 meshtool/tri2d_show_bnd
 meshtool/tri2d_show_cell
 meshtool/tri2d_show_face
 meshtool/tri2d_show_node
 meshtool/tri2d_show_rect
 meshtool/tri3d_plot
 p1_1d
 p1_1d/ConvDx_p1_1d_p1_1d
 p1_1d/Dn_p1_1d
 p1_1d/Dx_p1_1d
 p1_1d/Id_p1_1d
 p1_1d/interpol_p1_1d
 p1_1d/kId_p1_1d
 p1_1d/kLaplace_p1_1d
 p1_1d/Laplace_p1_1d
 p1_1d/p1_1d
 p1_1d/p1_1d_to_p0_1d
 p1_2d
 p1_2d/ConvDx_p1_2d_p1_2d
 p1_2d/ConvDy_p1_2d_p1_2d
 p1_2d/ConvGrad_p1_2d_p1_2d
 p1_2d/Dn_p1_2d
 p1_2d/Dx_p1_2d
 p1_2d/Dy_p1_2d
 p1_2d/Id_p1_2d
 p1_2d/interpol_p1_2d
 p1_2d/kId_p1_2d
 p1_2d/kLaplace_p1_2d
 p1_2d/Laplace_p1_2d
 p1_2d/p1_2d
 p1_2d/p1_2d_to_p0_2d
 tri2d
 tri2d/square2d
 tri2d/tri2d
 tri2d/tsquare2d
 tri2d/x_tri2d_Cell
 tri2d/x_tri2d_Node
 tri2d/y_tri2d_Cell
 tri2d/y_tri2d_Node
 vartool
 vartool/colorbar
 vartool/menu_var_disp
 vartool/p0_2d_plot2d
 vartool/p0_2d_plot3d
 vartool/p1_2d_plot2d
 vartool/p1_2d_plot3d
 vartool/rgbcolor
 vartool/var_disp
 vartool/var_plot
 vartool/var_plot3d
 vartool/vartool
 vartool/xcolorbar
15th July 2010 - New module : A Toolbox for Unconstrained Global Optimization of Polynomial functions
"Many problems in science and engineering can be reduced to the problem of finding optimum bounds for the range of a multivariable polynomial on a specified domain. Local optimization is an important tool for solving polynomial problems, but there is no guarantee of global optimality. For polynomial optimization problems, an alternate approach is based on the Bernstein form of the polynomial. If a polynomial is written in the Bernstein basis over a box, then the range of the polynomial is bounded by the values of the minimum and maximum Bernstein coefficients. Global optimization based on the Bernstein form does not require the iterative evaluation of the objective function. Moreover, the coefficients of the Bernstein form are needed to be computed only once, i.e., only on the initial domain box. The Bernstein coefficients for the subdivided domain boxes can then be obtained from the initial box itself. Capturing these beautiful properties of the Bernstein polynomials, global optimum for the polynomial on the given domain can be obtained. The toolbox is developed based on the above ideas."
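The range-enclosure property can be illustrated on a univariate example. In the Bernstein basis of degree n on [0, 1], the coefficients b_k = sum over i <= k of C(k,i)/C(n,i) * a_i of a power-form polynomial sum a_i x^i bound its range: min b_k <= p(x) <= max b_k. The Python sketch below is a pedagogical illustration of this bound, not the toolbox's own code:

```python
from math import comb

def bernstein_bounds(a):
    # Convert power-basis coefficients a[0..n] to Bernstein coefficients
    # on [0, 1]: b_k = sum_{i<=k} C(k, i) / C(n, i) * a_i.
    n = len(a) - 1
    b = [sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1))
         for k in range(n + 1)]
    return min(b), max(b)

# p(x) = x^2 - x has the exact range [-0.25, 0] on [0, 1].
lo, hi = bernstein_bounds([0.0, -1.0, 1.0])
print(lo, hi)  # -0.5 0.0: a valid (if loose) enclosure of the true range
```

Subdividing the box tightens the enclosure, which is exactly the branch-and-bound scheme the quoted description outlines.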
The authors of this module are Dhiraj B. Magare, Bhagyesh V. Patil and P. S. V. Nataraj.
This toolbox is available in ATOMS at:
http://atoms.scilab.org/toolboxes/Global_Optim_toolbox/
To install it, type:
atomsInstall('Global_Optim_toolbox')
This module provides the following functions:
 InvUVW — Calculation for M
 cutoff — Cutoff Test
 finalresult — Gives the minimizers (Lsol) at which the global minimum estimate (zcap) lies
 getverind — Get Vertex Indices
 ind2x — Converts into Subscripted Indices
 lin2sub — Converts into Subscripted Indices
 loop — Maximum Width Property applied on given Polynomial function.
 store — Stores Temporary Solution after the Vertex Property
 sub2lin — Converts Subscripted Indices to Linear Indices
 subdirectionB — Calculation of B on Subdivided Boxes
 vertex — Vertex Property applied on given polynomial function.
30th of August 2010: Scilab Wavelet Toolbox v0.1.11
This toolbox aims to mimic the Matlab wavelet toolbox. Most of the functions are similar to their Matlab counterparts.
The author of this module is Holger Nahrstaedt.
To install it, type:
atomsInstall('swt')
This module provides the following functions.
 DOGauss — DOGauss wavelet
 FSfarras — First Stage Filters
 appcoef — One Dimension Approximation Coefficient Reconstruction
 appcoef2 — Two Dimension Approximation Coefficient Reconstruction
 biorfilt — biorthogonal wavelet filter set
 biorwavf — biorthogonal spline wavelets scaling filter
 cauwavf — complex cauchy wavelet
 centfrq — Wavelet center frequency
 cgauwavf — complex gauss wavelet
 cmorwavf — complex morlet wavelet
 coifwavf — coiflets scaling filter
 conv — convolution
 cplxdual2D — Complex 2D dual-tree wavelet transform
 cwt — Continuous Wavelet Transform
 cwtplot — Plots cwt coeffs
 dbwavf — daubechies scaling filter
 ddencomp — Default values for denoising or compression
 detcoef — One Dimension Detail Coefficient Extraction
 detcoef2 — Two Dimension Detail Coefficient Extraction
 dualfilt1 — Second Stage Filters
 dualtree — 1D dual-tree complex wavelet transform
 dualtree2D — Real 2D dual-tree wavelet transform
 dwt — Discrete Fast Wavelet Transform
 dwt2 — Two Dimensional Discrete Fast Wavelet Transform
 dwt3 — Three Dimensional Discrete Fast Wavelet Transform
 dwtmode — Discrete Wavelet Transform Extension Mode
 dyaddown — dyadic downsampling
 dyadup — dyadic upsampling
 fbspwavf — complex frequency B spline wavelet
 gauswavf — gauss wavelet
 iconv — periodic convolution
 icplxdual2D — Complex 2D Dual-tree Wavelet Inverse Transform
 idualtree — 1D inverse dual-tree complex wavelet transform
 idualtree2D — Real 2D dual-tree wavelet inverse transform
 idwt — Inverse Discrete Fast Wavelet Transform
 idwt2 — Two Dimension Inverse Discrete Fast Wavelet Transform
 idwt3 — Three Dimension Inverse Discrete Fast Wavelet Transform
 ind2rgb — convert indexed image to true color image
 iswt — Inverse Stationary Wavelet Transform
 iswt2 — Two Dimensional Inverse Stationary Wavelet Transform
 legdwavf — legendre wavelet scaling filter
 mexihat — mexican hat wavelet
 morlet — morlet wavelet
 orthfilt — orthogonal wavelet filter set
 poisson — poisson wavelet
 qmf — quadrature mirror filter
 rbiorwavf — reverse biorthogonal spline wavelets scaling filter
 scal2frq — Scale to frequency
 shanwavf — complex shannon wavelet
 sinus — sinus wavelet
 swt — Stationary Wavelet Transform
 swt2 — Two Dimensional Stationary Wavelet Transform
 symwavf — symlets scaling filter
 thselect — Threshold selection for denoising
 upcoef — Direct Reconstruction
 upcoef2 — Two Dimension Direct Reconstruction
 upwlev — Single Level Reconstruction from multiple level decomposition
 upwlev2 — Single Level Reconstruction from two dimension multiple level decomposition
 wavedec — Multiple Level Discrete Fast Wavelet Transform
 wavedec2 — Two Dimension Multiple Level Discrete Fast Wavelet Transform
 wavedecplot — Plots wavedec coeffs
 wavefun — Wavelet and Scaling Functions
 wavefun2 — 2D Wavelet and Scaling Functions
 waverec — Multiple Level Inverse Discrete Fast Wavelet Transform
 waverec2 — Two Dimension Multiple Level Inverse Discrete Fast Wavelet Transform
 wcodemat — Matrix Coding
 wden — Automatic 1D denoising
 wenergy — Energy Statistics from multiple level decomposition
 wenergy2 — Energy Statistics from two dimension multiple level decomposition
 wextend — signal extension
 wfilters — wavelet filter set
 wkeep — signal extraction
 wmaxlev — maximum wavelet decomposition level
 wnoise — Noisy wavelet test data
 wnoisest — Estimate noise of 1D wavelet coefficients
 wnorm — Matrix Normalization
 wrcoef — Reconstruction from a single branch of a multiple level decomposition
 wrcoef2 — Reconstruction from a single branch of a two dimension multiple level decomposition
 wrev — vector flipping
 wrev2 — matrix flipping
 wrev3 — 3D matrix flipping
 wrot3 — 3D matrix rotation
 wtresh — Soft or hard thresholding
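As a minimal usage sketch, assuming the Matlab-style signatures that this toolbox mimics (the exact argument conventions should be checked against its help pages), a one-level analysis/synthesis round trip might look like this:

```scilab
// Hypothetical one-level DWT round trip with a Daubechies-2 wavelet;
// the dwt/idwt signatures are assumed to follow their Matlab counterparts.
x = rand(1, 64);
[cA, cD] = dwt(x, 'db2');   // approximation and detail coefficients
y = idwt(cA, cD, 'db2');    // reconstruct the signal
mprintf("max error: %e\n", max(abs(x - y(1:length(x)))));
```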
7th of September 2010: ANN Toolbox
This is a toolbox for artificial neural networks.
The author of this module is Ryurick M. Hristev.
Features:
 Only layered feedforward networks are supported directly at the moment (for others use the "hooks" provided)
 Unlimited number of layers
 Unlimited number of neurons per each layer separately
 User defined activation function (defaults to logistic)
 User defined error function (defaults to SSE)
 Algorithms :
 standard (vanilla) with or without bias, online or batch
 momentum with or without bias, online or batch
 SuperSAB with or without bias, online or batch
 Conjugate gradients
 Jacobian computation
 Computation of result of multiplication between "vector" and Hessian
The module provides the following functions.
 ANN_FF — Algorithms for feedforward nets.
 ANN_GEN — General utility functions
 ann_FF_ConjugGrad — Conjugate Gradient algorithm.
 ann_FF_Hess — computes Hessian by finite differences.
 ann_FF_INT — internal implementation of feedforward nets.
 ann_FF_Jacobian — computes Jacobian by finite differences.
 ann_FF_Jacobian_BP — computes Jacobian through backpropagation.
 ann_FF_Mom_batch — batch backpropagation with momentum.
 ann_FF_Mom_batch_nb — batch backpropagation with momentum (without bias).
 ann_FF_Mom_online — online backpropagation with momentum.
 ann_FF_Mom_online_nb — online backpropagation with momentum (without bias).
 ann_FF_SSAB_batch — batch SuperSAB algorithm.
 ann_FF_SSAB_batch_nb — batch SuperSAB algorithm (without bias).
 ann_FF_SSAB_online — online SuperSAB training algorithm.
 ann_FF_SSAB_online_nb — online SuperSAB training algorithm (without bias).
 ann_FF_Std_batch — standard batch backpropagation.
 ann_FF_Std_batch_nb — standard batch backpropagation (without bias).
 ann_FF_Std_online — online standard backpropagation.
 ann_FF_Std_online_nb — online standard backpropagation (without bias).
 ann_FF_VHess — multiplication between a "vector" V and Hessian
 ann_FF_grad — error gradient through finite differences.
 ann_FF_grad_BP — error gradient through backpropagation
 ann_FF_grad_BP_nb — error gradient through backpropagation (without bias)
 ann_FF_grad_nb — error gradient through finite differences
 ann_FF_init — initialize the weight hypermatrix.
 ann_FF_init_nb — initialize the weight hypermatrix (without bias).
 ann_FF_run — run patterns through a feedforward net.
 ann_FF_run_nb — run patterns through a feedforward net (without bias).
 ann_d_log_activ — derivative of logistic activation function
 ann_d_sum_of_sqr — derivative of sum-of-squares error
 ann_log_activ — logistic activation function
 ann_pat_shuffle — shuffles randomly patterns for an ANN
 ann_sum_of_sqr — calculates sum-of-squares error
This toolbox has long been available for Scilab: it had been provided in the former Toolbox Center since 2005. One origin of this toolbox is the book:
"The ANN Book", Edition 1, Ryurick M. Hristev, 1998
One other related document is the thesis submitted by Ryurick M. Hristev for the degree of Master of Science in Mathematics at the University of Canterbury:
"Matrix Techniques in Artificial Neural Networks", Ryurick M. Hristev, University of Canterbury, 2000
The toolbox provides 9 demonstrations:
 encoder 4-3-4 on ANN without biases
 tight encoder 4-2-4 on ANN with biases
 encoder 4-3-4 on ANN without biases, compared with encoder_nb
 tight encoder 4-2-4 on ANN with biases, compared with encoder
 encoder 8-4-8 on ANN without biases
 encoder 8-3-8 on ANN with biases
 encoder 8-5-8 on ANN without biases
 encoder 8-4-8 on ANN with biases
 tight encoder 4-2-4 on ANN with biases, using a mixed standard/conjugate gradients method
The demonstration "encoder 4-3-4 on ANN without biases" produces the following output.
> // Loose 4-3-4 encoder on a backpropagation network without biases
> // (Note that the tight 4-2-4 encoder will not work without biases)
> // ensure the same random starting point
> rand('seed',0);
> // network def.
> //  - neurons per layer, including input
> N = [4,3,4];
> // inputs
> x = [1,0,0,0;
>      0,1,0,0;
>      0,0,1,0;
>      0,0,0,1]';
> // targets; at training stage it acts as an identity network
> t = x;
> // learning parameter
> lp = [8,0];
> // init randomize weights between [-1,1]
> r = [-1,1];
> W = ann_FF_init_nb(N,r);
> // 500 epochs are enough to illustrate
> T = 500;
> W = ann_FF_Std_online_nb(x,t,N,W,lp,T);
> // full run
> ann_FF_run_nb(x,N,W)
 ans  =
    0.9797963   0.0000130   0.0149148   0.0206898
    0.0000013   0.9782197   0.0171215   0.0172901
    0.0156931   0.0209699   0.9786566   0.0000488
    0.0178893   0.0183367   0.0000022   0.9758474
> // encoder
> encoder = ann_FF_run_nb(x,N,W,[2,2])
 encoder  =
    0.9830039   0.0220232   0.9736080   0.0137882
    0.9259607   0.2809842   0.0144560   0.9837462
    0.0157942   0.988017    0.8082282   0.2658832
> // decoder
> decoder = ann_FF_run_nb(encoder,N,W,[3,3])
 decoder  =
    0.9797963   0.0000130   0.0149148   0.0206898
    0.0000013   0.9782197   0.0171215   0.0172901
    0.0156931   0.0209699   0.9786566   0.0000488
    0.0178893   0.0183367   0.0000022   0.9758474
This toolbox is provided under GPL licence.
This toolbox is available on ATOMS:
http://atoms.scilab.org/toolboxes/ANN_Toolbox
To install it:
atomsInstall('ANN_Toolbox')
On the negative side, there is no example in the help pages, which are sometimes poorly formatted.
14th of September 2010: Factorization of Structured Matrices Toolbox
The Factorization of Structured Matrices Toolbox is a Scilab 5 toolbox for the fast factorization of matrices with displacement structure, such as (block) Toeplitz matrices.
The author of this module is Sander Wahls.
Implemented algorithms:
 Generalized Schur algorithm for fast LDL/Cholesky factorization of strongly regular hermitian matrices with displacement structure of Stein type
 Fast LDL/Cholesky factorization of strongly regular (block) Toeplitz matrices
 Fast QR factorization of (block) Toeplitz matrices
Features:
 Mainly written in C for speed
 Implements real and complex arithmetic
Current Limitations:
 Source hasn't been optimized in any way
This module provides the following functions.
 ldl_blocktoep — LDL factorization of a strongly regular hermitian block Toeplitz matrix with positive-definite first block
 ldl_gs — LDL factorization for strongly regular hermitian matrices with displacement structure
 qr_blocktoep — QR factorization of a block Toeplitz matrix
Here is a sample session.
> // create indefinite hermitian Toeplitz matrix
> T = toeplitz(1:5,1:5)+toeplitz(%i*(0:4),%i*(0:4));
> // compute displacement with respect to the shift matrix
> F = [spzeros(1,5);speye(4,4) spzeros(4,1)];
> D = T-F*T*F';
> // compute generator
> G = [T(:,1) [0;T(2:$,1)]];
> J = [1 0;0 -1];
> // test generator
> disp(D-G*J*G');
    0   0   0   0   0
    0   0   0   0   0
    0   0   0   0   0
    0   0   0   0   0
    0   0   0   0   0
> // compute LDL factorization
> [L,d] = ldl_gs(F,G,1,1)
 d  =
    1.  - 1.  - 1.  - 1.  - 1.
 L  =
  - 1.         0                        0                        0                        0
  - 2. - i     1.7888544 + 0.8944272i   0                        0                        0
  - 3. - 2.i   2.6832816 + 1.3416408i - 1.5491933 - 0.7745967i   0                        0
  - 4. - 3.i   3.5777088 + 1.7888544i - 2.0655911 - 1.0327956i   1.4605935 + 0.7302967i   0
  - 5. - 4.i   4.472136  + 2.236068i  - 2.5819889 - 1.2909944i   1.8257419 + 0.9128709i - 1.4142136 - 0.7071068i
> // test factorization
> disp(T-L*diag(d)*L');
    0   0                          0                          0                          0
    0 - 4.441D-16                  2.220D-16 + 1.332D-15i   - 8.882D-16 - 2.220D-15i   - 7.105D-15 - 1.332D-15i
    0   2.220D-16 - 1.332D-15i   - 3.553D-15 + 2.220D-16i   - 1.776D-15 - 4.219D-15i   - 1.954D-14 - 9.770D-15i
    0 - 8.882D-16 + 2.220D-15i   - 1.776D-15 + 4.441D-15i   - 1.332D-15 - 4.441D-16i   - 2.087D-14 - 7.994D-15i
    0 - 7.105D-15 + 1.332D-15i   - 1.954D-14 + 8.882D-15i   - 2.087D-14 + 7.772D-15i   - 5.596D-14 + 1.554D-15i
7th of November 2010: Linear System Inversion Toolbox v1.0.2
The Linear System Inversion Toolbox is a Scilab 5 toolbox for the stable inversion of linear time-invariant systems. The inverses are optimal in the sense that some norm criterion is minimized. Decision delays are also implemented for some cases.
The author of this module is Sander Wahls.
Changes in Version 1.0.2:
 Shorter startup message
 Tests are now integrated with Scilab's testing system
 Improved documentation
To install it, type:
atomsInstall('lsitbx')
This module provides the following functions:
 dh2norm — H2 norm of a stable discrete-time system.
 h2invsyslin — Stable inverse with minimal weighted H2 norm.
 h8invsyslin — Stable inverse with (approximately) infimal H-infinity norm.
 lsitbx_license — Licensing terms of the Linear System Inversion Toolbox.
 parapinv — Para-pseudoinverse.
 wkf — Wiener-Kalman Filter.
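To give an idea of what dh2norm computes, here is a plain-Scilab sketch that does not use the toolbox: for a stable discrete-time SISO system, the H2 norm equals the l2 norm of its impulse response, approximated below by a truncated sum.

```scilab
// Sketch: H2 norm of the stable discrete-time system 1/(z-0.5)
// via a truncated impulse response; the exact value is
// sqrt(sum_k 0.25^k) = 2/sqrt(3), about 1.1547.
z = poly(0, "z");
sys = syslin("d", 1/(z - 0.5));   // single pole at 0.5 (stable)
u = [1, zeros(1, 199)];           // discrete impulse
h = flts(u, sys);                 // impulse response samples
mprintf("H2 norm ~ %f\n", sqrt(sum(h.^2)));
```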
The following is a sample session.
> // Example 1: Approximate delay-one left inverse of H(z)=[1/z;1-1/z]
> z = poly(0,"z");
> H = [1/z ; 1-1/z];           // create transfer function of the channel
> SYSH = tf2ss(H);             // convert into state-space model
> SYSH.dt = "d";               // mark as discrete-time
> SYSL = tf2ss(1/z);           // create state-space model of the delay
> SYSL.dt = "d";               // mark as discrete-time
> Q = 1;                       // covariance of the data
> R = 1e-8*eye(2,2);           // covariance of the noise is very small
>                              // => filter is approximately a left inverse
> SYSK = wkf(SYSL,SYSH,Q,R);   // compute Wiener-Kalman filter
> disp(clean(ss2tf(SYSK*SYSH)));  // should be approximately L(z)=1/z
                                          2
    - 2.361D-09 + 1.0000000z - 6.180D-09z
    --------------------------------------
                       2
                      z
3rd of November 2010: Identification Toolbox v1.0
The Identification Toolbox is used to construct mathematical models of LTI systems from measured input-output data sequences. The tool allows pre-processing of signals, identification of LTI systems and validation of the constructed models. It is aimed particularly at black-box identification.
The author of this module is Martin Novotny.
This module provides the following functions.
 alias_k — Realizes ideal antialiasing filter.
 ampl_k — Realizes an amplitude filter. Removes frequencies based on their frequency-spectrum amplitude.
 arIdent — ARX or ARMAX estimation from a given range of number of parameters.
 armax — ARMAX model estimation.
 arx — ARX model estimation.
 d_k — Realizes ideal lowpass filter.
 estimatex0 — Initial state estimate
 fitFactor — Fit factor
 h_k — Realizes ideal highpass filter.
 loadmat — loads all the variables in a Matlab 5 binary file into Scilab
 lsim — Response of a discrete-time LTI system.
 nanhandle — Handles missing values.
 nanredundant — Removes redundant NaN values caused by non-buffered sampling.
 notrend — Removes mean values and/or trends.
 pp_k — Realizes ideal bandpass filter.
 pz_k — Realizes ideal bandstop filter.
 rarx — Recursive ARX model estimation.
 repairtime — Data adjustment according to time line.
 residues — Computes the residues res = yv - ys.
 rsmp — Resamples the input vector (matrix).
 subid — Combined subspace identification.
 subspaceIdent — Subspace identification in a given range of parameters
 valPred — ARX or ARMAX model prediction validation.
 valSim — ARX or ARMAX model simulation validation.
 validate — Model validation
To install it, type:
atomsInstall('identification')
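For context, the kind of data these functions operate on can be generated from a known model; the sketch below (plain Scilab, no toolbox calls) simulates the input-output record of an ARX system, which arx or arIdent would then be asked to fit.

```scilab
// Simulate data from a known ARX(2,1) system:
//   y(t) = 1.5*y(t-1) - 0.7*y(t-2) + 0.5*u(t-1) + e(t)
// (poles at 0.75 +/- 0.37i, modulus sqrt(0.7), so the system is stable)
N = 1000;
u = rand(N, 1) - 0.5;                 // excitation input
e = 0.05 * rand(N, 1, "normal");      // measurement noise
y = zeros(N, 1);
for t = 3:N
    y(t) = 1.5*y(t-1) - 0.7*y(t-2) + 0.5*u(t-1) + e(t);
end
// (u, y) is now an input-output data record suitable for identification
```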
30th of October 2009: HYDROGR
HYDROGR is a set of models and functions to perform data analysis and modelling of hydrological time series. In particular, HYDROGR contains the rainfall-runoff model GR4J developed at CEMAGREF (see http://www.cemagref.fr/webgr/index.htm). The functions are written in the Scilab language and in C.
The author of this module is Julien LERAT from CEMAGREF.
This module provides the following functions.
 ANN_CONV_W — Function to convert the weights and biases between matrix and vector form
 ANN_JACOB — Function to calculate the Jacobian performance vector of a feedforward artificial neural network
 ANN_LMBR — Function to train a feedforward artificial neural network with one hidden layer.
 ANN_NORM — Function to normalise data to train a feedforward network
 ANN_OIPDERIV — Function to estimate input variables' influence on one output variable based on ANN partial derivatives
 ANN_REPET — Function to train different repetitions of a feedforward artificial neural network with a split sample test procedure.
 ANN_SIM — Function to simulate the outputs of a feedforward artificial neural network with one hidden layer
 COMBIN — Function to generate all possible combinations of elements stored in a list element
 CORRCROIS — Extension of the corr function
 CRIT — Function to calculate numerical quality criteria
 GRAPH_BOX — Function to generate french boxplots
 GRAPH_CORREL — Function to generate graphs showing correlations in a set of variables
 GRAPH_DATE — Function to add date values in abscissae of time series graphs
 GR_CAL_LM — Function to calibrate GR models with a split sample test using lsqrsolve
 GR_SIM — Function to simulate discharges with a GR rainfall runoff model
 GUMB_AJUST — Function to adjust a Gumbel law and plot the result
 IO_ENTETE — Function to generate a header when printing a matrix with fprintfMat
 IO_PRDATA — Function to print data frames in a text file
 IO_PRLATEX — Function to print a table in latex format
 IO_READDATA — Function to read data from a text file
 PROPAG_CAL_LM — Function to calibrate routing models with a split sample test using lsqrsolve
 REGRESS_LIN — Function to perform a linear regression with various statistical tests
 TARAGE — Function to apply rating curves and convert water levels or discharge data
 c_AVRG — Function to make a selective columnwise average (excluding missing data)
 c_CONVDATE — Function to convert dates from AAAAMMJJhhmm format to Excel numerical format and vice-versa
 c_CORRIGE — Function to correct data
 c_ETP — Function to generate potential evapotranspiration time series
 c_GENEPAR — Function to calculate the parameters of the daily rainfall generator
 c_GENEPLUIE — Function to calculate random daily rainfall timeseries
 c_GR2M — Function to simulate runoff series from rainfall and evapotranspiration with GR2M model (monthly water balance)
 c_GR4J — Function to simulate runoff series from rainfall and evapotranspiration with GR4J model
 c_INTERANN — Function to calculate smoothed interannual daily values
 c_KNN1SIM — Function to resample daily values
 c_PROPAG — Function to propagate hydrographs with simple models
The following is an example for the c_GR4J function.
// P is a vector containing daily rainfall values
// (fictitious data on 5000 days)
P = max(0,exp(convol(1/3*ones(1,3),rand(4998,1)+0.2).^5)*10-12)';
// Generation of ETP daily time series (5000 days), latitude = 45 deg, start = 1/1/1980
T = [0.687;0.498;2.774;6.086;10.565;13.702;16.159;15.585;12.619;8.486;3.300;0.778];
ETP = c_ETP(5000,24,%pi/4,T,198001010000,1);
// GR4J parameters
X = [665;1.18;90;3.8;0];
// GR4J simulation
[Qsim,Ssim,Rsim] = c_GR4J(24,X,P,ETP,[0.6;0.7]);
// Plots
x = (1:size(P,1))';
subplot(3,1,1), plot(x,Qsim); // Calculated discharge
subplot(3,1,2), plot(x,Ssim); // Production store filling level
subplot(3,1,3), plot(x,Rsim); // Routing store filling level
This produces the following figure.
25th of September 2010: Financial Module
The module is dedicated to finance. Three main areas are covered: (i) risk measurement and management, (ii) asset allocation, and (iii) pricing.
The author of this module is Francesco Menoncin, from the Economics Department of Brescia University.
Concerning risk measurement, some functions are dedicated to the computation of Value at Risk (VaR) and Expected Shortfall (ES). Backtesting is also implemented in order to check the goodness of these risk measures. Both VaR and ES can also be computed in an Extreme Value Theory (EVT) framework, and it is possible to estimate the parameters of the EVT density function through maximum likelihood. The Mean Excess Function, for the graphical study of an EVT distribution, is also implemented. Interest rate risk is addressed by functions that compute duration, convexity, and yield to maturity. Furthermore, the Merton, Vasicek, and Cox-Ingersoll-Ross interest rate models are implemented, together with the estimation of their parameters. Parametric interpolation of the interest rate curve is possible through both Svensson's and Nelson-Siegel's models. Finally, some technical analysis indicators are implemented: Bollinger bands, moving averages, and the Hurst index.
The asset allocation problem is addressed by two functions which compute: (i) the optimal portfolio minimizing the variance of its return, and (ii) the optimal portfolio minimizing the expected shortfall of its return. In both cases, portfolios with and without a riskless asset, and with and without short selling, are computed.
The pricing problem is approached through functions aimed at: (i) computing the spread on Interest Rate Swaps, (ii) computing the value of options in the Black and Scholes framework (with Greeks and implied volatility), and (iii) simulating stochastic processes (through Euler discretization).
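To fix ideas on the risk measures mentioned above, here is a plain-Scilab sketch of empirical VaR and Expected Shortfall at the 95% level; the toolbox functions esvarlin and esvarevt implement more elaborate (e.g. EVT-based) estimators.

```scilab
// Empirical 95% VaR and ES of a fictitious daily return sample.
ret = 0.02 * rand(5000, 1, "normal");  // simulated returns
alpha = 0.05;                          // tail probability
s = gsort(ret, "g", "i");              // sort returns in increasing order
k = ceil(alpha * size(s, 1));          // index of the alpha-quantile
VaR = -s(k);                           // 95% Value at Risk (as a loss)
ES  = -mean(s(1:k));                   // mean loss beyond the VaR
mprintf("VaR = %f, ES = %f\n", VaR, ES);
```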
Features
 backtest : Apply the backtest to Expected Shortfall, Value at Risk and a Linear Spectral risk measure.
 bollinger : Plots the historical prices, the Bollinger bands, and the b-percentage.
 bsgreeks : Compute the Greeks for Black and Scholes put and call options.
 bsimpvol : Compute the implied volatility in a Black and Scholes framework.
 bsoption : Compute the value of both a call and a put option in a Black and Scholes framework.
 cfr : Compare and merge two or more time series according to dates.
 duration : Compute both duration and convexity of cash flows by using the yield-to-maturity.
 esvarevt : Compute both Expected Shortfall and Value at Risk.
 esvarlin : Compute Expected Shortfall, Value at Risk and a Linear Spectral risk measure on a set of assets.
 esvaroptim : Compute the optimal portfolio minimizing the Expected Shortfall.
 euler : Simulate the solution of a system of stochastic differential equations.
 evt : Estimate the parameters of the Generalized Pareto Distribution.
 gbm : Estimate the parameters of a Geometric Brownian Motion.
 hedge : Compute the hedge ratio between an asset and a derivative on that asset.
 hurst : Compute the Hurst index on historical prices.
 interest : Estimate the parameters of three spot interest rate models (Merton, Vasicek, CIR).
 irs : Compute both the spread and the value of the legs of a fix-for-floating Interest Rate Swap.
 markowitz : Compute the optimal portfolio minimizing the variance.
 mef : Compute and draw the Mean Excess Function.
 movav : Compute and draw the moving average of a given time series.
 nelson_siegel : Estimate the parameters for the Nelson-Siegel model of spot interest rates.
 svennson : Estimate the parameters for the Svensson model of spot interest rates.
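As a cross-check of the pricing functions, the Black-Scholes call and put values can be computed directly in plain Scilab (using cdfnor for the standard normal CDF); bsoption should return matching prices for the same inputs.

```scilab
// Direct Black-Scholes prices for S = K = 25, r = 0.01, T = 3 months,
// sigma = 0.2 (the same inputs as the bsgreeks example below).
S = 25; K = 25; r = 0.01; T = 3/12; sigma = 0.2;
d1 = (log(S/K) + (r + sigma^2/2)*T) / (sigma*sqrt(T));
d2 = d1 - sigma*sqrt(T);
Nd1 = cdfnor("PQ", d1, 0, 1);          // standard normal CDF at d1
Nd2 = cdfnor("PQ", d2, 0, 1);
call = S*Nd1 - K*exp(-r*T)*Nd2;        // call price
put  = call - S + K*exp(-r*T);         // put price via put-call parity
mprintf("call = %f, put = %f\n", call, put);
```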
The following is an example of the bsgreeks function.
We compute the Greeks on both a call and a put option with: underlying price 25 euros, strike price 25 euros, 0.01 (annual) riskless interest rate, 3-month time to maturity (i.e. T=3/12), and 0.2 (annual) volatility.
> [D,G,Th,R,V] = bsgreeks(25,25,0.01,3/12,0.2)
 V  =
    4.9727729
 R  =
    3.0550246  - 3.1793699
 Th  =
  - 2.1113101  - 1.8619344
 G  =
    4.9727729
 D  =
    0.5298926  - 0.4701074