# Contributor - FSolve

The `fsolve` Scilab function solves a system of non-linear equations. Today, this function does not support sparse Jacobians. It is based on the Powell hybrid method, which is maybe not the best method, and it is not possible to fine-tune the parameters of the method (for example, we have no access to the number of evaluations of the system, etc.).
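For reference, SciPy's `scipy.optimize.fsolve` wraps the same MINPACK hybrd/hybrj routines as Scilab's `fsolve`, and already exposes some of the missing information (evaluation counts, analytic Jacobian). A quick Python sketch on a toy system (the system itself is made up for illustration):

```python
import numpy as np
from scipy.optimize import fsolve

# toy system: intersection of a circle and a line (made up for illustration)
def F(x):
    return [x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]]

def J(x):
    # analytic Jacobian, passed explicitly so that hybrj is used instead of hybrd
    return [[2.0*x[0], 2.0*x[1]], [1.0, -1.0]]

x, info, ier, msg = fsolve(F, [1.0, 1.0], fprime=J, full_output=True)
print(x, info["nfev"])  # solution and number of function evaluations
```

The `full_output=True` flag is exactly the kind of access (number of evaluations, termination status) that the Scilab interface currently lacks.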

## Where to find some information related to this method

- the Scilab interface: scilab-5.0/modules/optimization/sci_gateway/c/sci_fsolv.c
- the Fortran interface: scilab-5.0/modules/optimization/sci_gateway/fortran/sci_f_fsolve.f
  - this function checks the parameters and, depending on whether the Jacobian is given, calls hybrd1 (Powell without Jacobian) or hybrj1 (Powell with Jacobian);
- the Powell method without Jacobian: scilab-5.0/modules/optimization/src/fortran/hybrd1.f
  - this function checks some parameters and then calls the core routine hybrd, which can be found in hybrd.f in the same directory;
- the Powell method with Jacobian: scilab-5.0/modules/optimization/src/fortran/hybrj1.f
  - this function checks some parameters and then calls the core routine hybrj, which can be found in hybrj.f in the same directory.

## What to do

- give access to more parameters (iteration limits, etc.);
- add an interface to deal with sparse Jacobians. This will allow bigger systems to be solved.
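To make the sparse-Jacobian point concrete, here is a sketch (in Python, with `scipy.sparse`; the function names and the test equation are mine, not Scilab's) of a Newton iteration where each linear solve exploits the sparsity pattern of the Jacobian:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def newton_sparse(F, J, x0, tol=1e-10, maxit=50):
    # plain Newton iteration; J(x) returns a scipy.sparse matrix,
    # so each linear solve exploits the sparsity pattern
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        if np.max(np.abs(Fx)) < tol:
            break
        x = x + spsolve(J(x).tocsc(), -Fx)
    return x

# hypothetical test problem: discretized 1D Laplacian plus a cubic term,
# A @ x + x**3 = 1, whose Jacobian A + diag(3 x**2) is tridiagonal
n = 100
h = 1.0 / (n + 1)
A = sparse.diags([-np.ones(n-1), 2.0*np.ones(n), -np.ones(n-1)], [-1, 0, 1]) / h**2
F = lambda x: A @ x + x**3 - 1.0
J = lambda x: A + sparse.diags(3.0*x**2)

x = newton_sparse(F, J, np.zeros(n))
```

With a dense Jacobian the same problem at, say, 6000 unknowns would already be costly; the sparse factorization keeps both memory and solve time proportional to the bandwidth.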

Under Octave: Octave uses the same method as Scilab, the hybr[dj] routines from the MINPACK package.

Under Matlab: Matlab proposes several other methods. Here is part of the Matlab documentation related to the option which selects the solver:

**NonlEqnAlgorithm** Specify one of the following algorithms for solving nonlinear equations:

- 'dogleg' — Trust-region dogleg algorithm (default)
- 'lm' — Levenberg-Marquardt
- 'gn' — Gauss-Newton
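For comparison, SciPy exposes a similar choice through `scipy.optimize.root`: its `'hybr'` method is the MINPACK Powell hybrid (trust-region dogleg) and `'lm'` is Levenberg-Marquardt. A small Python sketch (the toy system is mine):

```python
import numpy as np
from scipy.optimize import root

def F(x):
    return [np.exp(x[0]) - x[1], x[0] + x[1] - 2.0]

# 'hybr' = MINPACK Powell hybrid (dogleg trust region), 'lm' = Levenberg-Marquardt
solutions = {m: root(F, [0.0, 0.0], method=m) for m in ("hybr", "lm")}
for m, sol in solutions.items():
    print(m, sol.x, sol.success)
```

A redesigned Scilab `fsolve` could follow the same pattern: one entry point, with the algorithm selected by an option.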

A good source of inspiration can be found in the book of C. T. Kelley.

## Some codes

An example of an implementation of a Newton method under Scilab (note the strategy applied to the step size):

```scilab
function [solution] = NewtonReduced(MyFunc, Jacobian, Guess, tol)
    // Solves the non-linear vector equation F(x) = 0
    // using Newton-Raphson iteration (but with the ability to reduce the
    // length of the step until a decrease in error is detected).
    //
    // INPUTS:
    //   MyFunc   = handle to the function which calculates the vector F(x)
    //   Jacobian = handle to the function which returns the Jacobian matrix
    //   Guess    = initial guess (a vector)
    //   tol      = tolerance
    //
    // OUTPUTS:
    //   solution = the solution to F(x) = 0

    x = Guess;
    // set the error to 2*tol to make sure the loop runs at least once
    _error = 2*tol;
    while _error > tol
        // calculate the function values at the current iteration
        F = feval(MyFunc, x);
        error1 = max(abs(F));
        error2 = error1;
        // calculate the Jacobian matrix
        J = feval(Jacobian, x);
        // calculate the update (solve the linear system)
        dx = J \ (-F);
        // update the x value, halving the step until the error decreases
        i = 1;
        while (error2 >= error1 || ~isreal(F))
            xnew = x + (0.5)^i * dx;
            F = feval(MyFunc, xnew);
            error2 = max(abs(F));
            i = i + 1;
        end
        x = xnew;
        // calculate the error
        F = feval(MyFunc, x);
        _error = max(abs(F));
        printf('error = %f \n', _error);
    end // while loop
    solution = x;
endfunction
```

Here is an example of a function which checks the analytic derivatives of a function:

```scilab
function [grad, hess] = checkderivatives(str, x, varargin)
    // CHECKDERIVATIVES - Check if a function returns the correct gradient
    // and Hessian matrix
    //
    //   checkderivatives(str, x);
    //   [grad, hess] = checkderivatives(str, x, varargin)
    //
    // varargin are the parameters of the function specified in str
    [nargout, nargin] = argn();
    x = x(:);
    d = length(x);
    // options
    dec = 1e-4;
    add = ceil(d*log(d));
    n = 1 + d + d*d + add;
    if ~isempty(varargin) then
        a = strcat('varargin{', num2str((1:length(varargin))'), '},')';
        a($) = ')';
        command = [str '(xcur,' a(:)' ';'];
    else
        command = [str '(xcur);'];
    end
    // computation
    xcur = x;
    eval(['[lval,grad0,hess0] = ' command]);
    // checking
    f = zeros(n, 1);
    z = (rand(n, d) - .5)*dec; // random directions
    for i = 1:n
        xcur = x + z(i,:)';
        eval(['f(i) = ' command]);
    end
    indlower = find(tril(ones(d), -1));
    [indlower_r, indlower_c] = ind2sub([d, d], indlower');
    M = [z, .5*z.^2, z(:,indlower_r).*z(:,indlower_c)];
    R = M \ (f - lval);
    grad = R(1:d);
    low = zeros(d);
    low(indlower) = R(2*d+1:$);
    hess = diag(R(d+1:2*d)) + low + low';
    if ~isempty(grad0) then
        falsegrad = find(abs(grad0 - grad) > sqrt(dec));
        if ~isempty(falsegrad)
            disp(['gradient in directions ' num2str(falsegrad') ' is false']);
        else
            disp('gradient seems to be OK');
        end
        if nargout == 0 then
            [grad0 grad]
            clear grad
        end
    end
    if ~isempty(hess0) then
        [fhr, fhc] = find(abs(hess0 - hess) > 100*sqrt(dec));
        if ~isempty(fhr)
            disp('hessian is false in the following positions:')
            disp([fhr, fhc]);
        else
            disp('hessian seems to be OK');
        end
    end
endfunction
```
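The same idea in a minimal form, using central finite differences instead of the random-direction least-squares fit above (a Python sketch; the function names and the sample functions are mine):

```python
import numpy as np

def check_gradient(f, grad, x, h=1e-6, tol=1e-4):
    # compare an analytic gradient against central finite differences
    x = np.asarray(x, dtype=float)
    fd = np.array([(f(x + h*e) - f(x - h*e)) / (2.0*h) for e in np.eye(len(x))])
    return np.max(np.abs(fd - grad(x))) < tol

f = lambda x: np.sin(x[0]) + x[1]**2
g = lambda x: np.array([np.cos(x[0]), 2.0*x[1]])          # correct gradient
g_bad = lambda x: np.array([np.cos(x[0]), 2.0*x[1] + 1])  # deliberately wrong

print(check_gradient(f, g, [0.3, -1.2]))      # expected: True
print(check_gradient(f, g_bad, [0.3, -1.2]))  # expected: False
```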

A C++ derivative checker can be found in the Ipopt tool.

## Benchmark

A good comparison of several solvers for non-linear systems of equations can be found here (it is a PowerPoint presentation).

Some ideas of test problems:

- things from MINPACK (and MINPACK-2; in one of the packages, there are some Fortran test problems)
- a flow-network problem (gas / water) which is sparse and can be huge (I have an instance with around 6000 equations and 6000 variables)
- does an eigenvalue problem fit our needs? To be verified.
- we still need some other test problems (academic, industrial)
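As a concrete example of a MINPACK-style test problem, here is Powell's singular function (from the Moré-Garbow-Hillstrom test set) in Python. The root is at the origin, where the Jacobian is singular, which makes it a good stress test for any solver:

```python
import numpy as np
from scipy.optimize import fsolve

def powell_singular(x):
    # classic four-variable test problem; root at the origin,
    # Jacobian singular at the solution
    return np.array([
        x[0] + 10.0*x[1],
        np.sqrt(5.0)*(x[2] - x[3]),
        (x[1] - 2.0*x[2])**2,
        np.sqrt(10.0)*(x[0] - x[3])**2,
    ])

x0 = np.array([3.0, -1.0, 0.0, 1.0])  # standard starting point
x = fsolve(powell_singular, x0)
print(np.max(np.abs(powell_singular(x))))  # residual near the origin
```

Because of the singular Jacobian, convergence is slow near the solution; a solver's behaviour on this problem is a useful data point for the benchmark.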

## Existing solvers