Contributor - FSolve

The fsolve Scilab function solves systems of nonlinear equations. Currently, it does not support sparse Jacobians. It is based on Powell's hybrid method, which may not be the best method in every situation. Finally, the parameters of the underlying method cannot be fine-tuned (for example, we have no access to the number of evaluations of the system, etc.).
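For comparison, SciPy's fsolve (a wrapper around the same MINPACK hybrd code) already exposes this kind of tuning: tolerances, an evaluation budget, and diagnostics such as the number of function evaluations actually used. A minimal sketch (the 2x2 test system is illustrative, not from this page):

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative 2x2 system: cos(x0) = x1 and x0 = sin(x1)
F = lambda x: [np.cos(x[0]) - x[1], x[0] - np.sin(x[1])]

# xtol and maxfev tune the solver; full_output exposes diagnostics
x, info, ier, msg = fsolve(F, [1.0, 1.0], xtol=1e-12, maxfev=200,
                           full_output=True)
nfev = info["nfev"]  # number of function evaluations actually used
```

This is the kind of access (tolerance, evaluation budget, evaluation count) the page argues Scilab's fsolve should also provide.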

What to do

Under Octave: Octave uses the same method as Scilab, the hybrd/hybrj routines from the MINPACK library.

Under Matlab: Matlab offers several other methods. Here is the part of the Matlab documentation related to the keyword that selects the solver:

NonlEqnAlgorithm Specify one of the following algorithms for solving nonlinear equations:

A good source of inspiration is the book by C. T. Kelley.

Some codes

An example implementation of a Newton method in Scilab (note the strategy applied to the step size):

function [solution] = NewtonReduced(MyFunc, Jacobian, Guess, tol)
// Solves the nonlinear vector equation F(x) = 0 using Newton-Raphson
// iteration, with the ability to reduce the length of the step until
// a decrease in the error is detected.

// MyFunc   = handle to the function which computes the vector F(x)
// Jacobian = handle to the function which returns the Jacobian matrix
// Guess    = initial guess (a vector)
// tol      = tolerance
// solution = the solution of F(x) = 0

x = Guess;

// set the error to 2*tol to make sure the loop runs at least once
err = 2*tol;

while err > tol
  // evaluate the function at the current iterate
  F = MyFunc(x);
  error1 = max(abs(F));
  error2 = error1;

  // evaluate the Jacobian matrix
  J = Jacobian(x);

  // compute the Newton update (solve the linear system)
  dx = J \ (-F);

  // update x, halving the step until the error decreases
  i = 1;
  while (error2 >= error1 | ~isreal(F))
    xnew   = x + (0.5)^i * dx;
    F      = MyFunc(xnew);
    error2 = max(abs(F));
    i      = i + 1;
  end
  x = xnew;

  // compute the error
  F = MyFunc(x);
  err = max(abs(F));

  printf('error = %f \n', err);
end // while loop

solution = x;
endfunction
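The same damped Newton iteration can be sketched in Python (an illustrative port of the routine above, with a made-up 2x2 test system whose root is (1, 1)):

```python
import numpy as np

def newton_reduced(func, jacobian, guess, tol):
    """Damped Newton: halve the step until the residual norm decreases."""
    x = np.asarray(guess, dtype=float)
    err = 2 * tol                       # ensure the loop runs at least once
    while err > tol:
        F = func(x)
        error1 = np.max(np.abs(F))
        error2 = error1
        J = jacobian(x)
        dx = np.linalg.solve(J, -F)     # Newton direction
        i = 1
        while error2 >= error1 or not np.all(np.isreal(F)):
            xnew = x + 0.5**i * dx      # shrink the step until error drops
            F = func(xnew)
            error2 = np.max(np.abs(F))
            i += 1
        x = xnew
        err = np.max(np.abs(func(x)))
    return x

# Solve x0^2 + x1^2 = 2 and x0 = x1, whose root is (1, 1)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
sol = newton_reduced(F, J, [2.0, 3.0], 1e-10)
```

Note that, as in the Scilab version, the inner loop always starts with a half step, so the iteration is globally damped rather than a pure Newton method.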

Here is an example of a function that checks the analytical derivatives of a function:

function [grad, hess] = checkderivatives(str, x, varargin)
// CHECKDERIVATIVES - Check if a function returns the correct gradient and Hessian matrix
// checkderivatives(str, x);
// [grad, hess] = checkderivatives(str, x, varargin)
// varargin contains the extra parameters of the function named in str

[nargout, nargin] = argn();

x = x(:);
d = length(x);

// options
dec = 1e-4;              // size of the random perturbations
add = ceil(d*log(d));    // extra sample points
n = 1 + d + d*d + add;   // number of function evaluations
if ~isempty(varargin) then
  command = str + '(xcur, varargin(:));';
else
  command = str + '(xcur);';
end

xcur = x;
execstr('[lval, grad0, hess0] = ' + command);

f = zeros(n, 1);
z = (rand(n, d) - 0.5)*dec;  // random directions
for i = 1:n
  xcur = x + z(i, :)';
  execstr('f(i) = ' + command);
end

// indices of the strictly lower triangular part
indlower = find(tril(ones(d, d), -1));
[indlower_r, indlower_c] = ind2sub([d, d], indlower');

// fit f(x+z) - f(x) by a quadratic model in the least-squares sense
M = [z, 0.5*z.^2, z(:, indlower_r).*z(:, indlower_c)];
R = M \ (f - lval);

grad = R(1:d);
low = zeros(d, d);
low(indlower) = R(2*d+1:$);
hess = diag(R(d+1:2*d)) + low + low';

if ~isempty(grad0) then
  falsegrad = find(abs(grad0 - grad) > sqrt(dec));
  if ~isempty(falsegrad) then
    disp('gradient in directions ' + strcat(string(falsegrad), ' ') + ' is false');
  else
    disp('gradient seems to be OK');
  end
  if nargout == 0 then
    disp([grad0 grad]);
    clear grad;
  end
end

if ~isempty(hess0) then
  [fhr, fhc] = find(abs(hess0 - hess) > 100*sqrt(dec));
  if ~isempty(fhr) then
    disp('hessian is false in the following positions:');
    disp([fhr' fhc']);
  else
    disp('hessian seems to be OK');
  end
end
endfunction
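The same idea, reduced to a central-difference gradient check, can be sketched in Python (function names and tolerances here are illustrative):

```python
import numpy as np

def check_gradient(f, grad, x, h=1e-6, tol=1e-4):
    """Compare an analytical gradient against central finite differences.
    Returns the indices where the analytical gradient looks wrong."""
    x = np.asarray(x, dtype=float)
    g_fd = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g_fd[i] = (f(x + e) - f(x - e)) / (2 * h)  # central difference
    return np.flatnonzero(np.abs(g_fd - grad(x)) > tol)

# f(x) = sum(x^2) has gradient 2x; 3x is deliberately wrong
f = lambda x: np.sum(x**2)
good = check_gradient(f, lambda x: 2*x, np.array([1.0, -2.0, 0.5]))
bad  = check_gradient(f, lambda x: 3*x, np.array([1.0, -2.0, 0.5]))
```

Here `good` comes back empty while `bad` flags every component, which is the same pass/fail report the Scilab routine prints.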

A C++ derivatives checker can be found in the Ipopt tool.


A good comparison of several solvers for nonlinear systems of equations can be found here (a PowerPoint presentation).

Some ideas for test problems:

Existing solvers


public: Contributor - FSolve (last edited 2011-03-30 16:18:04 by localhost)