Optimisation precision in the relaxation dispersion analysis.




Posted by Edward d'Auvergne on July 23, 2013 - 10:51:
Dear Paul, Mathilde, and Dominique,

The following is some more information you should be aware of, namely that the optimisation precision in relax is much higher than in the original fitting_main_kex.py program (https://gna.org/task/?7712#comment2).  There are 3 differences:

- By default, the scipy.optimize.leastsq() function used in fitting_main_kex.py has a surprisingly loose function tolerance of 1.49012e-08 (the amount the chi-squared value is allowed to change by before optimisation stops).  In relax, the default is 1e-25.

- The scipy function also has an X tolerance (a limit on the length of the step), which is not used in relax.  This allows relax to take many more small steps, which appears to be important for the dispersion equations.  Small steps are also needed when optimisation algorithms adjust themselves after encountering a change in the curvature of the optimisation space.  It is best to turn this tolerance off in scipy.

- The maximum number of target function calls in relax is also much higher, at 10,000,000.  In the example myworkbook.xls file that came with the fitting_main_kex.py program, the maximum is 100.  And inside the script, a maximum of 100 for the Monte Carlo simulations appears to be hardcoded on line 513.  (A sketch of how these settings map onto the scipy function is given below.)

This means that this part of relax will be slower, but the results and errors will be more accurate.  For basic quadratic problems, low precision is ok.  But for convoluted spaces, higher precision is essential.  For relaxation dispersion, I do not know how convoluted the space is, so I have no idea how big the optimisation differences will be.  The differences are often seen in rarer edge cases.
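For reference, here is a minimal, hypothetical sketch of what the tighter settings look like when passed directly to scipy.optimize.leastsq().  The residuals() function and the data in it are invented placeholders rather than the real dispersion target function, and relax itself does not call scipy; only the ftol, xtol and maxfev keyword arguments correspond to the three points above.

import numpy
from scipy.optimize import leastsq

def residuals(params, x, y):
    # Hypothetical single-exponential stand-in for the dispersion model.
    r2inf, amp, kex = params
    return y - (r2inf + amp * numpy.exp(-kex * x))

# Invented example data (relaxation delays and R2eff-like values).
x = numpy.linspace(0.05, 1.0, 20)
y = 12.0 + 5.0 * numpy.exp(-40.0 * x)
p0 = [10.0, 1.0, 10.0]

params, ier = leastsq(
    residuals, p0, args=(x, y),
    ftol=1e-25,       # function (chi-squared) tolerance, as in relax
    xtol=0.0,         # switch off the step length (X) tolerance
    maxfev=10000000,  # maximum number of target function calls
)

Compared to the scipy defaults (ftol=1.49012e-08, xtol=1.49012e-08, and a maxfev of a few hundred), these values trade speed for precision.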

Regards,

Edward


