On 10/20/06, Alexandar Hansen <viochemist@xxxxxxxxx> wrote:
First, I tried the test-suite again, and it failed (new failures this time
though :) )
#############################
# Results of the test suite #
#############################
Updated to revision 2654:
The model-free tests:
Constrained BFGS opt, backtracking line search {S2=0.970, te=2048,
Rex=0.149} ....... [ Failed ]
Constrained BFGS opt, More and Thuente line search {S2=0.970, te=2048,
Rex=0.149} ... [ Failed ]
Constrained Newton opt, GMW Hessian mod, backtracking line search
{S2=0.970, te=2048, Rex=0.149} [ Failed ]
Constrained Newton opt, GMW Hessian mod, More and Thuente line search
{S2=0.970, te=2048, Rex=0.149} [ Failed ]
I've deliberately started a new thread to talk about optimisation
tests in the test suite. This originates from the post located at
https://mail.gna.org/public/relax-devel/2006-10/msg00112.html
(Message-id: <481156b20610190725ud6bab67w1f8fbbdf849da52c@xxxxxxxxxxxxxx>).
These are new optimisation tests I have added.  The problem with
setting up these types of tests is that machine precision and
round-off errors cause slightly different optimisation results on
different systems (different numbers of iterations, function counts,
gradient counts, etc.).  The model-free parameters, however, should
be the same each time to within machine precision.  Factors which may
influence this are the CPU type, Python version, Numeric version,
underlying C library, operating system, etc.
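
To illustrate the distinction, the check of the model-free parameters
could use a tolerance rather than exact equality.  The following is
only a rough Python sketch, not the actual test suite code, and the
function name and the 1e-10 relative tolerance are arbitrary:

    # Rough sketch of a tolerance-based comparison of the model-free
    # parameters.  The 1e-10 relative tolerance is arbitrary and not
    # what the test suite uses.
    def params_match(expected, actual, rel_tol=1e-10):
        for name, value in expected.items():
            # Relative error, with a floor to avoid scaling by zero.
            if abs(actual[name] - value) > rel_tol * max(abs(value), 1.0):
                return False
        return True

    print(params_match({'S2': 0.970, 'te': 2048.0, 'Rex': 0.149},
                       {'S2': 0.970, 'te': 2048.0, 'Rex': 0.149}))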
Would you be able to send the text output of one of these tests?  Can
you see whether it is a model-free parameter or an optimisation
statistic causing the problem?  I've tried the tests on Windows and
exactly the same tests fail (excluding the third one).  The problem
is that the chi-squared value is ~1e-24 when ~1e-27 was expected.
Optimisation terminates a little earlier on Windows (fewer iterations
of the optimisation algorithm).  I'm wondering whether testing the
optimisation statistics in the test suite is worthwhile at all,
considering this variability.
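
For example, the chi-squared check could simply require a
sufficiently small value rather than an exact figure, so that both
the ~1e-27 and ~1e-24 results would pass.  Again, this is only a
sketch and the 1e-15 cutoff is arbitrary:

    # Sketch: accept any chi-squared value below a loose cutoff
    # instead of comparing it against an exact minimum.
    def chi2_ok(chi2, cutoff=1e-15):
        return 0.0 <= chi2 < cutoff

    print(chi2_ok(1e-27))   # True (the expected value)
    print(chi2_ok(1e-24))   # True (the Windows value)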
Edward