Re: Optimisation tests in the test suite.




Posted by Chris MacRaild on October 19, 2006 - 18:54:
On Fri, 2006-10-20 at 01:04 +1000, Edward d'Auvergne wrote:
On 10/20/06, Alexandar Hansen <viochemist@xxxxxxxxx> wrote:
First, I tried the test suite again, and it failed (new failures this time though :) )

#############################
# Results of the test suite #
#############################
Updated to revision 2654:

The model-free tests:

    Constrained BFGS opt, backtracking line search {S2=0.970, te=2048, Rex=0.149} ....... [ Failed ]
    Constrained BFGS opt, backtracking line search {S2=0.970, te=2048, Rex=0.149} ....... [ Failed ]
    Constrained Newton opt, GMW Hessian mod, backtracking line search {S2=0.970, te=2048, Rex=0.149}  [ Failed ]
    Constrained Newton opt, GMW Hessian mod, More and Thuente line search {S2=0.970, te=2048, Rex=0.149}  [ Failed ]

I've deliberately started a new thread to talk about optimisation
tests in the test suite.  This originates from the post located at
https://mail.gna.org/public/relax-devel/2006-10/msg00112.html
(Message-id: <481156b20610190725ud6bab67w1f8fbbdf849da52c@xxxxxxxxxxxxxx>).

These are new optimisation tests I have added.  The problem with
setting up these types of tests is that machine precision and round-off
error cause slightly different optimisation results on different
systems (different numbers of iterations, function counts, gradient
counts, etc.).  The model-free parameters should, however, be the same
each time to within machine precision.  Things which may influence this
are the CPU type, Python version, Numeric version, underlying C
library, operating system, etc.
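
As a rough illustration (not relax's actual test code), a parameter
check that allows for round-off would compare values with a relative
tolerance rather than demanding exact equality.  The function name and
tolerance below are purely hypothetical:

    # Hypothetical sketch of a round-off-tolerant parameter check; the
    # name and tolerance are illustrative, not relax's test suite code.
    def params_match(obtained, expected, rel_tol=1e-8):
        """True if each parameter pair agrees to within rel_tol."""
        for got, want in zip(obtained, expected):
            scale = max(abs(got), abs(want), 1e-300)  # avoid div by zero
            if abs(got - want) / scale > rel_tol:
                return False
        return True

    # The {S2, te, Rex} values from the failing tests:
    assert params_match([0.970, 2048.0, 0.149], [0.970, 2048.0, 0.149])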

Would you have the text output of one of these tests?  Can you see if
it is a model-free parameter or an optimisation statistic causing the
problem?  I've tried the tests on Windows and exactly the same tests
fail (excluding the third one).  The problem is that the chi-squared
value is ~1e-24 when ~1e-27 was expected.  Optimisation terminates a
little earlier on Windows (fewer iterations of the optimisation
algorithm).  I'm wondering if testing the optimisation statistics is
worthwhile in the test suite, considering the variability?
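
To put numbers on that: both chi-squared values are essentially zero in
absolute terms, yet they differ by a factor of roughly a thousand, so
any relative-error comparison is hopeless here.  A quick illustrative
check:

    # Why a relative-error test fails on near-zero chi2 values.
    chi2_obtained = 1e-24   # value seen on Windows
    chi2_expected = 1e-27   # value hard-coded in the test
    rel_err = abs(chi2_obtained - chi2_expected) / chi2_expected
    assert rel_err > 1e-8   # rel_err is ~999, nowhere near 1e-8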


The BFGS and Newton optimisation tests fail on my machine for this
reason (chi2 as big as 7e-21 in some cases).  I'm running Linux on dual
64-bit AMDs.  Python is 2.4.1, compiled with gcc.

Testing optimisation stats may be appropriate in some cases, but it is
clearly expecting too much to have 1e-27 correct to a relative error of
1e-8, which I think is what you are testing for.  If the optimisation
algorithm in question terminates based on some chi2 tolerance, then it
should be adequate to demand that the value be less than that
tolerance.  Alternatively, if the expected chi2 is finite because of
noise in the data, then it is fair enough to test for it (+/- a
reasonable tolerance).
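
A minimal sketch of that first idea, assuming a hypothetical chi2
convergence tolerance (the name and value below are illustrative, not
relax's actual settings):

    # Test only that the minimiser satisfied its own convergence
    # criterion, rather than matching an exact near-zero chi2 value.
    CHI2_TOL = 1e-15  # hypothetical chi2 termination tolerance

    def check_chi2_converged(chi2):
        assert chi2 < CHI2_TOL, "chi2 did not reach the tolerance"

    check_chi2_converged(7e-21)  # passes despite 64-bit AMD round-off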


Chris








