Re: Optimisation tests in the test suite.



Posted by Gary S. Thompson on October 20, 2006 - 16:36:
Chris MacRaild wrote:
On Fri, 2006-10-20 at 19:27 +1000, Edward d'Auvergne wrote:
  
The BFGS and Newton optimisation tests fail on my machine for this
reason (chi2 as big as 7e-21 in some cases). I'm running Linux on dual
64 bit AMDs. Python is 2.4.1 compiled with gcc.
      
As long as the difference between the model-free parameters is tiny,
this shouldn't matter.


    
Testing optimisation stats may be appropriate in some cases, but it is
clearly expecting too much to have 1e-27 correct to a relative error of
1e-8, which I think is what you are testing for. If the optimisation
algorithm in question should terminate based on some chi2 tolerance,
then it should be adequate to demand the value be less than that
tolerance. Alternatively, if the expected chi2 is finite, because of
noise in the data, then it is fair enough to test for it (+/- a
reasonable tolerance).
      
The function tolerance between iterations is set to 1e-25 (I'll get to
the importance of this in my next post).
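As a sketch of what a function-tolerance criterion like this looks like (the function name is illustrative, not the actual relax code), note that it bounds the *change* in chi2 between iterations, not the final chi2 value itself:

```python
def converged(chi2_prev, chi2_new, func_tol=1e-25):
    """Function-tolerance termination: stop when chi2 barely changes.

    This bounds the change in chi2 between successive iterations, not
    the final chi2 value, so a normal termination can still leave
    chi2 >> func_tol.
    """
    return abs(chi2_prev - chi2_new) <= func_tol

# The optimiser stops here even though chi2 itself is ~1e-20:
converged(1e-20, 1e-20 - 1e-26)   # True -> terminate
converged(2.0, 1.0)               # False -> keep iterating
```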
    

Is this the only termination criterion? If so, why are we seeing
apparently normal optimisations terminating with chi2 >> 1e-25 ?
I guess another possibly related question is why is this only happening
for the BFGS and Newton optimisation tests - is there something special
about these algorithms that makes chi2 poorly determined in these test
cases?

  
  The test is to be within
'value +/- value*error' where the error is 1e-8.  This equation
removes the problem of the different scaling between the model-free
parameters (the 1e12 difference between S2 and te, etc.).
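A minimal sketch of this kind of relative-error check (the helper name and values are illustrative, not from the relax test suite); it also shows the problem raised below, that the test is meaningless when the expected value is zero:

```python
def within_relative_error(value, expected, rel_err=1e-8):
    """Check that value lies within expected +/- expected*rel_err.

    Scale-independent: works equally for S2 (~1) and te (~1e-12),
    removing the ~1e12 scaling difference between the parameters.
    """
    return abs(value - expected) <= abs(expected) * rel_err

# Illustrative values only.
within_relative_error(0.8 + 1e-10, 0.8)        # S2-like scale: passes
within_relative_error(20e-12 + 1e-22, 20e-12)  # te-like scale: passes
within_relative_error(1e-21, 0.0)              # expected == 0: always fails
```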
    

This is fine for finite values like S2 and te. The issue here is that
the expected value for chi2 in these tests is 0 (assuming a perfect
optimisation). It seems to me that the best way to ensure that
optimisation is behaving correctly in these cases is to test for the
following:
1) The optimised values are correct to within some tolerance (1e-8
relative error seems about right here)
2) Termination is normal, i.e. the optimiser has thrown no errors or
warnings, and has not reached its maximum number of iterations.
3) Chi2 is small ( <= 1e-20 seems about right based on the few values
reported so far, but something less restrictive might be required)

On reflection, it is probably worth having at least some tests where we
expect a finite chi2. Testing for that value then should be much easier
to deal with.
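The three criteria above could be sketched as follows (a hypothetical helper, not relax code; the tolerances are the ones suggested in this thread):

```python
def check_optimisation(params, expected, iterations, max_iter, chi2,
                       rel_err=1e-8, chi2_max=1e-20):
    """Sketch of the three-part test suggested above.

    1) optimised parameters correct to a relative tolerance,
    2) normal termination (iteration limit not reached),
    3) chi2 small in absolute terms (expected value is zero).
    """
    # 1) Relative error on each finite parameter.
    for p, e in zip(params, expected):
        assert abs(p - e) <= abs(e) * rel_err, "parameter outside tolerance"
    # 2) Normal termination.
    assert iterations < max_iter, "optimiser hit its iteration limit"
    # 3) chi2 small in absolute terms.
    assert chi2 <= chi2_max, "chi2 not small enough"

# Illustrative call: S2- and te-like parameters, a normal termination.
check_optimisation([0.8, 20e-12], [0.8, 20e-12],
                   iterations=25, max_iter=10000, chi2=1e-22)
```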

Chris
  

I would, I guess, go further than this and suggest that operating with no error is pretty likely to cause failure, as the results will be dominated by floating point/implementation effects. I would suggest that all cases except the most simple tests ought to have some finite chi-squared bound and finite errors (I am happy to be proved wrong ;-)

regards
gary
  
  Testing the
chi-squared value to be within 8 orders of magnitude might be better.
Or maybe the difference between the two values being less than 1e-8?

Edward

    


_______________________________________________
relax (http://nmr-relax.com)

This is the relax-devel mailing list
relax-devel@xxxxxxx

To unsubscribe from this list, get a password
reminder, or change your subscription options,
visit the list information page at
https://mail.gna.org/listinfo/relax-devel


-- 
-------------------------------------------------------------------
Dr Gary Thompson
Astbury Centre for Structural Molecular Biology,
University of Leeds, Astbury Building,
Leeds, LS2 9JT, West-Yorkshire, UK             Tel. +44-113-3433024
email: garyt@xxxxxxxxxxxxxxx                   Fax  +44-113-2331407
-------------------------------------------------------------------

