Re: Optimisation tests in the test suite.



Posted by Gary S. Thompson on October 20, 2006 - 11:24:
Alexandar Hansen wrote:

Maybe I'm missing something, but to me 1e-21 is the same as 1e-27 in computer language. Let's assume a small amount of error in the measurements, say 1%. With that amount of random error, I couldn't possibly expect much less than 1e-3 or 1e-4 as an overall chi2. Is there any significance to such high precision when our measurements could never reach it?

Alex


Hi Alex
  Two points:

1. The tolerable level of error differs depending on whether you are
looking at a low-level optimisation or a final result, as errors
propagate, add up, etc.
2. You want to know that a function is operating properly, so you want to
assign the tightest bounds to it you can, as it's better to have
bounds that are too tight rather than too loose (see the sketch below).
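
As an illustration of point 2, here is a minimal sketch of the kind of tight
relative-tolerance check I mean (the function name and the example values are
just illustrative, not the actual relax test suite code):

def assert_within_rel_tol(actual, expected, rel_tol):
    """Fail if 'actual' deviates from 'expected' by more than rel_tol (relative)."""
    if expected == 0.0:
        deviation = abs(actual)
    else:
        deviation = abs(actual - expected) / abs(expected)
    assert deviation <= rel_tol, (
        "Value %g deviates from %g by a relative error of %g (limit %g)."
        % (actual, expected, deviation, rel_tol))

# Model-free parameters should be reproducible to near machine precision, so a
# tight bound is used here; a loose bound could hide a broken minimiser.
assert_within_rel_tol(actual=0.9700000001, expected=0.970, rel_tol=1e-8)    # S2
assert_within_rel_tol(actual=2048.0000002, expected=2048.0, rel_tol=1e-8)   # te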

regards gary


On 10/19/06, Chris MacRaild <c.a.macraild@xxxxxxxxxxx> wrote:


    On Fri, 2006-10-20 at 01:04 +1000, Edward d'Auvergne wrote:
    > On 10/20/06, Alexandar Hansen <viochemist@xxxxxxxxx> wrote:
    > > First, I tried the test-suite again, and it failed (new failures this time
    > > though :) )
    > >
    > > #############################
    > > # Results of the test suite #
    > > #############################
    > > Updated to revision 2654:
    > >
    > > The model-free tests:
    > >
    > >     Constrained BFGS opt, backtracking line search {S2=0.970, te=2048, Rex=0.149} ....... [ Failed ]
    > >     Constrained BFGS opt, backtracking line search {S2=0.970, te=2048, Rex=0.149} ....... [ Failed ]
    > >     Constrained Newton opt, GMW Hessian mod, backtracking line search {S2=0.970, te=2048, Rex=0.149}  [ Failed ]
    > >     Constrained Newton opt, GMW Hessian mod, More and Thuente line search {S2=0.970, te=2048, Rex=0.149}  [ Failed ]
    >
    > I've deliberately started a new thread to talk about optimisation
    > tests in the test suite.  This originates from the post located at
    > https://mail.gna.org/public/relax-devel/2006-10/msg00112.html
    > (Message-id: <481156b20610190725ud6bab67w1f8fbbdf849da52c@xxxxxxxxxxxxxx>).
    >
    > These are new optimisation tests I have added.  The problem with
    > setting up these types of test is that machine precision and round-off
    > error cause slightly different optimisation results on different
    > systems (different numbers of iterations, function counts, gradient
    > counts, etc.).  The model-free parameters should be, to within machine
    > precision, the same each time.  Things which may influence this are
    > the CPU type, Python version, Numeric version, underlying C library,
    > operating system, etc.
    >
    > Would you have the text output of one of these tests?  Can you see if
    > it is a model-free parameter or an optimisation statistic causing the
    > problem?  I've tried the tests on Windows and exactly the same tests
    > fail (excluding the third one).  The problem is the chi-squared value
    > is ~1e-24 when ~1e-27 was expected.  Optimisation terminates a little
    > earlier on Windows (fewer iterations of the optimisation algorithm).
    > I'm wondering if testing the optimisation statistics is worthwhile in
    > the test suite considering the variability?
    >

    The BFGS and Newton optimisation tests fail on my machine for this
    reason (chi2 as big as 7e-21 in some cases).  I'm running Linux on dual
    64-bit AMDs.  Python is 2.4.1, compiled with gcc.

    Testing optimisation stats may be appropriate in some cases, but it is
    clearly expecting too much to have 1e-27 correct to a relative error of
    1e-8, which I think is what you are testing for.  If the optimisation
    algorithm in question should terminate based on some chi2 tolerance,
    then it should be adequate to demand that the value be less than that
    tolerance.  Alternatively, if the expected chi2 is finite because of
    noise in the data, then it is fair enough to test for it (+/- a
    reasonable tolerance).
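
    A minimal sketch of what such a check might look like in a unittest-style
    test (the names run_model_free_fit and CHI2_TOLERANCE are placeholders for
    illustration, not the actual relax test suite code):

    import unittest

    # Placeholder values and functions standing in for the real test suite pieces.
    CHI2_TOLERANCE = 1e-15    # the convergence tolerance the optimiser was run with
    EXPECTED = {'S2': 0.970, 'te': 2048.0, 'Rex': 0.149}

    def run_model_free_fit():
        """Placeholder for the real optimisation; returns (chi2, parameter dict)."""
        return 3.2e-24, {'S2': 0.970, 'te': 2048.0, 'Rex': 0.149}

    class OptimisationTest(unittest.TestCase):
        def test_bfgs_backtracking(self):
            chi2, params = run_model_free_fit()
            # Only demand that chi2 falls below the termination tolerance rather
            # than matching an exact value such as 1e-27.
            self.assertTrue(chi2 < CHI2_TOLERANCE)
            # Compare the fitted parameters with a reasonable relative tolerance
            # instead of demanding an exact match.
            for name, expected in EXPECTED.items():
                self.assertAlmostEqual(params[name] / expected, 1.0, places=6)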


Chris



    > Edward
    >

------------------------------------------------------------------------

_______________________________________________
relax (http://nmr-relax.com)

This is the relax-devel mailing list
relax-devel@xxxxxxx

To unsubscribe from this list, get a password
reminder, or change your subscription options,
visit the list information page at
https://mail.gna.org/listinfo/relax-devel




--
-------------------------------------------------------------------
Dr Gary Thompson
Astbury Centre for Structural Molecular Biology,
University of Leeds, Astbury Building,
Leeds, LS2 9JT, West-Yorkshire, UK             Tel. +44-113-3433024
email: garyt@xxxxxxxxxxxxxxx                   Fax  +44-113-2331407
-------------------------------------------------------------------





