Re: Optimisation tests in the test suite.



Posted by Edward d'Auvergne on October 20, 2006 - 17:07:
>> So here's my thought.  What we have here are regression tests, so we
>> either:
>>
>> 1. define a set of results for each test on each particular platform
>> (you have a mode where someone can run the tests on a version we
>> believe works and then, say, e-mails them to us for inclusion).  We
>> then store those results and use them only for that platform.
>> 2. define a set of results for each test which encompasses worst case
>> performance (as long as it is reasonable), run the tests on a variety
>> of platforms, and if it fails on some platforms decide on a case by
>> case basis whether the result is reasonable, downgrading your
>> regression tests until they work everywhere.
>>
>> I would go for 2.  It's a lot easier to work with and much more likely
>> to be used by the user for testing their implementation.
>
> I agree!  There are too many variables to sanely handle point 1.  The
> model-free parameter tests should be tight but the optimisation stats
> tests should be set to the hypothetical worst case.  The question is,
> how would you initially define 'worst case' when building these tests?
>
1. Implement the test case and, if possible, calculate the correct
results and use them as the test case values.
2. If you can't do this (which happens in many cases):
    a. write a test case and run the code in a state where you believe
it to be fully functional and working.
    b. get a result, check it to the best of your ability, and add a
'reasonable' amount of uncertainty (2-10 ulp [units in the last place]?
in many cases, but in some cases much more!); a sketch of such a check
follows this list.
    c. run it on some other architectures without changing the code; if
the results are wildly different, investigate whether this is due to a
bug or just a platform implementation difference.
    d. enshrine the results in the test case and ask people to report
errors (if possible, make it easy for them: dump to a file etc. in a
clean format).
    e. if it fails again, repeat step c, if need be regressing to the
revision of the routine you had at that point... and see if the code
is failing or if it is a platform problem.
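
To make steps b and d concrete, here is a minimal Python sketch of what
such a check could look like.  The names (ulp_diff, assert_close,
dump_results) and the one-pair-per-line dump format are purely
illustrative assumptions, not anything in the actual test suite.  The
ulp distance is computed by mapping the IEEE 754 bit patterns of the
two doubles onto an integer scale where adjacent floats differ by
exactly 1:

import struct

def _ordinal(x):
    # Map a double onto a monotonically increasing integer scale, so
    # that adjacent representable floats differ by exactly 1.
    i = struct.unpack('<q', struct.pack('<d', x))[0]
    return i if i >= 0 else -(i + (1 << 63))

def ulp_diff(a, b):
    # Distance between two doubles in units in the last place (ulp).
    return abs(_ordinal(a) - _ordinal(b))

def assert_close(expected, actual, max_ulp=10):
    # Step b: fail if the computed value strays more than a
    # 'reasonable' number of ulp from the enshrined reference result.
    diff = ulp_diff(expected, actual)
    if diff > max_ulp:
        raise AssertionError("%.17g differs from %.17g by %d ulp (limit %d)"
                             % (actual, expected, diff, max_ulp))

def dump_results(results, path):
    # Step d: write one 'name value' pair per line at full double
    # precision, a clean format that a user can simply mail back.
    out = open(path, 'w')
    for name, value in sorted(results.items()):
        out.write("%s %.17g\n" % (name, value))
    out.close()

A test would then read something like assert_close(0.851234567, s2,
max_ulp=10), with s2 standing in for whatever value the optimisation
produces, and max_ulp loosened per step c when a platform legitimately
disagrees.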

That sounds like the best and easiest approach.  As for the reporting, I
need to completely overhaul the test suite system anyway, adding the
ability to run unit tests, catching stderr and stdout to minimise the
amount of text printed, etc., so that feature should be added then.
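
Catching the streams could look something like the following minimal
sketch (run_quietly is a hypothetical helper, not the planned test
suite API): it temporarily swaps sys.stdout and sys.stderr for
in-memory buffers while a single test runs, restores them afterwards,
and hands back the captured text so it can go into a failure report
instead of cluttering the terminal:

import sys
from io import StringIO

def run_quietly(test):
    # Swap both streams for in-memory buffers for the duration of the
    # test, restoring the real streams even if the test raises.
    saved_out, saved_err = sys.stdout, sys.stderr
    sys.stdout, sys.stderr = StringIO(), StringIO()
    try:
        test()
    finally:
        out, err = sys.stdout.getvalue(), sys.stderr.getvalue()
        sys.stdout, sys.stderr = saved_out, saved_err
    return out, err

A runner would call out, err = run_quietly(some_test) and only print
the captured text when the test fails.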

Edward


