The BFGS and Newton optimisation tests fail on my machine for this reason (chi2 values as large as 7e-21 in some cases). I'm running Linux on dual 64-bit AMDs. Python is 2.4.1, compiled with gcc.
As long as the difference between the model-free parameters is tiny, this shouldn't matter.
Testing optimisation statistics may be appropriate in some cases, but it is clearly expecting too much to have 1e-27 correct to a relative error of 1e-8, which I think is what you are testing for. If the optimisation algorithm in question terminates based on some chi2 tolerance, then it should be adequate to demand that the value be less than that tolerance. Alternatively, if the expected chi2 is finite because of noise in the data, then it is fair enough to test for it (+/- a reasonable tolerance).
The function tolerance between iterations is set to 1e-25 (I'll get to the importance of this in my next post). The test is that the value lies within 'value +/- value*error', where the error is 1e-8. This formulation removes the problem of the different scaling between the model-free parameters (the 1e12 difference between S2 and te, etc.). Testing that the chi-squared value is within 8 orders of magnitude might be better. Or maybe testing that the difference between the two values is less than 1e-8?
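To make the difference between the two schemes concrete, here is a minimal sketch. The function names and tolerance constants are illustrative only, not taken from the actual test suite; the chi2 values are the ones from this thread (expected ~1e-27, observed ~7e-21 on my machine):

```python
# Illustrative tolerances, not the test suite's actual constants.
REL_ERROR = 1e-8   # relative check: 'value +/- value*error'
ABS_TOL = 1e-8     # absolute check on the difference between the values

def passes_relative(observed, expected, rel=REL_ERROR):
    # For a near-zero expected chi2 this window collapses:
    # 1e-27 * 1e-8 = 1e-35, far tighter than machine differences.
    return abs(observed - expected) <= abs(expected) * rel

def passes_absolute(observed, expected, tol=ABS_TOL):
    # Passes whenever both values are tiny, regardless of their ratio.
    return abs(observed - expected) <= tol

# The case from this thread:
print(passes_relative(7e-21, 1e-27))  # False - relative test fails
print(passes_absolute(7e-21, 1e-27))  # True - absolute test passes
```

So a relative-error test rejects a chi2 that is, for all practical purposes, zero, while an absolute-difference test (or simply demanding chi2 below the termination tolerance, when that applies) accepts it.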
Edward