> there is an alternative 3 which would be to back port the functionality
> from numpy in the expectation that 'one of these days' numeric will be
> replaced by numpy (is numeric still maintained?)
>
> also it would be possible to check whether nans etc produce the expected
> result at the start of the script and warn the user if they had a non
> compliant result.
>
> A final note is that i believe scipy propagates nan, infs etc. properly
> whereas numeric doesn't for ufuncs and some other cases...
> http://cens.ioc.ee/~pearu/scipy/tutorial.pdf
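The startup check suggested above could be sketched like this (a hypothetical sketch only - the function name and warning messages are mine, not anything in relax):

```python
import warnings

def check_float_behaviour():
    """Check at startup that NaN behaves as expected, and warn if not.

    On a compliant IEEE-754 platform, 1e300 * 1e300 overflows to +inf,
    inf - inf gives NaN, and NaN never compares equal to itself.
    """
    try:
        inf = 1e300 * 1e300      # overflows to +inf on IEEE platforms
        nan = inf - inf          # inf - inf is NaN
    except (OverflowError, FloatingPointError):
        warnings.warn("Floating point overflow raises instead of returning inf/NaN.")
        return False
    if nan == nan:               # IEEE NaN is never equal to itself
        warnings.warn("NaN does not behave as expected on this platform.")
        return False
    return True
```

Calling this once at script startup would let relax warn the user about a non-compliant platform instead of failing mysteriously later.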
As I understand it, Numeric is no longer maintained and Numpy is its appointed successor, so in that sense porting to Numpy will be necessary some time soon anyway. In principle it should be a fairly trivial process - changing the import statements is the major job. The thing that complicates matters, however, is that Scientific is still reliant on Numeric and not 'yet' compatible with Numpy. So if we do move over to Numpy, an alternative PDB parser will be required (and another MPI interface?).
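For what it's worth, the kind of change involved is mostly mechanical - a sketch of what a ported snippet might look like (assuming NumPy is installed; the old Numeric lines are shown as comments):

```python
# Old (Numeric):
#   from Numeric import zeros, dot, Float64
#   v = zeros(3, Float64)
#
# New (NumPy) - the main differences are the module name and the
# renamed type objects:
from numpy import zeros, dot, float64

v = zeros(3, float64)   # a length-3 array of double-precision zeros
d = dot(v, v)           # 0.0
```

The catch, as noted, is that this only helps for relax's own code - anything pulled in from Scientific would still be importing Numeric underneath.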
Yep, Numeric is essentially dead and they do recommend switching everything to Numpy: "users should transisition [sic] to NumPy as quickly as possible". However, Numpy is so incredibly broken it shouldn't even be called alpha software. I tried it out in relax and everything broke. This was especially evident in the minimisation code, where many linear algebra function calls are made. They should at least have made a functional product before telling everyone to switch to it. They may have stabilised things by now, but the fact that you have to pay to read the manual is telling. I have a feeling there will be a significant speed hit as well. A good indication that NumPy is worth switching to might be when Scientific Python switches to it. However, feel free to branch the 1.3 line if you want to give it a go - it was a few years ago that I tried and failed.
The issue with Numeric's handling of NaNs and INFs is that it will often raise a FloatingPointError of one flavour or another rather than returning NaN or INF. Divide by zero is the classic example, but there are many others. This is also true of Python's math functions, and again is fairly platform dependent. The propagation of NaNs is OK though, I think. That is, NaN is always the result of any math operation on NaN.
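To illustrate the distinction in plain Python (a minimal sketch, independent of Numeric):

```python
# Division by zero raises rather than returning inf or NaN:
try:
    result = 1.0 / 0.0
except ZeroDivisionError:
    result = None    # Python raises here instead of giving inf

# But NaN, once produced, propagates through arithmetic:
nan = (1e300 * 1e300) - (1e300 * 1e300)   # inf - inf -> NaN
propagated = nan + 1.0                     # still NaN
same = (propagated == propagated)          # False: NaN never equals itself
```

So the raising behaviour bites at the point where the NaN/INF would first be created, while any NaN that slips through silently poisons everything downstream.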
Those examples you gave before, Chris, were a bit crazy. I don't think it will be possible to reliably catch NaNs without restricting relax to one very specific version of Python - at least not until they fix their floating point operations. Maths operations on NaN - surely that by definition must be an error? As NaN is not properly handled in Python (and is handled differently in different versions), catching all the places in the program where NaNs arise and preventing them in the first place would be good. I have a feeling that the initial chi-squared value, which I arbitrarily set to 1e300, might be the source. Maybe using inf in that spot will fix the bug, or maybe another method would be better.
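If the arbitrary 1e300 is indeed the source, replacing it with a true infinity might look something like this (a sketch only; the name chi2_init is illustrative, and the multiplication trick is used because float('inf') is not portable across older Python versions):

```python
# Generate +inf portably: on IEEE platforms 1e300 * 1e300 overflows to inf.
inf = 1e300 * 1e300

# Hypothetical initial chi-squared value: any finite chi-squared computed
# later is guaranteed to be strictly smaller, which is not true of 1e300.
chi2_init = inf
```

The advantage is that inf compares sanely against any finite value, whereas an optimiser stepping near 1e300 can overflow and generate the very NaNs we are trying to avoid.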
I will also look into placing an upper iteration limit on the backtracking line search to prevent it from hanging. Chris, if you set the 'print_flag' value to, say, 5, does relax just sit there doing nothing or does the line search continue forever? If it hangs without printing anything, it could be a bug in Numeric. In version 24.x the eigenvalue function freezes and ctrl-c does nothing (try the eigenvalue Hessian modification together with Newton minimisation to see the result); you have to manually 'kill' relax. If that's the case, mandating the use of Numeric 23.x, as Scientific Python does, might solve the problem. But avoiding NaNs at all costs, together with iteration limits, should cover all issues (well, apart from the 'none' minimisation option).
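An iteration cap on the backtracking search could look like this (a sketch of a standard Armijo backtracking line search, not relax's actual code; the names and the max_iter default are assumptions):

```python
def backtracking(f, x, p, grad, alpha=1.0, rho=0.5, c=1e-4, max_iter=50):
    """Armijo backtracking line search with an upper iteration limit.

    Returns a step length alpha satisfying the sufficient decrease
    condition, or None if max_iter halvings fail - so the search can
    never hang indefinitely.
    """
    fx = f(x)
    slope = sum(g * pi for g, pi in zip(grad, p))   # directional derivative
    for i in range(max_iter):
        x_new = [xi + alpha * pi for xi, pi in zip(x, p)]
        if f(x_new) <= fx + c * alpha * slope:      # sufficient decrease test
            return alpha
        alpha *= rho                                # shrink the step and retry
    return None                                     # caller handles the failure

# Minimising f(x) = x^2 from x = 1 along the steepest descent direction:
step = backtracking(lambda x: x[0] ** 2, [1.0], [-2.0], [2.0])
```

A None return would then let the minimiser terminate cleanly with a warning instead of looping forever on a NaN-contaminated function value.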
Edward