I also agree that, even with the strictest tests, a long double implementation of the algorithm produces errors far smaller than anything we could ever hope to measure experimentally. About 23 decimal places appear to be needed to reach machine precision, which is overkill. The paper makes all these points in a manner that I hope is clear (see the last figure).
Best, Andy.
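As an aside on the precision point above: hardware "long double" typically tops out at about 18-19 significant decimal digits (the 80-bit x87 extended type), so reaching ~23 digits would need software arbitrary precision. A minimal sketch for checking what your platform actually provides (the printed precision for np.longdouble is platform-dependent; on Windows and many ARM systems it is just a 64-bit double):

```python
import numpy as np

# Inspect the decimal precision and machine epsilon of each numpy float type.
# On x86 Linux, np.longdouble is the 80-bit extended type (~18 digits);
# elsewhere it may be identical to np.float64.
for t in (np.float32, np.float64, np.longdouble):
    info = np.finfo(t)
    print(f"{t.__name__:12s} decimal digits: {info.precision:2d}  eps: {info.eps}")
```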
The tcp in this function in relax is defined as 1/(4*nu_CPMG), but in your script you have "tcp=1./(2*nu_cpmg);". Apart from that, it looks ok. Changing to the correct tcp gives:

Relative times of the various approaches:
    Baldwin        0.159632    1.0
    Numerical     19.974315  125.127261451
    Meiboom        0.056497    0.353920266613
    Carver         0.080027    0.501321790117
    Nikolai        0.453068    2.83820286659
    NikolaiFun     0.284052    1.77941766062
    NikolaiLong   50.020116  313.34642177
    NikolaiLong   50.430594  315.917823494

Maximum error over grid (s-1):
    Meiboom:                  237214256.582
    Baldwin:                  6.57168541807e-10
    CarverRichards:           29.9287668504
    Nikolai fun:              104.735381561
    Nikolai double (9):       142.46676955
    Nikolai long double (18): 0.063449549192819563320223
    Nikolai long double (23): 5.7775487944482434059452e-7

That's far more reasonable for Nikolai's fun! I was hoping that the modification would make it more stable. If anything, the reverse is true. I would have included a note to this effect in the paper (and still will if you can show it's better). Note that the error is very temperamental: you can miss it simply by setting up the grid search differently. Give it a narrow grid and you won't see the error. Only relatively specific, but seemingly hard-to-predict, sets of parameters (all with large deltaOs) cause it to explode. It is worth looking at all deviations rather than just the single maximum R2eff difference. It could be that there is an edge case in the grid where a maths domain error or a trig function overflow occurs. The median deviation, which avoids such outliers, might also be of worth.
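For reference, the tcp correction above amounts to a factor of two in the inter-pulse delay. A minimal sketch (the nu_cpmg values here are hypothetical, chosen only to show the difference):

```python
import numpy as np

# CPMG frequencies in Hz (hypothetical example values).
nu_cpmg = np.array([50.0, 100.0, 200.0, 500.0])

# Incorrect definition used in the script: half the correct delay.
tcp_wrong = 1.0 / (2.0 * nu_cpmg)

# Definition used in relax: tcp = 1/(4*nu_CPMG).
tcp = 1.0 / (4.0 * nu_cpmg)

print(tcp_wrong)  # twice the correct values
print(tcp)
```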
By switching all numpy.max() calls in your code to numpy.median(), and making sure the "Nikolai long double (*)" results are pre-converted to a numpy float64 array, I see an incredibly different picture:

Relative times of the various approaches:
    Baldwin        0.152725    1.0
    Numerical     19.596446  128.3119725
    Meiboom        0.05604     0.366934031756
    Carver         0.077236    0.505719430349
    Nikolai        0.43944     2.87732853167
    NikolaiFun     0.273977    1.7939237191
    NikolaiLong   50.243433  328.979754461
    NikolaiLong   50.784841  332.524740547

Median error over grid (s-1):
    Meiboom:                  4.0064828013
    Baldwin:                  3.9630521087e-12
    CarverRichards:           0.00661151728704
    Nikolai fun:              6.62225829728e-12
    Nikolai double (9):       6.70041799822e-12
    Nikolai long double (18): 2.62099543458e-12
    Nikolai long double (23): 2.27758174227e-12

So this is much clearer: outliers are the major difference you are seeing. A study of such outliers and their cause would be quite interesting. In most of these cases, simple collapsed equations for each model should be derivable at the edges, which would then produce the expected R2eff over the entire parameter space rather than computational artefacts in some corners. Anyway, this result makes it crystal clear that the different models should not be judged by the single maximum outlier over the entire parameter space.

Regards,

Edward
Attachment:
240514 exchange14_supp.pdf
Description: Adobe PDF document
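The max-versus-median point above can be illustrated with a toy grid of deviations (all values here are synthetic, not from the actual dispersion scripts): a single pathological grid point dominates numpy.max() completely while barely moving numpy.median().

```python
import numpy as np

# Synthetic |R2eff_model - R2eff_reference| deviations over a 10000-point grid:
# a typical numerical noise floor, plus one exploding corner case.
rng = np.random.default_rng(0)
deviations = np.abs(rng.normal(scale=1e-11, size=10000))
deviations[42] = 142.5  # one pathological grid point (e.g. a large-deltaO corner)

print("max:   ", np.max(deviations))      # dominated entirely by the outlier
print("median:", np.median(deviations))   # reflects the typical accuracy

# Long-double results can be pre-converted to float64 before the statistics,
# as suggested in the email above:
as_float64 = np.asarray(deviations, dtype=np.float64)
print("median (float64):", np.median(as_float64))
```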