Hi, On 8 May 2014 15:12, Troels Emtekær Linnet <tlinnet@xxxxxxxxxxxxx> wrote:
Yep, I have just done it. I wish that these features could be implemented: 1) sr #3138: Interpolating theta through the spin-lock offset [Omega], rather than the spin-lock field strength [w1] https://gna.org/support/index.php?3138
This isn't too difficult or too much work, but it will require a bit of pre-planning. It'll be very easy to head down the wrong path!
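As a rough sketch of what sr #3138 is asking for, here is offset-based interpolation with numpy - note the array names and values are hypothetical placeholders, not relax's actual data structures:

```python
import numpy as np

# Hypothetical dispersion data: R2eff measured at a few spin-lock offsets (Hz).
offsets = np.array([0.0, 200.0, 400.0, 600.0])
r2eff = np.array([12.0, 10.5, 9.8, 9.5])

# Interpolate on a finer grid of offsets [Omega], rather than over the
# spin-lock field strength [w1].
fine_offsets = np.linspace(offsets.min(), offsets.max(), 101)
r2eff_interp = np.interp(fine_offsets, offsets, r2eff)
```

The real implementation would pull these arrays out of relax's data pipe structures, which is where the pre-planning comes in.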
2) sr #3124: Grace graphs production for R1rho analysis with R2_eff as function of Omega_eff https://gna.org/support/index.php?3124
This will be similar in concept to sr #3138. With pre-planning, i.e. discussing on the mailing list so I can point you in the right direction as needed, this and the above sr would take less time to implement than the B14 models.
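For sr #3124, the effective field axis itself is just standard R1rho rotating-frame geometry. A minimal sketch, with hypothetical parameter values:

```python
import numpy as np

# Hypothetical spin-lock parameters: offset Omega and field strength w1 (rad/s).
Omega = np.array([-2000.0, 0.0, 2000.0])
w1 = 1500.0

# Effective field in the rotating frame, and the tilt angle theta.
omega_eff = np.sqrt(Omega**2 + w1**2)
theta = np.arctan2(w1, Omega)
```

Plotting R2eff against omega_eff rather than w1 is then only a matter of which array is handed to the Grace graph production code.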
But I don't have much time for it now, so it will have to wait.
Add it as you need it :)
There are still some issues with: bug #22017: LinAlgError for all numerical CPMG models. https://gna.org/bugs/?22017 And the B14 model also blows up with NaN.
These are separate issues - they are not related. For bug #22017 you identified R*tcp[i] as containing NaNs, but using print statements I can identify these NaNs as originating much earlier in the code! You should try the same to find the origin - temporary printouts are a useful debugging tool. You'll be surprised what the origin is ;)

As for NaNs, almost all dispersion models had the NaN problem until I fixed them! Here is a demo of some of these problems:

    import numpy
    a = numpy.zeros(2)
    print(1/a)
    a = numpy.array([None, None], numpy.float64)
    print(a)
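One way to catch such problems at their origin, rather than where the NaNs surface much later, is to make numpy raise exceptions on invalid floating point operations. A minimal sketch of the idea (this is the effect the --numpy-raise flag is built around, though the flag itself does more):

```python
import numpy as np

# Promote silent floating point warnings to exceptions, so the traceback
# points at the origin of the problem rather than a downstream symptom.
np.seterr(all='raise')

a = np.zeros(2)
try:
    print(1 / a)  # division by zero now raises instead of returning inf
except FloatingPointError as err:
    print("Caught at the origin:", err)
```

With this in place, a stray divide-by-zero or 0/0 deep inside a dispersion model fails loudly at the exact line that produced it.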
These bugs can probably soon be closed as Won't-Fix, since they are not a problem of relax. ;-)

bug #22024: Minimisation space for CR72 is catastrophic. The chi2 surface over dw and pA is bounded. https://gna.org/bugs/?22024
bug #22021: Model B14 shows bad fitting to data. https://gna.org/bugs/?22021
True, there's nothing to fix there.
bug #21799: Insufficient recommendations/warning message for the execution of the dauvergne protocol with 1 field. https://gna.org/bugs/?21799 Something should be done here - at least a little notice about the number of fields. The GUI check can be added later.
This is already in the GUI. The problem is that some users just refuse to measure at multiple fields and like to complain hard! There's absolutely nothing we can do in the software that will change such stubborn refusals ;)

The single field strength issue is documented (see the 'About' button). The second sentence is "Importantly, data at multiple magnetic field strengths is essential for this analysis." This is also in the manual. And if you look at gui/analyses/auto_model_free.py, you will see from line 373:

    # Relaxation data.
    if not hasattr(cdp, 'ri_ids') or len(cdp.ri_ids) == 0:
        missing.append("Relaxation data")

    # Insufficient data.
    if hasattr(cdp, 'ri_ids') and len(cdp.ri_ids) <= 3:
        missing.append("Insufficient relaxation data, 4 or more data sets are essential for the execution of the dauvergne_protocol auto-analysis.")

This will block the execution of the analysis, popping up the missing data window. It's also mentioned in the relax manual. But I've just added a few more sentences mentioning it in more prominent places in the manual, just so you can close this thing and have less to do.
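The logic of that check can be exercised in isolation. A minimal sketch with a dummy stand-in for relax's current data pipe (the DummyPipe class and the shortened message are mine, the hasattr/len pattern mirrors the snippet above):

```python
class DummyPipe:
    """Hypothetical stand-in for relax's current data pipe (cdp)."""
    pass

cdp = DummyPipe()
cdp.ri_ids = ['R1_600', 'R2_600', 'NOE_600']  # only 3 relaxation data sets

missing = []

# No relaxation data at all.
if not hasattr(cdp, 'ri_ids') or len(cdp.ri_ids) == 0:
    missing.append("Relaxation data")

# Insufficient data - fewer than 4 data sets.
if hasattr(cdp, 'ri_ids') and len(cdp.ri_ids) <= 3:
    missing.append("Insufficient relaxation data, 4 or more data sets are essential.")

print(missing)  # the missing data window would list this, blocking the analysis
```

With 3 data sets loaded, only the "insufficient data" message fires, and the auto-analysis refuses to start.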
And then these two: bug #21788: Only a warning is raised for a missing R1 relaxation rate for the off-resonance R1rho relaxation dispersion models. (Status: Confirmed, Thu 13 Mar 2014 12:38:58 PM UTC)
This is a trivial 3-4 line fix ;) Really, this is incredibly easy!
#22019 The IT99 model is listed with parameter kex instead of tex.
This is also rather easy :) Though it is spread out in various parts of relax.
Phew....
It sounds like a lot, but it should only be a few days to whack the entire lot. I believe that you are now capable enough with relax that you could knock off the last 5 in this list in 10 minutes! 3 are simple bug closures.

The Omega_eff part is a bit more work, but not too much. You just need to understand the data structures used at that point in relax. And the maths domain checking is important, but with the --numpy-raise flag it's easy to work out the failure points and fix them. Unfortunately such checks do make the code slower!

Regards,

Edward