Hi.

The logic is as follows:

If the global pA = 1.0 or kex = 0.0:
- Return the whole array as R20, i.e. flat lines.
- Any superfluous R20 values in the array get cleaned up by multiplying with the disp_structure of values 1/0.

If the spin-specific dw (or a similar variable) = 0.0:
- Find the mask.
- Do the calculation.
- Replace the spin-specific positions in the array with R20.

If there are math violations:
- Find the mask.
- Replace the offending values with 1.0, so the calculation does not go to inf or nan.
- Do the calculation.
- Replace with 1e100 or R20.

Hm, am I consistent here? Forgotten math violations get caught by:

fix_invalid(back_calc, copy=False, fill_value=1e100)

and get cleaned up by multiplying with the disp_structure of values 1/0.

2014-06-18 11:55 GMT+02:00 Edward d'Auvergne <edward@xxxxxxxxxxxxx>:
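[Editor's note: the masking scheme described above could be sketched roughly as follows. This is a minimal illustration, not relax's actual back-calculation code: the function name `mask_and_clean`, the array shapes (back_calc of shape n_spins x n_points, r20 and dw of shape n_spins), and the omission of the "replace with 1.0 before calculating" step are all simplifying assumptions. Only `fix_invalid(back_calc, copy=False, fill_value=1e100)` is taken verbatim from the thread.]

```python
import numpy as np
from numpy.ma import fix_invalid

def mask_and_clean(back_calc, r20, dw, pA, kex):
    """Sketch of the masking scheme: back_calc has shape (n_spins,
    n_points); r20 and dw have shape (n_spins,)."""
    # Global check: no exchange at all, so every curve is a flat R20 line.
    if pA == 1.0 or kex == 0.0:
        back_calc[:] = r20[:, None]
        return back_calc

    # Spin-specific check: spins with dw == 0.0 show no dispersion, so
    # only their rows in the array are reset to their own R20 baseline.
    mask_no_dw = dw == 0.0
    back_calc[mask_no_dw] = r20[mask_no_dw][:, None]

    # Forgotten math violations (inf/nan) are caught and capped in place.
    fix_invalid(back_calc, copy=False, fill_value=1e100)
    return back_calc
```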
Sounds like some special masking is required to allow the r2eff/r1rho values to be calculated for all spins, even with numpy warnings, but then reset to the r20/r1rho_prime values for the spins with dw == 0.0. Should this be considered a bug? Will this be a problem if I merge the branch back to the trunk?

Cheers,

Edward

On 18 June 2014 11:42, Troels Emtekær Linnet <tlinnet@xxxxxxxxxxxxx> wrote:

Hi Ed.

A comment to "Wait for replacement, since this is spin specific.": when doing a clustered analysis, some of the spins can have dw = 0.0. I cannot just throw the whole result back as R20 if only some of the spins have dw = 0.0.

Best,
Troels

2014-06-18 9:04 GMT+02:00 Edward d'Auvergne <edward@xxxxxxxxxxxxx>:

Hi,

I'll keep a running list of things to be done prior to the merger of the branch. These include my previous points, plus a few new ones:

- Shift the once-off parameter conversions to lib.dispersion.
- The DPL94 profiling script is broken. This is not essential, but it would show far more impressive speed ups than the current scripts - really a lot more.
- Replace "##" with "#".
- You need to add your name to the copyright notice of all lib.dispersion modules, appending to the bottom of the current list of authors (these are sorted by date).
- Write a script for running all profiling scripts and making a table of speed ups to present in the release messages (I'll do that, it's an easy job :).
- For the dw == 0.0 checks, you have the comment text "Wait for replacement, since this is spin specific.". What does this mean?
- For the 'MMQ CR72' model, for a little speed and for consistency, the eta_scale variable can be shifted out of the function, just like you have done with the 'CR72' model.

To obtain the fastest Python code possible, there are still a number of minor speed ups, though compared to the current changes they are nothing. They can be reserved for a future date. Anyway, I'll keep looking at the code and expand the list as needed.

Cheers!
Edward

On 18 June 2014 08:30, Edward d'Auvergne <edward@xxxxxxxxxxxxx> wrote:

Oh, and the DPL94 profiling script is not in a functional state, so only the analytic CPMG models are covered. One R1rho model would be useful, as the speed up there, if there are many offsets, could be up to an order of magnitude greater than for the CPMG models! I think you will see a speed up of over 100 times.

Regards,
Edward

On 18 June 2014 08:19, Edward d'Auvergne <edward@xxxxxxxxxxxxx> wrote:

Hi,

I'll look into it, it should only take me a few minutes to script up. I can copy the disp_spin_speed branch scripts directly into the trunk, and they run if I remove the *_orig arguments to the r2eff_*() functions. The reason I asked if you had more plans for the profiling scripts is because you only have the B14, CR72, DPL94, and TSMFK01 models covered.

Regards,
Edward

On 17 June 2014 23:39, Troels Emtekær Linnet <tlinnet@xxxxxxxxxxxxx> wrote:

Hi Edward.

This does indeed sound very good. It would mean a lot to me to know how much my effort has paid off. But I can't allocate more time for anything not strictly needed.

Best,
Troels

On 17 Jun 2014 22:55, "Edward d'Auvergne" <edward@xxxxxxxxxxxxx> wrote:

Not quite yet ;) I have to merge this back to the trunk. But first I need to see if there is anything to clean up (whitespace, comments, formatting, naming consistency, API consistency, etc.). And then this needs to be released to all relax users, either as relax 3.2.3, or as 3.2.4 with 3.2.3 being reserved for all other trunk changes.

For presenting this, I was thinking of a timing table from your profiling scripts. Do you intend on creating a few more? Maybe for a numeric model, where I think there are speed ups, though nowhere near what you are seeing for the analytic models. I was thinking of writing one master script that runs all your profiling scripts, one after the other, then repeating this 10 times.
The log would be captured by the script, and then there will be timing statistics for each (grepping just for the func_*() target functions for a single number to use), so that an average and standard deviation can be presented for relax 3.2.2 vs. the new code. Then in the release message, it would look like:

Speed comparison for relax-3.2.2 vs. relax-3.2.3:

Single spin analysis:
CR72: 3.2+/-0.3 s vs. 2.8+/-0.2 s -> 1.14x faster
LM63: ...

Cluster of 100 spins:
CR72: 53.5+/-2.4 s vs. 3.6+/-0.2 s -> 14.9x faster

This would be a great way to strongly present these insane speed ups. What do you think?

Regards,
Edward

On Tuesday, 17 June 2014, Troels E. Linnet <NO-REPLY.INVALID-ADDRESS@xxxxxxx> wrote:

Update of task #7807 (project relax):

Percent Complete: 0% => 100%
Open/Closed: Open => Closed
Effort: 0.00 => 100

_______________________________________________________

Follow-up Comment #263:

This is now complete.

_______________________________________________________

Reply to this item at:

<http://gna.org/task/?7807>
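[Editor's note: the mean/std aggregation behind the proposed table rows could be sketched as below. The function name `speed_summary` and the input lists of repeated run times are hypothetical; the master script described in the thread was to grep the func_*() timings out of the profiling logs first, a step omitted here.]

```python
import statistics

def speed_summary(model, old_times, new_times):
    """Format one row of the speed comparison table from repeated runs
    of a profiling script against the old and new code."""
    old_mean = statistics.mean(old_times)
    old_sd = statistics.stdev(old_times)
    new_mean = statistics.mean(new_times)
    new_sd = statistics.stdev(new_times)
    # Speed-up factor is the ratio of the mean run times.
    factor = old_mean / new_mean
    return "%s: %.1f+/-%.1f s vs. %.1f+/-%.1f s -> %.1fx faster" % (
        model, old_mean, old_sd, new_mean, new_sd, factor)
```

For example, ten repeats averaging 53.5 s on the old code and 3.6 s on the new code would produce a row of the same shape as the "Cluster of 100 spins" line above.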