Re: Relaxation dispersion



Posted by Sébastien Morin on January 08, 2009 - 05:40:
Hi,

I thought about the task (should I add the implementation of relaxation
dispersion as a task on the Gna web site?) and here is how I see the
workflow of an analysis...


1.
A user will want to perform a relaxation dispersion analysis. The user
will first have to choose whether CPMG or R1rho experiments were
recorded, a choice which will determine whether intensities are
associated with different CPMG pulse train frequencies or with different
R1rho spin lock field strengths.

Let's say the user types the command:
    relax_disp.exp_type('cpmg')
and chooses the CPMG experiment.

The intensities will then be associated with different CPMG pulse train
frequencies, or with a null frequency (for the reference spectrum).

We can now assume that the user recorded a CPMG experiment from which
R2eff values are extracted as follows:
    R2eff = - ( 1 / T ) * ln( Icpmg / Iref )
where T is the constant-time relaxation delay, Icpmg is the peak height
with CPMG, and Iref is the peak height without CPMG (reference). (The
minus sign keeps R2eff positive, since Icpmg < Iref.)

There should be a function for inputting the delay T used... Should this
function be called something like:
    relax_disp.cpmg_delayT()
and accept a float?

From this step, peak intensities will be used to calculate R2eff for the
different frequencies used. How should duplicated reference spectra (if
present) be treated, given that they affect every calculated R2eff?
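The duplicated-reference question above could be handled by simple averaging. Here is a minimal sketch of such a calculation (the function name calc_r2eff and the averaging strategy are my assumptions, not existing relax code):

```python
import math

def calc_r2eff(delay_T, i_cpmg, i_refs):
    """Hypothetical helper: R2eff from peak heights at one CPMG frequency.

    delay_T -- the constant-time relaxation delay T (in seconds).
    i_cpmg  -- peak height with the CPMG pulse train applied.
    i_refs  -- list of reference peak heights; duplicated reference
               spectra are averaged so each influences R2eff equally.
    """
    # Average duplicated reference spectra into a single Iref value.
    i_ref = sum(i_refs) / len(i_refs)
    # The minus sign keeps R2eff positive, since Icpmg < Iref.
    return -(1.0 / delay_T) * math.log(i_cpmg / i_ref)
```

With T = 0.040 s, each CPMG frequency would get one call, reusing the same averaged reference list.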

In the future, we could add a function for selecting this approach or
another (if others exist and are of interest).


2.
The user will then need to choose which mathematical model to use, fast
or slow exchange. Let's say the user types a command like:
    relax_disp.select_model('fast')
and chooses fast exchange.

(Should this function be renamed to something like:
relax_disp.cpmg_select_model() since it is associated with CPMG experiments?)

The R2eff values calculated in part 1 will then be input into the chosen
equation and minimised.

This differs from standard curve fitting for R1 and R2, since the
R2eff values are first calculated and then passed to the dispersion
equations to be minimised.

Am I right?

In the case of CPMG data in the fast-exchange limit, should the
relaxation dispersion specific user commands be as follows:

    relax_disp.exp_type('cpmg')
    relax_disp.cpmg_delayT(0.040)
    relax_disp.cpmg_select_model('fast')

and should the data go as follows:

    intensities  -->  calculated [R2eff]  -->  minimised [R2, Rex, kex]

???
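For the fast-exchange limit, the model function being minimised could look like the following sketch. This uses the Luz-Meiboom form of the fast-exchange CPMG equation; the function name and the parameterisation in terms of [R2, Rex, kex] are my assumptions, not a statement of how relax will implement it:

```python
import math

def r2eff_fast(nu_cpmg, r2, rex, kex):
    """Fast-exchange CPMG dispersion (Luz-Meiboom form):

        R2eff = R2 + Rex * (1 - (4*nu/kex) * tanh(kex / (4*nu)))

    At low CPMG frequency R2eff approaches R2 + Rex; at high frequency
    the exchange contribution is refocused and R2eff approaches R2.
    """
    x = 4.0 * nu_cpmg / kex
    return r2 + rex * (1.0 - x * math.tanh(1.0 / x))
```

Minimisation would then adjust [R2, Rex, kex] so that this curve matches the R2eff values calculated in part 1.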

======================================
======================================

Moreover, what about the intermediate-exchange limit? Is it of any
importance, and should it also be implemented in relax?
    I propose yes, although I'll have to check the equations, and
everything...


Also, as for choosing whether the slow-, intermediate- or fast-exchange
equations are best suited to the data, there is, of course, the use of
the model selection implemented in relax, but also the use of the alpha
parameter (Millet et al., JACS, 2000, 122: 2867-2877 (equation 14)),
which can help determine the exchange limit...
    How could we combine both?
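If I read equation 14 of Millet et al. correctly, alpha can be estimated from Rex measured at two static field strengths. A small helper like this (hypothetical, not existing relax code, and the exact form of the equation should be checked against the paper) could flag the exchange regime alongside statistical model selection:

```python
def alpha_scaling(b0_a, rex_a, b0_b, rex_b):
    """Estimate the alpha scaling parameter of Millet et al. (JACS, 2000,
    122: 2867-2877, eq. 14) from Rex at two static fields B0.

    Roughly: alpha ~ 0 suggests slow exchange, ~ 1 intermediate,
    and ~ 2 fast exchange (where Rex scales as B0 squared).
    """
    return ((b0_a + b0_b) / (b0_b - b0_a)) * ((rex_b - rex_a) / (rex_b + rex_a))
```

Model selection could then be restricted to, or sanity-checked against, the regime that alpha points to.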


Finally, what about group fitting of dispersion data to extract better
parameters based on a common exchange rate (kex or kA, for fast- or
slow-exchange, respectively)? Should this be thought about at the
beginning of development, or will it be easy to implement once
the rest works?


Ok, enough for now... Already a lot of work for tomorrow..!

Regards,


Séb  :)





Edward d'Auvergne wrote:
Hi,

I'll comment on all your points below.


On Tue, Jan 6, 2009 at 11:15 PM, Sébastien Morin
<sebastien.morin.1@xxxxxxxxx> wrote:
  
Hi Ed,

I started a branch (relax_disp) for implementation of relaxation
dispersion code.


============================================

The CPMG relaxation dispersion approach I presently work with is derived
from pulse sequences developed in Lewis Kay's laboratory (Hansen,
Vallurupalli & Kay, 2008, J.Phys.Chem.B 112: 5898-5904). The equation
for R2eff is as follows:

R2eff = - ( 1 / T ) * ln( Icpmg / Iref )

   where T is the constant-time relaxation delay, Icpmg is the peak
height with CPMG, and Iref is the peak height without CPMG.

Then, from these R2eff values (recorded with varying CPMG frequency), one
can fit the relaxation dispersion and extract Rex, pa, pb, dw, tex, etc.,
depending on the exchange regime (slow or fast)...

I wrote an interactive Mathematica notebook covering the basics of
CPMG relaxation dispersion... You can find it at:
http://maple.rsvs.ulaval.ca/mediawiki/index.php/Relaxation_dispersion

============================================
    

I've checked out the branch to watch the changes and downloaded the
Mathematica notebook.  This notebook seems quite useful to get an idea
of what happens to the relaxation dispersion profiles.


  
A few questions come to my mind at this point.

How do we want to treat relaxation dispersion within relax?

   Do we want to support both the CPMG and R1rho relaxation dispersion
approaches?
      -> I propose starting with CPMG.
    

I would suggest that too, but relax should be designed to handle both
from the start.  Maybe a user function such as relax_disp.exp_type()
could be used to specify which was collected.  If this is done then,
once relax is running with the CPMG data, adding support for R1rho may
only take one or two days of work adding a few additional functions to
the maths_fns code.  This will be very trivial and could be added by
anyone else interested in collecting this type of data.


  
   Do we want the code to work with peak heights or with calculated R2eff?
       -> I propose starting with R2eff.
    

Here I suggest peak heights as this code already exists.  Hence
calculating R2eff can be one part of the minimisation function.  Or
maybe even calculate these as the first part of the
specific_fns.relax_disp.Relax_disp.minimise() function prior to
running the mathematical optimisation.  relax now has all the
infrastructure for peak heights, and therefore doing it this way makes
it significantly easier for the user while not being too much for the
implementor (a single small function will be sufficient).

In the future, R2eff may be input instead and this part skipped.
Maybe with the user function relax_disp.r2eff_read().


  
   Do we want to handle both slow- and fast-exchange?
      -> I propose implementing both...
    

I think we should have everything (but not all done at once, or all by
you).  First, though, the simple fast-exchange formula.  But then
possibly all the formulae in your Mathematica notebook.  These formulae
are in reality mathematical models.  Therefore they should be
specified using a user function relax_disp.select_model() mimicking
model_free.select_model().  With this user function, each
model/equation can be implemented one by one, once relax can already
successfully perform a relaxation dispersion analysis.


  
   Do we want to support the use of multiple field data simultaneously?
      -> I propose yes, otherwise users will use some other in-house
programs which allow it...
    

Is multiple field strength data not essential for relaxation
dispersion?  The frq.set() user function I recently added to relax can
be used to specify which spectrometer frequency the peak intensities
come from.


  
   Do we want the code to be able to choose between slow- and
fast-exchange automatically, using some model selection criteria?
      -> I propose yes, otherwise users will use some other in-house
programs which allow it...
    

This is statistical model selection which relax already implements.  I
have no idea which model selection technique will be best, but I would
guess AIC or BIC model selection will do a decent job.  Currently
(well, I haven't followed the relaxation dispersion literature too
closely) the way the model is selected is by using F-tests.  However
this is statistically flawed and should never, ever be done.
Statisticians would cringe if they saw what we in the NMR field do
here.  You just have to read the fine print on F-tests to see why -
F-tests are only for testing the significance of models which are
"parametric restrictions" of each other!  That is definitely not the
case here!!!  Oh, the second problem is that hypothesis testing for
model selection was abandoned by the statistics field in the 1940s,
with only likelihood ratio tests remaining today.  The reason is that
you can manipulate your tests to get the result you want - as has been
done in quite a number of model-free papers!

For implementing this - at the end - just run the model_selection()
user function and see which methods are missing from the specific_fns
code.  Only a few basic methods are required which have nothing to do
with model selection itself.
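As a sketch of how AIC selection between the dispersion models might look once chi-squared values are in hand (the model names and all numbers here are invented for illustration, and relax's own model_selection() machinery would replace this):

```python
def aic(chi2, k):
    """Akaike's Information Criterion for a chi-squared fit: chi2 + 2k,
    where k is the number of free parameters of the model."""
    return chi2 + 2 * k

# Hypothetical fit results: the slow-exchange model has two extra
# parameters but barely lowers chi-squared, so AIC keeps 'fast'.
chi2_values = {'fast': (12.3, 3), 'slow': (11.9, 5)}
aic_values = {name: aic(c, k) for name, (c, k) in chi2_values.items()}
best = min(aic_values, key=aic_values.get)
```

The model with the lowest AIC wins, penalising the extra parameters that an F-test comparison would mishandle here.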


  
Let me know if you have a few ideas...
    

I have a few other small ideas, but I'll save these until later.

Cheers,

Edward


  
Regards,


Séb  :)





Edward d'Auvergne wrote:
    
Hi,

If you would like to implement this, please feel free.  I have so much
to do currently with relax that I won't have a chance to develop this
code for a long time.  This development will have to be done within a
branch because of the disruptive nature of the code.  I would suggest
that you do something similar to what you did with the consistency
testing code, by copying and modifying the reduced spectral density
mapping code.  However in this case, I would suggest copying and
modifying the relaxation curve-fitting code.  Don't be afraid of doing
anything, as I'll check all committed code as always and every change
can always be reverted.

For a start, I would develop a system test.  Having some limited
relaxation dispersion data, as peak heights, and the final results
will be required.  Published data could be useful for this, as then it
could be checked against the published results to make sure that the
code is correct.  Then you can store the input data into
'test_suite/shared_data', write a script called something like
'test_suite/system_tests/scripts/relaxation_dispersion.py' which
implements a complete relaxation dispersion analysis (the script can
be later copied into 'sample_scripts/'), and then have code in the
system test checking if the final values are correct.
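A skeleton for such a system test might look like the following. The parameter names, values, and the run_dispersion_script() stand-in are all placeholders; the real test would execute the analysis script against the shared data and read the fitted values from the data pipe:

```python
import unittest

# Placeholder numbers standing in for the published parameter values
# the real test would check against (these are invented for illustration).
EXPECTED = {'R2': 10.5, 'Rex': 4.2, 'kex': 1500.0}

def run_dispersion_script():
    """Stand-in for running the relaxation dispersion analysis script
    and collecting the minimised parameters from the data pipe."""
    return {'R2': 10.5, 'Rex': 4.2, 'kex': 1500.0}

class RelaxDisp(unittest.TestCase):
    def test_cpmg_fast_exchange(self):
        # Run the full analysis, then compare each fitted parameter
        # against the expected value to within a tolerance.
        results = run_dispersion_script()
        for param, expected in EXPECTED.items():
            self.assertAlmostEqual(results[param], expected, places=2)
```

Once the real script exists, only run_dispersion_script() and the EXPECTED table need replacing.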

Then I would implement the necessary user functions.  Then the
specific_fns code.  And finally the maths_fns code with the relaxation
dispersion equations.  Finally debug until the system test passes.
This is the order that I develop the analyses in relax, and is the
best way to do this.  At all points you can copy and modify the
relaxation curve-fitting code as the concepts are quite similar.  So
similar in fact that implementing this analysis will not be too hard.
It might be nice if you had a protein system to run this on in the end
so you can publish the fact that you have implemented this code in
relax.  Well, assuming you decide to write this code.

Regards,

Edward


On Tue, Dec 23, 2008 at 5:40 PM, Sébastien Morin
<sebastien.morin.1@xxxxxxxxx> wrote:

      
Hi,

I saw the new pipe type 'relax_disp' pass by a few weeks ago...
Since relaxation dispersion is on the list of techniques planned to be
supported in the future, I guess this is the first step toward
implementation...

I would like to contribute to the development of this type of analysis.

What first steps could I take? Should a new branch be created to
start this? Should some unit and system tests be written? Should the
first equations be introduced?

Let me know what your plans are for this, Ed, and I'll try to help you
in any way you find useful.

Regards,


Séb  :)

_______________________________________________
relax (http://nmr-relax.com)

This is the relax-devel mailing list
relax-devel@xxxxxxx

To unsubscribe from this list, get a password
reminder, or change your subscription options,
visit the list information page at
https://mail.gna.org/listinfo/relax-devel






