Re: Reduced spectral density mapping



Posted by Sebastien Morin on December 01, 2006 - 16:13:
Hi Edward

Thanks for your help.

I have another question about reduced spectral density mapping.

With the jw_mapping.py script, one has to select the frequency
(jw_mapping.set_frq()). I would like to know if it is possible to select
datasets at multiple fields and then optimize everything together...
Would this lead to better values, as is the case with the model-free
approach?

I tried simply putting in three fields:

===============================================================
jw_mapping.set_frq(name, frq=499.719 * 1e6, frq=599.739 * 1e6,
frq=799.744 * 1e6)
===============================================================

but, as I expected, ended up with an error:

===============================================================
SyntaxError: duplicate keyword argument
===============================================================

Of course...
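The usual shape of the workaround is one analysis per field strength. A
minimal sketch (the run names are purely hypothetical, and set_frq() below
is a plain stand-in that just records the assignment, not relax's real
function):

```python
# One jw_mapping analysis per field strength, since set_frq() accepts a
# single frq keyword per call.  Run names here are purely illustrative.
field_strengths = {
    'jw_500': 499.719e6,   # run name -> proton frequency in Hz
    'jw_600': 599.739e6,
    'jw_800': 799.744e6,
}

assigned = {}

def set_frq(run, frq):
    # Stand-in for relax's jw_mapping.set_frq(run, frq=...); it simply
    # records which frequency was assigned to which run.
    assigned[run] = frq

for run, frq in field_strengths.items():
    set_frq(run, frq=frq)

print(assigned)
```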

Thanks for the help!


Séb :)



Edward d'Auvergne wrote:
On 11/29/06, Sebastien Morin <sebastien.morin.1@xxxxxxxxx> wrote:
Hi everyone!

I just started using the jw_mapping.py script to get spectral densities
out of my data.

I have some questions:

1.
Reading about reduced spectral density mapping, I thought one would
extract J(0), J(wN) and J(0.87wH). However, when reading the results
file from the jw_mapping.py script, I see J(0), J(wN) and J(wH)... Is it
the same thing in this case, or is relax using a different approach from
the one giving J(0.87wH)?

The J(0.87wH), J(0.921wH), and J(0.955wH) terms are from the paper of
Farrow et al., (1995b) JBNMR, 6, 153.  There are three methods in that
paper for determining the spectral density values.  The technique
currently used in relax is 'method 1', which is the same as in the
other Farrow et al., 1995 publication (Farrow et al., (1995a) Biochem,
34, 886) and in the Lefevre et al., (1996) Biochem, 25, 2674
publication.  If you're not worried about precision along the frequency
axis of the spectral density graphs, then essentially wH == 0.87wH.
Currently I only have the Lefevre reference on the website, but this
needs updating.
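To make the wH vs 0.87wH point concrete, here is a quick numerical check
(my own sketch, not taken from relax): assuming the spectral density falls
off as 1/w^2 at high frequency, the NOE combination 6J(wH+wN) - J(wH-wN)
equals 5J(eps*wH) for some effective frequency eps*wH, and solving for eps
with the 1H and 15N gyromagnetic ratios (gamma_n is negative) gives
roughly 0.87:

```python
import math

# 1H and 15N gyromagnetic ratios (rad s^-1 T^-1); note gamma_n < 0.
gamma_h = 2.6752e8
gamma_n = -2.7126e7

ratio = gamma_n / gamma_h                        # wN / wH, about -0.101
# Assume J(w) ~ 1/w^2 at high frequency, so J(x*wH) = J(wH) / x**2.
combo = 6.0 / (1.0 + ratio) ** 2 - 1.0 / (1.0 - ratio) ** 2
eps = math.sqrt(5.0 / combo)     # frequency at which 5*J matches the combo
print(round(eps, 3))             # prints 0.87
```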


2.
I would like to know what the units of the spectral densities are in the
results file when using jw_mapping.py. I get values ranging from 1e-10
to 1e-13; is this what one would expect?

The units are the standard seconds per radian.  Nanoseconds or
picoseconds per radian are what is normally plotted.
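As a trivial illustration of the scale (the J values below are invented,
merely spanning the 1e-10 to 1e-13 range mentioned above), converting from
s/rad to the ns/rad normally plotted is just a factor of 1e9:

```python
# Hypothetical spectral density values in s/rad, converted for plotting.
j_values = {'J(0)': 1.8e-10, 'J(wN)': 2.9e-12, 'J(wH)': 4.2e-13}

j_ns = {name: val * 1e9 for name, val in j_values.items()}  # now ns/rad
for name, val in j_ns.items():
    print(f"{name} = {val:.4g} ns/rad")
```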


3.
I would like to know if it is possible to get the spectral densities
using data from more than one field at the same time, or must one fit
the data at each field separately?

If you'd like to do something a bit fancier, there are methods 2 and 3
of Farrow et al., 1995b or the techniques used in Butterwick et al.,
2004 (as well as many other variants).  Anyway, each field strength
data set is treated separately yet the multiple frequencies can be
used to extract one J(0).  These additional techniques would be most
welcome additions to relax!
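To illustrate what 'method 1' actually computes at a single field, here is
a sketch of the standard reduced-mapping algebra under the
J(wH +/- wN) ~= J(wH) approximation, using commonly assumed 15N-1H
constants (the bond length and CSA values are my assumptions, not from
this thread; double-check the prefactors against Farrow et al., 1995b
before relying on them):

```python
import math

# Commonly assumed constants for a backbone 15N-1H pair (SI units).
GAMMA_H = 2.6752e8       # 1H gyromagnetic ratio (rad s^-1 T^-1)
GAMMA_N = -2.7126e7      # 15N gyromagnetic ratio (rad s^-1 T^-1)
H_BAR = 1.0546e-34       # reduced Planck constant (J s)
MU0_4PI = 1e-7           # mu0 / (4 pi)
R_NH = 1.02e-10          # assumed N-H bond length (m)
CSA = -172e-6            # assumed 15N CSA

def _d2_c2(frq):
    """Squared dipolar and CSA interaction constants at a 1H frequency (Hz)."""
    w_n = 2.0 * math.pi * frq * GAMMA_N / GAMMA_H   # 15N Larmor (rad/s)
    d2 = (MU0_4PI * H_BAR * GAMMA_H * GAMMA_N / R_NH ** 3) ** 2
    c2 = (w_n * CSA) ** 2 / 3.0
    return d2, c2

def reduced_jw(r1, r2, noe, frq):
    """Invert R1, R2, NOE to J(0), J(wN), J(wH), method-1 style."""
    d2, c2 = _d2_c2(frq)
    sigma = r1 * (noe - 1.0) * GAMMA_N / GAMMA_H    # cross-relaxation rate
    j_wh = 4.0 * sigma / (5.0 * d2)
    j_wn = (r1 - 7.0 * d2 / 4.0 * j_wh) / (3.0 * d2 / 4.0 + c2)
    j_0 = (r2 - (3.0 * d2 / 8.0 + c2 / 2.0) * j_wn
           - 13.0 * d2 / 8.0 * j_wh) / (d2 / 2.0 + 2.0 * c2 / 3.0)
    return j_0, j_wn, j_wh

def rates_from_jw(j_0, j_wn, j_wh, frq):
    """Forward relations under the same approximation, for a round-trip check."""
    d2, c2 = _d2_c2(frq)
    r1 = d2 / 4.0 * (7.0 * j_wh + 3.0 * j_wn) + c2 * j_wn
    r2 = (d2 / 8.0 * (4.0 * j_0 + 3.0 * j_wn + 13.0 * j_wh)
          + c2 / 6.0 * (4.0 * j_0 + 3.0 * j_wn))
    noe = 1.0 + (GAMMA_H / GAMMA_N) * (5.0 * d2 / 4.0 * j_wh) / r1
    return r1, r2, noe

# Round trip with invented J values at 599.739 MHz: the inversion should
# recover the inputs to machine precision.
frq = 599.739e6
j_in = (2.0e-9, 3.0e-10, 5.0e-12)
j_out = reduced_jw(*rates_from_jw(*j_in, frq), frq)
print(j_out)
```

The round trip only tests the algebra; against real data the quality of
the J(wH +/- wN) ~= J(wH) approximation also matters, which is exactly the
wH vs 0.87wH distinction from question 1.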


4.
In the results file, when using the jw_mapping.py script, what does
'remap_table' stand for?  With the few tests I have made, I always get
[0,0,0]...

The 'remap_table' is an internal relax data structure.  It links the
NMR data to the frequency data.  If you have data at two field
strengths, then the first field is '0' and the second '1'.  Hence
you'll have a remap table of [0,0,0,1,1,1] if the R1, R2, and NOE have
been collected at both.  'ri_labels', 'remap_table', 'frq_labels', and
'frequencies' are used to store info about the relaxation data (so it
can easily be read back into relax later).
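A toy reconstruction of that bookkeeping (my own sketch of the idea, not
relax's actual code):

```python
# One remap_table entry per relaxation data set, giving the index of the
# field strength it was recorded at.
frq_labels = ['600', '800']          # two field strengths
ri_labels = ['R1', 'R2', 'NOE']      # data collected at each field

remap_table = [frq_index
               for frq_index in range(len(frq_labels))
               for _ in ri_labels]
print(remap_table)   # prints [0, 0, 0, 1, 1, 1]
```

With a single field strength and the three usual data sets, the same
construction yields [0, 0, 0], matching the output above.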


5.
I would like to propose that the fitted values and errors be output
separately from the Monte Carlo simulations when using the jw_mapping.py
script.  This would make analysis easier, as the file one would look at
most closely would be the values-and-errors one.  This file would also
be more practical if the values and errors were on the same line.  This
would reduce the amount of gawk, sort, tail, head, etc. needed...

The results file is designed to contain absolutely every last bit of
data associated with a run; it's just one massive repository of data.
The best way to view this file is with the command 'less -S
results.bz2'.  However, instead of manually extracting the data from
this file, it can be read back into relax, and user functions can then
be used to create files containing the specific values together with
their errors using 'value.write()', or to view them using
'value.display()', 'grace.view()', 'grace.write()', or
'molmol.write()'.  You can use these with any value, any simulation, or
any x-y data combination you can think of.  The documentation
associated with these functions will tell you all the possible
combinations.

I hope this helps,

Edward


-- 
         ______________________________________    
     _______________________________________________
    |                                               |
   || Sebastien Morin                               ||
  ||| Etudiant au PhD en biochimie                  |||
 |||| Laboratoire de resonance magnetique nucleaire ||||
||||| Dr Stephane Gagne                             |||||
 |||| CREFSIP (Universite Laval, Quebec, CANADA)    ||||
  ||| 1-418-656-2131 #4530                          |||
   ||                                               ||
    |_______________________________________________|
         ______________________________________    




