RE: Addition of PDC support to relax 1.3.10 and an inconsistency in the error calculation.



Posted by Neidig, Klaus-Peter on February 21, 2011 - 19:25:
Dear Edward,

Proposal: instead of writing "used integral errors" I will write "experimental 
integral errors".

If there are no replicates then there is only an RMSD per spectrum, so all 
values in each column are identical, but the different columns usually have at 
least slightly different values. If values inside a column differ then there 
must have been replicates and the error is the sum of 2 errors, the latter 
calculated from the replicates using the selected method (worst case.., 
average variance..).
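
In code form, that reading of the table might look like the following minimal 
sketch (hypothetical numbers, not PDC output; rows are peaks, columns are 
spectra):

import numpy as np

# Hypothetical "experimental integral errors" table: rows are peaks,
# columns are the individual spectra.
errors = np.array([[0.31, 0.92],
                   [0.31, 0.87],
                   [0.31, 0.95]])

for j in range(errors.shape[1]):
    col = errors[:, j]
    if np.all(col == col[0]):
        print("Spectrum %d: one RMSD for all peaks (%g), no replicates" % (j, col[0]))
    else:
        print("Spectrum %d: values differ, replicate errors were added" % j)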

It is a good idea to guide the relax users to select proper PDC options. I 
can copy these identically into our handbook. Perhaps I will just prepare 2-3 
pictures tomorrow (I'm currently at home) and send them to you.

Indeed the heart of Marquardt is the Hessian matrix. If I remember correctly, 
the square roots of the diagonal elements of its inverse are the errors of 
the fitted parameters, at least in the case of uncorrelated fit parameters. I 
have to look into the code tomorrow.
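
As a minimal sketch of that recipe (with a hypothetical 2x2 curvature matrix; 
note that conventions differ by a factor of 2 depending on whether the full 
chi-squared Hessian or the half-Hessian curvature matrix is inverted):

import numpy as np

# Hypothetical curvature matrix alpha = J^T.J at the chi-squared minimum
# for a two-parameter fit (I0, R1).  With this convention the covariance
# is alpha^-1; if the full Hessian H = 2*alpha is used, it is 2*H^-1.
alpha = np.array([[4.0e2, 1.0e1],
                  [1.0e1, 2.5e3]])

# Parameter errors: square roots of the diagonal of the inverse matrix.
print(np.sqrt(np.diag(np.linalg.inv(alpha))))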

With your latest comparison the errors of the fitted parameters in relax and 
the PDC are very similar, although they could be even more similar. I used 
the y error values as given in the section so far called "used integral 
errors" together with the "from fit using calculated y uncertainties" option, 
which means I passed these y errors directly into Marquardt. I might also do 
what you rather call bootstrapping tomorrow, to see how these compare. The 
tests we have done internally were against Matlab, with even more similar 
results.

Providing test data and test results is an excellent idea. The PDC currently 
cannot read any other program's results, so if you provide something which 
you have already validated I can write some code to read it, redo the 
calculations and make an automated comparison. If by chance this were in a 
format similar to what you already get from the PDC, reading it would be 
easy, but any other format is also easy.

I can change "Monte Carlo" into "Bootstrapping" in the PDC dialog windows and 
documentation. I would however need to
discuss this here, since in another project the same technique is used also 
under the name Monte Carlo. In fact in this
other project the american customer for whom it was programmed validated this 
method. Another option would be that you tell
me how your MC works and I implement the same in the PDC. Sorry that I'm not 
a Python programmer/user, I
do everything in Java today.

Best regards,
Peter





Dr. Klaus-Peter Neidig
Software Development / Head of Analysis Group

Bruker BioSpin GmbH
Silberstreifen
76287 Rheinstetten, Germany
Phone: +49 721 5161-6447
Fax: +49 721 5161-6480

Peter.Neidig@xxxxxxxxxxxxxxxxx
www.bruker.com

Bruker BioSpin GmbH: Sitz der Gesellschaft/Registered Office: Rheinstetten, 
HRB 102368 Amtsgericht Mannheim
Geschäftsführer/Managing Directors: Jörg Laukien, Dr. Bernd Gewiese, Dr. 
Gerhard Roth


This message and any attachments may contain trade secrets or privileged, 
undisclosed or otherwise confidential information. If you have received this 
e-mail in error, you are hereby notified that any review, copying or 
distribution of it and its attachments is strictly prohibited. Please inform 
the sender immediately and delete/destroy the original message and any 
copies. Thank you.
________________________________________
From: edward.dauvergne@xxxxxxxxx [edward.dauvergne@xxxxxxxxx] On Behalf Of 
Edward d'Auvergne [edward@xxxxxxxxxxxxx]
Sent: Monday, February 21, 2011 6:24 PM
To: Neidig, Klaus-Peter
Cc: Neidig, Klaus-Peter; relax-devel@xxxxxxx; Michael Bieri
Subject: Re: Addition of PDC support to relax 1.3.10 and an inconsistency in 
the error calculation.

The fit error determination in the PDC depends on the method the user has 
chosen. Under the tag "Fit parameter Error estimation method:" the options are:
- from fit using arbitrary y uncertainties
- from fit using calculated y uncertainties
- from Monte Carlo simulations


Which of these does the "SECTION:  used integral errors" belong to?
This section appears to contain the errors associated with each peak
intensity from "SECTION:  used integrals".  I simply took these to be
individual peak errors and then performed standard Monte Carlo
simulations on these.  Each peak appears to have a different error, but
does "Fit parameter Error estimation method:  from fit using arbitrary
y uncertainties" mean one error for all peaks?



Perhaps the label "used integral errors" is misleading. We could just leave 
out the word "used". Whether these errors are used or not during the fit 
depends on the chosen fit error calculation. The way you used the errors 
corresponds to "from fit using calculated y uncertainties", but the sample 
output files I had sent so far used the option "from fit using arbitrary y 
uncertainties". In the meantime I made a comparison and the differences 
between these methods are about those you see. I have a bug somewhere else in 
the code which I must fix first, then I will send new output files that all 
use "from fit using calculated y uncertainties". The first option (~ one 
error for all peaks) is normally not recommended but is described in most 
books, and there may be cases where it is useful, e.g. if there is no 
reasonable y error calculation from the data.

Hi,

Ah, so these are measured errors and not back-calculated errors from the fit. 
 And they are not used when fitting with a single error value.  How do these 
numbers arise?  The one error for all peaks is required for duplicated 
spectra.  And if not all are duplicated, then it needs to be one error for 
all peaks for all spectra.  If there is an RMSD measurement, then you have 
the same error for all peaks in one spectrum.  The example files contain 
different errors for every single peak, but where do these come from and how 
are they calculated?



The first 2 are offered by our standard Marquardt implementation.


The Levenberg-Marquardt optimisation and Monte Carlo simulations are
two unrelated techniques: one is for optimisation, the other for error
propagation.  A very low quality error estimation method comes from
the covariance matrix.  So one can use Levenberg-Marquardt (or any
other optimisation algorithm) and then determine the errors using the
covariance matrix defined at the solution.  I don't understand the
differences between the error estimation methods in the PDC :S




The first option assumes the same error for all y values. This then cancels 
out of the formula, therefore we call it arbitrary.
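
The cancellation is easy to demonstrate: a constant sigma only scales the 
chi-squared value, so it cannot move the minimum. A quick sketch (hypothetical 
data, assuming numpy and scipy):

import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, r):
    return i0 * np.exp(-r * t)

t = np.array([0.01, 0.05, 0.1, 0.5, 1.0])
y = 100.0 * np.exp(-2.0 * t) + np.array([0.3, -0.5, 0.2, -0.1, 0.4])

# Two fits with different constant sigmas: the fitted parameters are
# identical, as the constant cancels out of the minimisation.
p1, _ = curve_fit(decay, t, y, p0=[90.0, 1.0], sigma=np.full(5, 1.0))
p2, _ = curve_fit(decay, t, y, p0=[90.0, 1.0], sigma=np.full(5, 5.0))
print(p1, p2)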


But this could be used for either the Monte Carlo simulations or the
covariance matrix error estimation methods.  In relax, this is used in
combination with MC simulations.



The second option uses the y errors as determined before. As we discussed 
earlier, the base plane RMS is one source of these errors; the other is 
optional repetition experiments. If repetition experiments are available the 
systematic error can be calculated as (see the sketch below):
- worst case per peak scenario
- average variance calculation
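
Based on the earlier thread, the two estimates for duplicated spectra might 
look roughly as follows (a hypothetical sketch; the exact PDC formulas may 
differ in detail):

import numpy as np

# Hypothetical duplicate measurements of three peaks in one spectrum.
y1 = np.array([100.2, 50.3, 25.4])
y2 = np.array([99.1, 51.0, 24.6])

# "Worst case per peak scenario": the full difference between the two
# repetitions is taken as the error of each individual peak.
worst_case = np.abs(y1 - y2)

# "Average variance calculation": the two-point variances of all
# duplicated peaks are pooled into one averaged error for every peak.
average_variance = np.sqrt(np.mean((y1 - y2)**2 / 2.0))

print(worst_case, average_variance)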


As I showed using theoretical examples in
https://mail.gna.org/public/relax-devel/2010-11/msg00027.html (data,
Python and Java scripts are attached to https://gna.org/task/?7180),
the 2 worst case per peak methods on average significantly
overestimate the errors, and have huge variances, meaning that the real
errors compared to the estimates are multiplied by a normal
distribution of 1.594+/-0.852.  So 95% of the estimates from
duplicated spectra (assuming a normal distribution) will be scaled by
a factor between -0.11 and 3.298!  This type of error estimate would
be fatal for subsequent model-free analysis.

Therefore in relax we will need to catch "Systematic error estimation
of data:  worst case per peak scenario" and re-perform the relaxation
curve-fitting and Monte Carlo simulations.  Or maybe throw a relax
error and tell the user that they must go back to the PDC and use
"average variance calculation" with "Monte Carlo simulations".  In the
example file you sent
(http://svn.gna.org/viewcvs/relax/1.3/test_suite/shared_data/pdc_files/testT1.txt?revision=12551&view=markup),
it says "Fit parameter Error estimation method:  from fit using
arbitrary y uncertainties".  Above you said that this corresponds to 1
error for all peaks.  But you described in
https://mail.gna.org/public/relax-devel/2010-11/msg00021.html that the
current "worst case per peak scenario" method in the PDC uses a
different estimate for each peak.  The file also says "Random error
estimation of data: RMS per spectrum (or trace/plane)".  So I am quite
confused as to how to interpret the data.  Where do I read the peak
intensity errors from?




We had this discussion before and as a result I implemented the average 
variance method as you suggested. The default value is however set to the 
first option, since in older PDC versions this was the only option and I did 
not want users to suddenly get different results if they overlook this 
detail. However, in the documentation we can describe this more explicitly 
and recommend the average variance option.

Maybe I'll add a catch to relax then, telling the user to switch to this 
method in the PDC rather than using the "worst case per peak scenario" 
option.  This will prevent them from pushing data into relax with overly 
large errors - something which would really change the optimised values and 
also the model-free models selected.  The result of this would be 
catastrophic - it would hide real molecular motions.



The test output files I have sent so far calculated the fit parameter errors 
via "from fit using arbitrary y uncertainties". This option was chosen 
accidentally. The error there is different from the other 2 methods. Perhaps 
it is a better idea to generate a new set of output files using the option 
"from fit using calculated y uncertainties". That usually compares well with 
MC simulations.


What is the "from fit using calculated y uncertainties"?  Is this the
covariance matrix technique for error estimation?  For simple single
exponentials, the covariance matrix technique and Monte Carlo
simulations should not be too different, I hope.  How does this
combine with "worst case per peak scenario" and "average variance
calculation"?




Marquardt internally uses a covariance matrix technique. I do not fully 
understand the problem. One thing is to get the y values and the errors of 
the y values. The y values are used to fit a function. When doing this, the 
determined y errors can be used or not.

Are we both talking about this technique 
http://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm ?  In this 
method, you use the Jacobian matrix during optimisation and not the 
covariance matrix.  The covariance matrix is estimated using this Jacobian 
matrix, but only at the end.  The covariance matrix is not used at any point 
during optimisation as it is not part of the algorithm.

The matrix J^T.J is used as part of the algorithm and then if you invert 
this, you have the covariance matrix:

covar = (J^T.J)^-1

The elements of this estimated covariance matrix can then be used as a very 
rough estimate of the fitted parameter uncertainties.  Is this what "from fit 
using calculated y uncertainties" means?
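
As a sketch of this estimate for the exponential model (hypothetical data; 
the rows of the Jacobian are weighted by 1/sigma, following the Numerical 
Recipes convention):

import numpy as np

def jacobian(t, i0, r):
    # Jacobian of i0*exp(-r*t) with respect to (i0, r).
    e = np.exp(-r * t)
    return np.column_stack([e, -i0 * t * e])

t = np.array([0.01, 0.05, 0.1, 0.5, 1.0])
sigma = np.full(5, 0.5)     # peak intensity errors
i0, r = 100.0, 2.0          # fitted parameter values at the solution

# covar = (J^T.J)^-1, with J weighted by the y errors.
J = jacobian(t, i0, r) / sigma[:, None]
covar = np.linalg.inv(J.T @ J)

# Rough parameter error estimates from the diagonal elements.
print(np.sqrt(np.diag(covar)))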



In the PDC I do 1000 MC simulations in which I vary the y values based on a 
Gaussian random distribution; the mean is the original y, the width is the 
error of y.


You mean you take the back-calculated y value and randomise this 1000
times with a Gaussian distribution centred at the back-calculated y
and with a width set to the peak intensity error?  If you take the
original y, this is not Monte Carlo simulations but Bootstrapping - a
technique which should never be used for error estimation.  Well, you
can use bootstrapping for a direct calculation of values, but never
for optimised values.
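
For comparison, the Monte Carlo procedure as relax uses it, sketched in 
Python (hypothetical data; the essential point is that the Gaussian noise is 
added to the back-calculated curve, not to the measured values):

import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, r):
    return i0 * np.exp(-r * t)

rng = np.random.default_rng(0)
t = np.array([0.01, 0.05, 0.1, 0.5, 1.0])
err = np.full(5, 0.5)
y = 100.0 * np.exp(-2.0 * t) + rng.normal(0.0, 0.5, 5)

# 1. Fit the measured data once, then back-calculate the fitted curve.
params, _ = curve_fit(decay, t, y, p0=[90.0, 1.0])
y_fit = decay(t, *params)

# 2. Monte Carlo: randomise the back-calculated values and refit.
#    (Replacing y_fit with y here would turn this into bootstrapping.)
sims = []
for _ in range(1000):
    p_sim, _ = curve_fit(decay, t, rng.normal(y_fit, err), p0=params)
    sims.append(p_sim)

# 3. The parameter errors are the standard deviations over the simulations.
print(np.std(sims, axis=0))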



No, we do not take back-calculated y values and randomise these. We take the 
original y values, randomise them and use them as input for the fit. I'm now 
confused myself by what you write, but you are certainly more experienced in 
these fields.

I would suggest having a look at Numerical Recipes in C (or in Fortran, 
etc.).  For example in the newer version, see section 15.6 "Confidence limits 
on estimated model parameters".  This also has a good section on the 
Levenberg-Marquardt algorithm, describes how bad the covariance matrix is for 
error estimation, and has a section about bootstrapping directly after the 
Monte Carlo section.  Things are explained very nicely in this book.  But 
taking the original y values should be considered a bug.



I am quite confused.  The way I see the analysis, using the notations
from the statistics and optimisation fields, there are the following
categories:

Peak intensity notation:
  - Height (the physical height of the Lorentzian peak at its mean,
or maximum point)
  - Volume (sum of all points, i.e. the discrete integration of the
Lorentzian)
  - Height (as above but via fitting the peak shape to a function)
  - Volume (as above but via fitting the peak shape to a function)

Peak errors:
  - 1 average error for all peaks (from some spectra duplicated,
triplicated, etc.)
  - 1 max error for all peaks (largest ever occurring difference)
  - 1 average error for all peaks in one spectrum (only if all spectra
are duplicated)
  - 1 max error for each peak across all spectra (current "worst case
per peak scenario"?).
  - Each peak has a different error (from the baseplane RMSD)

Optimisation methods:
  - Levenberg-Marquardt
  - Simplex
  - Newton
  - etc.

Error estimation methods:
  - Covariance matrix (bad)
  - Jackknife (also bad)
  - Bootstrapping (never to be used with optimisation)
  - Monte Carlo simulations (considered the gold standard)
  - etc.

I am having trouble classifying the PDC file
http://svn.gna.org/viewcvs/relax/1.3/test_suite/shared_data/pdc_files/testT1.txt?revision=12551&view=markup
into these categories.  This is very, very confusing :S

Regards,

Edward


In summary the PDC does:
1. Determine all y (4 options available).

2. Determine all y errors. The RMS calculation can always be done. If 
   repetition experiments are available they are used, errors are calculated 
   from them (2 options available) and added. The errors coming from the RMS 
   calculation and the ones coming from the repetitions are assumed to be 
   independent. The latter is usually much larger. The output contains only 
   the total error, not the individual contributions. If no repetition 
   experiments are available the error comes from the RMS only.
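
If the two contributions are treated as independent, "added" presumably means 
added in quadrature; a one-line sketch (hypothetical values):

import numpy as np

rms_error = 0.31        # from the base plane RMS, one value per spectrum
replicate_error = 1.2   # from the repetition experiments (usually larger)

# Independent error sources combine in quadrature.
print(np.sqrt(rms_error**2 + replicate_error**2))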

This is where I am lost.  At no point is there 1 error value per peak per 
spectrum.  But this is what is in the example PDC file in the "SECTION:  used 
integral errors".  From the RMSD, there is 1 error for all peaks per 
spectrum.  For replicated spectra there is 1 error for all peaks per spectrum 
(unless not all are replicated, in which case there is 1 error for all peaks 
for all spectra).  Am I missing something?


3. The y values are used for fitting. The y errors can be used by Marquardt, 
   which internally does a covariance type analysis, but they may also be 
   replaced by a "unique" error used for all peaks. As a third option, 
   Marquardt can be run N times with randomised y values, using the "unique" 
   error for all peaks.

The first and second are simply the errors used in the optimisation target 
function, which should be a chi-squared statistic.  This is part of the 
Levenberg-Marquardt algorithm.

This third option is technically Bootstrapping.  For fitting, it gives 
neither correct parameter values nor correct parameter errors.


I would suggest the following:

Below is a testT1_rel.txt made this morning. It used the errors of y as given 
in the section "used integral errors".
Perhaps you can re-run your script and check the ratio of your and our errors.

I have re-run this, and the errors are very similar to what relax determines 
using Monte Carlo simulations.  The results are at the very end.  However I 
have used the 1 error per peak per spectrum as given in the PDC file, but I 
don't know the source of this error.


If the differences in the errors remain I have to talk to some people here 
because I cannot just change major things in the code. I have to report to a 
responsible person and he finally tells me what I should do. There are some 
people here with "simulation experience". What really surprises me at the 
moment is that several people have used the PDC (and its newer sections) and 
compared fits and errors to other programs and did not report problems to me.

I've noticed this reporting issue too ;)  It can take a while for bugs to 
surface in relax.  To counter this, we have built in a test suite (thanks to 
Gary Thompson) which includes tests comparing to other programs.  Luckily 
relax can control both Modelfree4 from Palmer and Dasha, so comparing 
model-free results can all be done internally in relax.  It might be worth 
setting up a test data set and pushing it through both relax 
(http://nmr-relax.com) and Art Palmer's curvefit 
(http://biochemistry.hs.columbia.edu/labs/palmer/software/curvefit.html).  
Both relax and curvefit can use the Levenberg-Marquardt optimisation 
algorithm for finding the parameter values and the Monte Carlo simulation 
method for determining the parameter errors.  These results could be used as 
a baseline and internal check for the PDC.  I would not call these changes 
major though, more a refinement and improvement of the error estimates.  The 
fitted values are not changing, only the error estimates are being improved.

Cheers,

Edward



[edward@localhost pdc_files]$ relax convert_data.py



                                     relax repository checkout

                              Molecular dynamics by NMR data analysis

                             Copyright (C) 2001-2006 Edward d'Auvergne
                         Copyright (C) 2006-2011 the relax development team

This is free software which you are welcome to modify and redistribute under 
the conditions of the
GNU General Public License (GPL).  This program, including all modules, is 
licensed under the GPL
and comes with absolutely no warranty.  For details type 'GPL' within the 
relax prompt.

Assistance in using the relax prompt and scripting interface can be accessed 
by typing 'help' within
the prompt.

script = 'convert_data.py'
----------------------------------------------------------------------------------------------------
#! /usr/bin/env python

# The data - PDC T2, PDC T2 err, PDC scale factor, relax R2, relax R2 err (MC 
sim).
data = [
['Gln',  2,  0.4560, 0.012326 ,   2.22814 ,       2.19316068443,      
0.0264822775112],
['Ile',  3,  0.4289, 0.0089800 ,   2.22814 ,       2.33150942395,      
0.0218282475286],
['Phe',  4,  0.4259, 0.015866 ,   2.22814 ,       2.34809773828,      
0.0393993956573],
['Val',  5,  0.4446, 0.0084807 ,   2.22814 ,       2.24905317514,      
0.0190416227806],
['Lys',  6,  0.4386, 0.0032737 ,   2.22814 ,       2.27991249993,      
0.0073387122964],
['Thr',  7,  0.4505, 0.012887 ,   2.22814 ,       2.21988325334,      
0.0289980021014],
['Leu',  8,  0.4464, 0.017357 ,    2.22814,       2.23992585719,      
0.0394637142884],
['Thr',  9,  0.5081, 0.016893 ,    2.22814,       1.96827404586,      
0.0285585555608],
['Gly', 10,  0.4789, 0.0045383 ,    2.22814,       2.08825352037,     
0.00856203608884],
['Lys', 11,  0.5006, 0.0087386 ,    2.22814,       1.99766965295,      
0.0154821098023],
['Thr', 12,  0.4621, 0.028354 ,    2.22814,       2.16418266589,      
0.0589894950385],
['Ile', 13,  0.4400, 0.014724 ,   2.22814 ,       2.27249997894,      
0.0323797055027],
['Thr', 14,  0.4547, 0.011743 ,   2.22814 ,       2.19913359459,      
0.0261282530191],
['Leu', 15,  0.4387, 0.033559 ,   2.22814 ,       2.27947735732,      
0.0804008170395],
['Glu', 16,  0.4792, 0.038163 ,    2.22814,       2.08663871613,      
0.0742976416989],
['Val', 17,  0.4377, 0.013371 ,   2.22814 ,       2.28454679926,      
0.0319478148557],
['Glu', 18,  0.4744, 0.025534 ,   2.22814 ,       2.10794161516,      
0.0491390514111],
['Ser', 20,  0.4458, 0.0075985 ,   2.22814 ,        2.2432376545,      
0.0174075720472],
['Asp', 21,  0.4257, 0.0036873 ,   2.22814 ,       2.34905882789,     
0.00896916086091],
['Thr', 22,  0.4396, 0.012438 ,   2.22814 ,       2.27492782819,      
0.0283784241729],
['Ile', 23,  0.4054, 0.018904 ,    2.22814,       2.46692508258,      
0.0512139594604],
['Asn', 25,  0.4174, 0.010689 ,   2.22814 ,       2.39550908267,       
0.028141856637],
['Val', 26,  0.4688, 0.035345 ,    2.22814,       2.13297271051,      
0.0738836932379],
['Lys', 27,  0.4233, 0.013666 ,   2.22814 ,       2.36245343286,       
0.034473000886],
['Ala', 28,  0.4193, 0.014360 ,   2.22814 ,       2.38481618154,      
0.0369276054641],
['Lys', 29,  0.4298, 0.011476 ,   2.22814 ,       2.32688199592,      
0.0267154381038],
['Thr', 30,  0.4132, 0.026029 ,    2.22814,       2.42020223098,      
0.0641345037416],
['Gln', 31,  0.4216, 0.023199 ,   2.22814 ,       2.37198006924,      
0.0607157552927],
['Asp', 32,  0.4268, 0.011464 ,   2.22814 ,         2.343030317,      
0.0287089229986],
['Lys', 33,  0.4560, 0.014005 ,   2.22814 ,       2.19288519999,      
0.0322404521468],
['Glu', 34,  0.4468, 0.0067758 ,   2.22814 ,       2.23809730593,       
0.016155226898],
['Gly', 35,  0.4452, 0.0072146 ,   2.22814 ,       2.24610605696,      
0.0166231784999],
['Ile', 36,  0.5023, 0.011122 ,   2.22814 ,       1.99097003714,       
0.018556395961],
['Asp', 39,  0.4360, 0.0041799 ,   2.22814 ,       2.29368308937,      
0.0107946336402],
['Gln', 40,  0.4435, 0.0097303 ,   2.22814 ,       2.25465715695,      
0.0219579437623],
['Gln', 41,  0.4496, 0.010974 ,   2.22814 ,       2.22439489827,      
0.0246838738103],
['Arg', 42,  0.4386, 0.060259 ,    2.22814,       2.27979894813,        
0.14609662846],
['Leu', 43,  0.4603, 0.027054 ,   2.22814 ,       2.17239195077,      
0.0562093749792],
['Ile', 44,  0.4387, 0.012389 ,   2.22814 ,       2.27944810393,      
0.0274015962376],
['Phe', 45,  0.4413, 0.014084 ,   2.22814 ,       2.26621382042,      
0.0325598012054],
['Ala', 46,  0.4601, 0.017401 ,    2.22814,       2.17327103743,      
0.0366263114163],
['Gly', 47,  0.4563, 0.0095167 ,    2.22814,       2.19159971705,      
0.0200974396044],
['Lys', 48,  0.4543, 0.034380 ,    2.22814,       2.20125816954,      
0.0769594227925],
['Gln', 49,  0.4730, 0.0099851 ,   2.22814 ,       2.11409184343,      
0.0211565921351],
['Leu', 50,  0.4455, 0.020395 ,   2.22814 ,       2.24441747047,      
0.0466054926297],
['Glu', 51,  0.4687, 0.018214 ,   2.22814 ,       2.13351495913,      
0.0369010861993],
['Asp', 52,  0.4888, 0.017327 ,   2.22814 ,       2.04583349976,      
0.0309251585403],
['Arg', 54,  0.4561, 0.0059463 ,   2.22814 ,       2.19228267695,      
0.0125176652031],
['Thr', 55,  0.4396, 0.013766 ,   2.22814 ,       2.27502074947,      
0.0319568936121],
['Leu', 56,  0.4221, 0.0064286 ,   2.22814 ,       2.36913113746,      
0.0158844775147],
['Ser', 57,  0.4351, 0.0057569 ,   2.22814 ,       2.29814478201,      
0.0142640516301],
['Asp', 58,  0.4207, 0.010105 ,   2.22814 ,       2.37720483191,      
0.0272911357828],
['Tyr', 59,  0.4493, 0.0055646 ,   2.22814 ,       2.22553611629,      
0.0125933167265],
['Asn', 60,  0.4377, 0.0058785 ,   2.22814 ,       2.28459962433,       
0.013370924242],
['Ile', 61,  0.4270, 0.010546 ,   2.22814 ,       2.34213728573,      
0.0264153868176],
['Gln', 62,  0.5067, 0.012662 ,   2.22814 ,       1.97367886288,      
0.0217945020797],
['Lys', 63,  0.4619, 0.0094808 ,   2.22814 ,       2.16500024708,      
0.0195312534223],
['Glu', 64,  0.4312, 0.0095453 ,   2.22814 ,       2.31913930394,      
0.0223294049604],
['Ser', 65,  0.4568, 0.0043694 ,   2.22814 ,       2.18918681844,     
0.00908363177241],
['Thr', 66,  0.4584, 0.0054046 ,   2.22814 ,       2.18167008464,      
0.0112902195625],
['Leu', 67,  0.4410, 0.018436 ,    2.22814,       2.26772345178,      
0.0444564014823],
['His', 68,  0.4451, 0.0043067 ,   2.22814 ,       2.24680733455,     
0.00926065649886],
['Leu', 69,  0.4447, 0.015813 ,   2.22814 ,       2.24851949133,      
0.0379125750722],
['Val', 70,  0.4319, 0.011793 ,   2.22814 ,       2.31543911345,      
0.0279226422469],
['Leu', 71,  0.4596, 0.0046414 ,   2.22814 ,        2.1756894166,     
0.00987045835498],
['Arg', 72,  0.4622, 0.012209 ,   2.22814 ,       2.16361975928,      
0.0253849160526],
['Leu', 73,  0.5386, 0.016241 ,    2.22814,       1.85657597782,      
0.0255460422485],
['Arg', 74,  0.6202, 0.020482 ,    2.22814,        1.6122677679,      
0.0231001217925],
['Gly', 75,  0.8603, 0.038094 ,    2.22814,       1.16243924266,       
0.023448186193],
['Gly', 76,   1.311, 0.011609 ,    2.22814,      0.763064480891,      
0.0030557315479]]

r1 = []
r1_err = []
print("%-4s %-3s %-15s %-15s %-15s %-15s" % ('Name', 'Num', 'r1', 'err', 
'r1_relax_ratio', 'err_relax_ratio'))
# Convert each relaxation time T to a rate R = 1/T.  The error is first
# divided by the PDC scale factor, then propagated as sigma_R = sigma_T / T**2.
for i in range(len(data)):
    r1.append(1.0/data[i][2])
    r1_err.append((data[i][3] / data[i][4]) / data[i][2]**2)

    print("%-4s %-3s %15.10f %15.10f %15.10f %15.10f" % (data[i][0], 
data[i][1], r1[-1], r1_err[-1], r1[-1]/data[i][5], r1_err[-1]/data[i][6]))

print("\nr1 = %s" % r1)
print("\nr1_err = %s" % r1_err)
----------------------------------------------------------------------------------------------------
Name Num r1              err             r1_relax_ratio  err_relax_ratio
Gln  2      2.1929824561    0.0266041877    0.9999187345    1.0046034622
Ile  3      2.3315458149    0.0219089603    1.0000156083    1.0036976307
Phe  4      2.3479690068    0.0392563299    0.9999451763    0.9963688334
Val  5      2.2492127755    0.0192553009    1.0000709634    1.0112216314
Lys  6      2.2799817601    0.0076376394    1.0000303785    1.0407329060
Thr  7      2.2197558269    0.0284983531    0.9999425977    0.9827695359
Leu  8      2.2401433692    0.0390916333    1.0000971068    0.9905715673
Thr  9      1.9681165125    0.0293674283    0.9999199637    1.0283233068
Gly  10     2.0881186051    0.0088809833    0.9999353933    1.0372513237
Lys  11     1.9976028765    0.0156501208    0.9999665729    1.0108519458
Thr  12     2.1640337589    0.0595936413    0.9999311948    1.0102415911
Ile  13     2.2727272727    0.0341332766    1.0001000193    1.0541564881
Thr  14     2.1992522542    0.0254909885    1.0000539575    0.9756101371
Leu  15     2.2794620470    0.0782584545    0.9999932834    0.9733539709
Glu  16     2.0868113523    0.0745875623    1.0000827341    1.0039021502
Val  17     2.2846698652    0.0313233584    1.0000538689    0.9804538617
Glu  18     2.1079258010    0.0509198386    0.9999924978    1.0362397543
Ser  20     2.2431583670    0.0171595251    0.9999646549    0.9857506292
Asp  21     2.3490721165    0.0091318485    1.0000056570    1.0181385574
Thr  22     2.2747952684    0.0288863527    0.9999417301    1.0178984051
Ile  23     2.4666995560    0.0516230601    0.9999085799    1.0079880682
Asn  25     2.3957834212    0.0275352936    1.0001145220    0.9784462342
Val  26     2.1331058020    0.0721789076    1.0000623972    0.9769260903
Lys  27     2.3623907394    0.0342296457    0.9999734626    0.9929407026
Ala  28     2.3849272597    0.0366574491    1.0000465772    0.9926841616
Lys  29     2.3266635644    0.0278814427    0.9999061270    1.0436453477
Thr  30     2.4201355276    0.0684217822    0.9999724389    1.0668482370
Gln  31     2.3719165085    0.0585767922    0.9999732035    0.9647708728
Asp  32     2.3430178069    0.0282452147    0.9999946607    0.9838479390
Lys  33     2.1929824561    0.0302281071    1.0000443508    0.9375832257
Glu  34     2.2381378693    0.0152332221    1.0000181240    0.9429283912
Gly  35     2.2461814915    0.0163365171    1.0000335846    0.9827553178
Ile  36     1.9908421262    0.0197839980    0.9999357545    1.0661551968
Asp  39     2.2935779817    0.0098684839    0.9999541751    0.9142027654
Gln  40     2.2547914318    0.0222022254    1.0000595544    1.0111249767
Gln  41     2.2241992883    0.0243651941    0.9999120615    0.9870895586
Arg  42     2.2799817601    0.1405860375    1.0000801878    0.9622811903
Leu  43     2.1724961981    0.0573069191    1.0000479874    1.0195259972
Ile  44     2.2794620470    0.0288907296    1.0000061168    1.0543447654
Phe  45     2.2660321777    0.0324575464    0.9999198475    0.9968594762
Ala  46     2.1734405564    0.0368915808    1.0000780018    1.0072425911
Gly  47     2.1915406531    0.0205136512    0.9999730498    1.0207096813
Lys  48     2.2011886419    0.0747614857    0.9999684146    0.9714403125
Gln  49     2.1141649049    0.0200303096    1.0000345593    0.9467644631
Leu  50     2.2446689113    0.0461196303    1.0001120295    0.9895749994
Glu  51     2.1335609132    0.0372111378    1.0000215391    1.0084022359
Asp  52     2.0458265139    0.0325475652    0.9999965853    1.0524623555
Arg  54     2.1925016444    0.0128287457    1.0000998810    1.0248513199
Thr  55     2.2747952684    0.0319705364    0.9999008884    1.0004269112
Leu  56     2.3691068467    0.0161935932    0.9999897470    1.0194602345
Ser  57     2.2983222248    0.0136479695    1.0000772113    0.9568087574
Asp  58     2.3769907297    0.0256241117    0.9999099353    0.9389170128
Tyr  59     2.2256843980    0.0123713940    1.0000666274    0.9823777394
Asn  60     2.2846698652    0.0137711736    1.0000307454    1.0299343099
Ile  61     2.3419203747    0.0259590946    0.9999073876    0.9827262730
Gln  62     1.9735543714    0.0221339042    0.9999369242    1.0155728308
Lys  63     2.1649707729    0.0199437392    0.9999863861    1.0211192658
Glu  64     2.3191094620    0.0230403782    0.9999871323    1.0318402214
Ser  65     2.1891418564    0.0093978203    0.9999794618    1.0345884315
Thr  66     2.1815008726    0.0115433500    0.9999224392    1.0224203287
Leu  67     2.2675736961    0.0425448547    0.9999339621    0.9570017664
His  68     2.2466861379    0.0097563407    0.9999460583    1.0535258134
Leu  69     2.2487069935    0.0358870316    1.0000833892    0.9465733073
Val  70     2.3153507756    0.0283736636    0.9999618484    1.0161525308
Leu  71     2.1758050479    0.0098615796    1.0000531470    0.9991004674
Arg  72     2.1635655560    0.0256494311    0.9999749479    1.0104201645
Leu  73     1.8566654289    0.0251268237    1.0000481807    0.9835896879
Arg  74     1.6123831022    0.0238982645    1.0000715355    1.0345514485
Gly  75     1.1623852145    0.0231001211    0.9999535217    0.9851559902
Gly  76     0.7627765065    0.0030314259    0.9996226080    0.9920458727

r1 = [2.1929824561403506, 2.3315458148752621, 2.3479690068091101, 
2.2492127755285649, 2.2799817601459189, 2.2197558268590454, 
2.2401433691756272, 1.9681165124975399, 2.0881186051367719, 
1.9976028765481419, 2.1640337589266392, 2.2727272727272729, 
2.1992522542335604, 2.2794620469569185, 2.0868113522537564, 
2.2846698652044779, 2.1079258010118043, 2.2431583669807091, 
2.3490721165139767, 2.2747952684258417, 2.4666995559940799, 
2.3957834211787254, 2.1331058020477816, 2.3623907394283012, 
2.3849272597185784, 2.3266635644485807, 2.4201355275895451, 
2.3719165085388996, 2.3430178069353325, 2.1929824561403506, 
2.2381378692927485, 2.2461814914645104, 1.9908421262193909, 
2.2935779816513762, 2.254791431792559, 2.2241992882562278, 
2.2799817601459189, 2.1724961981316535, 2.2794620469569185, 
2.2660321776569226, 2.1734405564007826, 2.1915406530791146, 
2.201188641866608, 2.1141649048625792, 2.244668911335578, 2.1335609131640707, 
2.0458265139116203, 2.1925016443762333, 2.2747952684258417, 
2.369106846718787, 2.2983222247759136, 2.3769907297361539, 
2.2256843979523704, 2.2846698652044779, 2.3419203747072599, 
1.9735543714229324, 2.164970772894566, 2.3191094619666046, 
2.1891418563922942, 2.1815008726003491, 2.2675736961451247, 
2.2466861379465288, 2.2487069934787498, 2.3153507756425098, 
2.1758050478677111, 2.1635655560363478, 1.8566654288897142, 
1.6123831022250887, 1.1623852144600721, 0.76277650648360029]

r1_err = [0.026604187674262224, 0.021908960327458739, 0.039256329888494042, 
0.019255300851873068, 0.0076376393742018656, 0.028498353066316331, 
0.039091633313933949, 0.029367428292696705, 0.0088809832670172228, 
0.015650120819053141, 0.059593641324394583, 0.03413327663803193, 
0.025490988510701602, 0.078258454528893073, 0.074587562253744674, 
0.031323358447156807, 0.050919838563005242, 0.017159525098024916, 
0.0091318484997306177, 0.028886352705770644, 0.051623060062711716, 
0.027535293649668169, 0.072178907573441836, 0.034229645721721408, 
0.036657449069796476, 0.02788144268955875, 0.068421782245556562, 
0.058576792228853564, 0.028245214723318316, 0.030228107121372902, 
0.015233222109109863, 0.016336517069652536, 0.019783997987628171, 
0.0098684839256478925, 0.022202225374497418, 0.024365194102863259, 
0.14058603752635557, 0.057306919080342175, 0.028890729555661861, 
0.032457546373300998, 0.036891580812815593, 0.020513651173707466, 
0.074761485726614368, 0.020030309594628797, 0.046119630340251502, 
0.03721113783141361, 0.03254756520159726, 0.012828745705261387, 
0.031970536368197362, 0.016193593172273917, 0.013647969516280896, 
0.025624111685121922, 0.012371394017083925, 0.013771173631860843, 
0.025959094636482449, 0.022133904172475703, 0.019943739155053694, 
0.023040378158236934, 0.009397820347615311, 0.011543349996187752, 
0.042544854747859025, 0.0097563406703048468, 0.035887031574619428, 
0.028373663586608974, 0.0098615795560442095, 0.025649431054374158, 
0.02512682372176913, 0.02389826446011489, 0.023100121087805757, 
0.0030314258702649616]


