Re: Relax_fit.py problem


Posted by Chris MacRaild on October 16, 2008 - 01:09:

Well, the Jackknife technique
(http://en.wikipedia.org/wiki/Resampling_(statistics)#Jackknife) does
something like this.  It uses the scatter already present in the
collected data to estimate the parameter errors.  It's not great, but is useful
when errors cannot be measured.  You can also use the covariance
matrix from the optimisation space to estimate errors.  Both are rough
and approximate, and in convoluted spaces (the diffusion tensor space
and double motion model-free models of Clore et al., 1990) are known
to have problems.  Monte Carlo simulations perform much better in
complex spaces.
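
As a concrete illustration only, a bare-bones jackknife for the
two-parameter exponential fit might look like the sketch below.  This is
not code from relax; the function names and the use of numpy/scipy's
curve_fit are simply convenient choices for the example.

import numpy as np
from scipy.optimize import curve_fit

def expo(t, i0, rate):
    """Two-parameter exponential decay, I(t) = I0 * exp(-R*t)."""
    return i0 * np.exp(-rate * t)

def jackknife_errors(times, intensities, p0=(1.0, 1.0)):
    """Leave-one-out refits; return jackknife standard errors of (I0, R)."""
    n = len(times)
    estimates = []
    for i in range(n):
        mask = np.arange(n) != i
        popt, _ = curve_fit(expo, times[mask], intensities[mask], p0=p0)
        estimates.append(popt)
    estimates = np.array(estimates)
    mean = estimates.mean(axis=0)
    # Jackknife variance: (n-1)/n times the sum of squared deviations
    # of the leave-one-out estimates from their mean.
    var = (n - 1) / n * ((estimates - mean) ** 2).sum(axis=0)
    return np.sqrt(var)

# Purely synthetic example data, for illustration:
times = np.linspace(0.01, 1.0, 10)
rng = np.random.default_rng(0)
intensities = expo(times, 1.0e6, 4.0) * (1 + 0.02 * rng.standard_normal(10))
print(jackknife_errors(times, intensities, p0=(1.0e6, 5.0)))
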


I have used (and extensively tested) Bootstrap resampling for this
problem. In my hands it works very well provided the data quality is
high (which of course it must be if the resulting values are to be of
any use in model-free analysis). In other words it gives errors
indistinguishable from those derived by Monte Carlo based on duplicate
spectra. Bootstrapping, like the Jackknife, does not depend on an estimate
of peak height uncertainty. Its success presumably reflects the smooth
and simple optimisation space involved in an exponential fit to good
data - I fully expect it to fail if applied to the complex spaces of
model-free optimisation.
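
A rough sketch of what I mean is below.  Residual resampling is only one
of several bootstrap variants, and the details here are illustrative
rather than the exact code I used.

import numpy as np
from scipy.optimize import curve_fit

def expo(t, i0, rate):
    return i0 * np.exp(-rate * t)

def bootstrap_errors(times, intensities, p0=(1.0, 1.0), n_boot=500, seed=0):
    """Fit once, resample the residuals with replacement, refit, and
    return the standard deviation of the refitted (I0, R) values."""
    rng = np.random.default_rng(seed)
    popt, _ = curve_fit(expo, times, intensities, p0=p0)
    fitted = expo(times, *popt)
    residuals = intensities - fitted
    estimates = []
    for _ in range(n_boot):
        synthetic = fitted + rng.choice(residuals, size=len(residuals),
                                        replace=True)
        p_boot, _ = curve_fit(expo, times, synthetic, p0=popt)
        estimates.append(p_boot)
    return np.array(estimates).std(axis=0)
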

While on the topic, I can also confirm that baseline RMSD is a good
estimator of peak height uncertainty. In my hands no sqrt(2) correction
is required. Interestingly, there seems to be no simple relationship
between baseline RMSD and peak volume uncertainty. I never managed to
understand why that is, but perhaps it is related to the behaviour of
noise under apodisation?
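
For completeness, computing the baseline RMSD itself is trivial; the
sketch below assumes a 2D intensity array and a signal-free region chosen
by inspection (both assumptions are mine, for illustration only).

import numpy as np

def baseline_rmsd(spectrum, region):
    """RMS deviation about the mean in a signal-free slice of the spectrum.
    spectrum: 2D intensity array; region: a (rows, cols) pair of slices."""
    baseline = spectrum[region]
    return np.sqrt(np.mean((baseline - baseline.mean()) ** 2))

# The one RMSD value is then used as the height error of every peak,
# with no sqrt(2) correction, per the observation above, e.g.:
# rmsd = baseline_rmsd(spectrum, (slice(0, 64), slice(0, 64)))
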

Chris


