Re: m0 models


Posted by Martin Ballaschk on February 06, 2013 - 17:57:
Hi Edward,

thank you for your extensive comments. This helps me a lot.

On 05.02.2013, at 11:21, Edward d'Auvergne <edward@xxxxxxxxxxxxx> wrote:

>> It cannot mean that the "m0"-residues behave like a static body (S^2
>> would be 1).

> Statistically, yes.  Physically, no.  You just can't see it from the
> data you have.  That is the meaning of this model.  A good analogy is
> as follows - you could have a picture of an elephant but, if you only
> have 4 pixels in that picture, you probably won't be able to tell that
> your picture is of an elephant.

Haha! Grey pixels, you say? :)

I'm not done with the analysis of all of my complexes, but I fear that even 
with everything done "correctly" there will be "m0" all over the place, and I 
don't know how to interpret this in terms of mobility. Judging from the runs 
I have done so far, it is especially the interesting (i.e. probably more 
mobile) regions of the more interesting protein that show this behaviour. As 
I said, quite large areas disappear from my spectra from one protein variant 
to the other – so this is an indication of exchange mobility in these 
regions, which is interesting in itself! Neighbouring regions have a lot of 
"m0" (in 62 of ~220 assigned residues, minus 28 unresolved), and in the 
ellipsoidal diffusion model there are also a lot of strange Rex = 0.0000 
terms; the other models show Rex of around 10^-18 (i.e. nearly zero). 
Convergence is reached in 20-30 rounds for each diffusion model, and no 
oscillations are visible. 
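For what it's worth, here is a tiny Python sketch of how one might flag those numerically-zero Rex terms before interpreting them; the residue names, values, and the 0.01 s^-1 cutoff are all invented for illustration, and this is not a relax feature:

```python
# Fitted Rex values per residue (s^-1); names and numbers are invented.
rex_values = {"GLU45": 0.0, "LYS46": 3.2e-18, "ASP47": 4.1}

REX_CUTOFF = 0.01  # s^-1; an assumed threshold, not a relax default

# Anything below the cutoff is numerically indistinguishable from Rex = 0.
effectively_zero = [res for res, rex in rex_values.items() if rex < REX_CUTOFF]
```

On this reading, a fitted value of 10^-18 and a flat 0.0000 carry the same message: no detectable exchange contribution.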

The current data are not perfect, as the necessary (!) R1 temperature 
compensation was not used yet, and no soft pulses either. So obviously I have 
to re-record some of the data. I used only one single sample, which was 
pretty stable over the time I measured (no visible precipitation, but a very 
slight decrease in TROSY intensity). The temperature is off by less than 1 K 
(remember our fucked-up but-now-apparently-fixed calibration procedure). The 
consistency tests returned a fairly centered distribution (ratio of J(0) at 
different fields: 0.993 +/- 0.174) of moderate consistency ("J(0) test" 
(field1-field2)/field2 = 0.08).  
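As a sanity check on numbers like these, the ratio statistics are easy to reproduce in a few lines of Python; the per-residue J(0) values below are invented, and only the shape of the calculation mirrors the consistency test:

```python
import statistics

# Invented per-residue J(0) estimates from two field strengths (arbitrary units).
j0_field1 = [2.05, 1.98, 2.10, 1.85, 2.02]
j0_field2 = [2.00, 2.01, 2.03, 1.90, 2.08]

# Ratio distribution, analogous to the 0.993 +/- 0.174 figure above.
ratios = [a / b for a, b in zip(j0_field1, j0_field2)]
mean_ratio = statistics.mean(ratios)
sd_ratio = statistics.stdev(ratios)

# Normalised difference, analogous to (field1 - field2)/field2.
mean_rel_diff = statistics.mean((a - b) / b for a, b in zip(j0_field1, j0_field2))
```

A mean ratio near 1 with a small spread would indicate the two fields see the same J(0) per residue.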

That said, I don't see these stark m0 effects nearly as much in the protein I 
expect to be more rigid, although for that one I only have a dataset which is 
highly inconsistent due to large temperature differences: the sample was much 
less stable, and only old-school experiments with hard pulses were used. 

My SH3 testing data don't show this kind of behaviour (no m0 at all), but 
those spectra have incredibly fat signals. Having a "real" protein changes a 
few things, I guess, especially in terms of S/N.

Maybe it's because of more complex motions. Maybe I should have gone for 
relaxation dispersion in the first place. But "one step after another" seemed 
reasonable at the time. (I'm currently quite desperately looking for an 
introductory review on relaxation dispersion, something like Séb Morin's 
"practical guide" – do you know of one?)

> Maybe this relates to model m9 in relax.  Sometimes the very weak
> peaks, broadened by chemical exchange, are too noisy to extract
> model-free motions from.  This is visible in relax as the selection of
> model m9.  In such a case, model m0 will probably not be picked.

I excluded the really noisy/weak peaks beforehand, and m9 does get picked 
sometimes (9 times m9 as opposed to 62 times m0, out of ~220 picked signals).

> I don't know if this is completely relevant to your question, but
> noise is another issue which affects the reliability of the te
> parameters.  As te increases, so do the errors.  [...]
> Whereas
> noise shifts parameter values around randomly and governs which
> motions are statistically significant, bias on the other hand shifts
> everything in one direction.  

So do you think, if my data are too noisy, that this could be a consequence? 
I have already reached the limit in terms of scans, protein concentration and 
measuring time. Maybe I should write a grant for two new magnets ...

> Bias could probably in some cases
> hide motions, but more likely will result in artificial motions.  Bias
> could also be introduced if the spherical, spheroidal, or ellipsoidal
> diffusion is too simplified for the system or if partial dimerisation
> is occurring.


That would be the jackpot of course – throwing away all the work we did just 
to find out that the simplistic diffusion models don't fit our system. :D

> The 10 seconds for the NOE seems a little excessive (unless this is an
> IDP or very small protein).  Have you quantified the time required for
> recovery?  

I didn't really "quantify" which delay length is the minimum I can use, but I 
tested the recommendation of 10 s by Lakomek/Bax for TROSY-based sequences 
and deuterated proteins (from the paper I cited earlier). There they reported 
that, for their system, they got identical values for HSQC-based and 
TROSY-based readouts if they used the mentioned precautions. 

I tried "traditional" non-selective pulses with a 3 s interscan delay vs. 
selective pulses with 10 s on the same 45 kDa protein complex and saw large 
differences in the HetNOE values. Before that, I also tested SH3 with 
different combinations of soft/hard pulses and interscan delay lengths; the 
trend was that with non-selective hard pulses you get higher HetNOE ratios 
(sometimes > 1), and the longer the delay, the lower the HetNOE ratios get. 
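The HetNOE ratio being discussed is just the saturated/reference intensity quotient; a minimal sketch with the standard quotient error propagation, where all intensities and uncertainties are invented numbers:

```python
import math

def het_noe(i_sat, i_ref, sigma_sat, sigma_ref):
    """Steady-state {1H}-15N NOE as I_sat / I_ref, with standard
    error propagation for a quotient of two noisy intensities."""
    noe = i_sat / i_ref
    err = abs(noe) * math.sqrt((sigma_sat / i_sat) ** 2 + (sigma_ref / i_ref) ** 2)
    return noe, err

# Invented intensities (arbitrary units); a ratio > 1, like the
# hard-pulse/short-delay artefacts described above, would signal
# incomplete longitudinal recovery rather than real dynamics.
noe, err = het_noe(i_sat=7.8e5, i_ref=1.0e6, sigma_sat=2.0e4, sigma_ref=2.0e4)
```

With these numbers the ratio comes out below 1, as expected for a well-relaxed rigid amide.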

> As for the peak picking and fitting, [...]

I do it quite similarly, except that CCPN Analysis always searches for 
maxima, so I always pick positive noise (unless there is no maximum, in which 
case it picks noise at the reference position). My workaround is to set the 
boundaries of the "search box" to 0 so that the crazy search algorithm 
doesn't let the peaks wander around too much. Contrary to what one would 
expect, the routine still looks for maxima. If you ask me, that's pretty 
broken, but on the mailing list they weren't really open to discussion on 
that matter. After all, though, the difference should be tiny and not 
significant for my problems.

> One other thing you need to be
> very careful with is sample concentration.  If you require multiple
> samples then you should aim to have identical protein concentrations
> (volume does not matter).  Slight concentration differences can have a
> large effect on the global tumbling of your system, hence the data
> cannot be combined.  

What do you say – how much concentration difference is still OK? I measured 
samples between 320 and 330 µM (the maximum that is feasible for the more 
unstable complexes), which amounts to a concentration difference of ~3%. An 
additional problem is the inaccuracy of the concentration determination by 
UV (280 nm), and of course the sample degradation over time. I never 
quantified the concentration after the measurements, which in hindsight seems 
pretty stupid. I should check whether there are any differences (there 
certainly are, but I wonder whether they turn out to be significant).
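The ~3% figure follows directly from the measured range; as a sketch (the 5% acceptance threshold here is purely an assumption on my part, and whether even 3% is safe for combining relaxation data is exactly the open question):

```python
# Measured concentration range quoted above (mol/L).
c_min, c_max = 320e-6, 330e-6

# Relative spread across samples: (330 - 320) / 320, i.e. about 3.1 %.
rel_spread = (c_max - c_min) / c_min

# A hypothetical tolerance for combining data from different samples.
TOLERANCE = 0.05
within_tolerance = rel_spread <= TOLERANCE
```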

Sorry for the long sermon. I appreciate that you always read my stuff and 
answer in such a helpful and extensive manner.

Cheers
Martin

-- 
Martin Ballaschk
AG Schmieder
Leibniz-Institut für Molekulare Pharmakologie
Robert-Rössle-Str. 10
13125 Berlin
ballaschk@xxxxxxxxxxxxx
Tel.: +49-30-94793-234/315
Büro: A 1.26
Labor: C 1.10



