Re: relax-users Digest, Vol 16, Issue 1



Posted by Gary Thompson on September 04, 2007 - 12:24:
relax-users-request@xxxxxxx wrote:
Send relax-users mailing list submissions to
        relax-users@xxxxxxx

To subscribe or unsubscribe via the World Wide Web, visit
        https://mail.gna.org/listinfo/relax-users
or, via email, send a message with subject or body 'help' to
        relax-users-request@xxxxxxx

You can reach the person managing the list at
        relax-users-owner@xxxxxxx

When replying, please edit your Subject line so it is more specific
than "Re: Contents of relax-users digest..."


Today's Topics:

   1. Error propagation for duplicates, triplicates,
      quadruplicates... (Sebastien Morin)


----------------------------------------------------------------------

Message: 1
Date: Mon, 03 Sep 2007 16:38:40 -0400
From: Sebastien Morin <sebastien.morin.1@xxxxxxxxx>
Subject: Error propagation for duplicates, triplicates,
        quadruplicates...
To: relax-users@xxxxxxx
Message-ID: <46DC70D0.6040905@xxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1

Hi !

I recorded 4 sets of R1 data and would like to use them all, extracting
a mean value and an associated error...

I would like to get the opinion of someone more used to statistics than
I am...

I thought about (a quick numeric sketch of these follows the list):

1. calculating the mean error
2. calculating the standard error (should be the best way, no?)
3. calculating the standard deviation
4. extracting an error from the extremes the value can reach in each
dataset, based on the error of each dataset
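
For concreteness, a minimal Python sketch of these four options (the R1
values and errors below are made up, and the reading of option 4 is only
one possible interpretation):

from math import sqrt

# Made-up example: four replicate R1 values (1/s) for one residue and
# the per-fit errors reported for each dataset.
r1   = [1.52, 1.48, 1.55, 1.50]
errs = [0.03, 0.04, 0.03, 0.05]

n    = len(r1)
mean = sum(r1) / n

# Option 1: mean of the individual fit errors.
mean_err = sum(errs) / n

# Option 3: sample standard deviation of the replicate values.
sd = sqrt(sum((x - mean) ** 2 for x in r1) / (n - 1))

# Option 2: standard error of the mean (SD / sqrt(n)).
sem = sd / sqrt(n)

# Option 4 (one possible reading): half the range spanned by the
# extremes value +/- error over all datasets.
ext = (max(x + e for x, e in zip(r1, errs))
       - min(x - e for x, e in zip(r1, errs))) / 2.0

print(mean, mean_err, sd, sem, ext)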

What would be the best error to use from a statistical point of view,
but also from a model-free point of view?

Also, is there a way to use both the errors in the datasets and an error
extracted from the observed deviation of the data?

Note that the errors for each dataset were calculated directly from the
fits, here using the 'autoFit.tcl' script from NMRPipe with data
processed as Gaussian lines.

Also, in the case of duplicates or triplicates, should one use the same
approach?

Thanks !


Séb  :)

There are several ways forward here that are less obvious:

1. Fit all the data together... even if you have points at duplicate
times, these will still add to the fit (though note that if the data
weren't all measured under the same conditions, i.e. the signal to noise
differs, you will have to use a weighted least squares procedure).

2. Add all the intensities at the same time points together (having
weighted them by the noise intensity); this will give the traditional
root 2 increase in signal to noise each time you double the number of
points (the weighting is sketched just below).
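
To make the weighting in 2 concrete, here is a small Python sketch (the
numbers are made up, and reading "weighted by the noise intensity" as
inverse-variance weighting is my own interpretation):

import numpy as np

# Made-up example: two duplicate peak intensities measured at the same
# relaxation delay, with their baseplane noise (RMSD) estimates.
intensities = np.array([7400.0, 7350.0])
noise = np.array([120.0, 150.0])

weights = 1.0 / noise ** 2                               # inverse-variance weights
i_comb = np.sum(weights * intensities) / np.sum(weights) # weighted mean intensity
s_comb = np.sqrt(1.0 / np.sum(weights))                  # noise of the weighted mean

# With equal noise in both spectra, s_comb reduces to noise / sqrt(2),
# i.e. the root 2 gain in signal to noise when the number of points doubles.
print(i_comb, s_comb)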


Both of these are easy and avoid the 'combinatorial statistics' problem.
They may of course not be the best methods, but I would do 1; it should
be good (a rough sketch follows below).
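
A rough sketch of option 1, pooling all replicates into one weighted
least squares fit of a two-parameter exponential (the delays,
intensities and noise values below are placeholders, and scipy's
curve_fit is just one way of doing the weighted fit):

import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, r1):
    """Two-parameter exponential decay: I(t) = I0 * exp(-R1 * t)."""
    return i0 * np.exp(-r1 * t)

# Placeholder data (made up): the same relaxation delays (s) for each of
# the four replicate datasets, the measured peak intensities, and one
# baseplane noise (RMSD) estimate per dataset.
delays = np.array([0.01, 0.05, 0.1, 0.2, 0.4, 0.8])
times = [delays] * 4
intensities = [
    np.array([9900., 9400., 8700., 7400., 5500., 3000.]),
    np.array([9850., 9350., 8650., 7350., 5450., 2950.]),
    np.array([9950., 9450., 8750., 7450., 5550., 3050.]),
    np.array([9900., 9400., 8700., 7400., 5500., 3000.]),
]
noise = [120.0, 150.0, 130.0, 140.0]

# Pool everything into single arrays; sigma carries the per-dataset noise
# so that noisier datasets are down-weighted in the fit.
t_all = np.concatenate(times)
i_all = np.concatenate(intensities)
sigma = np.concatenate([np.full(len(t), s) for t, s in zip(times, noise)])

popt, pcov = curve_fit(decay, t_all, i_all, p0=[i_all.max(), 1.0], sigma=sigma)
i0_fit, r1_fit = popt
r1_err = np.sqrt(pcov[1, 1])   # error of R1 from the covariance matrix
print(r1_fit, r1_err)
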
regards
gary


--
-------------------------------------------------------------------
Dr Gary Thompson
Astbury Centre for Structural Molecular Biology,
University of Leeds, Astbury Building,
Leeds, LS2 9JT, West-Yorkshire, UK             Tel. +44-113-3433024
email: garyt@xxxxxxxxxxxxxxx                   Fax  +44-113-2331407
-------------------------------------------------------------------




