mailr24654 - in /trunk: ./ lib/dispersion/ target_functions/ test_suite/shared_data/dispersion/profiling/


Posted by edward on July 22, 2014 - 18:52:
Author: bugman
Date: Tue Jul 22 18:52:58 2014
New Revision: 24654

URL: http://svn.gna.org/viewcvs/relax?rev=24654&view=rev
Log:
Merged revisions 23745-23752,23754-23758 via svnmerge from 
svn+ssh://bugman@xxxxxxxxxxx/svn/relax/branches/disp_spin_speed

........
  r23745 | tlinnet | 2014-06-08 23:44:44 +0200 (Sun, 08 Jun 2014) | 8 lines
  
  Switch the looping from spin->frq to frq->spin.
  
  Since the number of dispersion points is the same for all spins, this 
allows the calculation of the pA and kex arrays to be moved one level up.
  
  This saves a lot of computation.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
........
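The frq->spin reordering above can be sketched as a minimal example. The shapes and the `pA * kex` "model" here are hypothetical stand-ins, not relax's actual CR72 code; the point is only that arrays which are constant across spins get built once per frequency instead of once per spin-frequency pair.

```python
import numpy as np

def chi2_hoisted(pA, kex, values, errors):
    """Loop frq->spin so per-frequency arrays are built only once.

    values/errors have hypothetical shape [n_frq][n_spins][n_points].
    """
    n_frq, n_spins, n_points = values.shape
    total = 0.0
    for mi in range(n_frq):
        # Invariant across spins: hoisted out of the inner loop.
        pA_arr = np.full(n_points, pA)
        kex_arr = np.full(n_points, kex)
        for si in range(n_spins):
            back_calc = pA_arr * kex_arr  # placeholder for the real model
            total += np.sum(((values[mi, si] - back_calc) / errors[mi, si])**2)
    return total
```

With N spins per frequency, the hoisting removes N-1 redundant array constructions per frequency per function call.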
  r23746 | tlinnet | 2014-06-08 23:44:45 +0200 (Sun, 08 Jun 2014) | 3 lines
  
  Changed all the creation of special numpy arrays to be of float64 type.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
........
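A short illustration of why the explicit float64 dtype matters (variable names are illustrative): without it, numpy infers the dtype from the contents, and an accidental integer array silently truncates on in-place float arithmetic.

```python
import numpy as np

# Without an explicit dtype, the contents decide the type:
a = np.array([1, 2, 3])               # integer dtype
b = np.array([1, 2, 3], np.float64)   # always float64

# Integer arrays silently truncate on in-place float maths:
a_copy = a.copy()
a_copy[:] = a_copy / 2                # becomes [0, 1, 1]
b[:] = b / 2                          # becomes [0.5, 1.0, 1.5]
```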
  r23747 | tlinnet | 2014-06-08 23:54:41 +0200 (Sun, 08 Jun 2014) | 5 lines
  
  Moved the filling of the special numpy error and value arrays to the 
initialisation of the Dispersion class.
  
  These values do not change, and can safely be stored outside.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
........
  r23748 | tlinnet | 2014-06-08 23:56:36 +0200 (Sun, 08 Jun 2014) | 3 lines
  
  A tiny bit more speed, gained by removing the temporary storage of the chi2 
calculation.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
........
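The removal of the chi2 temporaries amounts to computing the statistic in a single vectorised expression over the whole data structure, with no per-spin accumulation loop. A minimal standalone version:

```python
import numpy as np

def chi2(values, back_calc, errors):
    """Vectorised chi-squared over all points at once."""
    return np.sum(((values - back_calc) / errors)**2)
```

For example, two points with residuals 0.0 and 1.0 and errors of 0.5 give (0/0.5)^2 + (1/0.5)^2 = 4.0.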
  r23749 | tlinnet | 2014-06-09 00:09:59 +0200 (Mon, 09 Jun 2014) | 25 lines
  
  Changed all np.X numpy calls to the directly imported numpy functions.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
  
  Timing now shows a 17% loss for a single spin, but a 277% gain for 100 
spins.
  3 fields, 1000 iterations.
  1 spin
          1    0.000    0.000    0.677    0.677 <string>:1(<module>)
          1    0.001    0.001    0.677    0.677 pf:419(single)
       1000    0.002    0.000    0.671    0.001 pf:405(calc)
       1000    0.009    0.000    0.669    0.001 relax_disp.py:979(func_CR72_full)
       1000    0.102    0.000    0.655    0.001 relax_disp.py:507(calc_CR72_chi2)
       1003    0.160    0.000    0.365    0.000 cr72.py:101(r2eff_CR72)
      23029    0.188    0.000    0.188    0.000 {numpy.core.multiarray.array}
       4003    0.119    0.000    0.182    0.000 numeric.py:1862(allclose)
  
  100 spin
          1    0.000    0.000   19.783   19.783 <string>:1(<module>)
          1    0.002    0.002   19.783   19.783 pf:441(cluster)
       1000    0.004    0.000   19.665    0.020 pf:405(calc)
       1000    0.013    0.000   19.661    0.020 relax_disp.py:979(func_CR72_full)
       1000    6.541    0.007   19.634    0.020 relax_disp.py:507(calc_CR72_chi2)
     916108   11.127    0.000   11.127    0.000 {numpy.core.multiarray.array}
       1300    1.325    0.001    2.026    0.002 cr72.py:101(r2eff_CR72)
       4300    0.495    0.000    0.634    0.000 numeric.py:1862(allclose)
........
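The np.X to direct-import change removes one attribute lookup per call, which only matters in tight loops. A rough way to measure the difference (timings are machine-dependent, so no particular ratio is asserted):

```python
import timeit

# Calling through the module attribute: a dict lookup on every call.
np_style = timeit.timeit(
    "np.sqrt(2.0)", setup="import numpy as np", number=100000)

# Calling the directly imported function: the lookup is done once at import.
direct = timeit.timeit(
    "sqrt(2.0)", setup="from numpy import sqrt", number=100000)
```

The per-call saving is tiny, but the profile above shows these functions being hit hundreds of thousands of times per fit.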
  r23750 | tlinnet | 2014-06-09 00:49:14 +0200 (Mon, 09 Jun 2014) | 3 lines
  
  Removed unused import of numpy.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
........
  r23751 | tlinnet | 2014-06-09 00:49:15 +0200 (Mon, 09 Jun 2014) | 3 lines
  
  Changed all np.X numpy calls to the directly imported numpy functions in 
lib/dispersion/cr72.py.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
........
  r23752 | tlinnet | 2014-06-09 00:49:18 +0200 (Mon, 09 Jun 2014) | 3 lines
  
  Removed unused import of numpy.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
........
  r23754 | tlinnet | 2014-06-09 19:46:17 +0200 (Mon, 09 Jun 2014) | 3 lines
  
  Made copies of numpy arrays instead of creating them anew.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
........
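Copying a preallocated template array instead of building each structure from scratch can be sketched like this (the shape is hypothetical; relax's diff below uses deepcopy on self.ones_a and self.zeros_a in the same way):

```python
import numpy as np
from copy import deepcopy

shape = (1, 2, 3, 1, 5)            # hypothetical [ei][si][mi][oi][di]
ones_a = np.ones(shape, np.float64)

# Each structure is an independent copy of the one template...
R20A_a = deepcopy(ones_a)
kex_a = ones_a.copy()              # ndarray.copy() is the idiomatic spelling

# ...so writing to a copy leaves the template intact.
R20A_a[0, 0, 0, 0, :] = 7.0
```

Copying an existing buffer skips re-running the shape/dtype setup of a fresh `ones()` call on every structure.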
  r23755 | tlinnet | 2014-06-09 19:46:19 +0200 (Mon, 09 Jun 2014) | 3 lines
  
  Added self.frqs_a as a multidimensional numpy array.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
........
  r23756 | tlinnet | 2014-06-09 19:46:20 +0200 (Mon, 09 Jun 2014) | 3 lines
  
  Small fix for the indices into the errors and values numpy arrays.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
........
  r23757 | tlinnet | 2014-06-09 19:46:22 +0200 (Mon, 09 Jun 2014) | 5 lines
  
  Lowered the number of iterations in the profiling scripts.
  
  This allows the profiling scripts to be used as a bug finder.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
........
  r23758 | tlinnet | 2014-06-09 19:46:25 +0200 (Mon, 09 Jun 2014) | 7 lines
  
  Moved the calculation of dw_frq out of spin and spectrometer loop.
  
  This is done with a special 1/0 spin numpy array, which switches the values 
on or off in the numpy array multiplication.
  
  The multiplication first needs to axis-expand dw, and then tile the array 
to match the numpy structure.
  
  Task #7807 (https://gna.org/task/index.php?7807): Speed-up of dispersion 
models for Clustered analysis.
........
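The axis expansion, tiling, and 1/0 spin mask of r23758 can be sketched as follows. The dimensions are made up for the example; the array names mirror the commit, but this is not relax's actual code.

```python
import numpy as np

n_exp, n_spins, n_frq, n_offs, n_points = 1, 2, 2, 1, 4
shape = (n_exp, n_spins, n_frq, n_offs, n_points)

dw = np.array([3.0, 5.0])              # one chemical shift value per spin
frqs_a = np.full(shape, 800.0)         # spectrometer frequencies, expanded
spins_a = np.zeros(shape)
spins_a[..., :3] = 1.0                 # 1/0 mask: only 3 of 4 points used

# Expand dw along the spin axis, then tile to the full 5D shape.
dw_axis = dw[None, :, None, None, None]
dw_axis = np.tile(dw_axis, (n_exp, 1, n_frq, n_offs, n_points))

# One multiplication converts dw for every spin and frequency at once,
# with the mask zeroing the unused dispersion points.
dw_frq_a = dw_axis * spins_a * frqs_a
```

Note that numpy broadcasting would make the explicit tile unnecessary here (`dw[None, :, None, None, None] * spins_a` already expands along the other axes); the tile simply makes the full-sized intermediate explicit.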

Modified:
    trunk/   (props changed)
    trunk/lib/dispersion/cr72.py
    trunk/target_functions/relax_disp.py
    trunk/test_suite/shared_data/dispersion/profiling/profiling_cr72.py

Propchange: trunk/
------------------------------------------------------------------------------
--- svnmerge-integrated (original)
+++ svnmerge-integrated Tue Jul 22 18:52:58 2014
@@ -1 +1 @@
-/branches/disp_spin_speed:1-23718,23722-23742
+/branches/disp_spin_speed:1-23718,23722-23742,23745-23758

Modified: trunk/lib/dispersion/cr72.py
URL: 
http://svn.gna.org/viewcvs/relax/trunk/lib/dispersion/cr72.py?rev=24654&r1=24653&r2=24654&view=diff
==============================================================================
--- trunk/lib/dispersion/cr72.py        (original)
+++ trunk/lib/dispersion/cr72.py        Tue Jul 22 18:52:58 2014
@@ -92,8 +92,7 @@
 """
 
 # Python module imports.
-import numpy as np
-from numpy import arccosh, array, cos, cosh, isfinite, min, max, sqrt, sum
+from numpy import allclose, arccosh, array, cos, cosh, isfinite, min, max, 
ndarray, ones, sqrt, sum, zeros
 
 # Repetitive calculations (to speed up calculations).
 eta_scale = 2.0**(-3.0/2.0)
@@ -124,7 +123,7 @@
 
     # Determine if calculating in numpy rank-1 float array, of higher 
dimensions.
     rank_1 = True
-    if isinstance(num_points, np.ndarray):
+    if isinstance(num_points, ndarray):
         rank_1 = False
 
     # Catch parameter values that will result in no exchange, returning flat 
R2eff = R20 lines (when kex = 0.0, k_AB = 0.0).
@@ -135,7 +134,7 @@
             return
     # For higher dimensions, return same structure.
     else:
-        if np.allclose(dw, np.zeros(dw.shape)) or np.allclose(pA, 
np.ones(dw.shape)) or np.allclose(kex, np.zeros(dw.shape)):
+        if allclose(dw, zeros(dw.shape)) or allclose(pA, ones(dw.shape)) or 
allclose(kex, zeros(dw.shape)):
             back_calc[:] = r20a
             return
 
@@ -149,7 +148,7 @@
     k_AB = pB * kex
 
     # The Psi and zeta values.
-    if not np.allclose(r20a, r20b):
+    if not allclose(r20a, r20b):
         fact = r20a - r20b - k_BA + k_AB
         Psi = fact**2 - dw2 + 4.0*pA*pB*kex**2
         zeta = 2.0*dw * fact
@@ -199,6 +198,6 @@
         if rank_1:
             R2eff = array([1e100]*num_points)
         else:
-            R2eff = np.ones(R2eff.shape) * 1e100
+            R2eff = ones(R2eff.shape) * 1e100
 
     back_calc[:] = R2eff

Modified: trunk/target_functions/relax_disp.py
URL: 
http://svn.gna.org/viewcvs/relax/trunk/target_functions/relax_disp.py?rev=24654&r1=24653&r2=24654&view=diff
==============================================================================
--- trunk/target_functions/relax_disp.py        (original)
+++ trunk/target_functions/relax_disp.py        Tue Jul 22 18:52:58 2014
@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
 
###############################################################################
 #                                                                            
 #
 # Copyright (C) 2013-2014 Edward d'Auvergne                                  
 #
@@ -27,7 +28,7 @@
 # Python module imports.
 from copy import deepcopy
 from math import pi
-from numpy import complex64, dot, float64, int16, zeros
+from numpy import array, asarray, complex64, dot, float64, int16, max, ones, 
sum, zeros
 
 # relax module imports.
 from lib.dispersion.b14 import r2eff_B14
@@ -398,27 +399,37 @@
             # Get the shape of back_calc structure.
             # If using just one field, or having the same number of 
dispersion points, the shape would extend to that number.
             # Shape has to be: [ei][si][mi][oi].
-            back_calc_shape = list( np.asarray(self.back_calc).shape )[:4]
+            back_calc_shape = list( asarray(self.back_calc).shape )[:4]
 
             # Find which frequency has the maximum number of disp points.
             # To let the numpy array operate well together, the broadcast 
size has to be equal for all shapes.
-            self.max_num_disp_points = np.max(self.num_disp_points)
+            self.max_num_disp_points = max(self.num_disp_points)
+
+            # Define the shape of all the numpy arrays.
+            self.numpy_array_shape = back_calc_shape + 
[self.max_num_disp_points]
+
+            # Create zero and one numpy structure.
+            self.zeros_a = zeros(self.numpy_array_shape, float64)
+            self.ones_a = ones(self.numpy_array_shape, float64)
 
             # Create numpy arrays to pass to the lib function.
             # All numpy arrays have to have same shape to allow to multiply 
together.
             # The dimensions should be [ei][si][mi][oi][di]. 
[Experiment][spins][spec. frq][offset][disp points].
             # The number of disp point can change per spectrometer, so we 
make the maximum size.
-            self.R20A_a = np.ones(back_calc_shape + 
[self.max_num_disp_points])
-            self.R20B_a = np.ones(back_calc_shape + 
[self.max_num_disp_points])
-            self.pA_a = np.zeros(back_calc_shape + 
[self.max_num_disp_points])
-            self.dw_frq_a = np.ones(back_calc_shape + 
[self.max_num_disp_points])
-            self.kex_a = np.ones(back_calc_shape + 
[self.max_num_disp_points])
-            self.cpmg_frqs_a = np.ones(back_calc_shape + 
[self.max_num_disp_points])
-            self.num_disp_points_a = np.ones(back_calc_shape + 
[self.max_num_disp_points])
-            self.back_calc_a = np.ones(back_calc_shape + 
[self.max_num_disp_points])
-            self.errors_a = np.ones(back_calc_shape + 
[self.max_num_disp_points])
-            self.values_a = np.ones(back_calc_shape + 
[self.max_num_disp_points])
+            self.R20A_a = deepcopy(self.ones_a)
+            self.R20B_a = deepcopy(self.ones_a)
+            self.pA_a = deepcopy(self.zeros_a)
+            self.dw_frq_a = deepcopy(self.ones_a)
+            self.kex_a = deepcopy(self.ones_a)
+            self.cpmg_frqs_a = deepcopy(self.ones_a)
+            self.num_disp_points_a = deepcopy(self.ones_a)
+            self.back_calc_a = deepcopy(self.ones_a)
+            self.errors_a = deepcopy(self.ones_a)
+            self.values_a = deepcopy(self.ones_a)
             self.has_missing = False
+            self.frqs_a = deepcopy(self.zeros_a)
+            self.spins_a = deepcopy(self.zeros_a)
+
 
             # Loop over the experiment types.
             for ei in range(self.num_exp):
@@ -435,6 +446,16 @@
                             
self.cpmg_frqs_a[ei][si][mi][oi][:num_disp_points] = 
self.cpmg_frqs[ei][mi][oi]
                             
self.num_disp_points_a[ei][si][mi][oi][:num_disp_points] = 
self.num_disp_points[ei][si][mi][oi]
 
+                            # Extract the errors and values to numpy array.
+                            self.errors_a[ei][si][mi][oi][:num_disp_points] 
= self.errors[ei][si][mi][oi]
+                            self.values_a[ei][si][mi][oi][:num_disp_points] 
= self.values[ei][si][mi][oi]
+
+                            # Extract the frequencies to numpy array.
+                            self.frqs_a[ei][si][mi][oi][:num_disp_points] = 
self.frqs[ei][si][mi]
+                            
+                            # Make a spin 1/0 file.
+                            self.spins_a[ei][si][mi][oi][:num_disp_points] = 
ones(num_disp_points)
+
                             for di in 
range(self.num_disp_points[ei][si][mi][oi]):
                                 if self.missing[ei][si][mi][oi][di]:
                                     self.has_missing = True
@@ -516,37 +537,38 @@
         @rtype:         float
         """
 
-        # Loop over the spins.
-        for si in range(self.num_spins):
-            # Loop over the spectrometer frequencies.
-            for mi in range(self.num_frq):
-                # Extract number of dispersion points.
-                num_disp_points = self.num_disp_points[0][si][mi][0]
-
-                 # The R20 index.
+        # Expand dw to number of axis.
+        dw_axis = dw[None,:,None,None,None]
+        # Tile tw according to dimensions.
+        dw_axis = np.tile(dw_axis, (self.numpy_array_shape[0], 
self.numpy_array_shape[2],self.numpy_array_shape[3], 
self.numpy_array_shape[4]))
+
+        # Convert dw from ppm to rad/s.
+        self.dw_frq_a = dw_axis*self.spins_a*self.frqs_a
+
+        # Loop over the spectrometer frequencies.
+        for mi in range(self.num_frq):
+            # Extract number of dispersion points. Always the same per sin.
+            num_disp_points = self.num_disp_points[0][0][mi][0]
+
+            # Calculate pA and kex per frequency.
+            pA_arr = array( [pA] * num_disp_points, float64)
+            kex_arr =  array( [kex] * num_disp_points, float64)
+
+            # Loop over the spins.
+            for si in range(self.num_spins):
+                # The R20 index.
                 r20_index = mi + si*self.num_frq
 
                 # Store r20a and r20b values per disp point.
-                self.R20A_a[0][si][mi][0][:num_disp_points] = np.array( 
[R20A[r20_index]] * num_disp_points, float64)
-                self.R20B_a[0][si][mi][0][:num_disp_points]  = np.array( 
[R20B[r20_index]] * num_disp_points, float64)
-
-                # Convert dw from ppm to rad/s.
-                dw_frq = dw[si] * self.frqs[0][si][mi]
-
-                # Store dw_frq per disp point.
-                self.dw_frq_a[0][si][mi][0][:num_disp_points] = np.array( 
[dw_frq] * num_disp_points, float64)
+                self.R20A_a[0][si][mi][0][:num_disp_points] = array( 
[R20A[r20_index]] * num_disp_points, float64)
+                self.R20B_a[0][si][mi][0][:num_disp_points]  = array( 
[R20B[r20_index]] * num_disp_points, float64)
 
                 # Store pA and kex per disp point.
-                self.pA_a[0][si][mi][0][:num_disp_points] = np.array( [pA] * 
num_disp_points, float64)
-                self.kex_a[0][si][mi][0][:num_disp_points] = np.array( [kex] 
* num_disp_points, float64)
-
-                # Extract the errors and values to numpy array.
-                self.errors_a[0][si][mi][0][:num_disp_points] = 
self.errors[0][si][mi][0]
-                self.values_a[0][si][mi][0][:num_disp_points] = 
self.values[0][si][mi][0]
+                self.pA_a[0][si][mi][0][:num_disp_points] = pA_arr
+                self.kex_a[0][si][mi][0][:num_disp_points] = kex_arr
 
         ## Back calculate the R2eff values.
         r2eff_CR72(r20a=self.R20A_a, r20b=self.R20B_a, pA=self.pA_a, 
dw=self.dw_frq_a, kex=self.kex_a, cpmg_frqs=self.cpmg_frqs_a, 
back_calc=self.back_calc_a, num_points=self.num_disp_points_a)
-
 
         ## For all missing data points, set the back-calculated value to the 
measured values so that it has no effect on the chi-squared value.
         if self.has_missing:
@@ -560,14 +582,8 @@
                             #self.back_calc[0][si][mi][0][di] = 
self.values[0][si][mi][0][di]
                             self.back_calc_a[0][si][mi][0][di] = 
self.values[0][si][mi][0][di]
 
-                    ## Calculate and return the chi-squared value.
-                    #chi2_sum += chi2(self.values[0][si][mi][0], 
self.back_calc[0][si][mi][0], self.errors[0][si][mi][0])
-
         ## Calculate the chi-squared statistic.
-        chi2_sum = np.sum((1.0 / self.errors_a * (self.values_a - 
self.back_calc_a))**2)
-
-        # Return the total chi-squared value.
-        return chi2_sum
+        return sum((1.0 / self.errors_a * (self.values_a - 
self.back_calc_a))**2)
 
 
     def calc_ns_cpmg_2site_3D_chi2(self, R20A=None, R20B=None, dw=None, 
pA=None, kex=None):

Modified: trunk/test_suite/shared_data/dispersion/profiling/profiling_cr72.py
URL: 
http://svn.gna.org/viewcvs/relax/trunk/test_suite/shared_data/dispersion/profiling/profiling_cr72.py?rev=24654&r1=24653&r2=24654&view=diff
==============================================================================
--- trunk/test_suite/shared_data/dispersion/profiling/profiling_cr72.py 
(original)
+++ trunk/test_suite/shared_data/dispersion/profiling/profiling_cr72.py Tue 
Jul 22 18:52:58 2014
@@ -57,7 +57,7 @@
         # Calc for single.
         s_filename = tempfile.NamedTemporaryFile(delete=False).name
         # Profile for a single spin.
-        cProfile.run('single(iter=1000)', s_filename)
+        cProfile.run('single(iter=100)', s_filename)
 
         # Read all stats files into a single object
         s_stats = pstats.Stats(s_filename)
@@ -75,7 +75,7 @@
         # Calc for cluster.
         c_filename = tempfile.NamedTemporaryFile(delete=False).name
         # Profile for a cluster of 100 spins.
-        cProfile.run('cluster(iter=1000)', c_filename)
+        cProfile.run('cluster(iter=100)', c_filename)
 
         # Read all stats files into a single object
         c_stats = pstats.Stats(c_filename)



