mailr27941 - in /trunk: ./ specific_analyses/frame_order/ test_suite/shared_data/frame_order/cam/rotor/


Posted by edward on October 02, 2015 - 11:57:
Author: bugman
Date: Fri Oct  2 11:57:33 2015
New Revision: 27941

URL: http://svn.gna.org/viewcvs/relax?rev=27941&view=rev
Log:
Merged revisions 24522-24524 via svnmerge from 
svn+ssh://bugman@xxxxxxxxxxx/svn/relax/branches/frame_order_cleanup

........
  r24522 | bugman | 2014-07-11 10:46:21 +0200 (Fri, 11 Jul 2014) | 12 lines
  
  Parallelised the frame order grid search to run on clusters or multi-core systems via OpenMPI.
  
  This involved the creation of the Frame_order_grid_command class which is the multi-processor
  Slave_command for performing the grid search.  This was created by duplicating the
  Frame_order_minimise_command class and then differentiating both classes.
  
  For the subdivision of the grid search, the new minfx grid.grid_split_array() function is used in
  the frame order grid() API method.  The grid() method no longer calls the minimise() method but
  instead obtains the processor box itself and adds the subdivided grid slaves to the processor.  The
  relax grid_search user function takes care of the rest.
........
  r24523 | bugman | 2014-07-11 12:01:20 +0200 (Fri, 11 Jul 2014) | 8 lines
  
  Fixes for the parallelised grid search for the frame order analysis.
  
  A chi-squared value check was added to the Frame_order_result_command.run() method to check if the
  value is lower than the current value when the result is returned to the master.  Without this
  check, each grid subdivision result would be stored as it is returned rather than only the result
  from the global minimum of the entire grid search.
........
  r24524 | bugman | 2014-07-11 15:38:59 +0200 (Fri, 11 Jul 2014) | 7 lines
  
  Added a script for testing out the parameter nesting abilities of the frame order auto-analysis.
  
  This script attempts to find the dynamics solution without knowing where the pivot is located.
  Hence this will be as in the auto-analysis, where this pivot point will be used as the base for
  all other models.
........
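
The constraint handling that used to be done inline in the grid() method (see the block removed
from api.py in the diff below) is now handled by the new minfx grid.grid_split_array() function.
The following is only a rough sketch of the idea behind that subdivision step, using a hypothetical
split_grid_points() helper rather than the real minfx or relax code:  points violating the linear
constraints A.x >= b are dropped, and the surviving points are split into one chunk per slave
processor.

    # Hypothetical sketch of grid subdivision with constraint elimination - this
    # is not the minfx grid.grid_split_array() implementation.
    from numpy import array, array_split, dot, float64

    def split_grid_points(points, A=None, b=None, divisions=1):
        """Drop points violating A.x - b >= 0, then split the rest into chunks."""

        # Eliminate all points outside of the linear constraints, much as the old
        # inline code in grid() did before any queueing.
        if A is not None and b is not None:
            points = array([pt for pt in points if min(dot(A, pt) - b) >= 0.0], dtype=float64)

        # Return roughly equal-sized subdivisions, one per slave processor,
        # skipping any empty chunks.
        return [chunk for chunk in array_split(array(points, dtype=float64), divisions) if len(chunk)]
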

Added:
    trunk/test_suite/shared_data/frame_order/cam/rotor/frame_order_free_start.py
      - copied unchanged from r24524, branches/frame_order_cleanup/test_suite/shared_data/frame_order/cam/rotor/frame_order_free_start.py
Modified:
    trunk/   (props changed)
    trunk/specific_analyses/frame_order/api.py
    trunk/specific_analyses/frame_order/optimisation.py

Propchange: trunk/
------------------------------------------------------------------------------
--- svnmerge-integrated (original)
+++ svnmerge-integrated Fri Oct  2 11:57:33 2015
@@ -1 +1 @@
-/branches/frame_order_cleanup:1-23195,23197-23205,23208-23646,23650-23673,23682-23709,23713-23783,23807-23812,23856-23887,23935-23983,23985-24322,24327-24346,24352-24411,24433-24509,24514-24520
+/branches/frame_order_cleanup:1-23195,23197-23205,23208-23646,23650-23673,23682-23709,23713-23783,23807-23812,23856-23887,23935-23983,23985-24322,24327-24346,24352-24411,24433-24509,24514-24520,24522-24524

Modified: trunk/specific_analyses/frame_order/api.py
URL: http://svn.gna.org/viewcvs/relax/trunk/specific_analyses/frame_order/api.py?rev=27941&r1=27940&r2=27941&view=diff
==============================================================================
--- trunk/specific_analyses/frame_order/api.py  (original)
+++ trunk/specific_analyses/frame_order/api.py  Fri Oct  2 11:57:33 2015
@@ -25,6 +25,7 @@
 # Python module imports.
 from copy import deepcopy
 from math import pi
+from minfx.grid import grid_split_array
 from numpy import array, dot, float64, zeros
 from warnings import warn
 
@@ -40,7 +41,7 @@
 from specific_analyses.api_common import API_common
 from specific_analyses.frame_order.checks import check_pivot
 from specific_analyses.frame_order.data import domain_moving
-from specific_analyses.frame_order.optimisation import Frame_order_memo, Frame_order_minimise_command, grid_row, store_bc_data, target_fn_setup
+from specific_analyses.frame_order.optimisation import Frame_order_grid_command, Frame_order_memo, Frame_order_minimise_command, grid_row, store_bc_data, target_fn_setup
 from specific_analyses.frame_order.parameter_object import Frame_order_params
 from specific_analyses.frame_order.parameters import assemble_param_vector, linear_constraints, param_num, update_model
 from specific_analyses.frame_order.variables import MODEL_ISO_CONE_FREE_ROTOR
@@ -484,23 +485,31 @@
                     warn(RelaxWarning("The '%s' model parameters are not constrained, turning the linear constraint algorithm off." % cdp.model))
                 constraints = False
 
-        # Eliminate all points outside of constraints (useful for the pseudo-ellipse models).
-        if constraints:
-            # Construct a new point array.
-            new_pts = []
-            for i in range(total_pts):
-                # Calculate A.x - b.
-                ci = dot(A, pts[i]) - b
-
-                # Only add the point if all constraints are satisfied.
-                if min(ci) >= 0.0:
-                    new_pts.append(pts[i])
-
-            # Convert to a numpy array.
-            pts = array(new_pts)
-
-        # Minimisation.
-        self.minimise(min_algor='grid', min_options=pts, scaling_matrix=scaling_matrix, constraints=constraints, verbosity=verbosity, sim_index=sim_index)
+
+        # Printout.
+        print("Parallelised grid search.")
+
+        # Get the Processor box singleton (it contains the Processor instance) and alias the Processor.
+        processor_box = Processor_box() 
+        processor = processor_box.processor
+
+        # Loop over each grid subdivision, with all points violating constraints being eliminated.
+        verbosity_init = True
+        for subdivision in grid_split_array(divisions=processor.processor_size(), points=pts, A=A, b=b):
+            # Set up the memo for storage on the master.
+            memo = Frame_order_memo(sim_index=sim_index, scaling=True, scaling_matrix=scaling_matrix)
+
+            # Set up the command object to send to the slave and execute.
+            command = Frame_order_grid_command(points=subdivision, scaling_matrix=scaling_matrix, sim_index=sim_index, verbosity=verbosity, verbosity_init=verbosity_init)
+
+            # Add the slave command and memo to the processor queue.
+            processor.add_to_queue(command, memo)
+
+            # Turn off the verbosity_init flag so that the target_fn_setup() call in Frame_order_grid_command only prints out the information once for the first subdivision.
+            verbosity_init = False
+
+        # Execute the queued elements.
+        processor.run_queue()
 
 
     def map_bounds(self, param, spin_id=None):
@@ -570,7 +579,7 @@
         algor = min_algor
         if min_algor == 'Log barrier':
             algor = min_options[0]
-        allowed = ['grid', 'simplex']
+        allowed = ['simplex']
         if algor not in allowed:
             raise RelaxError("Only the 'simplex' minimisation algorithm is supported for the relaxation dispersion analysis as function gradients are not implemented.")
 

Modified: trunk/specific_analyses/frame_order/optimisation.py
URL: http://svn.gna.org/viewcvs/relax/trunk/specific_analyses/frame_order/optimisation.py?rev=27941&r1=27940&r2=27941&view=diff
==============================================================================
--- trunk/specific_analyses/frame_order/optimisation.py (original)
+++ trunk/specific_analyses/frame_order/optimisation.py Fri Oct  2 11:57:33 2015
@@ -705,25 +705,28 @@
     return param_vector, full_tensors, full_in_ref_frame, rdcs, rdc_err, rdc_weight, rdc_vect, rdc_const, pcs, pcs_err, pcs_weight, atomic_pos, temp, frq, paramag_centre, com, ave_pos_pivot, pivot, pivot_opt
 
 
-def unpack_opt_results(results, scaling_matrix=None, sim_index=None):
+def unpack_opt_results(param_vector=None, func=None, iter_count=None, f_count=None, g_count=None, h_count=None, warning=None, scaling=False, scaling_matrix=None, sim_index=None):
     """Unpack and store the Frame Order optimisation results.
 
-    @param results:             The results tuple returned by the minfx generic_minimise() function.
-    @type results:              tuple
+    @keyword param_vector:      The model-free parameter vector.
+    @type param_vector:         numpy array
+    @keyword func:              The optimised chi-squared value.
+    @type func:                 float
+    @keyword iter_count:        The number of optimisation steps required to find the minimum.
+    @type iter_count:           int
+    @keyword f_count:           The function count.
+    @type f_count:              int
+    @keyword g_count:           The gradient count.
+    @type g_count:              int
+    @keyword h_count:           The Hessian count.
+    @type h_count:              int
+    @keyword warning:           Any optimisation warnings.
+    @type warning:              str or None
     @keyword scaling_matrix:    The diagonal and square scaling matrices.
     @type scaling_matrix:       numpy rank-2, float64 array or None
     @keyword sim_index:         The index of the simulation to optimise.  This should be None for normal optimisation.
     @type sim_index:            None or int
      """
-
-    # Disassemble the results.
-    if len(results) == 4:    # Grid search.
-        param_vector, func, iter_count, warning = results
-        f_count = iter_count
-        g_count = 0.0
-        h_count = 0.0
-    else:
-        param_vector, func, iter_count, f_count, g_count, h_count, warning = results
 
     # Catch infinite chi-squared values.
     if isInf(func):
@@ -798,6 +801,53 @@
         cdp.g_count = g_count
         cdp.h_count = h_count
         cdp.warning = warning
+
+
+
+class Frame_order_grid_command(Slave_command):
+    """Command class for relaxation dispersion optimisation on the slave processor."""
+
+    def __init__(self, points=None, scaling_matrix=None, sim_index=None,verbosity=None, verbosity_init=False):
+        """Initialise the base class, storing all the master data to be sent to the slave processor.
+
+        This method is run on the master processor whereas the run() method is run on the slave processor.
+
+        @keyword points:            The points of the grid search subdivision to search over.
+        @type points:               numpy rank-2 array
+        @keyword scaling_matrix:    The diagonal, square scaling matrix.
+        @type scaling_matrix:       numpy diagonal matrix
+        @keyword sim_index:         The index of the simulation to optimise.  This should be None if normal optimisation is desired.
+        @type sim_index:            None or int
+        @keyword verbosity:         The verbosity level.  This is used by the result command returned to the master for printouts.
+        @type verbosity:            int
+        @keyword verbosity_init:    The verbosity flag for the target_fn_setup() call so that it only prints out the information once for the first subdivision.
+        @type verbosity_init:       bool
+        """
+
+        # Store some arguments.
+        self.points = points
+        self.verbosity = verbosity
+        self.scaling_matrix = scaling_matrix
+
+        # Alias some data to be sent to the slave.
+        self.model = cdp.model
+        self.num_int_pts = cdp.num_int_pts
+
+        # Set up and store the data structures for the target function.
+        self.param_vector, self.full_tensors, self.full_in_ref_frame, self.rdcs, self.rdc_err, self.rdc_weight, self.rdc_vect, self.rdc_const, self.pcs, self.pcs_err, self.pcs_weight, self.atomic_pos, self.temp, self.frq, self.paramag_centre, self.com, self.ave_pos_pivot, self.pivot, self.pivot_opt = target_fn_setup(sim_index=sim_index, verbosity=verbosity_init)
+
+
+    def run(self, processor, completed):
+        """Set up and perform the optimisation."""
+
+        # Set up the optimisation target function class.
+        target_fn = Frame_order(model=self.model, init_params=self.param_vector, full_tensors=self.full_tensors, full_in_ref_frame=self.full_in_ref_frame, rdcs=self.rdcs, rdc_errors=self.rdc_err, rdc_weights=self.rdc_weight, rdc_vect=self.rdc_vect, dip_const=self.rdc_const, pcs=self.pcs, pcs_errors=self.pcs_err, pcs_weights=self.pcs_weight, atomic_pos=self.atomic_pos, temp=self.temp, frq=self.frq, paramag_centre=self.paramag_centre, scaling_matrix=self.scaling_matrix, com=self.com, ave_pos_pivot=self.ave_pos_pivot, pivot=self.pivot, pivot_opt=self.pivot_opt, num_int_pts=self.num_int_pts)
+
+        # Grid search.
+        results = grid_point_array(func=target_fn.func, args=(), points=self.points, verbosity=self.verbosity)
+
+        # Create the result command object on the slave to send back to the master.
+        processor.return_object(Frame_order_result_command(processor=processor, memo_id=self.memo_id, results=results, A_5D_bc=target_fn.A_5D_bc, pcs_theta=target_fn.pcs_theta, rdc_theta=target_fn.rdc_theta, completed=completed))
 
 
 
@@ -901,13 +951,8 @@
         # Set up the optimisation target function class.
         target_fn = Frame_order(model=self.model, init_params=self.param_vector, full_tensors=self.full_tensors, full_in_ref_frame=self.full_in_ref_frame, rdcs=self.rdcs, rdc_errors=self.rdc_err, rdc_weights=self.rdc_weight, rdc_vect=self.rdc_vect, dip_const=self.rdc_const, pcs=self.pcs, pcs_errors=self.pcs_err, pcs_weights=self.pcs_weight, atomic_pos=self.atomic_pos, temp=self.temp, frq=self.frq, paramag_centre=self.paramag_centre, scaling_matrix=self.scaling_matrix, com=self.com, ave_pos_pivot=self.ave_pos_pivot, pivot=self.pivot, pivot_opt=self.pivot_opt, num_int_pts=self.num_int_pts)
 
-        # Grid search.
-        if search('^[Gg]rid', self.min_algor):
-            results = grid_point_array(func=target_fn.func, args=(), points=self.min_options, verbosity=self.verbosity)
-
         # Minimisation.
-        else:
-            results = generic_minimise(func=target_fn.func, args=(), x0=self.param_vector, min_algor=self.min_algor, min_options=self.min_options, func_tol=self.func_tol, grad_tol=self.grad_tol, maxiter=self.max_iterations, A=self.A, b=self.b, full_output=True, print_flag=self.verbosity)
+        results = generic_minimise(func=target_fn.func, args=(), x0=self.param_vector, min_algor=self.min_algor, min_options=self.min_options, func_tol=self.func_tol, grad_tol=self.grad_tol, maxiter=self.max_iterations, A=self.A, b=self.b, full_output=True, print_flag=self.verbosity)
 
         # Create the result command object on the slave to send back to the master.
         processor.return_object(Frame_order_result_command(processor=processor, memo_id=self.memo_id, results=results, A_5D_bc=target_fn.A_5D_bc, pcs_theta=target_fn.pcs_theta, rdc_theta=target_fn.rdc_theta, completed=completed))
@@ -965,8 +1010,32 @@
         if memo.sim_index != None:
             print("Simulation %i" % (memo.sim_index+1))
 
+        # Disassemble the results.
+        if len(self.results) == 4:    # Grid search.
+            param_vector, func, iter_count, warning = self.results
+            f_count = iter_count
+            g_count = 0.0
+            h_count = 0.0
+        else:
+            param_vector, func, iter_count, f_count, g_count, h_count, warning = self.results
+
+        # Check if the chi-squared value is lower - required for a parallelised grid search to work.
+        if memo.sim_index == None:
+            if hasattr(cdp, 'chi2') and cdp.chi2 != None:
+                # No improvement, so exit the function.
+                if func > cdp.chi2:
+                    print("Discarding the optimisation results, the optimised chi-squared value is higher than the current value (%s > %s)." % (func, cdp.chi2))
+                    return
+
+                # New minimum.
+                print("Storing the optimisation results, the optimised chi-squared value is lower than the current value (%s < %s)." % (func, cdp.chi2))
+
+            # Just print something to avoid user confusion in parallelised grid searches.
+            else:
+                print("Storing the optimisation results, no optimised values currently exist.")
+
         # Unpack the results.
-        unpack_opt_results(self.results, memo.scaling, memo.scaling_matrix, memo.sim_index)
+        unpack_opt_results(param_vector, func, iter_count, f_count, g_count, h_count, warning, memo.scaling, memo.scaling_matrix, memo.sim_index)
 
         # Store the back-calculated data.
         store_bc_data(A_5D_bc=self.A_5D_bc, pcs_theta=self.pcs_theta, rdc_theta=self.rdc_theta)

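The master-side logic added in r24523 (the chi-squared check in Frame_order_result_command.run()
above) amounts to a keep-only-if-lower merge: each grid subdivision returns its own best point, and
a returned result is only stored when its chi-squared value improves on the one already held, so
the final stored state corresponds to the global minimum over the entire grid.  A minimal,
hypothetical sketch of that merge rule (the merge_subdivision_result() helper is invented for
illustration and is not part of relax):

    def merge_subdivision_result(store, param_vector, chi2):
        """Keep a subdivision result only if it lowers the stored chi-squared value."""

        # An existing result already has a lower (better) chi-squared value, so
        # discard this subdivision's result.
        current = getattr(store, 'chi2', None)
        if current is not None and chi2 > current:
            return False

        # First result, or a new global minimum, so store it.
        store.params = param_vector
        store.chi2 = chi2
        return True

In the real code the comparison is made against cdp.chi2 before unpack_opt_results() is called, as
shown in the final hunk of the optimisation.py diff above.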


