mailr27873 - in /branches/frame_order_cleanup: ./ auto_analyses/ multi/ pipe_control/ user_functions/


Posted by edward on September 25, 2015 - 14:13:
Author: bugman
Date: Fri Sep 25 14:13:22 2015
New Revision: 27873

URL: http://svn.gna.org/viewcvs/relax?rev=27873&view=rev
Log:
Merged revisions 27848 via svnmerge from svn+ssh://bugman@xxxxxxxxxxx/svn/relax/trunk

........
  r27848 | tlinnet | 2015-06-11 13:15:54 +0200 (Thu, 11 Jun 2015) | 45 lines
  
  Reverted r27840-r27845, related to Bug #23618, the queuing system for multi processors is not well designed.
  
  The changes have not been shown to reduce the minimisation time, and some of the fixes introduce unintentional bugs.
  
  The command used was:
  svn merge -r27845:r27839 .
  
  .....
      
------------------------------------------------------------------------------------------------------------------
      r27840 | tlinnet | 2015-05-27 03:09:48 +0200 (Wed, 27 May 2015) | 3 lines
  
      Adding a keyword for verbosity in multi-processor mode.
  
      Task #7826 (https://gna.org/task/?7826): Write a Python class for the repeated analysis of dispersion data.
      
------------------------------------------------------------------------------------------------------------------
      r27841 | tlinnet | 2015-05-27 03:09:50 +0200 (Wed, 27 May 2015) | 2 lines
  
      Adding the keyword "mp_verbosity" to the user function minimise.execute(), to control the amount of information printed when running on multiple processors.
      
------------------------------------------------------------------------------------------------------------------
      r27842 | tlinnet | 2015-05-27 03:09:52 +0200 (Wed, 27 May 2015) | 1 line
  
      In multi.processor(), moving up the debugging print-out of running sets of calculations.
      
------------------------------------------------------------------------------------------------------------------
      r27843 | tlinnet | 2015-05-27 03:09:55 +0200 (Wed, 27 May 2015) | 1 line
  
      In pipe_control.minimise, adding the possibility to control the verbosity in multi-processor mode.
      
------------------------------------------------------------------------------------------------------------------
      r27844 | tlinnet | 2015-05-27 03:09:57 +0200 (Wed, 27 May 2015) | 6 lines
  
      Suggested fix 1, in multi.processor.run_queue().
  
      With this fix, the set of simulations is no longer chunked up, with each chunk being sent to a CPU.
      Instead, the jobs are submitted one after another and each finishes on its own.
  
      Bug #23618 (https://gna.org/bugs/index.php?23618): queuing system for multi processors is not well designed.
      
------------------------------------------------------------------------------------------------------------------
      r27845 | tlinnet | 2015-05-27 03:09:59 +0200 (Wed, 27 May 2015) | 3 lines
  
      Suggested fix 2, where jobs are continuously replenished as other jobs finish (the sketch after this merge log contrasts the two queuing approaches).
  
      Bug #23618 (https://gna.org/bugs/index.php?23618): queuing system for multi processors is not well designed.
      
------------------------------------------------------------------------------------------------------------------
  .....
........
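
For context, the reverted suggestions and the restored design differ mainly in how the master hands jobs to the slave processors.  The following minimal, self-contained sketch (not relax code; every name in it is an illustrative assumption, not the actual multi.processor API) contrasts the restored up-front chunked dispatch with the continuous replenishment approach of r27844/r27845:

import random

def chunked_dispatch(jobs, n_slaves):
    """Pre-chunk the job list and hand one chunk to each slave up front
    (the design restored by this revert)."""
    chunks = [jobs[i::n_slaves] for i in range(n_slaves)]
    for rank, chunk in enumerate(chunks, start=1):
        print("Slave %i receives %i jobs up front." % (rank, len(chunk)))

def replenishing_dispatch(jobs, n_slaves):
    """Hand out one job per slave, then replenish each slave as soon as it
    reports a result (the reverted r27844/r27845 design)."""
    queue = list(jobs)
    running = {}
    for rank in range(1, n_slaves + 1):
        if queue:
            running[rank] = queue.pop(0)
    while running:
        # Stand-in for receiving a result from a slave: pick a random one
        # and pretend it just finished its job.
        rank = random.choice(list(running))
        job = running.pop(rank)
        print("Slave %i finished job %r." % (rank, job))
        if queue:
            # Immediately hand the now-idle slave the next queued job.
            running[rank] = queue.pop(0)

if __name__ == "__main__":
    chunked_dispatch(list(range(10)), n_slaves=3)
    replenishing_dispatch(list(range(10)), n_slaves=3)

Run as a script, the first call prints one line per slave with its chunk size, while the second prints one line per job as the idle slaves are replenished.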

Modified:
    branches/frame_order_cleanup/   (props changed)
    branches/frame_order_cleanup/auto_analyses/relax_disp_repeat_cpmg.py
    branches/frame_order_cleanup/multi/processor.py
    branches/frame_order_cleanup/pipe_control/minimise.py
    branches/frame_order_cleanup/user_functions/minimisation.py

Propchange: branches/frame_order_cleanup/
------------------------------------------------------------------------------
--- svnmerge-integrated (original)
+++ svnmerge-integrated Fri Sep 25 14:13:22 2015
@@ -1 +1 @@
-/trunk:1-27797,27800-27847
+/trunk:1-27797,27800-27848

Modified: branches/frame_order_cleanup/auto_analyses/relax_disp_repeat_cpmg.py
URL: http://svn.gna.org/viewcvs/relax/branches/frame_order_cleanup/auto_analyses/relax_disp_repeat_cpmg.py?rev=27873&r1=27872&r2=27873&view=diff
==============================================================================
--- branches/frame_order_cleanup/auto_analyses/relax_disp_repeat_cpmg.py      (original)
+++ branches/frame_order_cleanup/auto_analyses/relax_disp_repeat_cpmg.py      Fri Sep 25 14:13:22 2015
@@ -781,7 +781,7 @@
             print("Clustered spins are:", cdp.clustering)
 
 
-    def minimise_execute(self, verbosity=1, methods=None, model=None, model_from=None, analysis=None, analysis_from=None, list_glob_ini=None, force=False, mc_err_analysis=False, mp_verbosity=0):
+    def minimise_execute(self, verbosity=1, methods=None, model=None, model_from=None, analysis=None, analysis_from=None, list_glob_ini=None, force=False, mc_err_analysis=False):
         """Use value.set on all pipes."""
 
         # Set default
@@ -826,7 +826,7 @@
                     subsection(file=sys.stdout, text="Performing Monte-Carlo minimisations on %i simulations"%(getattr(cdp, "sim_number")), prespace=0)
 
                 # Do the minimisation.
-                self.interpreter.minimise.execute(min_algor=self.min_algor, func_tol=self.opt_func_tol, max_iter=self.opt_max_iterations, constraints=self.constraints, scaling=True, verbosity=verbosity, mp_verbosity=mp_verbosity)
+                self.interpreter.minimise.execute(min_algor=self.min_algor, func_tol=self.opt_func_tol, max_iter=self.opt_max_iterations, constraints=self.constraints, scaling=True, verbosity=verbosity)
 
                 # Do Monte-Carlo error analysis
                 if mc_err_analysis:

Modified: branches/frame_order_cleanup/multi/processor.py
URL: http://svn.gna.org/viewcvs/relax/branches/frame_order_cleanup/multi/processor.py?rev=27873&r1=27872&r2=27873&view=diff
==============================================================================
--- branches/frame_order_cleanup/multi/processor.py     (original)
+++ branches/frame_order_cleanup/multi/processor.py     Fri Sep 25 14:13:22 2015
@@ -585,8 +585,6 @@
 
         running_set = set()
         idle_set = set([i for i in range(1, self.processor_size()+1)])
-        all_jobs = list(reversed(xrange(1, len(queue)+1)))
-        completed_jobs = []
 
         if self.threaded_result_processing:
             result_queue = Threaded_result_queue(self)
@@ -606,26 +604,18 @@
 
             # Loop until the queue of calculations is depleted.
             while len(running_set) != 0:
+                # Get the result.
+                result = self.master_receive_result()
+
                 # Debugging printout.
                 if verbosity.level():
-                    print('\n')
-                    print('Running nr of jobs: %i' % len(running_set))
-                    print('Completed jobs: %s' % len(completed_jobs))
-
-                # Get the result.
-                result = self.master_receive_result()
+                    print('\nIdle set:    %s' % idle_set)
+                    print('Running set: %s' % running_set)
 
                 # Shift the processor rank to the idle set.
                 if result.completed:
                     idle_set.add(result.rank)
                     running_set.remove(result.rank)
-                    completed_jobs.append(all_jobs.pop())
-                    if len(queue) != 0:
-                        # Add new to que
-                        command = queue.pop()
-                        dest = result.rank
-                        self.master_queue_command(command=command, dest=dest)
-                        running_set.add(dest)
 
                 # Add to the result queue for instant or threaded processing.
                 result_queue.put(result)
@@ -643,8 +633,8 @@
         """
 
         #FIXME: need a finally here to cleanup exceptions states
-        #lqueue = self.chunk_queue(self.command_queue)
-        self.run_command_queue(self.command_queue)
+        lqueue = self.chunk_queue(self.command_queue)
+        self.run_command_queue(lqueue)
 
         del self.command_queue[:]
         self.memo_map.clear()
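
The revert restores the chunked dispatch in run(): the full command queue is split by chunk_queue() into sub-queues before run_command_queue() is called.  Purely as an illustration of that chunking idea (an assumed round-robin behaviour, not relax's actual chunk_queue() implementation), such a split could look like this:

# Hypothetical round-robin chunking helper; an assumption for illustration
# only, not the real multi.processor chunk_queue() method.
def chunk_queue(queue, n_chunks):
    return [queue[i::n_chunks] for i in range(n_chunks)]

# Example: eight queued commands split across three slaves.
print(chunk_queue(["cmd%i" % i for i in range(8)], 3))
# [['cmd0', 'cmd3', 'cmd6'], ['cmd1', 'cmd4', 'cmd7'], ['cmd2', 'cmd5']]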

Modified: branches/frame_order_cleanup/pipe_control/minimise.py
URL: http://svn.gna.org/viewcvs/relax/branches/frame_order_cleanup/pipe_control/minimise.py?rev=27873&r1=27872&r2=27873&view=diff
==============================================================================
--- branches/frame_order_cleanup/pipe_control/minimise.py       (original)
+++ branches/frame_order_cleanup/pipe_control/minimise.py       Fri Sep 25 14:13:22 2015
@@ -31,7 +31,6 @@
 from lib.float import isNaN
 from lib.io import write_data
 from multi import Processor_box
-from multi.misc import Verbosity; mverbosity = Verbosity()
 from pipe_control.mol_res_spin import return_spin, spin_loop
 from pipe_control import pipes
 from pipe_control.pipes import check_pipe
@@ -429,7 +428,7 @@
     cdp.grid_zoom_level = level
 
 
-def minimise(min_algor=None, line_search=None, hessian_mod=None, hessian_type=None, func_tol=None, grad_tol=None, max_iter=None, constraints=True, scaling=True, verbosity=1, mp_verbosity=0, sim_index=None):
+def minimise(min_algor=None, line_search=None, hessian_mod=None, hessian_type=None, func_tol=None, grad_tol=None, max_iter=None, constraints=True, scaling=True, verbosity=1, sim_index=None):
     """Minimisation function.
 
     @keyword min_algor:         The minimisation algorithm to use.
@@ -452,8 +451,6 @@
     @type scaling:              bool
     @keyword verbosity:         The amount of information to print.  The higher the value, the greater the verbosity.
     @type verbosity:            int
-    @keyword mp_verbosity:      The amount of information to print from the multi processor module.  The higher the value, the greater the verbosity.
-    @type mp_verbosity:         int
     @keyword sim_index:         The index of the simulation to optimise.  This should be None if normal optimisation is desired.
     @type sim_index:            None or int
     """
@@ -490,9 +487,6 @@
     processor_box = Processor_box() 
     processor = processor_box.processor
 
-    # Store the verbosity level for the multiprocessor.
-    mverbosity.set(mp_verbosity)
-
     # Single Monte Carlo simulation.
     if sim_index != None:
         # Reset the minimisation statistics.
@@ -517,8 +511,8 @@
             api.minimise(min_algor=min_algor, min_options=min_options, func_tol=func_tol, grad_tol=grad_tol, max_iterations=max_iter, constraints=constraints, scaling_matrix=scaling_matrix, verbosity=verbosity-1, sim_index=i)
 
             # Print out.
-            if verbosity and processor.is_queued():
-                print("Queueing Simulation nr:" + repr(i+1))
+            if verbosity and not processor.is_queued():
+                print("Simulation " + repr(i+1))
 
         # Unset the status.
         if status.current_analysis:

Modified: branches/frame_order_cleanup/user_functions/minimisation.py
URL: http://svn.gna.org/viewcvs/relax/branches/frame_order_cleanup/user_functions/minimisation.py?rev=27873&r1=27872&r2=27873&view=diff
==============================================================================
--- branches/frame_order_cleanup/user_functions/minimisation.py (original)
+++ branches/frame_order_cleanup/user_functions/minimisation.py Fri Sep 25 14:13:22 2015
@@ -217,13 +217,6 @@
     desc_short = "verbosity level",
     desc = "The amount of information to print to screen.  Zero corresponds 
to minimal output while higher values increase the amount of output.  The 
default value is 1."
 )
-uf.add_keyarg(
-    name = "mp_verbosity",
-    default = 0,
-    py_type = "int",
-    desc_short = "multi processor verbosity level",
-    desc = "The amount of information to print to screen when running multi 
processors.  Zero corresponds to minimal output while higher values increase 
the amount of output.  The default value is 0."
-)
 # Description.
 uf.desc.append(Desc_container())
 uf.desc[-1].add_paragraph("This will perform an optimisation starting from the current parameter values.  This is only suitable for data pipe types which have target functions and hence support optimisation.")



