Re: r3237 - in /branches/multi_processor: multi/mpi4py_processor.py relax



Posted by Gary S. Thompson on March 20, 2007 - 14:21:
Edward d'Auvergne wrote:

>> >> do I modify interpreter.run to take a quit variable set to False so
>> >> that run_script can be
>> >> run with quit = False?
>> >
>> >
>> > Avoid the interpreter and wait at the optimisation steps - the only
>> > serious number crunching code in relax.
>>
>>
>> I agree!  Interpreters are not required on the slave, just the relax data
>> structures in a clean and usable state.
>
>
> If you're working at the level of the model-free 'minimise()'
> function, don't bother with the relax data structures! See my
> previous post mentioned above.


I follow now; my only worry is that the processing will be fairly fine
grained, causing a greater ratio of network traffic to processing.


The level of the minimise() method is coarser than splitting up a grid
search.  That is because the main loop of the method executes the grid
search for each optimisation instance.


Does this limit the number of processors I can use?  Also, I am not following what you mean by each optimisation instance.  Is this a model type?  If so, that is too (!) coarse grained...  The original aim for the optimisation during minimisation was to split up residue ranges, as this gives coarse grains but a reasonable number of them (e.g. if you have a 120 aa protein you have up to 120 grains available, though by that stage your communication overhead may get too much).  So actually I was looking at the relax_fit specific function, plus passing parameters to restrict the residue range or using the select function, plus some saving and restoring of state in the master.  If I have this upside down and back to front, I apologise; I am still learning ;-)
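To make the residue-range idea concrete, here is a minimal sketch (the function name and partitioning scheme are my own illustration, not relax code) of dividing N residues into one contiguous grain per slave, so the grain count scales with protein size rather than with the model type:

```python
def split_residues(num_residues, num_slaves):
    """Partition residue indices 0..num_residues-1 into contiguous
    half-open ranges, one per slave (fewer ranges if there are more
    slaves than residues)."""
    num_grains = min(num_residues, num_slaves)
    base, extra = divmod(num_residues, num_grains)
    ranges = []
    start = 0
    for i in range(num_grains):
        # spread any remainder over the first `extra` grains
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

# e.g. a 120 aa protein on 8 slaves gives 8 grains of 15 residues each
print(split_residues(120, 8))
```

Each range could then be passed to a slave to restrict which residues it optimises.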



>> One other question: how well behaved are the relax functions with regard
>> to not gratuitously modifying global state?  E.g. could I share one relax
>> instance between several threads?  The reason I ask is that, if they are
>> well behaved, many of the data transfer operations in a threaded
>> environment with a single memory space would become noops ;-) nice!
>
>
> I wouldn't share the state.  Again if you work at the 'minimise()'
> model-free method level, copying it and renaming it to
> 'minimise_mpi()', that new function could be made to not touch the
> relax data storage object.  Maybe there should be a
> 'minimise_mpi_master()' that contains the setup code and a
> 'minimise_mpi_slave()' which contains the optimisation code and the
> unpacking code.  This should be very simple to copy and modify from
> the current code!
>
> Cheers,
>
> Edward
>

Actually, there shouldn't be anything labelled mpi outside the specific
instance of a processor (the one that uses mpi); everything else should be generic:


create remote command
send remote command
work with results from remote command...
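The three steps above can be sketched as a small command pattern; all class and method names here are hypothetical, chosen only to illustrate how the master's code stays free of any MPI-specific detail:

```python
class RemoteCommand:
    """A unit of work the master creates and a processor ships to a slave."""
    def run(self):
        raise NotImplementedError


class SumCommand(RemoteCommand):
    """Stand-in for a real optimisation command: carries its own data
    and returns a result object when run."""
    def __init__(self, data):
        self.data = data

    def run(self):
        return sum(self.data)


class Processor:
    """Generic processor interface.  A subclass using mpi4py would
    override send() to move the command over MPI; this uni-processor
    fallback just runs the command locally."""
    def send(self, command):
        return command.run()


# create remote command / send remote command / work with results
proc = Processor()
result = proc.send(SumCommand([1, 2, 3]))
print(result)  # 6
```

The master only ever sees `Processor.send()`, so swapping transports means swapping the processor subclass, not touching the calling code.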


But wouldn't there be code that only the slave executes?  I don't
understand how you can avoid it.  It needs to return the results.  And
if you don't touch the model-free code, how can you define the
granularity of the calculations to send to the nodes?


Yep, I agree that I have to touch the minimisation code, but it will all call the super class of mpi4py_processor (yet to be written), so that any communication mechanism can play.
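A hedged sketch of that superclass idea (these names are illustrative, not the yet-to-be-written relax API): the minimisation code talks only to an abstract processor, and an mpi4py-backed subclass, a threaded one, or a plain in-process one can plug in underneath.

```python
class Processor:
    """Abstract base: any communication mechanism implements this."""
    def run_command(self, command):
        raise NotImplementedError


class UniProcessor(Processor):
    """Fallback: execute the command in-process, no communication."""
    def run_command(self, command):
        return command()


class Mpi4pyProcessor(Processor):
    """Would ship the command to a slave via mpi4py; only the interface
    is shown here, not real MPI calls."""
    def run_command(self, command):
        # e.g. comm.send(command, dest=slave); return comm.recv(...)
        raise NotImplementedError("requires an MPI environment")


def minimise(processor):
    # the minimisation code never mentions MPI directly
    return processor.run_command(lambda: "optimised")


print(minimise(UniProcessor()))  # optimised
```

Running under MPI would then be a matter of constructing a different processor at startup, with no change to `minimise()` itself.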


Cheers,

Edward




--
-------------------------------------------------------------------
Dr Gary Thompson
Astbury Centre for Structural Molecular Biology,
University of Leeds, Astbury Building,
Leeds, LS2 9JT, West-Yorkshire, UK             Tel. +44-113-3433024
email: garyt@xxxxxxxxxxxxxxx                   Fax  +44-113-2331407
-------------------------------------------------------------------




