Re: r3237 - in /branches/multi_processor: multi/mpi4py_processor.py relax



Posted by Edward d'Auvergne on March 20, 2007 - 13:34:
>> >> Do I modify interpreter.run to take a quit variable set to False, so
>> >> that run_script can be run with quit = False?
>> >
>> >
>> > Avoid the interpreter and wait at the optimisation steps - the only
>> > serious number crunching code in relax.
>>
>>
>> I agree!  Interpreters are not required on the slaves, just the relax data
>> structures in a clean and usable state.
>
>
> If you're working at the level of the model-free 'minimise()'
> function, don't bother with the relax data structures!  See my
> previous post mentioned above.

I follow now.  My only worry is that the processing will be fairly fine
grained, causing a greater ratio of network traffic to processing.

The level of the minimise() method is coarser than splitting up a grid search. That is because the main loop of the method executes the grid search for each optimisation instance.
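
For illustration only, here is a toy sketch (not relax code; optimise_instance()
is a made-up stand-in) of that coarse granularity - one job per optimisation
instance, rather than many tiny jobs from a split grid search:

def optimise_instance(instance):
    # Made-up stand-in for the grid search plus minimisation of one instance.
    return min(range(100), key=lambda p: (p - instance) ** 2)

def minimise(instances):
    # The main loop over optimisation instances - each iteration is the natural
    # unit of work to hand to a slave, grid search and minimisation together.
    return [optimise_instance(i) for i in instances]

print(minimise([3, 42, 77]))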


>> One other question: how well behaved are the relax functions with respect
>> to not gratuitously modifying global state?  E.g. could I share one relax
>> instance between several threads?  The reason I ask is that, if they are
>> well behaved, many of the data transfer operations in a threaded
>> environment with a single memory space would become no-ops ;-) nice!
>
>
> I wouldn't share the state.  Again if you work at the 'minimise()'
> model-free method level, copying it and renaming it to
> 'minimise_mpi()', that new function could be made to not touch the
> relax data storage object.  Maybe there should be a
> 'minimise_mpi_master()' that contains the setup code and a
> 'minimise_mpi_slave()' which contains the optimisation code and the
> unpacking code.  This should be very simple to copy and modify from
> the current code!
>
> Cheers,
>
> Edward
>
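
For illustration, a minimal mpi4py sketch of the minimise_mpi_master() /
minimise_mpi_slave() split suggested above (not relax code; the instance data
and the instance * 2 result stand in for the real setup, optimisation and
unpacking code):

from mpi4py import MPI

comm = MPI.COMM_WORLD

def minimise_mpi_master(instances):
    # Setup code: hand one optimisation instance to each slave, collect results.
    n_slaves = comm.Get_size() - 1
    results = []
    for i, instance in enumerate(instances):
        dest = 1 + i % n_slaves                        # round-robin over the slaves
        comm.send(instance, dest=dest, tag=1)
        results.append(comm.recv(source=dest, tag=2))  # unpacking code
    for dest in range(1, n_slaves + 1):                # tell the slaves to stop
        comm.send(None, dest=dest, tag=1)
    return results

def minimise_mpi_slave():
    # Optimisation code: receive one instance, crunch it, send the result back.
    while True:
        instance = comm.recv(source=0, tag=1)
        if instance is None:
            break
        comm.send(instance * 2, dest=0, tag=2)         # placeholder optimisation

if comm.Get_rank() == 0:
    print(minimise_mpi_master([1, 2, 3]))
else:
    minimise_mpi_slave()

(Run with e.g. mpirun -np 3 python sketch.py - with a single process there are
no slaves and the master would fail.)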

Actually, there shouldn't be anything labelled mpi outside the specific
instance of a processor (the one that uses MPI); everything else should be
generic:

create remote command
send remote command
work with results from remote command...
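
A rough sketch of that kind of generic interface (invented names, not an
existing relax API): only an MPI-based processor subclass would contain any
MPI calls, while the calling code just builds commands and hands them over.

class MinimiseCommand:
    # A self-contained unit of work plus the code to run it on a slave.
    def __init__(self, data):
        self.data = data

    def run(self):
        return self.data * 2        # placeholder for the real optimisation

class UniProcessor:
    # Fallback processor which runs commands in-process.  An MPI-based
    # processor subclass would ship each command to a slave instead, while
    # presenting exactly the same interface to the calling code.
    def run_command(self, command):
        return command.run()

processor = UniProcessor()
cmd = MinimiseCommand(21)               # create remote command
result = processor.run_command(cmd)     # send remote command
print(result)                           # work with results from remote command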

But wouldn't there be code that only the slave executes? I don't understand how you can avoid it. It needs to return the results. And if you don't touch the model-free code, how can you define the granularity of the calculations to send to the nodes?

Cheers,

Edward


