Re: r3237 - in /branches/multi_processor: multi/mpi4py_processor.py relax



Posted by Gary S. Thompson on March 20, 2007 - 15:52:
Edward d'Auvergne wrote:

On 3/21/07, Gary S. Thompson <garyt@xxxxxxxxxxxxxxx> wrote:

Edward d'Auvergne wrote:

> My personal experience with the coding of the grid computing code, I
> would assume that there are a number of differences.  For example the
> algorithm relax currently uses to handle computers of different
> speeds.

This should not make a difference: you just divide more finely and weight
the size of the job by computer (you can even send off the finer grained
tasks one at a time and then send new ones as tasks are completed... the
same way as you do in the thread code).


The model-free problem places a lower bound on the minimum granularity
size. That limit is the optimisation instance.

So an optimisation instance is one of the minimisation instances from here?

There are 4 different classes of model-free model supported by relax.
These are found by the 'determine_param_set_type()' method and are
handled differently by the 'minimise()' method and the 'maths_fns.mf'
module.  The four types are:
  'mf' - one model-free model.
  'local_tm' - one model-free model together with the local tm parameter.
  'diff' - solely the diffusion tensor (the model-free model
parameters are held constant).
  'all' - all model-free models of all spin systems together with
the Brownian rotational diffusion parameters.

The minimise() method loops over the minimisation instances, which for
the four types total n, n, 1, and 1 instances respectively.
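As a rough sketch (hypothetical code, not the actual relax implementation),
the instance counts above could be expressed as:

def count_minimisation_instances(param_set_type, num_spins):
    """Number of independent minimisation instances for a parameter set type."""
    # 'mf' and 'local_tm' give one instance per spin system, so these
    # are the cases that can be farmed out in parallel.
    if param_set_type in ('mf', 'local_tm'):
        return num_spins
    # 'diff' and 'all' are single global optimisations and cannot be
    # subdivided below one instance.
    if param_set_type in ('diff', 'all'):
        return 1
    raise ValueError("Unknown parameter set type: %s" % param_set_type)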

For almost all
minimisation techniques (the grid search excluded), parallelisation is
not possible. Say you have a calculation which would take 20 min on a
3 GHz machine, but it is sent to a 500 MHz old clunker in the basement.
These are serious performance issues in a grid that add up - they
need to be handled properly and not solely by making the calculations
more fine-grained.


No, I wasn't suggesting that fine graining is the answer, but to allow for various speeds of machines, the way you do in the thread code, you have to fine grain more so that a slow machine can do one work unit in the same time it takes a fast machine to do three...
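To make that concrete, here is a minimal sketch (assuming a plain mpi4py
master/worker dispatch - not the actual multi framework or
mpi4py_processor.py code) in which workers pull one fine-grained task at a
time, so a fast machine simply gets through more of the task pool than a
slow one:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

TASK, RESULT, STOP = 1, 2, 3

if rank == 0:
    # Master: a pool of fine-grained work units (placeholders here).
    tasks = list(range(100))
    results = []
    status = MPI.Status()
    active = 0

    # Seed each worker with one task.
    for worker in range(1, size):
        if tasks:
            comm.send(tasks.pop(0), dest=worker, tag=TASK)
            active += 1
        else:
            comm.send(None, dest=worker, tag=STOP)

    # Each time a result comes back, immediately hand that worker the
    # next task - fast machines therefore complete more work units.
    while active:
        result = comm.recv(source=MPI.ANY_SOURCE, tag=RESULT, status=status)
        results.append(result)
        worker = status.Get_source()
        if tasks:
            comm.send(tasks.pop(0), dest=worker, tag=TASK)
        else:
            comm.send(None, dest=worker, tag=STOP)
            active -= 1
else:
    # Worker: keep asking for work until told to stop.
    status = MPI.Status()
    while True:
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP:
            break
        comm.send(task * task, dest=0, tag=RESULT)  # stand-in computation

(Run with something like "mpirun -np 4 python sketch.py"; the real
dispatch code is of course more involved.)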

And a couple of those old machines in addition to
the fast ones could significantly speed up model-free analysis.


> Grid computing is designed for a heterogeneous environment
> whereas MPI is not.  I'm not saying one is superior to the other but
> that they have different applications in different computing
> environments.


MPI can cope with a heterogeneous environment, as explained above.


Not well, though. And a likely scenario - a Windows user turning off
their machine when they go home rather than just logging out - is not
something MPI is designed for.

It is a tool which can be fitted into
these situations, but in this case it isn't the best tool for the job.
Grid computing is the best tool.


Indeed, and that is why the framework is designed to integrate a number of different possibilities ;-)


regards gary

In the situation of a roughly
homogeneous, fault-free environment or a cluster, grid computing is not
the best tool; MPI is.

Cheers,

Edward




--
-------------------------------------------------------------------
Dr Gary Thompson
Astbury Centre for Structural Molecular Biology,
University of Leeds, Astbury Building,
Leeds, LS2 9JT, West-Yorkshire, UK             Tel. +44-113-3433024
email: garyt@xxxxxxxxxxxxxxx                   Fax  +44-113-2331407
-------------------------------------------------------------------




