Hi,
Just a quick point, it would be good to either start a new thread for
these types of questions or change the subject field (unless you use
the Gmail web interface like I am at the moment). People may miss
important discussions with such scary subject lines!
On 3/20/07, Gary S. Thompson <garyt@xxxxxxxxxxxxxxx> wrote:
garyt@xxxxxxxxxxxxxxx wrote:
Dear Ed
This is a good enough point to tell you how to run things and what to
install for the MPI version I am testing with.
I have installed mpipy and lam
Are both, together with mpi4py, essential for MPI operation? As MPI
is solely for those who are very serious and have access to clusters,
the user should be able to handle installing these dependencies (or at
least be able to get someone to install them).
lam should be available in your linux distribution ??? ;-)
I'm using Mandriva so I would assume that it is within its 'contrib'
repositories. You are talking about http://www.lam-mpi.org/ aren't
you? As OpenMPI (http://www.open-mpi.org/) seems to be the future of
that project, wouldn't this be the better option? It's more likely to
be supported in future linux distros.
mpi4py came from http://www.python.org/pypi/mpi4py (there is an mpi4py
website, but it is out of date; mpi4py is nevertheless under active
development).
Do you think mpi4py 0.4 or below will be stable enough? Are there
alternatives?
follow the instructions to install mpi4py
create the file 'test_mult1.py'
-------------------8<-----------------------
import multi
cmd = multi.mpi4py_processor.Get_name_command()
self.relax.processor.run_command(cmd)
-------------------8<-----------------------
then type 'lamboot'
and to run the test type:
mpirun -np 6 python relax --multi mpi4py test_mult1.py
to get:
relax repository checkout
Protein dynamics by NMR relaxation data analysis
Copyright (C) 2001-2006 Edward d'Auvergne
This is free software which you are welcome to modify and redistribute
under the conditions of the
GNU General Public License (GPL). This program, including all modules,
is licensed under the GPL
and comes with absolutely no warranty. For details type 'GPL'.
Assistance in using this program
can be accessed by typing 'help'.
script = 'test_mult1.py'
----------------------------------------------------------------------------------------------------
import sys
import multi
cmd = multi.mpi4py_processor.Get_name_command()
self.relax.processor.run_command(cmd)
----------------------------------------------------------------------------------------------------
1 fbsdpcu156-9377
2 fbsdpcu156-9378
3 fbsdpcu156-9379
4 fbsdpcu156-9380
5 fbsdpcu156-9381
I'll have to play with this tomorrow.
hope this is useful!
I'm starting to get a better idea of how this will be implemented!
Now a question: what is the best way to get an eternally running relax
interpreter that I can just fire commands at (for the slaves)?
The prompt based interface (as well as the script interface) is only
one way of invoking relax. An important question is how should we
present relax to the user when using MPI. Should the parent process
present a functional interpreter or should operation be solely
dictated by a script? Or should a completely different mechanism of
operation be devised for the interface of the parent? For the grid
computing code the parent UI is either the prompt or the script while
the slaves use the interface started by 'relax --thread'. The slaves
use none of the user functions and only really invoke the number
crunching code.
For the MPI slaves (I'm assuming these are separate processes with
different PIDs running on different nodes) we should avoid the
standard UI interfaces as these are superfluous. In this case a
simple MPI interface should probably be devised - it accepts the MPI
commands and returns data back to the parent. Is this through stdin,
stdout, and stderr in MPI? My knowledge of MPI is very limited.
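The slave side of such an interface could look roughly like the
following. This is only a minimal pure-Python sketch: the 'recv' and
'send' callables stand in for mpi4py's comm.recv()/comm.send() (which
move pickled objects over the wire, not stdin/stdout), and the command
class names are hypothetical.

```python
import pickle


class GetNameCommand:
    """Hypothetical command object: the slave calls run() and sends back the result."""

    def run(self, rank):
        # Stand-in for what would be MPI.Get_processor_name() on a real slave.
        return "%i node-%i" % (rank, 1000 + rank)


class QuitCommand:
    """Sentinel object telling the slave loop to shut down."""


def slave_loop(rank, recv, send):
    """Persistent slave: block on the next pickled command, execute it,
    and return the pickled result to the parent."""
    while True:
        cmd = pickle.loads(recv())
        if isinstance(cmd, QuitCommand):
            break
        send(pickle.dumps(cmd.run(rank)))


# Simulated channel in place of a real MPI communicator:
inbox = [pickle.dumps(GetNameCommand()), pickle.dumps(QuitCommand())]
outbox = []
slave_loop(3, lambda: inbox.pop(0), outbox.append)
print(pickle.loads(outbox[0]))  # -> 3 node-1003
```

The point of the sentinel command is exactly the "without quitting at
the end of the script" behaviour: the slave stays alive between
commands and only exits when told to.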
basically I just need to
1. do some setup
2. send a command object
but without quitting at the end of the script
If the prompt and scripting UI interface are not used by the slaves,
this shouldn't be an issue. The parent should hang and wait at the
'grid_search()' and 'minimise()' user functions until these complete.
No other code needs to be executed by MPI.
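That parent-side blocking could be as simple as the sketch below. The
'FakeSlave' and 'run_command' names are invented for illustration; a
real implementation would block on mpi4py receive calls rather than
computing the result in-process.

```python
class FakeSlave:
    """Stand-in for one MPI slave process: send() hands over a command,
    recv() blocks until its result is ready (here, computed immediately)."""

    def __init__(self, rank):
        self.rank = rank
        self.pending = None

    def send(self, cmd):
        self.pending = cmd

    def recv(self):
        return self.pending.run(self.rank)


class RankCommand:
    """Toy command returning the slave's rank."""

    def run(self, rank):
        return rank


def run_command(cmd, slaves):
    """Parent-side dispatch: hand the command to every slave, then hang
    until all results are back - just as minimise() should block until
    the distributed optimisation completes."""
    for slave in slaves:
        slave.send(cmd)
    return [slave.recv() for slave in slaves]  # recv() blocks per slave


print(run_command(RankCommand(), [FakeSlave(r) for r in range(1, 4)]))  # -> [1, 2, 3]
```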
All that needs to be done is to send the minimal amount of data to the
slave (see the minimise() method of the specific_fns.model_free code
for the objects required in this case), run the specific optimisation
code, and then return the parameter vector and minimisation stats back
to the parent.
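As a sketch of that minimal payload, something like the following could
work. The class name and the toy chi-squared grid search are invented
for illustration only; the real data set would be whatever the
minimise() method of the specific_fns.model_free code requires.

```python
class MinimiseCommand:
    """Hypothetical slave-side payload: carries only the data and the
    search grid, runs the number crunching, and returns the parameter
    vector plus minimisation statistics."""

    def __init__(self, data, grid):
        self.data = data    # (x, y) observations sent to the slave.
        self.grid = grid    # Candidate parameter values to search.

    def run(self):
        best_p, best_chi2, f_count = None, float('inf'), 0
        for p in self.grid:
            # Toy chi-squared for the model y = p * x.
            chi2 = sum((y - p * x) ** 2 for x, y in self.data)
            f_count += 1
            if chi2 < best_chi2:
                best_p, best_chi2 = p, chi2
        # Parameter vector and minimisation stats go back to the parent.
        return {'param_vector': [best_p], 'chi2': best_chi2, 'f_count': f_count}


result = MinimiseCommand(data=[(1, 2), (2, 4)], grid=[1.0, 2.0, 3.0]).run()
print(result)  # -> {'param_vector': [2.0], 'chi2': 0.0, 'f_count': 3}
```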
Do I modify interpreter.run to take a quit variable set to False so
that run_script can be run with quit=False?
Avoid the interpreter and wait at the optimisation steps - the only
serious number crunching code in relax.
I hope this helps,
Edward