Re: Finding the eigenvalues/matrix exponential for higher dimensional data



Posted by Edward d'Auvergne on June 18, 2014 - 13:36:
Hi,

You are aware of the lib.linear_algebra.matrix_exponential and
lib.linear_algebra.matrix_power modules, right?

Regards,

Edward


On 18 June 2014 13:33, Edward d'Auvergne <edward@xxxxxxxxxxxxx> wrote:
Hi,

If you would like to tackle this one, that would be legendary.  Though
you must decide if it is worth your time.  There are also still a few
things to tidy up for the disp_spin_speed branch to prepare it for
merger.  For your idea, I would suggest considering the following:

1)  Create a simple timeit script such as in
http://svn.gna.org/viewcvs/relax/branches/frame_order_cleanup/test_suite/shared_data/frame_order/timings/
to show the timing of one function where you loop over NS and NM,
calculating the matrix exponential each time, and one where you do it
all in one operation (a rough sketch of such a script is given after
these points).  You can get initial matrices by running one of
the system tests in debugging mode and adding a print statement in an
appropriate place, then replicate this matrix over NS and NM.  This
will allow you to debug the multi-matrix exponential calculation.

2)  Before making changes in lib.dispersion, make sure that the system
and unit test coverage is reasonable for each model to be sped up.

3)  Create a profiling script in
test_suite/shared_data/dispersion/profiling so that we can measure and
see the improvements.  If the numpy operation is costly, there is a
slight chance that the change will not speed things up.  Point 1) will
show that, though.  You've probably seen that the system tests are not
useful for demonstrating your huge speed ups.
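
As a rough starting point for point 1), an untested sketch of such a timing
script could look like the following.  The NS = 100, NM = 2 and (7, 7) values
follow the discussion below, but the random stand-in matrix, the function
names and the einsum call are my own assumptions rather than existing relax
code:

# Rough timing sketch: per-matrix loop vs. one stacked numpy operation.
from timeit import timeit
import numpy as np

NS, NM, M = 100, 2, 7

# Stand-in data: one random (7, 7) matrix replicated over NS spins and NM
# spectrometer frequencies.
R_one = np.random.rand(M, M)
R_stacked = np.tile(R_one, (NS, NM, 1, 1))        # Shape (NS, NM, 7, 7).

def expm_single(A):
    """Matrix exponential of one square matrix via eigenvalue decomposition."""
    W, V = np.linalg.eig(A)
    return np.dot(np.dot(V, np.diag(np.exp(W))), np.linalg.inv(V))

def expm_looped(R):
    """Loop over NS and NM, calculating the matrix exponential each time."""
    eR = np.empty(R.shape, dtype=np.complex128)
    for si in range(R.shape[0]):
        for mi in range(R.shape[1]):
            eR[si, mi] = expm_single(R[si, mi])
    return eR

def expm_stacked(R):
    """One broadcast operation over the whole (NS, NM, 7, 7) array (numpy >= 1.8)."""
    W, V = np.linalg.eig(R)                       # W: (NS, NM, 7); V: (NS, NM, 7, 7).
    VeW = V * np.exp(W)[..., np.newaxis, :]       # V.diag(exp(W)) for every matrix.
    return np.einsum('...ij,...jk->...ik', VeW, np.linalg.inv(V))

# The two paths should agree to numerical precision.
assert np.allclose(expm_looped(R_stacked), expm_stacked(R_stacked))

print("Looped:  %.3f s" % timeit(lambda: expm_looped(R_stacked), number=100))
print("Stacked: %.3f s" % timeit(lambda: expm_stacked(R_stacked), number=100))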

If this could be done, I think relax would then be the fastest
dispersion implementation out there.  Unfortunately I don't have the
source code for all of the other software, so I don't know if this
would be true.  And relax uses minfx and a different optimisation
approach to the competition, avoiding numerical gradients, so there are
many other factors affecting the speed of relax vs. the rest.

Regards,

Edward



On 18 June 2014 13:00, Troels Emtekær Linnet <tlinnet@xxxxxxxxx> wrote:
The worst bottleneck for the numerical analysis is the looping over spins.

For the numerical models, a propagator that will evolve the magnetization is
calculated.

This is the matrix exponential of the matrix R that contains all the
contributions to the evolution, i.e. relaxation, exchange and chemical
shift evolution.
The shape of R can be (7, 7) or (9, 9).

Finding the matrix exponential is done by looping over the spins and the
spectrometer frequencies, NS and NM.

If NS is 100 and NM is 2, R could perhaps be packed to:
(100, 2, 7, 7)

The matrix exponential is:
    # The numpy functions required by this snippet.
    from numpy import diag, dot, exp
    from numpy.linalg import eig, inv

    # The eigenvalue decomposition.
    W, V = eig(A)

    # Calculate the exact exponential.
    eA = dot(dot(V, diag(exp(W))), inv(V))

But according to:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html

this should be possible for an array of matrices.


#####
Compute the eigenvalues and right eigenvectors of a square array.

Parameters:
    a : (..., M, M) array
        Matrices for which the eigenvalues and right eigenvectors will be
        computed.

Returns:
    w : (..., M) array
        The eigenvalues, each repeated according to its multiplicity.  The
        eigenvalues are not necessarily ordered.  The resulting array will
        always be of complex type.  When a is real the resulting eigenvalues
        will be real (0 imaginary part) or occur in conjugate pairs.
    v : (..., M, M) array
        The normalized (unit “length”) eigenvectors, such that the column
        v[:,i] is the eigenvector corresponding to the eigenvalue w[i].

Raises:
    LinAlgError
        If the eigenvalue computation does not converge.

-------
Broadcasting rules apply, see the numpy.linalg documentation for details.

This is implemented using the _geev LAPACK routines which compute the
eigenvalues and eigenvectors of general square arrays.

The number w is an eigenvalue of a if there exists a vector v such that
dot(a,v) = w * v.  Thus, the arrays a, w, and v satisfy the equations
dot(a[:,:], v[:,i]) = w[i] * v[:,i] for i in {0, ..., M-1}.
------
####
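
Assuming R really can be packed into a (NS, NM, 7, 7) array, a minimal
sketch of how the dot/diag/inv code above might be rewritten in broadcast
form would be something like the following (untested; the function name,
the einsum call and the final .real are assumptions, not existing relax
code):

import numpy as np
from numpy.linalg import eig, inv

def matrix_exponential_stacked(R):
    """exp(R[si, mi]) for every square matrix in the (NS, NM, M, M) array R."""
    # The eigenvalue decomposition of all matrices at once (broadcast eig, numpy >= 1.8).
    W, V = eig(R)                                 # W: (NS, NM, M); V: (NS, NM, M, M).
    # V.diag(exp(W)) per matrix - scaling the columns of V replaces the diag() call.
    VeW = V * np.exp(W)[..., np.newaxis, :]
    # ...then .inv(V) per matrix, with inv() and einsum broadcasting over NS and NM.
    eR = np.einsum('...ij,...jk->...ik', VeW, inv(V))
    # For a real R the imaginary part is only numerical noise from the complex eig output.
    return eR.real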

R is:
R = rcpmg_3d(R1A=r10a, R1B=r10b, R2A=R2A_si_mi, R2B=R2B_si_mi, pA=pA,
pB=pB, dw=dw_si_mi, k_AB=k_AB, k_BA=k_BA)

So that would need a new way to expand this, since R20 and dw are
specific to the spin and the spectrometer frequency.
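
Purely as an illustration of that expansion (and not based on the real
rcpmg_3d() code), the spin- and frequency-specific parameters could be
built as (NS, NM) arrays and written into an (NS, NM, 7, 7) array of R
matrices in single assignments; the element positions below are
placeholders:

import numpy as np

NS, NM, M = 100, 2, 7

# Hypothetical per-spin, per-frequency parameters - the shapes are the point,
# not the values.
R2A_si_mi = np.random.rand(NS, NM)
dw_si_mi = np.random.rand(NS, NM)

# One array holding all of the R matrices at once.
R = np.zeros((NS, NM, M, M))

# Each parameter-dependent element is filled for all spins and frequencies in
# one assignment; the real positions and signs would come from rcpmg_3d().
R[:, :, 1, 1] = -R2A_si_mi        # Placeholder position, for illustration only.
R[:, :, 1, 2] = -dw_si_mi         # Placeholder position.
R[:, :, 2, 1] = dw_si_mi          # Placeholder position.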



Troels Emtekær Linnet


