# How do I efficiently solve a linear differential equation with purely imaginary coefficients, $\frac{dy}{dt} = iAy$ ($A$ real, symmetric, sparse)?

I would like to solve a system of differential equations of the following form: $\frac{dy}{dt} = iAy$. Here, $A \in \mathbb{R}^{n \times n}$ is a large, sparse, symmetric matrix and $i$ is the imaginary unit.

In the case where $A$ is diagonal, the solutions are simply complex exponentials. As the magnitude of the off-diagonal coefficients increases, the solutions remain oscillatory in nature, but interactions between the different components of $y$ cause the amplitudes to change over time (beating patterns).

I am interested in an efficient numerical method that accurately computes $y(T)$ given $y(0)$, where $T$ is much larger than the typical period of the oscillations and $y(t)$ is a very large vector (more than 5000 elements).

I have tried solving this problem using matrix exponentials and ODE solvers.
Matrix exponentials (MATLAB's implementation using Padé approximants) do not scale well to large systems and take prohibitively long to run. ODE solvers select very small step sizes (because of the oscillations) and take even longer (I tried both stiff and non-stiff solvers).

Is there a way to solve this system more efficiently for large vector sizes by exploiting the structure of the equation? What is the relevant literature for this particular problem?

I note that there are related questions (1, 2) on this site, but I am hoping to find a solution that would work well specifically for the problem described above.

=================

Have you considered an implicit method, like Crank-Nicolson?
– user8960
2 days ago

@user8960: Crank-Nicolson is a PDE solver. Its time evolution is the implicit trapezoidal rule, which incidentally yields powers of the 1/1 Padé approximation $(I + \frac{i\,dt}{2}A)(I - \frac{i\,dt}{2}A)^{-1}$ of $\exp(i\,dt\,A)$.
– LutzL
2 days ago

I know it’s a PDE solver, but I don’t see why the ODE cannot be regarded as a degenerate PDE for this purpose. The scheme can be written out exactly, and the main chore is then to invert $I - \frac{i\,\Delta t}{2} A$ once. This inversion can be done approximately, without needing to approximate the exponential itself. Besides, I suggested looking at the entire family of implicit methods, not just CN.
– user8960
2 days ago
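To make the suggestion above concrete, here is a minimal sketch of the implicit trapezoidal (Crank–Nicolson) step $(I - \frac{i\,\Delta t}{2}A)\,y_{k+1} = (I + \frac{i\,\Delta t}{2}A)\,y_k$ in SciPy. The random test matrix, step size, and step count are illustrative stand-ins, not part of the original discussion; the key point is that the sparse factorization of $I - \frac{i\,\Delta t}{2}A$ is computed once and reused every step.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative sparse, real, symmetric stand-in for A
n = 200
A = sp.random(n, n, density=0.02, format="csr", random_state=0)
A = ((A + A.T) * 0.5).tocsc()

dt = 0.01
I = sp.identity(n, format="csc")
lhs = (I - 1j * (dt / 2) * A).tocsc()   # I - i*dt/2*A
rhs = (I + 1j * (dt / 2) * A).tocsc()   # I + i*dt/2*A
solve = spla.factorized(lhs)            # sparse LU: factor once, reuse every step

y = np.ones(n, dtype=complex)           # illustrative y(0)
for _ in range(100):                    # advance to T = 100*dt
    y = solve(rhs @ y)

# The Cayley transform is unitary for real symmetric A,
# so this scheme preserves the norm of y exactly (up to roundoff).
print(np.linalg.norm(y))
```

Note that this scheme is only second-order accurate in $\Delta t$, so resolving fast oscillations over a long span $T$ may still require many small steps; its advantage is that each step costs only one sparse triangular solve after the one-time factorization.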

=================

2 Answers

=================

If $u_j$ are orthonormal eigenvectors of $A$ corresponding to eigenvalues $\lambda_j$, the solution for initial value $y(0)$ will be
$$y(T) = \sum_j c_j \exp(iT\lambda_j)\, u_j, \qquad c_j = u_j^T\, y(0).$$
Unfortunately the matrix formed by the eigenvectors will not be sparse, so there’s a large amount of data to compute, but if you’re lucky you might have only relatively few large $c_j$’s and everything else small enough to ignore.
Matlab (for example) should be able to handle eigenvalues and eigenvectors for a $5000 \times 5000$ matrix.
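In NumPy terms, the recipe above is a few lines; this sketch uses a small random symmetric stand-in for $A$ (sizes and values are illustrative) and cross-checks the result against a dense matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Small random stand-in for the real symmetric matrix A
rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                     # make it symmetric

lam, U = np.linalg.eigh(A)            # eigenvalues lam_j, orthonormal columns u_j
y0 = rng.standard_normal(n)
T = 10.0

c = U.T @ y0                          # c_j = u_j^T y(0)
yT = U @ (np.exp(1j * T * lam) * c)   # y(T) = sum_j c_j exp(i T lam_j) u_j

# Cross-check against the dense matrix exponential exp(iTA) y(0)
assert np.allclose(yT, expm(1j * T * A) @ y0)
```

Once the decomposition is computed, evaluating $y(T)$ for any further $T$ costs only two dense matrix–vector products, which is the appeal of this approach when many output times are needed.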

Based on the comments made here and elsewhere on this site, I decided to try several algorithms and compare their performance. Beware: these results may not generalize to other kinds of data.

The algorithms I tried are:

- MATLAB’s `expm` function (which works by scaling and squaring)
- Eigenvalue decomposition ($e^{iAT} = V e^{iDT} V^{-1}$, where the exponential in $e^{iDT}$ acts only on the diagonal)
- A toolbox called Expokit, which has a sparse method to directly calculate the solution $y(T)$ without forming the exponential matrix
- Another such sparse method found in the literature (code here; paper reference 2 below; denoted “amh” in the table below)

The results, in seconds, are as follows for matrices $A$ of size $n \times n$ with $N$ nonzero elements.

+---------+-------+-------+--------+--------+--------+---------+
| n       |  1126 |  1762 |   2526 |   3446 |   4490 |    5678 |
| N       | 43778 | 85718 | 145394 | 231730 | 342258 |  487946 |
+---------+-------+-------+--------+--------+--------+---------+
| expm    |  3.95 | 18.92 |  68.48 | 201.16 | 488.22 | 1059.60 |
| eig     |  0.35 |  1.22 |   3.49 |   8.94 |  18.97 |   38.05 |
| amh     |  0.14 |  0.24 |   0.34 |   0.56 |   0.68 |    1.15 |
| expokit |  0.18 |  0.28 |   0.56 |   0.76 |   1.02 |    1.52 |
+---------+-------+-------+--------+--------+--------+---------+

Fitting a power law reveals that the runtime of `expm` grows as $n^{3.3}$ and that of the eigenvalue method as $n^{2.9}$, while the runtimes of the two other methods grow approximately linearly with the number of nonzero elements $N$. They also grow linearly with the number of vectors to transform (not shown above). All methods yield the same results to within $10^{-12}$ on my data. I did not include ODE solvers in the comparison because they take far too long to run on my machine at these problem sizes.

So, the answer is this: for a problem which is very large but sparse, use one of the two latter methods listed above to compute the product of the matrix exponential with a vector, without actually forming that matrix.
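For readers working in Python rather than MATLAB, the Al-Mohy–Higham algorithm of reference 2 (the “amh” row above) is available as `scipy.sparse.linalg.expm_multiply`. A sketch on a random sparse symmetric stand-in for $A$ (the sizes and values below are illustrative):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

# Illustrative sparse, real, symmetric stand-in for A
n = 1000
A = sp.random(n, n, density=0.01, format="csr", random_state=0)
A = ((A + A.T) / 2).tocsc()

y0 = np.ones(n, dtype=complex)        # illustrative y(0)
T = 5.0

# Apply exp(i*T*A) to y(0) directly, without ever forming the dense exponential
yT = expm_multiply(1j * T * A, y0)

# exp(iTA) is unitary for real symmetric A, so the norm of y is preserved
print(np.linalg.norm(yT))
```

Since `expm_multiply` only ever touches $A$ through sparse matrix–vector products, its cost scales with the number of nonzeros $N$, consistent with the timings in the table above.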

References:

C. Moler and C. Van Loan, “Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later,” SIAM Rev. 45, 3–49 (2003).
A. H. Al-Mohy and N. J. Higham, “Computing the Action of the Matrix Exponential, with an Application to Exponential Integrators,” SIAM J. Sci. Comput. 33, 488–511 (2011).