In my problem I have a third-order algebraic equation for the variable σ; all the other letters are parameters. Here is its right-hand side; the left-hand side is zero (you can copy it into your program):

eq = Er k^2 γ ξ (1 + λ Cos[2 ϕ]) (3 I k Cos[ϕ] + 4 σ Cos[2 ϕ] +
     I k Cos[3 ϕ]) Subscript[α, ac] + (ξ + σ) (σ + I k Cos[ϕ]) (8 k^4 +
     16 k^2 tf1 + 2 k^4 γ + k^4 γ λ^2 + 8 Er k^2 γ σ + 16 Er tf1 γ σ +
     4 k^4 γ λ Cos[2 ϕ] + k^4 γ λ^2 Cos[4 ϕ] +
     4 k^2 (k^2 + Er γ σ) Sin[2 ϕ]^2 Subscript[μ, 1] +
     4 (k^2 + Er γ σ) (k^2 + tf1 - tf1 Cos[2 ϕ]) Subscript[μ, 2])

My purpose here is to get the Taylor expansion in the parameter k for all the roots near k = 0:

sol = Solve[eq == 0, σ]
sol = σ /. sol
Series[sol[[1]], {k, 0, 2}]

The trouble is that evaluating the line sol = Solve[eq == 0, σ] takes a really long time. It takes a couple of minutes here, and when I try to solve a similar equation of 4th order, it takes forever. Note that in the line sol = Solve[eq == 0, σ] I did not ask for any simplification, but Mathematica appears to simplify silently, since it runs for so long. Note also that exact algebraic expressions for the solutions of 3rd- and 4th-order equations are available, and indeed, if I type something like Solve[a*x^3 + b*x^2 + c*x + d == 0, x], it gives me the answer instantly. Substituting the corresponding coefficients into that closed formula should be a really fast procedure.

Is there a way to get the Taylor expansion I need faster? Solving the equation takes much longer than computing the Taylor series.
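[Editor's note] The "solve the generic polynomial once, then substitute coefficients" idea from the question can be sketched outside Mathematica as well. Here is a minimal SymPy illustration; the cubic x^3 - 6x^2 + 11x - 6 is a toy stand-in, not the equation from the question:

```python
# Sketch of the "solve once generically, then substitute" idea in SymPy.
import sympy as sp

x = sp.symbols('x')
a, b, c, d = sp.symbols('a b c d')

# Solving the fully generic cubic is fast: SymPy essentially writes down
# Cardano's formula in terms of a, b, c, d.
generic_roots = sp.solve(a*x**3 + b*x**2 + c*x + d, x)

# Substituting concrete coefficients is mere evaluation, not solving.
# Toy cubic: x^3 - 6x^2 + 11x - 6, whose roots are 1, 2, 3.
vals = {a: 1, b: -6, c: 11, d: -6}
roots = sorted(complex(r.subs(vals).evalf()).real for r in generic_roots)
print(roots)  # approximately [1.0, 2.0, 3.0]
```

The point is the same as in the question: the expensive step (closed-form solution) is done once on opaque symbols, and the large actual coefficients only ever enter by substitution.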

In case someone needs it, here is the 4th-order equation, which I cannot solve even in 30 minutes:

-Er k^2 γ ξ (1 + λ Cos[2 ϕ]) (4 (k^2 + 2 σ (σ + τ1)) Cos[2 ϕ] + k^2 (3 + Cos[4 ϕ]))
   Subscript[α, ac] - (ξ + σ) (k^2 + 2 σ (σ + τ1) + k^2 Cos[2 ϕ]) (8 k^4 + 16 k^2 tf1 +
   2 k^4 γ + k^4 γ λ^2 + 8 Er k^2 γ σ + 16 Er tf1 γ σ + 4 k^4 γ λ Cos[2 ϕ] +
   k^4 γ λ^2 Cos[4 ϕ] + 4 k^2 (k^2 + Er γ σ) Sin[2 ϕ]^2 Subscript[μ, 1] +
   4 (k^2 + Er γ σ) (k^2 + tf1 - tf1 Cos[2 ϕ]) Subscript[μ, 2])

Evaluation of sol = Solve[eq == 0, σ] takes forever.

I found that Defer can be used to suppress evaluation, but that is not what I need here.

Thanks,

Mikhail

=================

I find that sol = Solve[eq == 0, σ] takes just seconds, although the result is rather large. By the way, avoid using Subscript.

– bbgodfrey

Dec 9 '15 at 0:20


Furthermore, solving the fourth order equation takes only a minute or so. How much memory do you have on your computer?

– bbgodfrey

Dec 9 '15 at 0:32


sol = Solve[eq == 0, σ] took 2 minutes on my computer. The method I presented below takes around 3 seconds.

– JHM

Dec 9 '15 at 1:01

bbgodfrey, what PC do you have? I have a laptop with 8 GB RAM and a quad-core Intel i7. Using the device manager, I found that during the calculation ~20% of CPU and ~40% of RAM is used, so I doubt the issue is memory. Thanks for the edit, btw.

– Mikhail Genkin

Dec 9 '15 at 1:26

@MikhailGenkin I have an essentially identical computer running 10.3.0 for Microsoft Windows (64-bit) (October 9, 2015). The calculations complete so quickly that I cannot accurately determine CPU and memory usage with Task Manager for the cubic case. The quartic case takes less than a minute with 17% CPU usage and about 200 MB memory usage.

– bbgodfrey

Dec 9 '15 at 1:36

=================

2 Answers


=================

Here is a workaround:

eq = Er k^2 γ ξ (1 + λ Cos[2 ϕ]) (3 I k Cos[ϕ] + 4 σ Cos[2 ϕ] +
     I k Cos[3 ϕ]) Subscript[α, ac] + (ξ + σ) (σ + I k Cos[ϕ]) (8 k^4 +
     16 k^2 tf1 + 2 k^4 γ + k^4 γ λ^2 + 8 Er k^2 γ σ + 16 Er tf1 γ σ +
     4 k^4 γ λ Cos[2 ϕ] + k^4 γ λ^2 Cos[4 ϕ] +
     4 k^2 (k^2 + Er γ σ) Sin[2 ϕ]^2 Subscript[μ, 1] +
     4 (k^2 + Er γ σ) (k^2 + tf1 - tf1 Cos[2 ϕ]) Subscript[μ, 2]);

coeffs = Reverse@CoefficientList[eq, σ];

sol = σ /. Solve[FromDigits[Array[#[] &, Length[coeffs], 1], σ] == 0, σ] /.
    n_Integer[] :> coeffs[[n]];

Series[sol[[1]], {k, 0, 2}]

Instead of putting eq in Solve, I put a polynomial with dummy variables (1[], 2[], etc.) and then replaced the dummies with the corresponding coefficients in eq.

This method took about 3 seconds on my computer the first time I ran it, and about 0.03 seconds on subsequent runs.

The last three lines (from coeffs = to the end of the code) should work for any polynomial equation eq.
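[Editor's note] For readers outside Mathematica, the same dummy-coefficient trick can be sketched in SymPy: solve a polynomial whose coefficients are opaque placeholder symbols, then substitute the actual coefficient expressions into the closed-form roots. The cubic below is a toy stand-in, not the equation from the question:

```python
# The dummy-coefficient trick in SymPy: solve with placeholder
# coefficients c0..cn, then substitute the real coefficients.
import sympy as sp

s, k = sp.symbols('sigma k')

# Toy stand-in for the big equation: a cubic in sigma with
# k-dependent coefficients.
eq = s**3 + (1 + k)*s**2 + k*s + k**2

coeffs = sp.Poly(eq, s).all_coeffs()        # highest degree first
dummies = sp.symbols(f'c0:{len(coeffs)}')   # placeholders c0, c1, ...
deg = len(coeffs) - 1

# Generic solve: fast, because the coefficients are opaque symbols.
generic = sp.solve(sum(ci*s**(deg - i) for i, ci in enumerate(dummies)), s)

# Substitute the actual (possibly huge) coefficients back in.
sol = [r.subs(dict(zip(dummies, coeffs))) for r in generic]
```

The expansion step in k can then be applied to these closed-form roots, mirroring the Series call in the answer.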

Thanks a lot, your method really is fast on my laptop! Can you briefly explain why? It definitely works for me. I will accept your answer in 2 days.

– Mikhail Genkin

Dec 9 '15 at 1:23


Never mind, I got it. The reason I didn't figure it out myself is that I thought Solve[eq == 0, σ] already works like this. It seems that @bbgodfrey has a newer version that does this by default (he wrote that his Mathematica solves it very fast). I assume that you, too, have an older version of Mathematica, like I do. Mine is 10.0.1.0.

– Mikhail Genkin

Dec 9 '15 at 1:43

Nice but might be overly complicated. To see what I mean, compare constant terms with what you get from Solve[(eq /. k -> 0) == 0, \[Sigma]]. Much smaller and in agreement with the larger result after some random substitution.

– Daniel Lichtblau

Dec 9 '15 at 15:12

I would go about this differently. You have, in effect, a curve parametrized by sigma and k (regard the other parameters as fixed for the moment). This curve is the zero set for eq. You are interested in expanding sigma as a series in k, around k=0. This can be done by setting up equations for derivatives of sigma regarded explicitly as a function of k; these equations arise from differentiating the original one. As we are evaluating derivatives at k=0 we can make that substitution after differentiating but before solving. Here is the process.

eq = Er k^2 \[Gamma] \[Xi] (1 + \[Lambda] Cos[2 \[Phi]]) (3 I k Cos[\[Phi]] +
      4 \[Sigma] Cos[2 \[Phi]] + I k Cos[3 \[Phi]]) Subscript[\[Alpha], ac] +
    (\[Xi] + \[Sigma]) (\[Sigma] + I k Cos[\[Phi]]) (8 k^4 + 16 k^2 tf1 +
      2 k^4 \[Gamma] + k^4 \[Gamma] \[Lambda]^2 + 8 Er k^2 \[Gamma] \[Sigma] +
      16 Er tf1 \[Gamma] \[Sigma] + 4 k^4 \[Gamma] \[Lambda] Cos[2 \[Phi]] +
      k^4 \[Gamma] \[Lambda]^2 Cos[4 \[Phi]] +
      4 k^2 (k^2 + Er \[Gamma] \[Sigma]) Sin[2 \[Phi]]^2 Subscript[\[Mu], 1] +
      4 (k^2 + Er \[Gamma] \[Sigma]) (k^2 + tf1 - tf1 Cos[2 \[Phi]]) Subscript[\[Mu], 2]);

eqk = eq /. \[Sigma] -> \[Sigma][k];

derivsys = Table[D[eqk, {k, j}], {j, 0, 3}];

derivvars = Table[D[\[Sigma][k], {k, j}], {j, 0, 3}];

newvars = Array[d, 4, 0];

subs = Thread[derivvars -> newvars];

sys = (derivsys /. subs) /. k -> 0;

soln = Solve[sys == 0, newvars];

During evaluation of In[425]:= Solve::svars: Equations may not give solutions for all “solve” variables. >>

In[436]:= Most[newvars].(k^Range[0, 2]/Factorial[Range[0, 2]]) /. soln

(* Out[436]= {(k^2 Sec[\[Phi]] (16 tf1 Cos[\[Phi]] +
      3 Er \[Gamma] Cos[\[Phi]] Subscript[\[Alpha], ac] +
      3 Er \[Gamma] \[Lambda] Cos[\[Phi]] Cos[2 \[Phi]] Subscript[\[Alpha], ac] +
      Er \[Gamma] Cos[3 \[Phi]] Subscript[\[Alpha], ac] +
      Er \[Gamma] \[Lambda] Cos[2 \[Phi]] Cos[3 \[Phi]] Subscript[\[Alpha], ac] +
      4 tf1 Cos[\[Phi]] Subscript[\[Mu], 2] -
      4 tf1 Cos[\[Phi]] Cos[2 \[Phi]] Subscript[\[Mu], 2]))/
    (4 Er tf1 \[Gamma] (-4 - Subscript[\[Mu], 2] +
      Cos[2 \[Phi]] Subscript[\[Mu], 2])),
   -I k Cos[\[Phi]] + (k^2 (1 + \[Lambda] Cos[2 \[Phi]]) (-3 Cos[\[Phi]] +
      4 Cos[\[Phi]] Cos[2 \[Phi]] - Cos[3 \[Phi]]) Sec[\[Phi]] Subscript[\[Alpha], ac])/
    (4 tf1 (-4 - Subscript[\[Mu], 2] + Cos[2 \[Phi]] Subscript[\[Mu], 2])),
   -\[Xi] - (k^2 Cos[2 \[Phi]] (1 + \[Lambda] Cos[2 \[Phi]]) Subscript[\[Alpha], ac])/
    (tf1 (-4 - Subscript[\[Mu], 2] + Cos[2 \[Phi]] Subscript[\[Mu], 2]))} *)

Astute readers will notice there was a wrinkle. Due to multiplicity and/or factors of sigma in the reduced derivative equations (by "reduced" I mean with the k -> 0 substitution), we get underdetermined systems. A way around this is to take more derivatives than we actually need, enough to get solutions for the lower derivatives that we require in the expansion. Above I used one extra derivative. I'm sure there is a theory behind this need for extra derivatives (some aspect of prolongation, maybe), but I do not know the details at all.
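[Editor's note] The derivative-bootstrapping procedure in this answer can be illustrated on a toy curve in SymPy; the equation F, the branch choice, and the unknowns d0, d1, ... below are illustrative stand-ins, not the answer's own system:

```python
# Toy version of the derivative-based expansion: F = sigma^2 - sigma + k^2,
# expanded around k = 0 on the branch with sigma(0) = 0 (exact root:
# sigma = (1 - sqrt(1 - 4 k^2))/2 = k^2 + O(k^4)).
import sympy as sp

k = sp.symbols('k')
sig = sp.Function('sigma')

F = sig(k)**2 - sig(k) + k**2
order = 2

# d^j F / dk^j evaluated at k = 0, for j = 0..order.
derivs = [sp.diff(F, k, j).subs(k, 0) for j in range(order + 1)]

# Rename sigma(0), sigma'(0), ... to plain unknowns d0, d1, ...
d = sp.symbols(f'd0:{order + 1}')
rename = {sp.diff(sig(k), k, j).subs(k, 0): d[j] for j in range(order + 1)}
sys = [e.subs(rename) for e in derivs]

# Solve the system and keep the branch with sigma(0) = 0.
sols = sp.solve(sys, list(d), dict=True)
branch = next(sol for sol in sols if sol[d[0]] == 0)

# Assemble the Taylor polynomial sum_j sigma^(j)(0) k^j / j!.
taylor = sum(branch[d[j]]*k**j/sp.factorial(j) for j in range(order + 1))
print(taylor)  # k**2
```

As in the answer, each branch of the solved system gives one root's expansion, and the hard closed-form solve of the original equation is avoided entirely.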

Thanks for the interesting method. Your code seems to be incorrect: my result is different. You never use the variable derivs.

– Mikhail Genkin

Dec 9 '15 at 19:04

(1) Thanks for spotting that mistake. I had some earlier definitions and decided to change notation; what got posted was a bit of a mixture. I believe I have corrected it now.

– Daniel Lichtblau

Dec 9 '15 at 19:41

(2) It is possible that the solution sets are equivalent. One way to check is to substitute values in for most or all parameters (other than k of course) and see if the sets of three numeric series agree.

– Daniel Lichtblau

Dec 9 '15 at 19:42

I checked it against the direct method. The solutions are equivalent, as they should be. I also got results really fast for the 4th-order equation, which is not the case with the direct method. You helped me a lot.

– Mikhail Genkin

Dec 9 '15 at 20:10

I still accepted the other answer. Even though your answer is useful, the other one directly answers my question, whereas yours is more of a workaround. Thanks again.

– Mikhail Genkin

Dec 11 '15 at 17:39