Error in Polynomial Root Finding

I have a polynomial in y like so:

2.00855*10^20 + 6.89796*10^20 x y + (5.17347*10^20 + 5.92241*10^20 x^2) y^2 - 1.4806*10^21 x y^3 + 7.77316*10^20 y^4 == 0

I vary x, solve for y, and then plot:

PolyPlot[xmin_, xmax_, dx_] := (
  biglist = Flatten[Reap[Table[y /. Solve[
        2.008550963847368`*^20 + 6.897959183673469`*^20 Sow[x] y +
          (5.173469387754917`*^20 + 5.92240896`*^20 x^2) y^2 -
          1.4806022400000304`*^21 x y^3 + 7.773161759999733`*^20 y^4 == 0, y],
      {x, xmin, xmax, dx}]], 1];
  pairs = Flatten[Table[{biglist[[-1]][[i]], biglist[[i]][[j]]},
      {i, 1, Length[biglist] - 1}, {j, 1, 4}], 1];
  plot = ListPlot[pairs, PlotRange -> All];
  Show[plot]
  )

This seems to work: when I run it I get two real "branches", which is what I expected. However, when I plug the ordered pairs {x, y} back into the polynomial, I don't get zero. How do I increase the accuracy?

I’ve tried setting the WorkingPrecision to 200, but I still see large errors.

(Also, some realistic input values should be from ~10^5 to ~10^8 or higher)

=================

(1) It is difficult to increase accuracy of output without increasing accuracy of input. (2) One can use derivatives to estimate expected residuals.
– Daniel Lichtblau
Jun 18 at 3:18

=================

2 Answers

=================

The precision of the numbers in the equation is such that, near its roots, even a slight change in the input value lets rounding errors have a significant impact. The same thing happens in this simpler example with a smooth function: Plot[(Cos[1 + h] - Cos[1])/h, {h, 0, 1/100000000}].

You could use exact numbers and solve the equation just once, and then substitute different values for x.

eq = 2.008550963847368`*^20 + 6.897959183673469`*^20 x y + (5.173469387754917`*^20 +
   5.92240896`*^20 x^2) y^2 - 1.4806022400000304`*^21 x y^3 + 7.773161759999733`*^20 y^4;
eq2 = SetPrecision[eq, \[Infinity]];
sol = y /. Solve[eq2 == 0, y];
PolyPlot[xmin_, xmax_, dx_] := (
pairs = Flatten[Table[Thread[{i, sol /. x -> N[i, 150]}], {i, xmin, xmax, dx}], 1];
ListPlot[pairs, PlotRange -> All])

PolyPlot[10, 10^10 + 10, 10^8]
Table[eq2 /. {x -> pairs[[k, 1]], y -> pairs[[k, 2]]}, {k, Length[pairs]}]

Replacing eq2 by eq in the last line is pointless, because of roundoff error sensitivity.
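As a quick sanity check (my addition, assuming eq, eq2, and pairs from the code above have already been evaluated), one can compare the residual of the exact polynomial with that of the machine-precision one at the same computed root:

```mathematica
(* first computed root: x-value and corresponding y-value *)
{x0, y0} = pairs[[1]];

(* exact polynomial: residual is tiny, limited only by the
   150-digit substitution used for x *)
N[eq2 /. {x -> x0, y -> y0}]

(* machine-precision polynomial: roundoff among terms of size ~10^20
   dominates, so a residual of order 10^4 or larger is to be expected *)
eq /. {x -> N[x0], y -> N[y0]}
```

The second residual is not evidence of a wrong root; it is simply the cancellation error inherent in evaluating the machine-precision polynomial.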

First some references that are somewhat related. In all of them the large degree of the polynomials is a factor, in addition to the large coefficients. In this question, only the large coefficients are relevant.

This may be a duplicate Q&A:
NSolve for high degree univariate polynomials
Related: Funny behaviour when plotting a polynomial of high degree and large coefficients
A good read: James H. Wilkinson, “The Perfidious Polynomial,” Studies in Numerical Analysis, 1-28, MAA Stud. Math., 24, Math. Assoc. America, Washington, DC, 1984

On large and small residuals

When measuring the size of a residual, one should consider what scale is appropriate. For machine-precision work, the inherent relative error is usually about 10^-16 (since IEEE double-precision reals are the common floating-point numbers used by Mathematica). So when one finds a root x = x0 of f(x) = 0, a residual of about |f(x0)| ≈ |f'(x0)| × 10^-16 would not be uncommon, even assuming the root x0 was successfully computed with all its bits correct. In a numerical procedure there may also be rounding or other sources of error, which means only most of the bits are correct. The best one can hope for is that |f(x0)/f'(x0)| is no more than about 10^-16.
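For this question's polynomial that estimate is easy to compute. The sketch below (my addition; poly stands for the machine-precision polynomial from the question) gives the residual size one should expect at a correctly computed root:

```mathematica
(* machine-precision polynomial from the question *)
poly = 2.008550963847368`*^20 + 6.897959183673469`*^20 x y +
   (5.173469387754917`*^20 + 5.92240896`*^20 x^2) y^2 -
   1.4806022400000304`*^21 x y^3 + 7.773161759999733`*^20 y^4;

(* expected residual scale at a root {x0, y0}:
   gradient norm times the machine relative error ~10^-16 *)
expectedResidual[{x0_, y0_}] :=
  Norm[Grad[poly, {x, y}] /. {x -> x0, y -> y0}] * 10^-16
```

With coefficients of size ~10^20, the gradient is also huge, so residuals of order 10^4 and up are entirely consistent with a correct root.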

The following shows the relative error of the computed roots. There are about 400 that are exactly 0. (the RealExponent of 0. is Log10[$MinMachineNumber], or about -307.653). There also seems to be a large group that are off by one bit or less. Finally, there is a third group with a more significant error.

(* poly is the machine-precision polynomial from the question *)
poly = 2.008550963847368`*^20 + 6.897959183673469`*^20 x y +
   (5.173469387754917`*^20 + 5.92240896`*^20 x^2) y^2 -
   1.4806022400000304`*^21 x y^3 + 7.773161759999733`*^20 y^4;

PolyPlot[-400., 400, 1.];
errordata =
  Table[poly/Norm@Grad[poly, {x, y}] /. Thread[{x, y} -> xy0],
    {xy0, pairs}] // Abs // RealExponent;
MinMax[errordata]
ListPlot[errordata]

(* {-307.653, -12.6089} *)

So most of the solutions returned by Solve are highly accurate.

Count[errordata, x_ /; x > -16]
Count[errordata, x_ /; x <= -16]
(*  1243  1961  *)

On the proper use of SetPrecision

I hope you will forgive me for preaching a little bit. I have some disdain for the use of SetPrecision[expr, Infinity] (or increased precision) when it is for the purpose of showing that the apparent error can be made to disappear by converting the problem to an exact problem, which exact solvers will solve exactly. The original problem is given in approximate form, and I assume that means the numbers were derived from data and are only approximately known. (Indeed, one should be careful about assuming the double-precision coefficients are as accurate as their precision. In fact, Mathematica has ways to deal with the known accuracy of data, but I rarely see them used on this site.)

Converting an approximate problem to an exact one might make one feel that Mathematica magically found the exact solution. If so, one has been taken in. The uncertainty in the inputs of a problem should be allowed to propagate through the computed solution and show up in the answers and checks. This is what I tried to show above. Increasing the precision of the expression(s) in a problem seems to me to be falsely claiming to know more about the inputs than one does. It seems better to me to understand the results and the underlying numerics, especially in cases like this, where, I would claim, the results are correct.

On the other hand, I think it is okay to set the precision to infinity to analyze an approximate problem, as I did in the second linked answer above. Another exception is when the original problem had exact coefficients to begin with (say, from a mathematical problem, not a scientific one). In that case, one should probably not have converted them to machine-precision numbers.

Another way to use SetPrecision[] to analyze this problem is to use SetPrecision[expr, $MachinePrecision]. This converts the machine-precision coefficients to arbitrary-precision numbers that are declared to be accurate to machine precision. It also turns on Mathematica's built-in precision tracking, so that one can estimate the error by examining the computed results. Below we take the machine-precision results and check the residuals after converting them to $MachinePrecision. (Note that MachinePrecision and $MachinePrecision are not the same.)

residualsMP = Table[
    SetPrecision[poly, $MachinePrecision] /. Thread[{x, y} -> xy0],
    {xy0, SetPrecision[pairs, $MachinePrecision]}] // Abs;

residualsMP // Short[#, 5] &

AllTrue[residualsMP, PossibleZeroQ]
(* True *)

They all turn out to be "zero." In fact, the form 0.×10^16 means that the rounding error is around 10^16 and the computed value is closer to zero than that. We can examine these in closer detail. Let's look at the first and third residuals, which display differently above.

residualsMP[[{1, 3}]] // InputForm
(*
{0``-16.998893852353977,
 0``-5.4477916593052464}
*)

The computed (approximate) error bounds are 10^16.998893852353977 and 10^5.4477916593052464 (the negative sign is just how Accuracy works: an accuracy of a means an error bound of about 10^-a).
These should be bounds on the residuals at the computed solutions. As we can see below, the computed machine-precision residuals are indeed less than these:

poly /. Thread[{x, y} -> pairs[[1]]] // Abs // Log10
poly /. Thread[{x, y} -> pairs[[3]]] // Abs // Log10
(*
16.1307
3.86662
*)
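To see the Accuracy convention in action, here is a small illustrative example (my addition, not from the original computation). An arbitrary-precision "zero" carries an accuracy, and its error bound is 10 raised to the negated accuracy:

```mathematica
(* a zero known only to accuracy -17, i.e. to within about 10^17 *)
z = 0``-17;
Accuracy[z]

(* the stored error bound: numbers this size cannot be
   distinguished from zero at this accuracy *)
10^-Accuracy[z] // N
```

This is exactly the situation of the residuals above: a displayed 0.×10^16 is not a claim that the value is zero, only that it lies below the tracked error bound.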