I know I’m no Jedi at solving equations and minimizing functions, but now I have this problem, and the solution is eluding me.

I want to solve this equation:

der1 = (a (0.50984 + 2.75322 b))/(0.0649842 + 0.70185 b - 0.871367 b^2)^2 -
  (54393.7 (1 + 0.9807 b + 0.961772 b^2 - 0.94321 b^3))/(1 - 0.9807 b)^3 -
  (160032. b (1 + 0.9807 b + 0.961772 b^2 - 0.94321 b^3))/(1 - 0.9807 b)^4 +
  (13866. (-3.84709 b - 7.54568 b^2 + 11.1001 b^3))/(1 - 0.9807 b)^3

It is not simple, but not that difficult either.

I want to find the solutions of der1 == 0, or better, to fix a tolerance for the solution, e.g. epsilon = 10^(-8); that is, I want the objective function

der1

to be driven to within epsilon of zero.

sol2 = NMinimize[{der1, der1 >= 0 && 1000 < a < 2000 && b > 0}, {a, b}]

Results:

(*{0.0000140716, {a -> 1600., b -> 0.0698124}}*)

{0.000423697, {a -> 1999.91, b -> 0.0860871}}

But I know that the solution is

{a -> 1611.14715490848, b -> 0.0702993597886862}

In fact:

der1 /. {a -> 1611.14715490848, b -> 0.0702993597886862}

(*1.20053*10^-10*)

What’s wrong with my approach?

Thanks in advance.

=================


On my machine, der1 /. {a -> 1611.14715490848, b -> 0.0702993597886862} gives 0.0304727 instead of the value in your last statement. Perhaps you made a copy/paste error.

– Dr. belisarius

Sep 22 ’14 at 16:10


You have 12 questions now and never accepted an answer. It’s time to start! meta.stackexchange.com/q/5234/152358

– Dr. belisarius

Sep 22 ’14 at 16:14

I suggest you take a look at the guidance here: reference.wolfram.com/language/tutorial/…

– blochwave

Sep 22 ’14 at 16:18

=================

1 Answer


=================

Introduction

My first suggestion is to learn a little more about optimization. A good tutorial can be found here from Wolfram: http://reference.wolfram.com/language/tutorial/ConstrainedOptimizationGlobalNumerical.html

Analysis

Now let’s have a closer look at your problem.

der1 = (a (0.50984 + 2.75322 b))/(0.0649842 + 0.70185 b - 0.871367 b^2)^2 -
  (54393.7 (1 + 0.9807 b + 0.961772 b^2 - 0.94321 b^3))/(1 - 0.9807 b)^3 -
  (160032. b (1 + 0.9807 b + 0.961772 b^2 - 0.94321 b^3))/(1 - 0.9807 b)^4 +
  (13866. (-3.84709 b - 7.54568 b^2 + 11.1001 b^3))/(1 - 0.9807 b)^3;

First I’ll try NMinimize with Method -> Automatic:

sol1 = NMinimize[{der1, der1 >= 0 && 1000 < a < 2000 && b > 0}, {a, b},
  Method -> Automatic]

(* {0.000415114, {a -> 1999.69, b -> 0.0860787}} *)

Now I’ll try it again, specifying some of the methods available to NMinimize, which are described in the link above and also under the Options tab in the documentation.

sol2 = NMinimize[{der1, der1 >= 0 && 1000 < a < 2000 && b > 0}, {a, b},
  Method -> "DifferentialEvolution"]

(* {0.00024717, {a -> 1357.61, b -> 0.0586481}} *)

sol3 = NMinimize[{der1, der1 >= 0 && 1000 < a < 2000 && b > 0}, {a, b},
  Method -> "NelderMead"]

(* {0.000415114, {a -> 1999.69, b -> 0.0860787}} *)

sol4 = NMinimize[{der1, der1 >= 0 && 1000 < a < 2000 && b > 0}, {a, b},
  Method -> "SimulatedAnnealing"]

(* {-7.66704*10^-10, {a -> 1955.55, b -> 0.0843917}} *)

sol5 = NMinimize[{der1, der1 >= 0 && 1000 < a < 2000 && b > 0}, {a, b},
  Method -> "RandomSearch"]

(* {9.00162*10^-7, {a -> 1684.55, b -> 0.0734537}} *)

Using Method -> "SimulatedAnnealing" gets pretty close to your desired tolerance, but with a solution different from the one you’ve given.

But if you start to specify some of the parameters available to each method, DifferentialEvolution also performs well.

sol6 = NMinimize[{der1, der1 >= 0 && 1000 < a < 2000 && b > 0}, {a, b},
  Method -> {"DifferentialEvolution", "ScalingFactor" -> 2}]

(* {-7.7307*10^-10, {a -> 1862.61, b -> 0.0807573}} *)

Again, with different parameters compared to your solution.
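Incidentally, the slightly negative minimum is not necessarily a failure of the der1 >= 0 constraint: NMinimize handles constraints numerically, so tiny violations on the order of 10^-10 can fall within its working tolerance. Note too that the reported minimum is just the objective (here der1 itself) evaluated at the reported point, which you can confirm directly:

der1 /. Last@sol6

This returns the same value NMinimize printed as the minimum, since the objective and der1 coincide.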

Also, are you sure about your solution? Because:

sol7 = NMinimize[{der1, der1 >= 0 && 1000 < a < 2000 && b > 0}, {a, b},
  Method -> "SimulatedAnnealing", WorkingPrecision -> 30]

(* {5.45696821063756942749023437500*10^-11,
   {a -> 1532.94819887066811794693992515,
    b -> 0.0668370411594986600919874388732}} *)

der1 /. Last@sol7

(* 5.45697*10^-11 *)

Which is smaller than your value. That said, this run throws a warning (NMinimize::precw) that the precision of the argument function is less than the working precision, so this result may be unreliable.
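One way to address that warning, sketched here on the assumption that the machine-precision coefficients can simply be treated as exact (sol8 is just a fresh name), is to rationalize der1 before asking for high working precision:

der1Exact = Rationalize[der1, 0];
sol8 = NMinimize[{der1Exact, der1Exact >= 0 && 1000 < a < 2000 && b > 0},
  {a, b}, Method -> "SimulatedAnnealing", WorkingPrecision -> 30]

Rationalize[expr, 0] replaces every approximate real in the expression with an exact rational, which silences NMinimize::precw; whether the rationalized coefficients still faithfully represent your underlying model is for you to judge.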

In short, optimization (particularly global optimization, which is what you appear to be after) can be quite tricky. I found the tutorial I’ve linked to at the top of this post to be very helpful, particularly in selecting appropriate options and methods.
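Finally, note that der1 == 0 is a single equation in two unknowns, so its solutions form a one-parameter family: the various {a, b} pairs found above are all approximate roots, not competing answers. If the goal is really to solve the equation rather than to minimize, a root finder is more direct. As a sketch, fixing a at your quoted value (the starting point 0.07 is just a guess):

FindRoot[der1 /. a -> 1611.14715490848, {b, 0.07}]

which should recover something close to your b -> 0.0702993597886862.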


Thanks a lot. I’ll read the tutorial. It is strange that DifferentialEvolution (sol6) gives a negative value, even though we required that der1 >= 0. Is that normal? Also, when I run a minimization I don’t know whether the result is a local or a global minimum, and that is a problem!

– Mary

Sep 22 ’14 at 16:53