Optimizing Numerical Analysis: Finding the Smallest Maximum of g(x) on [-1,1]

In summary, the thread discusses how to choose points x1, x2, and x3 so that the maximum of |g(x)| over the interval [-1,1] is as small as possible. The original poster is unsure how to approach the problem and asks for suggestions. The responder explains that the problem reduces to minimizing the largest local maximum of g(x)^2, and that exploiting the symmetry of the graph of g(x)^2 simplifies the problem enough that the Remez algorithm is not needed. This is an example of finding a minimax polynomial, a concept with applications in game theory and function design.
  • #1
buzzmath
Let g(x) = (x - x1)(x - x2)(x - x3), with -1 <= x1 < x2 < x3 <= 1. Find the points x1, x2, x3 so that
max |g(x)| for x an element of [-1,1] is as small as possible.

This is the only problem I have left and I have no idea how to do it. We've been covering Newton's method and root finding, but I don't think that relates, and I don't know of a method to solve this problem. Any suggestions?

Thanks
 
  • #2
First thing to do is get rid of that nasty absolute value. You want to minimize the maximum of |g(x)|. You should be able to convince yourself that this is the same as minimizing the maximum of g(x)^2.
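A quick numerical sanity check of this equivalence (a sketch; the values of x1, x2, x3 are arbitrary and chosen only for illustration):

```python
# Check that the maximum of |g(x)| and the maximum of g(x)^2 agree
# (the latter is the square of the former), so either can be minimized.
def g(x, x1, x2, x3):
    return (x - x1) * (x - x2) * (x - x3)

x1, x2, x3 = -0.7, 0.1, 0.8                      # arbitrary illustrative roots
xs = [-1 + 2 * i / 1000 for i in range(1001)]    # sample grid on [-1, 1]

max_abs = max(abs(g(x, x1, x2, x3)) for x in xs)
max_sq = max(g(x, x1, x2, x3) ** 2 for x in xs)

print(max_abs, max_sq ** 0.5)   # the two maxima agree
```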
 
  • #3
But how would you go about minimizing a function with 3 unknown constants and a variable between -1 and 1? If I take g(x)^2 and differentiate it, I get
2(x^3 - (x1+x2+x3)x^2 + (x1x2+x1x3+x2x3)x - x1x2x3)(3x^2 - 2(x1+x2+x3)x + (x1x2+x1x3+x2x3)), and then setting that equal to zero won't tell me the x1, x2, x3 values, only the x value for specific x1, x2, x3 points. This is the way I've minimized functions in the past. How would you minimize a function of x on an interval with many unknown constants, where what you're minimizing is also a max? It doesn't really make sense to me to minimize a maximum value.
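One way to check a derivative like this without expanding anything is to compare the product-rule form 2 g(x) g'(x) against a finite-difference derivative of g(x)^2 (a sketch; the parameter values are arbitrary):

```python
# Numerically verify d/dx [g(x)^2] = 2 g(x) g'(x), with g left in factored form.
def g(x, x1, x2, x3):
    return (x - x1) * (x - x2) * (x - x3)

def gprime(x, x1, x2, x3):
    # derivative of the three-factor product, by the product rule
    return ((x - x2) * (x - x3)
            + (x - x1) * (x - x3)
            + (x - x1) * (x - x2))

x1, x2, x3 = -0.6, 0.2, 0.9   # arbitrary illustrative roots
x, h = 0.3, 1e-6

# central-difference approximation to d/dx [g(x)^2]
fd = (g(x + h, x1, x2, x3) ** 2 - g(x - h, x1, x2, x3) ** 2) / (2 * h)
exact = 2 * g(x, x1, x2, x3) * gprime(x, x1, x2, x3)

print(fd, exact)   # the two values agree
```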
 
  • #4
A couple of pointers. First, don't call it just g(x). Better would be g(x;x1,x2,x3). You have three parameters here in addition to the variable x.

Second, don't be so quick to expand. For example, instead of expanding [itex]g(x;x1,x2,x3)^2[/itex] first and then differentiating, do it the other way around:

[tex]\frac{\partial}{\partial x} g(x;x1,x2,x3)^2 =
2g(x;x1,x2,x3)\;\frac{\partial}{\partial x} (g(x;x1,x2,x3))[/tex]

Setting this to zero yields

[tex]g(x;x1,x2,x3) \;\frac{\partial}{\partial x} (g(x;x1,x2,x3)) = 0[/tex]

The points [itex]x[/itex] at which [itex]g(x;x1,x2,x3)=0[/itex] represent the local minima in [itex]g(x;x1,x2,x3)^2[/itex]. You want the local maxima, not the local minima. You can ignore the solutions [itex]g(x;x1,x2,x3)=0[/itex]. The local maxima are the solutions to

[tex]\frac{\partial}{\partial x} g(x;x1,x2,x3) = 0[/tex]

This is a quadratic equation with potentially two solutions. Find these. They give the points at which [itex]|g(x)|[/itex] reaches a local maximum. The function [itex]|g(x)|[/itex] might also reach its maximum at the boundary points (-1 and 1). The function will reach its maximum value over [-1,1] at one of these four points.

Now you want the partial derivatives of [itex]g(x;x1,x2,x3)^2[/itex] wrt each of the xi evaluated at this maximal x to be zero. That's three simultaneous equations in three variables.

Minimizing the maximum (or maximizing the minimum) is used in many places in mathematics. One area is game theory. Here the goal is to find the move that maximizes the "me versus the other guy" score even if the other guy finds the move that minimizes this same score.

Another area is function design. Suppose you are asked to design an approximation g(x) to some hard-to-calculate function f(x) over some range. Most people will use a least squares approximation. This minimizes the root mean square error. A user of this function is more likely concerned with the worst case error, not the root mean square error. The worst case error is max(|f(x)-g(x)|). This is exactly what you are asked to do in this problem.
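The evaluation step described above can be sketched numerically: solve g'(x) = 0 with the quadratic formula, then compare |g| at those roots and at the endpoints -1 and 1. The root triples below are hypothetical, chosen only to illustrate that one choice beats another:

```python
import math

# Maximum of |g(x)| on [-1, 1]: check the (at most) two roots of
# g'(x) = 0 plus the interval endpoints.
def max_abs_g(x1, x2, x3):
    def g(x):
        return (x - x1) * (x - x2) * (x - x3)

    # g(x) = x^3 - e1 x^2 + e2 x - e3, so g'(x) = 3x^2 - 2 e1 x + e2
    e1 = x1 + x2 + x3
    e2 = x1 * x2 + x1 * x3 + x2 * x3

    candidates = [-1.0, 1.0]
    disc = (2 * e1) ** 2 - 4 * 3 * e2
    if disc >= 0:
        for sign in (1, -1):
            r = (2 * e1 + sign * math.sqrt(disc)) / 6
            if -1 <= r <= 1:
                candidates.append(r)
    return max(abs(g(x)) for x in candidates)

# Illustrative comparison of two choices of roots
print(max_abs_g(-0.5, 0.0, 0.5))   # worst-case |g| = 0.75, at the endpoints
print(max_abs_g(-0.8, 0.0, 0.8))   # smaller worst case: spreading the roots helps
```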
 
  • #5
Another hint: You should be able to show that the graph of g(x)^2 needs to be symmetric about x=0. This simplifies the problem immensely.
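If the roots are symmetric about 0 (that is, x2 = 0 and x3 = -x1 = t), then g(x) = x^3 - t^2 x and the whole problem collapses to a single parameter. A brute-force sketch of the one-dimensional search:

```python
# With symmetric roots (-t, 0, t), g(x) = x^3 - t^2 x.
# Minimize the sampled maximum of |g| over the single parameter t.
def max_abs_g(t):
    xs = [-1 + 2 * i / 2000 for i in range(2001)]   # grid on [-1, 1]
    return max(abs(x ** 3 - t ** 2 * x) for x in xs)

ts = [i / 1000 for i in range(1001)]    # candidate t values in [0, 1]
best_t = min(ts, key=max_abs_g)
print(best_t, max_abs_g(best_t))   # best_t near sqrt(3)/2 ~ 0.866, max near 1/4
```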
 
  • #6
Just one more note (I promise!) ...

This problem is a nice introduction to the concept of a minimax polynomial. One way to solve for such a beast is the Remez exchange algorithm. The Wikipedia article is pretty good; the MathWorld article is mediocre. There is no need to use the Remez algorithm on this problem. Exploiting the symmetry makes the answer fall out.
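For reference, the answer that falls out here is a scaled Chebyshev polynomial: g(x) = T3(x)/4 = x^3 - (3/4)x, which equioscillates between -1/4 and +1/4 at four points. A quick check:

```python
# The monic cubic with the smallest sup-norm on [-1, 1] is the scaled
# Chebyshev polynomial T3(x)/4 = x^3 - (3/4)x. It attains |g| = 1/4
# with alternating signs at four points: -1, -1/2, 1/2, 1.
def g(x):
    return x ** 3 - 0.75 * x

extremes = [-1.0, -0.5, 0.5, 1.0]
values = [g(x) for x in extremes]
print(values)   # alternates: -1/4, +1/4, -1/4, +1/4

# |g| never exceeds 1/4 anywhere on a fine grid over [-1, 1]
max_on_grid = max(abs(g(-1 + 2 * i / 4000)) for i in range(4001))
print(max_on_grid)
```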
 

FAQ: Optimizing Numerical Analysis: Finding the Smallest Maximum of g(x) on [-1,1]

1. What is the purpose of optimizing numerical analysis?

The purpose of optimizing numerical analysis is to find the most efficient and accurate method for solving mathematical problems using numerical techniques. This allows scientists and researchers to obtain more precise results in a shorter amount of time.

2. How do you define "smallest maximum" in the context of g(x) on [-1,1]?

In this context, the "smallest maximum" refers to minimizing, over the choice of the roots x1, x2, x3, the maximum value of |g(x)| on the interval [-1,1]. That is, among all allowed choices of roots, we seek the one whose worst-case |g(x)| is lowest.

3. What is the importance of finding the smallest maximum of g(x) on [-1,1]?

Finding the smallest maximum of |g(x)| on [-1,1] matters because it produces the polynomial with the smallest worst-case magnitude on the interval. Controlling worst-case error, rather than average error, is crucial in fields such as engineering, physics, and economics, where the largest deviation rather than the typical one determines performance.

4. How is numerical analysis used to optimize the smallest maximum of g(x) on [-1,1]?

Numerical analysis offers methods such as root-finding algorithms, grid searches, and exchange algorithms (e.g. the Remez algorithm) to approximate the roots x1, x2, x3 that minimize the maximum of |g(x)| on [-1,1]. These methods use iterative numerical calculations to converge on an accurate solution.
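A minimal brute-force sketch of such a numerical approach (a coarse grid search over ordered triples, for illustration only):

```python
# Coarse grid search over ordered triples (x1, x2, x3) in [-1, 1],
# minimizing the sampled maximum of |g(x)| on the interval.
def max_abs_g(x1, x2, x3):
    xs = [-1 + 2 * i / 200 for i in range(201)]   # sample grid on [-1, 1]
    return max(abs((x - x1) * (x - x2) * (x - x3)) for x in xs)

grid = [-1 + i / 10 for i in range(21)]           # candidate roots, step 0.1
best = min(
    ((x1, x2, x3) for x1 in grid for x2 in grid for x3 in grid if x1 < x2 < x3),
    key=lambda p: max_abs_g(*p),
)
print(best, max_abs_g(*best))   # winning triple is symmetric about 0
```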

5. Can the smallest maximum of g(x) on [-1,1] be found analytically?

For a general function, the smallest maximum often cannot be found analytically, and numerical methods are required to approximate it. For this particular problem, however, the symmetry argument yields a closed-form answer: the optimal g(x) is a scaled Chebyshev polynomial.
