Finding the Largest (or Smallest) Value of a Function, Given Some Constant Symmetric Errors

AI Thread Summary
The discussion centers on determining the appropriate sign for symmetric errors in a function to find its maximum or minimum values. It begins with a simple linear function and explores how to apply this concept to more complex functions, emphasizing the importance of using first-order derivatives to guide the choice of error signs. The participants agree that selecting the sign of the error based on the sign of the partial derivatives is a more efficient method than brute force checking of all combinations. However, concerns are raised about potential pitfalls with higher-order derivatives and the behavior of the function near peaks, which could lead to misleading results. Ultimately, evaluating the derivative at the perturbed point is suggested as a way to account for possible sign changes in the function's behavior.
erobz
Just wondering if there is a mathematical way to find which sign (##\pm##) to take on a symmetric measured error in a function ##f## of some variables. As an example, let's say we find formulaically that ##f = k x## with ##k>0##; we measure ##x## and append some symmetric error ##\pm \epsilon_x##. So we say:

$$ (f + \epsilon_f) - f = k( x + (\pm \epsilon_x) ) - kx $$

$$ \implies ( \epsilon_f ) = k (\pm \epsilon_x) $$

So by inspection, if we want to increase ##f## we don't want its change to be negative, thus we select ##+\epsilon_x##. And vice versa if we wish to find the smallest ##f##.

Now, let's increase the complexity of ##f## with more measured variables that have symmetric errors, for example:

$$f = \frac{kx}{y+1}$$

$$ \implies \epsilon_f = \frac{k(\pm \epsilon_x) ( y+1)-kx (\pm \epsilon_y) }{(y+1)^2+(y+1)(\pm \epsilon_y)}$$

Now I can still reason this one out: if we want ##f## to be its largest value, we make the numerator largest and the denominator smallest (taking ##k##, ##x##, and ##y+1## all positive):

$$ \implies \epsilon_f = \frac{k(+ \epsilon_x) ( y+1)-kx (- \epsilon_y) }{(y+1)^2+(y+1)(- \epsilon_y)}$$

What do you do if the function is complicated enough that it's not at all clear which sign combination will produce the upper/lower bound for the function?

Checking every sign combination by brute force will work, but it feels like this should be an optimization problem of some kind.
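For reference, the brute-force check is at least easy to automate. Here's a minimal sketch; the helper name and the numbers plugged in for ##k##, ##x##, ##y## are just illustrative, and note it only checks the ##2^n## corners of the error box:

[CODE lang="python"]
from itertools import product

def bounds_by_brute_force(f, values, errors):
    """Evaluate f at every corner of the error box (2**n sign
    combinations) and return the (smallest, largest) value found."""
    results = []
    for signs in product((-1.0, 1.0), repeat=len(values)):
        perturbed = [v + s * e for v, s, e in zip(values, signs, errors)]
        results.append(f(*perturbed))
    return min(results), max(results)

# The example above, f = kx/(y+1), with illustrative numbers:
# k = 2, x = 3 +/- 0.1, y = 1 +/- 0.05
f = lambda x, y: 2.0 * x / (y + 1.0)
print(bounds_by_brute_force(f, [3.0, 1.0], [0.1, 0.05]))
[/CODE]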
 
There isn't much guesswork. To first order we have
$$ \epsilon_f = \frac{\partial f}{\partial x}\epsilon_x + \frac{\partial f}{\partial y}\epsilon_y $$
To maximize this, we want ##\epsilon_x## to be positive if ##\frac{\partial f}{\partial x}## is positive and negative if ##\frac{\partial f}{\partial x}## is negative, etc.
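A minimal numerical sketch of this sign rule, using central finite differences for the partials; the helper name and the step size ##h## are my own choices, not anything standard:

[CODE lang="python"]
def bounds_first_order(f, values, errors, h=1e-6):
    """Choose each error's sign from the sign of the matching partial
    derivative: with the derivative for the maximum, against it for
    the minimum. Partials are estimated by central differences."""
    grads = []
    for i in range(len(values)):
        up, dn = list(values), list(values)
        up[i] += h
        dn[i] -= h
        grads.append((f(*up) - f(*dn)) / (2.0 * h))
    hi_pt = [v + (e if g >= 0 else -e) for v, e, g in zip(values, errors, grads)]
    lo_pt = [v - (e if g >= 0 else -e) for v, e, g in zip(values, errors, grads)]
    return f(*lo_pt), f(*hi_pt)

# Same illustrative example: f = 2x/(y+1), x = 3 +/- 0.1, y = 1 +/- 0.05
f = lambda x, y: 2.0 * x / (y + 1.0)
print(bounds_first_order(f, [3.0, 1.0], [0.1, 0.05]))
[/CODE]

This costs ##2n + 2## evaluations of ##f## rather than ##2^n##.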
 
pasmith said:
There isn't much guesswork. To first order we have
$$ \epsilon_f = \frac{\partial f}{\partial x}\epsilon_x + \frac{\partial f}{\partial y}\epsilon_y $$
To maximize this, we want ##\epsilon_x## to be positive if ##\frac{\partial f}{\partial x}## is positive and negative if ##\frac{\partial f}{\partial x}## is negative, etc.
That's probably more efficient than checking each combination. :smile:

So you just look at the first-order change, and that will always correctly decide how to arrange the signs to get a minimum or maximum. So for a minimum we choose the sign opposite to that of each partial derivative.

Then you just plug the accordingly signed errors into the actual function and we are good to go?

You say "to first order"; are there caveats where the higher-order derivatives will bungle this up?
 
Another thing: what if the function has a peak near the measured variable? Imagine our measurement is on one side of a peak; we evaluate this expression and it tells us to select the positive error. However, when I put in the finite error, I could end up lower than I would have otherwise if it crosses the peak.

Does evaluating the derivative ## \left. \frac{\partial f }{\partial x } \right|_{x+\epsilon_x} ## cover the possible sign change?
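One way to guard against exactly that: if ##\frac{\partial f}{\partial x}## changes sign between ##x## and ##x+\epsilon_x##, the true extreme sits at an interior critical point rather than at either endpoint, so scanning the whole interval is safer than checking the derivative at a single point. A minimal sketch with a hypothetical peaked ##f##; the grid size is arbitrary:

[CODE lang="python"]
import numpy as np

# Hypothetical peaked function: maximum at x = 1; measured x = 0.9 +/- 0.3,
# so the interval [0.6, 1.2] straddles the peak.
f = lambda x: -(x - 1.0) ** 2

def scan_bound_1d(f, x, eps, num=2001):
    """Scan [x - eps, x + eps] on a grid instead of trusting the
    derivative sign at one point; catches an interior peak."""
    xs = np.linspace(x - eps, x + eps, num)
    ys = f(xs)
    return ys.min(), ys.max()

print(scan_bound_1d(f, 0.9, 0.3))
# The maximum (~0) occurs at the interior peak x = 1, not at either
# endpoint: f(0.6) = -0.16, f(1.2) = -0.04.
[/CODE]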
 