Why Avoid Low Values of h in Derivative Estimations?

In summary: However, if you had a computer that kept numbers to 10 decimal places, the approximation would be ##(f(1.000 + 0.001) - f(1.000))/0.001 = 0.5##, because the stored value of (0.5)(1.000 + 0.001) = 0.5005 would no longer be indistinguishable from 0.5000. So, as the number of digits of precision increases, the error in the difference quotient becomes smaller and smaller.
  • #1
Ethan Singer
So I just began a course on Linear Algebra and was curious about how we can estimate derivatives using centered differences. After a few minutes of research, I found a proof involving a truncation error, which led me to the conclusion that when estimating derivatives, the rate of change may determine how accurate the estimate is... so my question is: why?

That is to say, within the mentioned proof, they say that it's best to avoid low values of "h" when estimating derivatives, because if the derivative doesn't change rapidly, the value may be too close to zero... So, in summary:

Why is it important to avoid zeroes in calculation? (In the sense that when estimating derivatives, if a particular value is too small, errors may ensue)

And what characterizes a function that changes "too dramatically"?
 
  • #2
Hi,

Bit hard to answer in general: a concrete example would be easier to comment on.

First comment on 'a few minutes of research': read on in your text.

Generally this is about computing. In a computer, numbers are represented up to a certain relative precision, e.g. ##10^{-6}## or ##10^{-15}##. For a derivative you need the difference between two numbers that are almost equal (depending on the step size). That means the relative uncertainty in the difference can become intolerably high.
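
To see this numerically, here is a minimal Python sketch (my own illustration, not from the thread) that estimates ##\cos(1)## with a forward difference of ##\sin##. The error shrinks as ##h## decreases until roughly ##h \approx 10^{-8}##, after which subtracting two nearly equal double-precision numbers makes it grow again.

```python
# Forward-difference estimate of d/dx sin(x) at x = 1 for shrinking h.
# With ~16 significant digits, the best h is around 1e-8; smaller h loses
# accuracy because sin(x + h) and sin(x) agree in almost all their digits.
import math

x = 1.0
exact = math.cos(x)

for k in range(1, 16):
    h = 10.0 ** (-k)
    estimate = (math.sin(x + h) - math.sin(x)) / h
    print(f"h = 1e-{k:02d}   estimate = {estimate:.12f}   error = {abs(estimate - exact):.2e}")
```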

Functions that change too dramatically have large values for their derivatives. E.g. for a unit step located inside your ##[x, x+h]## interval, the difference quotient gives you ##1/h## as the estimated derivative.
 
  • #3
Ethan Singer said:
That is to say, within the mentioned proof, they say that it's best to avoid low values of "h" when estimating derivatives, because if the derivative doesn't change rapidly, the value may be too close to zero...

Consider the function ##f(x) = 0.5 x##. If you had a computer that kept numbers to only 3 decimal places, then the approximation for ##f'(1)## using ##h = 0.001## would be ##(f(1.000 + 0.001) - f(1.000))/0.001 = 0.000##, because the truncation to 3 decimal places makes ##(0.5)(1.000 + 0.001) = 0.5005## indistinguishable from ##0.500##.
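
A quick way to reproduce this example is to simulate the 3-decimal-place machine by rounding every stored value. This short Python sketch (my own illustration, not part of the original post; the helper name store is made up) prints 0.0 for the difference quotient, just as described above.

```python
# Simulate a machine that stores every number rounded to 3 decimal places.
def store(value, decimals=3):
    return round(value, decimals)

def f(x):
    return store(0.5 * x)   # f(1.001) and f(1.000) both become 0.500

h = 0.001
print((f(1.000 + h) - f(1.000)) / h)   # prints 0.0
```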
 

FAQ: Why Avoid Low Values of h in Derivative Estimations?

What is a truncation error?

A truncation error is a type of numerical error that occurs when a mathematical operation is approximated using a finite number of steps. It is the difference between the exact value and the approximate value obtained through the approximation process. For example, replacing the limit in the definition of a derivative with a forward difference quotient introduces a truncation error of order ##h##, while a centered difference quotient has a truncation error of order ##h^2##.

What causes truncation errors?

Truncation errors are caused by replacing an exact, often infinite, mathematical process with a finite approximation, for example cutting off a Taylor series after a few terms or replacing the limit in a derivative with a finite difference. They occur whenever numerical methods are used to approximate the solution to a problem.

How can truncation errors be minimized?

Truncation errors can be minimized by using smaller steps in the approximation process or by using more accurate (higher-order) numerical methods, such as a centered difference instead of a forward difference. Increasing the precision of the calculations reduces the accompanying round-off error, which is what limits how small the step can usefully be made.
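
As a concrete illustration of this point (a sketch of my own, using ##f(x) = \sin x## at ##x = 1## as an arbitrary test case), the centered difference mentioned in the original question reaches a given accuracy with a much larger ##h## than the forward difference, which keeps it away from the round-off-dominated regime:

```python
# Compare forward (O(h) truncation error) and centered (O(h^2)) differences.
import math

x = 1.0
exact = math.cos(x)

for h in (1e-1, 1e-2, 1e-3, 1e-4):
    forward = (math.sin(x + h) - math.sin(x)) / h
    centered = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
    print(f"h = {h:.0e}   forward error = {abs(forward - exact):.2e}   "
          f"centered error = {abs(centered - exact):.2e}")
```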

What is the difference between truncation errors and round-off errors?

Truncation errors and round-off errors are both types of numerical errors, but they have different sources. Truncation errors come from the approximation itself, for example using a difference quotient with a finite ##h## instead of the exact limit, while round-off errors come from representing and combining numbers with a finite number of digits at every step of the calculation, not only in the final result.
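
One way to see the two separately (a sketch of my own, using ##f(x) = x^2## purely as an example) is to compute the forward difference once in exact rational arithmetic, where the only error is the truncation error ##h##, and once in floating point, where round-off is added on top:

```python
# For f(x) = x^2 at x = 1, the forward difference is exactly 2 + h, so the
# truncation error is exactly h. Doing the same computation in floating
# point adds round-off error, which eventually dominates for small h.
from fractions import Fraction

for k in (1, 4, 8, 12):
    h = Fraction(1, 10 ** k)
    exact_quotient = ((1 + h) ** 2 - 1) / h          # equals 2 + h exactly
    truncation_error = abs(exact_quotient - 2)       # equals h

    hf = 10.0 ** (-k)
    float_quotient = ((1.0 + hf) ** 2 - 1.0) / hf
    float_error = abs(float_quotient - 2.0)          # truncation + round-off

    print(f"h = 1e-{k:02d}   truncation error = {float(truncation_error):.1e}   "
          f"float error = {float_error:.1e}")
```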

How can truncation errors affect the accuracy of calculations?

Truncation errors can accumulate and lead to significant differences between the exact solution and the approximate solution. This can result in a loss of accuracy in the final result of a calculation.
