Which function is more accurate?

In summary, the conversation discusses the accuracy of two functions, x^2-y^2 and (x-y)(x+y), in a base 10 number system with a precision of 5 digits. While intuition may suggest that (x-y)(x+y) is more accurate, a specific example shows that x^2-y^2 can actually be more accurate. The difference comes down to how each form accumulates rounding error at fixed precision.
  • #1
blalien

Homework Statement


You have a number system in base 10 with a precision of 5 digits. Which function is more accurate: x^2-y^2 or (x-y)(x+y)?

Homework Equations


None really.

The Attempt at a Solution


My intuition would tell me that (x-y)(x+y) is more accurate, since multiplication is less accurate than addition or subtraction (unless the two numbers are very close). But look at x = 1.0000, y = 0.00001:

Actual value: 0.9999999999

x^2 = 1.0000, y^2 = 1.0000e-10
x^2 - y^2 = 0.9999999999, which rounds to 1.0000

x-y = 0.99999
x+y = 1.00001 ~ 1
(x-y)(x+y) = 0.99999

So x^2-y^2 is more accurate, at least in this situation. I have no idea why, though.
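This 5-digit system can be mimicked with Python's decimal module (an assumption on my part: per-operation rounding to 5 significant digits stands in for the problem's number system):

```python
from decimal import Decimal, getcontext

getcontext().prec = 5  # round every operation to 5 significant digits

x = Decimal("1.0000")
y = Decimal("0.00001")

direct = x * x - y * y        # x^2 - y^2: y^2 is negligible next to x^2
factored = (x - y) * (x + y)  # x + y rounds to 1.0000, discarding y

print(direct)    # 1.0000
print(factored)  # 0.99999
```

The direct form wins here exactly as worked out above: rounding away y^2 costs almost nothing, while rounding x + y throws y away before the multiplication.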
 
  • #2
blalien said:
So x^2-y^2 is more accurate, at least in this situation. I have no idea why, though.
Which class is this for? If it's a programming class/computer architecture class/etc. the answer probably has to do with floating point errors and accumulated rounding errors.
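The same trade-off shows up in binary floating point. Here is a sketch with Python doubles, using values of my own choosing (not from the thread) picked so that squaring overflows the 53-bit mantissa:

```python
x = 1e8 + 1  # 100000001, exactly representable as a double
y = 1e8
exact = 200000001.0  # true x^2 - y^2 = (x - y)(x + y) = 1 * 200000001

direct = x * x - y * y        # x*x = 10000000200000001 cannot be stored exactly
factored = (x - y) * (x + y)  # every intermediate here is exact

print(factored == exact)  # True
print(direct == exact)    # False: rounding x*x loses the trailing 1
```

Here the factored form wins, the opposite of the 5-digit example: which form is better depends on where the rounding happens.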
 
  • #3


It is important to understand that accuracy and precision are two different concepts. Accuracy refers to how close a result is to the true or expected value, while precision refers to how many digits are carried (or how repeatable a measurement is). In this problem both functions work at the same precision, five significant digits, but their accuracy can vary depending on the values of x and y.

In the given example, x^2-y^2 comes out more accurate because y is so much smaller than x that y^2 is negligible next to x^2: the subtraction 1.0000 - 1.0000e-10 rounds back to 1.0000, which matches the true value to five digits. In (x-y)(x+y), by contrast, the sum x + y = 1.00001 must be rounded to 1.0000, which discards y entirely before the multiplication, so the product 0.99999 is off by about 1e-5.

However, this does not mean that x^2-y^2 is always more accurate than (x-y)(x+y). When x and y are nearly equal, x^2 and y^2 are each rounded before a subtraction that cancels most of their digits, and (x-y)(x+y) is usually the better choice. It is important to consider the likely sources of rounding error and choose the form that suits the values at hand.
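The flip side can be seen in the same 5-digit decimal model (again a sketch using Python's decimal module, with values I chose so that x and y are nearly equal):

```python
from decimal import Decimal, getcontext

getcontext().prec = 5  # 5 significant digits per operation

x = Decimal("1.2345")
y = Decimal("1.2344")

direct = x * x - y * y        # 1.5240 - 1.5237: both squares were rounded
factored = (x - y) * (x + y)  # 0.0001 * 2.4689: both factors are exact

print(direct)    # 0.0003
print(factored)  # 0.00024689 (this is the exact answer)
```

The subtraction of the two rounded squares leaves only one significant digit, a relative error of over 20%, while the factored form is exact.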
 

FAQ: Which function is more accurate?

What is the difference between accuracy and precision?

Accuracy refers to how close a measurement is to the true or accepted value, while precision refers to how close multiple measurements are to each other. A function can be accurate but not precise, or precise but not accurate.

How do you determine which function is more accurate?

The most common way to determine accuracy is by comparing the results of a function to a known or accepted value. The function with the result closest to the known value is considered more accurate.
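One common way to score results against a known value is relative error; here is a minimal sketch (the helper name `relative_error` is my own, not a standard function):

```python
def relative_error(approx: float, exact: float) -> float:
    """|approx - exact| / |exact|: smaller means more accurate."""
    return abs(approx - exact) / abs(exact)

exact = 0.9999999999                   # true x^2 - y^2 from the thread's example
print(relative_error(1.0, exact))      # ~1e-10 (the x^2 - y^2 result)
print(relative_error(0.99999, exact))  # ~1e-5  (the (x-y)(x+y) result)
```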

Can accuracy be improved in a function?

Yes, accuracy can be improved in a function by making adjustments or improvements to the method used for measurement or calculation. This can involve using more precise instruments or fine-tuning the algorithm used in the function.

Is it possible for a function to be both accurate and imprecise?

Yes, it is possible for a function to have a result that is very close to the true value, but also have a wide range of values in repeated measurements. This would make the function accurate but not precise.

What factors can affect the accuracy of a function?

There are several factors that can affect the accuracy of a function, including human error, limitations of measuring instruments, and flaws in the mathematical model used in the function. Other external factors such as environmental conditions can also impact accuracy.
