I hope I can make this question clear enough.
When we have a function such as f(x) = 1/x and calculate the one-sided limits at x = 0, the right-hand limit goes to positive infinity and the left-hand limit goes to negative infinity. In calculus we plug in values closer and closer to zero and see what f(x) does: for example x = 1/10, then 1/100, then 1/500, and so on. Is there a more rigorous way to prove that the function is in fact going to infinity?
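To make concrete what I mean by "plugging in values", here is a minimal sketch in Python of the table I build by hand (the function name and the particular sample points are just my choices, nothing standard):

```python
# Numerically probing f(x) = 1/x near x = 0 from both sides.
def f(x):
    return 1 / x

# Approaching 0 from the right: values grow without bound.
for x in [0.1, 0.01, 0.001, 1e-6, 1e-9]:
    print(f"f({x}) = {f(x):.3e}")

# Approaching 0 from the left: values grow without bound in the negative direction.
for x in [-0.1, -0.01, -0.001, -1e-6, -1e-9]:
    print(f"f({x}) = {f(x):.3e}")
```

This only shows a trend for finitely many points; my question is how one proves the limit is infinite rather than just observing it.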
Another example: f(x) = 1/[sqrt(9 + x) - 3]. If x = 0 we have a division by zero. Now if I plug in something small such as 10^(-10), it's not zero, but we are going beyond the precision of a hand calculator. If I plot this graph and zoom in enough, at some point Google warns that the graph may be wrong due to precision errors. Is there some theory behind such precision errors? Could they lead us to think that the graph is increasing or decreasing when it's not?
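Here is a small sketch of the precision problem I am describing, assuming ordinary double-precision floats (the function names are mine, not from any calculator or plotting tool). The direct form subtracts two nearly equal numbers, sqrt(9 + x) and 3, so most significant digits cancel; an algebraically equivalent rewrite avoids that:

```python
import math

def f_naive(x):
    # Direct evaluation: sqrt(9 + x) - 3 suffers catastrophic cancellation
    # when x is tiny, because two nearly equal numbers are subtracted.
    return 1 / (math.sqrt(9 + x) - 3)

def f_rewritten(x):
    # Equivalent form after multiplying top and bottom by sqrt(9 + x) + 3:
    # 1 / (sqrt(9 + x) - 3) = (sqrt(9 + x) + 3) / x, which has no cancellation near 0.
    return (math.sqrt(9 + x) + 3) / x

for x in [1e-6, 1e-10, 1e-14]:
    print(x, f_naive(x), f_rewritten(x))
```

For the smallest inputs the two columns disagree noticeably, which I assume is the same effect behind the warning the plotting tool shows when I zoom in.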