I am having a problem with the definition of divergence for improper integrals. My understanding of the logic behind convergence and divergence is that, for example, as the upper limit of an improper integral goes to infinity, the integrand approaches but never reaches zero. This leaves two possibilities: the area may be unbounded (the area under the curve grows without bound as the limit of integration goes to infinity), or the area may be bounded (the area under the curve, although never actually equal to its limiting value at any finite stage, approaches a finite number).
However, the definition seems to be at odds with this conception in certain instances. The definition states that an improper integral is divergent if any part of it is divergent. This includes problems whose evaluation introduces limits of the form lim x-->a f(x) + lim x-->b g(x), where evaluating the two limits separately produces the expression (infinity) - (infinity). (In some cases a = -infinity and b = infinity; in others a is a number approached from the left and b is that same number approached from the right.)
My question is: why, when dealing with improper integrals, is the expression (infinity) - (infinity) not treated as an indeterminate form, which might then turn out to be either divergent or convergent? Take as an example the improper integral of 1/x from -1 to 1. Intuition suggests that the negative and positive areas under the curve on either side of zero should cancel each other out, so that the integral would converge to 0.
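To make the example concrete, here is a sketch of the standard computation for 1/x on [-1, 1], split at the singularity at 0, alongside the symmetric limit (the Cauchy principal value) that captures the cancellation intuition:

```latex
% Standard definition: split at the singularity, take the limits independently.
\int_{-1}^{1} \frac{dx}{x}
  = \lim_{a \to 0^-} \int_{-1}^{a} \frac{dx}{x}
  + \lim_{b \to 0^+} \int_{b}^{1} \frac{dx}{x}
  = \lim_{a \to 0^-} \ln|a| \;-\; \lim_{b \to 0^+} \ln b
% Each one-sided piece diverges (the form is (-\infty) + (+\infty)),
% so under the standard definition the integral diverges.

% Cauchy principal value: tie the two limits together symmetrically.
\mathrm{p.v.}\!\int_{-1}^{1} \frac{dx}{x}
  = \lim_{\varepsilon \to 0^+}
    \left( \int_{-1}^{-\varepsilon} \frac{dx}{x}
         + \int_{\varepsilon}^{1} \frac{dx}{x} \right)
  = \lim_{\varepsilon \to 0^+}
    \bigl( \ln\varepsilon - \ln\varepsilon \bigr)
  = 0
```

So the cancellation intuition is not wrong; it corresponds to the principal value, which exists and equals 0 here even though the improper integral itself is declared divergent because each piece diverges on its own.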
Is there a logical justification for why divergence is defined the way it is, and can this be found anywhere in the literature? Thank you.