#36 mathwonk (Science Advisor, Homework Helper)
This is for @wolly, in particular the question in post #30.
Sorry for the confusion. I forgot that in many elementary books the word "integrate" is used as a synonym for "antidifferentiate", and "integral" is equated with "antiderivative". This, in my opinion, is very harmful to the student. The reason for this practice, of course, is the theorem that if a function f is continuous, then its integral, in the correct sense of a limit of sums, is a differentiable function of the upper limit, and that derivative is f. An integral, however, is by definition a limit of sums, and the antiderivative is merely a tool, or trick, for calculating it. For this reason, we all start out early using only this trick and largely forgetting the definition of the integral as a limit. The one exception is the excellent book of Apostol, where integrals are treated first and at length, before the derivative and its use in computing integrals are introduced.
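To make the limit-of-sums definition concrete, here is a minimal numerical sketch (Python with NumPy, my choice of language; the helper name riemann_integral is just illustrative): it approximates F(x) = integral of f from 0 to x by Riemann sums and checks that the difference quotient of F recovers the continuous integrand f, as the theorem promises.

```python
import numpy as np

def riemann_integral(f, a, b, n=100_000):
    """Approximate the integral of f over [a, b] by a left-endpoint
    Riemann sum with n subintervals -- the integral as a limit of sums."""
    x = np.linspace(a, b, n, endpoint=False)   # left endpoints
    return np.sum(f(x)) * (b - a) / n

f = np.cos                                     # a continuous integrand
F = lambda x: riemann_integral(f, 0.0, x)      # F(x) = integral of f from 0 to x

# Fundamental theorem: the difference quotient of F recovers f.
x0, h = 1.0, 1e-4
print((F(x0 + h) - F(x0)) / h)   # ~ cos(1) = 0.5403...
print(f(x0))
```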
The harm comes for several reasons. First of all, the theorem has a hypothesis: continuity of f. So what do we do when f is not continuous? In that case the integral may not be differentiable as a function of the upper limit. E.g. it is a basic theorem, following from the mean value theorem, that if f is a derivative, then it has the intermediate value property. In particular a step function is not a derivative, hence its integral is not an antiderivative, at least not in the usual sense; yet the integral of a step function is easily computed using sums. Thus the step functions underlying the most basic Riemann sums used to approximate integrals, although certainly integrable, are not themselves directly antidifferentiable.
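Here is a sketch of the step-function point in the same spirit (again Python/NumPy, my choice): the integral of a step function is easy to compute as a limit of sums, but the resulting function of the upper limit has a corner at the jump, so it is not differentiable there.

```python
import numpy as np

step = lambda x: np.where(x < 0.0, -1.0, 1.0)   # jump at x = 0

def F(x, n=200_000):
    """Integral of the step function from -1 to x, as a Riemann sum."""
    t = np.linspace(-1.0, x, n, endpoint=False)
    return np.sum(step(t)) * (x + 1.0) / n

# The two one-sided difference quotients at the jump disagree,
# so F has a corner at 0 and is not differentiable there.
h = 1e-3
print((F(0.0 + h) - F(0.0)) / h)   # ~ +1
print((F(0.0) - F(0.0 - h)) / h)   # ~ -1
```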
This problem rears its head again in complex variables, where one again defines path integrals in terms of limits of sums, and then proceeds to prove many fundamental theorems, like the Cauchy integral theorem and residue theorem, which may be lost on students who think of integrals merely in terms of antiderivatives. I.e. in complex variables most integrands do not have antiderivatives, or else all closed path integrals would equal zero. E.g. the first interesting integral one meets is that of dz/z taken around the unit circle. One wants the antiderivative to equal the logarithm, but there is no way to define the log function on any set containing the unit circle. The same situation comes up in vector calculus, since most differentials are not exact, and even closed differentials are exact only locally. Indeed dz/z is a closed, locally exact differential that is not exact on any neighborhood of the origin. The problem of course is that, given a (starting point p and a) point q, the antiderivative can only make sense if it has a unique value at q, whereas the path integral is defined in terms of a path from p to q. The integral makes sense for all paths, but the antiderivative only makes sense if the values for all choices of paths are the same. I think now this is why my complex class just stared uncomprehendingly throughout the discussion of path integrals and their properties: the word "integral" may have had no meaning for them without the crutch of antiderivatives.
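To see numerically that this integral is not zero, here is a sketch (Python/NumPy, my choice) computing the path integral of dz/z around the unit circle directly as a limit of sums:

```python
import numpy as np

# Path integral of dz/z around the unit circle, directly as a limit of sums:
# parametrize z = exp(it), t in [0, 2*pi], and sum f(z_k) * (z_{k+1} - z_k).
n = 100_000
t = np.linspace(0.0, 2.0 * np.pi, n + 1)
z = np.exp(1j * t)
integral = np.sum(np.diff(z) / z[:-1])
print(integral)        # ~ 2*pi*i = 6.2831...i, not 0
print(2j * np.pi)
```

The nonzero answer 2*pi*i is exactly the obstruction to a single-valued antiderivative (a global logarithm) around the circle.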
Another matter, which I think is really at the heart of the question here, is that differentiation is an operation that strictly shrinks the class of functions one is working with, i.e. it takes functions and tends to make them more elementary. Thus a very abstract and sophisticated function, like the logarithm, can have a very elementary-looking derivative, like 1/x. This makes it hard to go backwards, since the antiderivative of an easy function tends to be more difficult, or more abstract. Indeed, according to the fundamental theorem quoted above, the only reason we believe that a continuous function should have an antiderivative at all is that one can be constructed, or at least approximated, by its limiting sums. Thus this direction, starting from the integral as a limit of sums and using that to find an antiderivative, is the only direction that will always work. I.e. trying to work backwards, guessing or somehow cooking up an antiderivative and using it to compute the integral, will only work in special, very easy cases.
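As an illustration of the direction that always works, here is a sketch (Python/NumPy, my choice) that constructs an antiderivative of exp(-x^2), a function with no elementary antiderivative, directly from its limiting sums, and checks it against the error function:

```python
import numpy as np
from math import erf, sqrt, pi

# exp(-x^2) has no elementary antiderivative, but we can construct one
# anyway as a limit of sums: F(x) = integral of exp(-t^2) from 0 to x.
f = lambda t: np.exp(-t**2)

x = np.linspace(0.0, 3.0, 100_001)
dx = x[1] - x[0]
F = np.concatenate([[0.0], np.cumsum(f(x[:-1]) * dx)])   # cumulative Riemann sums

# Sanity check against the closed form sqrt(pi)/2 * erf(x):
print(F[-1])                    # ~ 0.8862...
print(sqrt(pi) / 2 * erf(3.0))  # ~ 0.8862...
```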
Oh yes, the presence of the C that is worrying the student comes from the theorem that the derivative of a constant C is zero, so an antiderivative of zero is at best pinned down to being some constant C. Thus for any continuous function f, since f = f + 0, its antiderivative can only be pinned down to within a constant C. I.e. if g is one antiderivative, then g + C is another one, for every constant C.
Notice this only applies to functions that are continuous on an interval, so e.g. if a book claims that the general antiderivative of 1/x is ln|x| + C, this is wrong, since the domain of 1/x is not an interval: it has two separate pieces. I.e. we could take ln|x| + C1 for x < 0, and ln|x| + C2 for x > 0, where C1 and C2 are different constants.
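A quick numerical check of this point (Python/NumPy, my choice; the constants are arbitrary): different constants on the two pieces of the domain still give a valid antiderivative of 1/x everywhere it is defined.

```python
import numpy as np

C1, C2 = 5.0, -3.0   # arbitrary, *different* constants on the two pieces

def g(x):
    """An antiderivative of 1/x: ln|x| + C1 for x < 0, ln|x| + C2 for x > 0."""
    return np.where(x < 0, np.log(np.abs(x)) + C1, np.log(np.abs(x)) + C2)

# Difference quotients on both pieces still recover 1/x:
h = 1e-6
for x0 in (-2.0, 3.0):
    print((g(x0 + h) - g(x0 - h)) / (2 * h), 1.0 / x0)
```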
Note too that in the complex domain this C is what saves you in some cases. I.e. any two of the various choices of a logarithm on the punctured plane differ by a constant, so they all have the same derivative! Thus even though the antiderivative is not well defined, its derivative is! Or, going backwards: even though the integrand is well defined, and hence also its (path) integral, nonetheless the antiderivative may not be.
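A sketch of this (Python/NumPy, my choice): two branches of the logarithm, differing by the constant 2*pi*i, have identical difference quotients.

```python
import numpy as np

log1 = lambda z: np.log(z)                # principal branch
log2 = lambda z: np.log(z) + 2j * np.pi   # another branch: differs by a constant

z0, h = 1.0 + 1.0j, 1e-6
for branch in (log1, log2):
    print((branch(z0 + h) - branch(z0)) / h)   # both ~ 1/z0 = 0.5 - 0.5i
print(1.0 / z0)
```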
So this is roughly what I was thinking of, and I hope it helps someone; apologies if it does not.

Remark: If you are aware of Lebesgue integration, you know that continuity of the integrand can be dispensed with, at the cost of dealing with functions which are differentiable only almost everywhere. E.g. in the case of step functions, we can "antidifferentiate" them using piecewise linear functions, in the sense that a suitable piecewise linear function is continuous everywhere, is differentiable at all but a finite set of points, and at every point where it is differentiable its derivative equals the step function. Such an "almost everywhere" antiderivative can then be used to compute the (definite) integral of a step function. For example, the absolute value function is a good antiderivative of the function that equals -1 for negative x, +1 for positive x, and anything at x = 0. (We don't care about the value at zero, since it cannot affect the value of the integral.) I.e. that step function has a definite integral on any interval, and the absolute value function can be used to calculate it. But the point again is that one really uses the definition of the integral as a limit of sums to find the antiderivative here, not the other way around.

Svein's post too is very interesting, to me especially the point that the integral is a "smoothing operation".
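A numerical check of the absolute-value example (Python/NumPy, my choice): evaluating |x| at the endpoints computes the definite integral of the step function, in agreement with a direct limit-of-sums computation.

```python
import numpy as np

step = lambda x: np.where(x < 0, -1.0, 1.0)   # value at 0 is irrelevant

a, b = -1.0, 2.0

# "Almost everywhere" antiderivative: |x| has derivative step(x) except at 0.
via_antiderivative = abs(b) - abs(a)

# Direct limit-of-sums computation:
n = 1_000_000
x = np.linspace(a, b, n, endpoint=False)
via_sums = np.sum(step(x)) * (b - a) / n

print(via_antiderivative, via_sums)   # both ~ 1.0
```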
As some of you probably know, integrals can even be used to define derivatives of functions that are not differentiable in the usual sense anywhere! I.e. if f is any locally integrable function, then it acts on smooth (infinitely differentiable) compactly supported functions g by integrating the product fg. And integration is so sensitive that knowing these integrals for all such g determines f almost everywhere. So even if f is nowhere differentiable in the usual sense, we can still determine what Df should be by saying what its value on every smooth compactly supported g is. For by the formula for integration by parts, the integral of gDf + fDg should be zero (since g, and hence fg, is supported on a finite interval), hence we can define the integral of gDf to be minus the integral of fDg. This is called the "distribution derivative" of f. We don't get it immediately as a function, but we do know how a function representing it should act on all smooth (compactly supported) functions. This is useful even in the case of functions f that do have a derivative; in fact one can solve some differential equations in two stages, first finding the distribution derivative or distribution solution, and then proving that solution is actually represented by a function.
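A numerical sketch of that pairing (Python/NumPy, my choice; the bump function is a standard test function, not anything specific from this thread): for f(x) = |x|, the defining formula "integral of g*Df := minus the integral of f*Dg" gives the same number as pairing the sign function with g, identifying D|x| with sign(x) as a distribution.

```python
import numpy as np

def bump(x, c=0.5):
    """A smooth, compactly supported test function, centered at c
    and supported on (c - 1, c + 1)."""
    u = x - c
    out = np.zeros_like(x)
    inside = np.abs(u) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - u[inside] ** 2))
    return out

x = np.linspace(-2.0, 2.0, 400_001)   # grid containing the support of the bump
dx = x[1] - x[0]
f = np.abs(x)                         # f(x) = |x|, not differentiable at 0
g = bump(x)
Dg = np.gradient(g, dx)               # numerical derivative of the test function

# Defining formula for the distribution derivative: <Df, g> := -<f, Dg>
lhs = -np.sum(f * Dg) * dx
# Candidate: the sign function represents D|x| as a distribution.
rhs = np.sum(np.sign(x) * g) * dx

print(lhs, rhs)   # approximately equal
```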
Note that the basis for this use of integrals to define derivatives is that precisely the opposite of the original complaint is true: integrals are far easier, at least theoretically, than derivatives. E.g. far larger classes of functions can be integrated than can be differentiated, and, as Svein observed, integrals have better properties. A locally integrable function has an integral by definition, of course, even though it may be very rough or noisy, but it takes the very clever stratagem above to even begin to define its derivative.
I will try now to stop adding to this very pregnant discussion topic. But I suggest to wolly that a perusal, or better, a careful study, of the first part of Apostol, where he does integrals before derivatives, could be very instructive, since you seem to seek understanding as opposed to memorizing.