How Can I Visualize the Exterior Derivative 'd' in Differential Geometry?

  • Thread starter r16
In summary, the exterior derivative of a function is a 1-form related to the function's rate of change. It can be pictured as the density of the function's level surfaces, and it obeys an anti-derivation (graded Leibniz) law.
  • #36
ObsessiveMathsFreak said:
My definition up to this point has been Bachman's. Namely;

[tex]d\omega(V^1, \ldots,V^{n+1}) = \sum_{i=1}^{n+1} (-1)^{i+1} \nabla_{V^i} \omega(V^1, \ldots, V^{i-1},V^{i+1}, \ldots ,V^{n+1})[/tex]

Which wasn't very helpful. I didn't find "d" very helpful either, as it didn't really make clear that the order of the form was being increased, or that this "d" means something completely different from the "d" in "dx" and "dy". With [tex]d\omega = \nabla\wedge\omega[/tex] you can see where the additional wedge product is coming from in things like [tex]\nabla\wedge (f\, dx) \equiv d(f\, dx) = df \wedge dx[/tex]

I think that the definition [tex]d\omega = \nabla\wedge\omega[/tex]
is (almost) perfectly fine. That's the way *I* think about it anyway.
(only one thing, though: I find it misleading to use the nabla symbol there. Normally we use nabla to represent the gradient operator, which is not d. For example, for "f" a scalar function, df is not the gradient [itex] \nabla f [/itex] that we learn about in introductory calculus. I think a clearer expression is to simply use [itex] dx^i \partial_i [/itex] for "d". Then, applied to any differential form, [itex] d \wedge \omega [/itex] works. For a visual interpretation, applying d basically gives the "boundary" of the form. Thinking of a one-form as a series of surfaces, if the surfaces never terminate (because they extend to infinity or they close up on themselves), then applying the exterior derivative gives zero.)
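For instance, writing out one case explicitly (an added example, just to fix ideas): take a function f on the plane and apply [itex] d = dx\, \partial_x + dy\, \partial_y [/itex] to the 1-form [itex] f\, dx [/itex]:
[tex]
d \wedge (f\, dx) = \left( dx\, \partial_x + dy\, \partial_y \right) \wedge (f\, dx)
= \frac{\partial f}{\partial x}\, dx \wedge dx + \frac{\partial f}{\partial y}\, dy \wedge dx
= -\frac{\partial f}{\partial y}\, dx \wedge dy ,
[/tex]
since [itex] dx \wedge dx = 0 [/itex]. This is exactly [itex] df \wedge dx [/itex], so the extra wedge appears automatically.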
 
  • #37
What I was thinking, and have been thinking for about a week now, is that forms should really be distinguished in some way, beyond their current, IMHO, rather loose fashion, to make it clear what they are.

You could say, put an accent onto a form, like [tex]\acute{\omega}[/tex] instead of just plain [tex]\omega[/tex]. Then the exterior derivative would be [tex]\acute{\nabla}\wedge\acute{\omega}[/tex].

I started doing this a while ago, as the regular notation was driving me ballistic, especially when it came to the final integration computations, when "dx" and "dx" would get mixed up.
 
  • #38
ObsessiveMathsFreak said:
Well I suppose it's a matter of personal preference. I prefer to define things independently and then show how unexpected relationships emerge from simple definitions. That way, you don't really feel like you're hemming yourself in.

Edit:
On an aside, differential forms notation is terrible. Everything is just so lax!
I agree with you about the notation!
What is your background, by the way?
I was trained as a physicist in phenomenology (not a mathematical physicist), so all this stuff is pretty new to me. It's difficult not necessarily because it's new but because I have to "unlearn" a lot of things I had learned before (for example, some things I used to think of as vectors are actually differential forms, etc etc).

The main difficulties I have encountered are twofold.
First, the lack of consistency in what people call what (coming from mathematicians, that has surprised me). One example is the meaning of "dx". I keep hearing that infinitesimals don't exist and that whenever I see this symbol it is a differential form. And yet, whenever books define integration over differential forms, they always get to the point where they define an integral over differential forms as an integral in the "usual" sense of elementary calculus. These expressions *do* contain the symbols dx, dy, etc. So what do they mean *there*, if not "infinitesimals"?

Another example: I have often seen df called the gradient. It confused me immensely, until I read a post here on the forums that clarified it: df is NOT the gradient we learn about in elementary calculus. This was further clarified for me by reading Frankel, where he emphasizes the point on page 41.

My second difficulty is finding explicit examples taken from physics, with everything shown clearly. And I mean something as simple as ordinary mechanics of a point particle (no need to jump to relativistic systems or curved manifolds right away!). If I am supposed to think of the momentum of a particle as a covector, I would like to see the reasoning behind this, to see why the usual idea of a vector does not work, and what the metric is in that context, etc etc etc.

Anyway, just my two cents
 
  • #39
ObsessiveMathsFreak said:
What I was thinking, and have been thinking for about a week now, is that forms should really be distinguished in some way, beyond their current, IMHO, rather loose fashion, to make it clear what they are.

You could say, put an accent onto a form, like [tex]\acute{\omega}[/tex] instead of just plain [tex]\omega[/tex]. Then the exterior derivative would be [tex]\acute{\nabla}\wedge\acute{\omega}[/tex].

I started doing this a while ago, as the regular notation was driving me ballistic, especially when it came to the final integration computations, when "dx" and "dx" would get mixed up.

I agree with you. Usually it is not too bad, because books usually use lower-case Greek letters for forms and lower-case Latin letters for vectors. But the case of dx vs dx and so on does bother me quite a bit. I have objected to that before, but the reaction I have had has usually been "but there is no such thing as infinitesimals! That's all archaic. The modern view is that dx, etc. are one-forms!" Which has confused me enormously, since integrations over forms are always, in the end, identified with integrals in the "usual" sense, which *do* contain products of dx, dy, etc. And nobody seems to want to talk about *those*, which are clearly not differential forms.


And when a physicist is confused about all those issues, the assumption from the more mathematically savvy people seems to often be that it's because the physicist is being narrow-minded and is clinging to old ideas, instead of realizing that the notation and vagueness of some concepts and the lack of explicit examples make things quite difficult to learn.
 
  • #40
On notation, I agree that forms need a mark that should also denote their order. I usually write underrightarrows, like this for a 2-form:
[tex]
\underrightarrow{\underrightarrow{F}} = \frac{1}{2}F_{ij} \underrightarrow{dx^i} \underrightarrow{dx^j}
[/tex]
This works great, and goes with a similar notation for vectors,
[tex]
\vec{v} = v^i \vec{\partial_i}
[/tex]
Also, I don't write the wedge, but assume that, algebraically, 1-forms always anti-commute. This obviates the problem with the exterior derivative, which is simply
[tex]
\underrightarrow{d} = \underrightarrow{dx^i} \frac{\partial}{\partial x^i}
[/tex]
and works on forms as
[tex]
\underrightarrow{d} \underrightarrow{f}
[/tex]

There's a lot more on this notation here on my wiki:
http://deferentialgeometry.org/
as well as on another PF thread.
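For anyone who wants to see how mechanical this is, here's a minimal computational sketch (my own, not from the wiki; the dictionary representation and names are just made up for illustration): store a k-form as a map from sorted coordinate-index tuples to coefficients, and let the anticommutation supply the signs.
[code]
# A minimal sketch: k-forms on R^n stored as dicts mapping sorted
# coordinate-index tuples to sympy coefficients, with d = dx^i d/dx^i
# and anticommuting dx's handled by a permutation sign.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
coords = [x1, x2]

def exterior_derivative(form):
    """d of a form given as {(i1, ..., ik): coefficient}, indices sorted, 0-based."""
    result = {}
    for idxs, coeff in form.items():
        for i, xi in enumerate(coords):
            if i in idxs:
                continue                      # dx^i wedge dx^i = 0
            partial = sp.diff(coeff, xi)
            if partial == 0:
                continue
            # move dx^i into sorted position, picking up a sign for each swap
            pos = sum(1 for j in idxs if j < i)
            sign = (-1) ** pos
            new_idxs = tuple(sorted(idxs + (i,)))
            result[new_idxs] = result.get(new_idxs, 0) + sign * partial
    return {k: v for k, v in result.items() if sp.simplify(v) != 0}

# omega = x1*x2 dx^1  (index 0 stands for dx^1, index 1 for dx^2)
omega = {(0,): x1 * x2}
print(exterior_derivative(omega))   # {(0, 1): -x1}, i.e. d(omega) = -x1 dx^1 dx^2
[/code]
The example at the end is d(x^1 x^2 dx^1) = -x^1 dx^1 dx^2, the same sign bookkeeping you would do by hand.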
 
  • #41
nrqed said:
I have objected to that before, but the reaction I have had has usually been "but there is no such thing as infinitesimals! That's all archaic. The modern view is that dx, etc. are one-forms!" Which has confused me enormously, since integrations over forms are always, in the end, identified with integrals in the "usual" sense, which *do* contain products of dx, dy, etc. And nobody seems to want to talk about *those*, which are clearly not differential forms.

I don't know about infinitesimals, but I do tend to insist on my measures being present, because without them, the integral isn't well defined. For example;

[tex]\int_{\sigma} \acute{w}[/tex]

...is not a well defined quantity, because you haven't specified any orientation!

[tex]\int_{\sigma} \acute{w} d \sigma [/tex]

is well defined, because [tex]d\sigma[/tex], though abstract, still means that you've given the integral a measure. As you say, it's all moot anyway, as to get a final answer you must include a measure, or "infinitesimal" of some kind, if only to be able to perform the integration at all! By itself, the form does not specify a measure.

I'm an applied mathematician by the way.

Edit:
Actually, I think the above should be more correctly written as perhaps:

[tex]\int_{ \sigma} \acute{w}(T_{\sigma}) d \sigma [/tex]

Where [tex]T_{\sigma}[/tex] denotes the tangent vectors with respect to the measure [tex]\sigma[/tex], to which of course the form must be applied in order for the form to mean anything.

Actually, on top of that I really think the point at which the form is evaluated should be included too. So
[tex]\acute{\omega} \equiv \acute{\omega}(P,V^1,\ldots,V^n)[/tex]
But I digress.

And perhaps this thread needs a fork.
 
  • #42
garrett said:
On notation, I agree that forms need a mark that should also denote their order. I usually write underrightarrows, like this for a 2-form:
[tex]
\underrightarrow{\underrightarrow{F}} = \frac{1}{2}F_{ij} \underrightarrow{dx^i} \underrightarrow{dx^j}
[/tex]
This works great, and goes with a similar notation for vectors,
[tex]
\vec{v} = v^i \vec{\partial_i}
[/tex]
Also, I don't write the wedge, but assume that, algebraically, 1-forms always anti-commute. This obviates the problem with the exterior derivative, which is simply
[tex]
\underrightarrow{d} = \underrightarrow{dx^i} \frac{\partial}{\partial x^i}
[/tex]
and works on forms as
[tex]
\underrightarrow{d} \underrightarrow{f}
[/tex]

There's a lot more on this notation here on my wiki:
http://deferentialgeometry.org/
as well as on another PF thread.
EDIT: A typo with under and over arrows was corrected.


I have to say that I like this notation very much:smile:
(I would personally still like to see the wedge products shown explicitly but I realize it's only because I am not completely fluent with all this stuff and that they are not necessary).

Garrett, I am still a bit confused by the fact that [tex]
\underrightarrow{\omega} {\vec v}= - {\vec v} \underrightarrow{\omega}
[/tex]
if I understood you correctly from the other thread. Could you tell me where Frankel discusses this (or Baez, or Felsager or Nakahara)? I need to assimilate this.

Thanks!
 
  • #43
ObsessiveMathsFreak said:
I don't know about infinitesimals, but I do tend to insist on my measures being present, because without them, the integral isn't well defined. For example;

[tex]\int_{\partial \sigma} \acute{w}[/tex]

...is not a well defined quantity, because you haven't specified any orientation!

[tex]\int_{\partial \sigma} \acute{w} d \sigma [/tex]

is well defined, because [tex]d\sigma[/tex], though abstract, still means that you've given the integral a measure. As you say, it's all moot anyway, as to get a final answer you must include a measure, or "infinitesimal" of some kind, if only to be able to perform the integration at all! By itself, the form does not specify a measure.

I'm an applied mathematician by the way.

Edit:
Actually, I think the above should be more correctly written as perhaps:

[tex]\int_{\partial \sigma} \acute{w}(T_{\sigma}) d \sigma [/tex]

Where [tex]T_{\sigma}[/tex] denotes the tangent vectors with respect to the measure [tex]\sigma[/tex], to which of course the form must be applied in order to mean anything.

Actually, on top of that I really think the point at which the form is evaluated should be included too. So
[tex]\acute{\omega} \equiv \acute{\omega}(P,V^1,\ldots,V^n)[/tex]
But I digress.

And perhaps this thread needs a fork.
I think that our views are convergent. The question is then what you mean by dsigma. It's clearly not a differential form here (right?). Which then shows how confusing the notation can be, as you pointed out (because I have had the feeling on these boards that whatever was written as d"something" *had* to be a differential form. That did not make sense to me but I have been chastised for this :wink: ).

So what do you mean by dsigma? I mean, there are vectors, there are differential forms, and we can "feed" vectors to one-forms or vice versa to get numbers. And if there is the additional structure of a metric, more can be done. So where does dsigma stand in this? Or do you see it as something completely different?

The way *I* think about this (but I have had a hard time getting people to either agree or to tell me it's wrong and why it's wrong) is that there is a differential form we are integrating over. Then, in order to actually get an integral in the conventional sense, one must "feed" a vector to that one-form. The vector we feed is actually of the form [itex] dx^i \partial_i [/itex], i.e. it's a vector with components being *infinitesimals* in the usual sense.

But I think this is too simple-minded, although I don't know what's wrong with it. And I don't know why books have to *define* integrals over forms as integrals in the usual sense instead of simply feeding "infinitesimal" vectors.
 
  • #44
nrqed said:
So what do you mean by dsigma?

Basically what I mean is that [tex]d\sigma[/tex] is the variable, or variables, of integration. i.e. [tex]d\sigma \equiv dx_1dx_2 \ldots dx_n[/tex], in the sense we are normally used to it. So one example of [tex]d\sigma[/tex] would be [tex]dV[/tex] for volume.

It should be mentioned that on its own, [tex]d\sigma[/tex] is rather meaningless. Just as [tex]\int_{\sigma}[/tex] is meaningless. The two must be combined to mean anything. [tex]\int_{\sigma} \ldots d\sigma[/tex]. When you are integrating you must give variables of integration and boundaries (limits) if you want to get an answer.

Some authors write integrals like this: [tex]\int_{\sigma} d\sigma f(\sigma)[/tex], placing the variable of integration and the limits right next to each other to emphasise their closeness. So they would write [tex]\int_0^1 f(x) dx \equiv \int_0^1 dx f(x)[/tex].

I've even seen some leave out the "d" altogether and place the variable of integration in the limits, like this:
[tex]\int_{x=0}^{x=1} f(x)[/tex]

nrqed said:
The way *I* think about this (but I have had a hard time getting people to either agree or to tell me it's wrong and why it's wrong) is that there is a differential form we are integrating over. Then, in order to actually get an integral in the conventional sense, one must "feed" a vector to that one-form. The vector we feed is actually of the form [itex] dx^i \partial_i [/itex], i.e. it's a vector with components being *infinitesimals* in the usual sense.

Hmmm... not too sure what you're getting at, but my current understanding is that the forms are being "fed" normal vectors, not infinitesimal ones. When integrating, the vectors they are fed are derivatives, but they are nonetheless regular vectors. If you're asking where the variable of integration, i.e. [tex]dx_i[/tex], comes from, the answer is, and this is what infuriates me, that you have to throw it in yourself. There's no formality, and it's basically up in the air until you decide to chuck it in.

Lax! Lax I tell you!
 
  • #45
ObsessiveMathsFreak said:
On an aside, differential forms notation is terrible. Everything is just so lax!
I agree!


nrqed said:
Normally, we use nabla to represent the gradient operator which is not d.
The funny thing is, there are two different usages of the nabla operator. In Spivak, volume I, he defines:

[tex]\nabla = \sum_{i=1}^n D_i \frac{\partial}{\partial x^i}[/tex]

and that [itex]\mathop{\mathrm{grad}} f = \nabla f[/itex]

On the other hand, in volume II, we have the (Koszul) connection for which [itex]\nabla T[/itex] is, by definition, the map [itex]X \rightarrow \nabla_X T[/itex]. In particular, for a scalar field, we have [itex]\nabla_X f = X(f)[/itex] so that [itex]\nabla f = df[/itex].


The funny thing is -- when I was taking multivariable calculus, I got into the habit of writing my vectors as column vectors, and my gradients as row vectors... so in effect, what I learned as the gradient was a 1-form!


nrqed said:
For a visual interpretation, applying d basically gives the "boundary" of the form. Thinking of a one-form as a series of surfaces, if the surfaces never terminate (because they extend to infinity or they close up on themselves), then applying the exterior derivative gives zero.)
There is supposed to be a duality between the exterior derivative and the boundary operator. (In fact, the exterior derivative is also called a "coboundary operator") But I think you're taking it a little too literally! I like to try and push the picture that forms "measure" things, and the (n+1)-form dw measures an (n+1)-dimensional region by applying w to the boundary of the region.


ObsessiveMathsFreak said:
What I was thinking, and have been thinking for about a week now, is that forms should really be distinguished in some way, beyond their current, IMHO, rather loose fashion, to make it clear what they are.
Using the Greek alphabet, instead of the Roman one, isn't enough? :smile:


ObsessiveMathsFreak said:
especially when it came to the final integration computations, when "dx" and "dx" would get mixed up.
How can they get mixed up?


nrqed said:
And yet, whenever books define integration over differential forms, they always get to the point where they define an integral over differential forms as an integral in the "usual" sense of elementary calculus. These expressions *do* contain the symbols dx, dy, etc. So what do they mean *there*, if not "infinitesimals"?
The usual sense of elementary calculus doesn't have infinitesimals either. Depending on the context, it might be a formal symbol indicating with respect to which variable integration is to be performed, or it might be denoting which measure is to be used... but certainly not an infinitesimal.

Even in nonstandard analysis, which does have infinitesimals, dx is still not used to denote an infinitesimal. (Though you would use honest-to-goodness nonzero infinitesimals to actually compute the integral.)


ObsessiveMathsFreak said:
I don't know about infinitesimals, but I do tend to insist on my measures being present, because without them, the integral isn't well defined. For example;

[tex]\int_{\sigma} \acute{w}[/tex]

...is not a well defined quantity, because you haven't specified any orientation!

...

By itself, the form does not specify a measure.
Yes you have! Remember that you don't integrate over n-dimensional submanifolds -- you integrate over n-dimensional surfaces (or formal sums of surfaces). Surfaces come equipped with parametrizations, and thus have a canonical orientation and choice of n-dimensional volume measure.

If c is our surface, then by definition:

[tex]
\int_c \omega = \int_{[0, 1]^n} \omega(
\frac{\partial c}{\partial x^1}, \cdots, \frac{\partial c}{\partial x^n})
\, dV
[/tex]

where dV is the usual volume form on R^n. This is, of course, also equal to

[tex]\int_{[0, 1]^n} c^*(\omega)[/tex]

on the parameter space, and there we could just take the obvious correspondence between n-forms and measures.


The properties of forms allow you to get away without fully specifying which parametrization to use... but you still have to specify the orientation when you write down the thing over which you're integrating.
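Just to see that definition in action, a quick numerical sketch (mine; the particular form and curve are arbitrary test data): take ω = -y dx + x dy and the curve c(t) = (cos 2πt, sin 2πt) on [0, 1], feed ω the tangent vector ∂c/∂t, and integrate over the parameter interval.
[code]
import numpy as np

def omega(p, v):
    # the 1-form  omega = -y dx + x dy  evaluated on a tangent vector v at the point p
    x, y = p
    return -y * v[0] + x * v[1]

def c(t):
    return np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])

def dc(t):      # tangent vector dc/dt
    return np.array([-2 * np.pi * np.sin(2 * np.pi * t), 2 * np.pi * np.cos(2 * np.pi * t)])

N = 10000
ts = (np.arange(N) + 0.5) / N            # midpoints of [0, 1]
vals = [omega(c(t), dc(t)) for t in ts]  # feed the tangent vector to the form
print(sum(vals) / N)                     # ~ 6.2832 = 2*pi, twice the enclosed area
[/code]
The parametrization fixes the orientation: run t the other way and the sign flips.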
 
  • #46
Hurkyl said:
The usual sense of elementary calculus doesn't have infinitesimals either. Depending on the context, it might be a formal symbol indicating with respect to which variable integration is to be performed, or it might be denoting which measure is to be used... but certainly not an infinitesimal.

Even in nonstandard analysis, which does have infinitesimals, dx is still not used to denote an infinitesimal. (Though you would use honest-to-goodness nonzero infinitesimals to actually compute the integral.)

My apologies. I realize that I am missing something here (and the more I ask questions, the grumpier I make people!), so if this is too dumb a question, ignore it (instead of getting grumpier :-) ).
I have to admit that I don't know what a "measure" is.
What *I* mean by "infinitesimals" is through the usual Riemann sum definition
[tex] \int f(x)\, dx = \lim_{\Delta x \rightarrow 0} \sum f(x)\, \Delta x [/tex]
(you know what I mean).

This is what I have in mind when I call the dx on the left side an infinitesimal. And of course, this "dx" is in the general sense; it may have nothing to do with coordinates. For example, I might be calculating the electric potential due to some charge distribution, in which case dx = dq.

I know that thinking of these as "infinitesimals" is considered very bad and uneducated. But if I have a continuous charge distribution and I am calculating the electric potential, say, I find it useful to think of an infinitesimal charge, because then I can use the equation for the electric potential of a point charge and then sum over all those infinitesimal point charges. If this is totally wrong, then I would be really interested in learning how I should go about setting up the same problem without ever thinking of infinitesimal charges, using the language of "measures" instead.
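Just to be concrete about the kind of computation I have in mind, here is a rough numerical sketch (the rod, its length, the distance and the constants are all made-up numbers): a uniformly charged rod, with the potential at a point on its axis obtained by summing the point-charge potentials of small pieces dq = λ Δs and comparing with the closed-form answer.
[code]
import numpy as np

k = 8.99e9        # Coulomb constant, N m^2 / C^2
lam = 1.0e-9      # linear charge density, C/m
L, d = 0.5, 0.2   # rod length, and distance from its near end, in m

N = 100000
s = (np.arange(N) + 0.5) * (L / N)          # midpoints along the rod
dq = lam * (L / N)                          # charge of each small piece
V_sum = np.sum(k * dq / (d + s))            # sum of point-charge potentials
V_exact = k * lam * np.log((d + L) / d)     # the closed-form answer
print(V_sum, V_exact)                       # both ~ 11.3 volts
[/code]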

I am not being flippant at all; I admit my ignorance and lack of sophistication. I would really like to understand what a "measure" is and to see what the correct way is to think about a specific physical problem like the one above (or finding the E field of a continuous charge distribution, etc.).

Regards

Patrick
 
  • #47
nrqed said:
Garrett, I am still a bit confused by the fact that [tex]
\underrightarrow{\omega} {\vec v}= - {\vec v} \underrightarrow{\omega}
[/tex]
if I understood you correctly from the other thread. Could you tell me where Frankel discusses this (or Baez, or Felsager or Nakahara)? I need to assimilate this.

They don't discuss it. And, really, I've never had a good reason to write a vector operating on a form from the right. But, if you do want to, that's the sign change you'd have to give it.

Frankel and others write the same inner product between a vector and form as
[tex]
\bf{i}_v \omega
[/tex]
It's really just a matter of notation.
 
  • #48
This is hard to believe until you play with it, but in differential geometry integration really is nothing but the evaluation of Stokes theorem:
[tex]
\int_{V} \underrightarrow{d} \underbar{\omega} =
\int_{\partial V} \underbar{\omega}
[/tex]
Think about how that works in one dimension and you'll see it's the same as the usual notion of integration. :) First you find the anti-derivative, then evaluate it at the boundary.
 
  • #49
It was a light-hearted grumpy face, not a grumpy grumpy. :smile:


When we're doing a Riemann integral, the "right" imagery is that:

"I've divided my region into sufficiently small cubes, computed a value for each cube, and added them up to get something close enough to the true answer".

Even if we're doing nonstandard analysis, this imagery is still the more correct one -- it's just that we have infinitesimal numbers to use (which are automatically "sufficiently small"), and are capable of adding transfinitely many of them, getting something infinitesimally close to the true answer.


The way infinitesimals are usually imagined is just a sloppy way of imagining the above -- we want to invoke something so small that it will automatically be "sufficiently close", and then promptly forget about the approximations and imagine we're computing an exact value on each cube, can add all the exact values, and the result is exactly the answer.


I've seen someone suggest a different algebraic approach to an integral that might be more appropriate for physicists, that's based on the mean value theorem. I think it works out to the following:

For any "integrable" function f, we require that for any a < b < c:

[tex]I_a^b(f) + I_b^c(f) = I_a^c(f)[/tex]

and

[tex]\min_{x \in [a, b]} f(x) \leq \frac{1}{b-a} I_a^b(f) \leq \max_{x \in [a, b]} f(x)[/tex]

These axioms are equivalent to Riemann integration:

[tex]I_a^b(f) = \int_a^b f(x) \, dx[/tex]

And you could imagine the whole Riemann limit business as simply being a calculational tool that uses the above axioms to actually "compute" a value for the integral. (At least, if you count taking a limit as a "computation".)

(Hey! This goes back to the "define things in terms of the properties it should have, then figure out how to calculate" vs. the "define things via a calculation, then figure out what properties it has" debate. :smile:)



So, for your electric potential problem, I guess this suggests that you should imagine this:

You make the guess that the potential should be, say, the integral of f(x) over your region. You then observe that:

(1) The contribution to potential from two disjoint regions is simply added together.
(2) The average contribution to the potential from any particular region lies between the two extremes of f(x).

Therefore, that integral computes the potential. (2) is intuitively obvious if you have the right f(x), but I don't know how easy it would be to check rigorously. This check can probably be made easier.


To be honest, I haven't really tried thinking much this way. (Can you tell? :wink:) I'm content with the "sufficiently close" picture.
 
  • #50
the definition of dw is the adjoint of the boundary operator pointwise. but the stokes theorem is the global adjointness.

you have to do some thinking about it yourself.
 
  • #51
Hurkyl said:
It was a light-hearted grumpy face, not a grumpy grumpy. :smile:
ok! I am really glad to hear that!

When we're doing a Riemann integral, the "right" imagery is that:

"I've divided my region into sufficiently small cubes, computed a value for each cube, and added them up to get something close enough to the true answer".

Even if we're doing nonstandard analysis, this imagery is still the more correct one -- it's just that we have infinitesimal numbers to use (which are automatically "sufficiently small"), and are capable of adding transfinitely many of them, getting something infinitesimally close to the true answer.


The way infinitesimals are usually imagined is just a sloppy way of imagining the above -- we want to invoke something so small that it will automatically be "sufficiently close", and then promptly forget about the approximations and imagine we're computing an exact value on each cube, can add all the exact values, and the result is exactly the answer.


I've seen someone suggest a different algebraic approach to an integral that might be more appropriate for physicists, that's based on the mean value theorem. I think it works out to the following:

For any "integrable" function f, we require that for any a < b < c:

[tex]I_a^b(f) + I_b^c(f) = I_a^c(f)[/tex]

and

[tex]\min_{x \in [a, b]} f(x) \leq \frac{1}{b-a} I_a^b(f) \leq \max_{x \in [a, b]} f(x)[/tex]

These axioms are equivalent to Riemann integration:

[tex]I_a^b(f) = \int_a^b f(x) \, dx[/tex]

And you could imagine the whole Riemann limit business as simply being a calculational tool that uses the above axioms to actually "compute" a value for the integral. (At least, if you count taking a limit as a "computation".)

(Hey! This goes back to the "define things in terms of the properties it should have, then figure out how to calculate" vs. the "define things via a calculation, then figure out what properties it has" debate. :smile:)



So, for your electric potential problem, I guess this suggests that you should imagine this:

You make the guess that the potential should be, say, the integral of f(x) over your region. You then observe that:

(1) The contribution to potential from two disjoint regions is simply added together.
(2) The average contribution to the potential from any particular region lies between the two extremes of f(x).

Therefore, that integral computes the potential. (2) is intuitively obvious if you have the right f(x), but I don't know how easy it would be to check rigorously. This check can probably be made easier.


To be honest, I haven't really tried thinking much this way. (Can you tell? :wink:) I'm content with the "sufficiently close" picture.

Ok... This language I can relate to. It makes sense to me. (I guess that I use the word "infinitesimal" because I imagine using some average value in a region and adding the results from all the regions to get an approximate answer. But then I imagine going back, subdividing into smaller regions, using an average value in those regions, doing the sum, and keeping going like this to see if the sum converges to a certain value. In that limit I imagine the regions becoming "infinitesimally small".) Is it wrong to call them infinitesimals because one never really takes the exact limit as the regions vanish?

In any case, in the language used above, what is a "measure"?

Regards

Patrick
 
  • #52
A measure is something that tells you how big (measurable) subsets of your space are. For a plain vanilla measure, you have:

The size of any (measurable) subset is nonnegative.
The size of the whole is the sum of the sizes of its parts. (For up to countably many parts)

To integrate something with respect to a measure, instead of partitioning the domain, we instead partition the range! The picture is:

We divide R into sufficiently small intervals. For each interval, we compute the size of the set {x | f(x) is in our interval}, and multiply by a number in our interval. Add them all up, and we get something sufficiently close to the true value.
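A throwaway numerical sketch of that picture (mine; the function and the grid sizes are arbitrary): compute [tex]\int_0^1 x^2 \, dx[/tex] by partitioning the range of f and measuring the preimage of each little interval.
[code]
import numpy as np

f = lambda x: x**2
xs = np.linspace(0.0, 1.0, 200001)        # fine grid, used only to measure sets
dx = xs[1] - xs[0]
levels = np.linspace(0.0, 1.0001, 201)    # partition of the RANGE of f
total = 0.0
for lo, hi in zip(levels[:-1], levels[1:]):
    mask = (f(xs) >= lo) & (f(xs) < hi)
    size = mask.sum() * dx                # measure of {x : f(x) in [lo, hi)}
    total += lo * size                    # times a number from the interval
print(total)                              # ~ 0.33, close to the true value 1/3
[/code]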
 
  • #53
Hurkyl said:
Using the Greek alphabet, instead of the Roman one, isn't enough? :smile:
In my case, I've been using the greek alphabet in mathematics for so long that there is really no distinction. In fact, a lot of greek letters get used more than latin ones. I'm probably not alone here! I get the feeling this is some kind of carry over from the days when, perhaps, greek letters were harder to typeset.

Hurkyl said:
How can they get mixed up?
One is a form, one is a variable of integration. It's a pretty big difference.

Hurkyl said:
Yes you have! Remember that you don't integrate over n-dimensional submanifolds -- you integrate over n-dimensional surfaces (or formal sums of surfaces). Surfaces come equipped with parametrizations, and thus have a canonical orientation and choice of n-dimensional volume measure.

Surfaces don't always come with parametrisations, and the notation [tex]\int_{\sigma} \omega[/tex] implies that [tex]\sigma[/tex] is a surface with a parametrisation as yet unspecified. It could be [tex]\sigma \equiv \{ (x,y,z) : x^2 + y^2 + z^2 = r^2 \}[/tex], which is a well defined surface without a parametrisation.

Hurkyl said:
The properties of forms allow you to get away without fully specifying which parametrization to use... but you still have to specify the orientation when you write down the thing over which you're integrating.

That's my point entirely. [tex]\int_{\sigma} \omega[/tex] is simply a lax way of specifying something. There's no parametrisation, but in order to actually get down to it and evaluate the integral, you must specify a parametrisation. One can talk about orientation as well, but that's effectively a change in the parametrisation, or pull-back if you will.

This laxity really comes into focus when you come to the presentation of Stokes's Theorem, namely:
[tex]\int_{\sigma} d\omega = \int_{\partial \sigma} \omega[/tex]
This notation is a potential minefield. Example:
[tex]\sigma \equiv \{ (x,y) : x^2 + y^2 \leq 1 \}[/tex]
[tex]\partial\sigma \equiv \{ (x,y) : x^2 + y^2 = 1 \}[/tex]

But of course, two people can evaluate each integral and come up with answers that differ in sign. One might say that the parametrisation of one surface determines that of the other, but hold on! Taken on its own, each integral leaves one free to specify a parametrisation. If I give each side of the equation to two people, and they choose orientations at random, there is only a one in two chance that their answers will agree, and only a one in four chance that I will obtain answers congruent with my own.

In short, the essential problem here is that, using standard notation, a computer will be unable to evaluate the integral of a form. If you wish it to do so, then you must give a surface complete with a parametrisation. That is, you must ask it to evaluate
[tex]\int_{\sigma} \omega d\sigma[/tex]
Or, more correctly,
[tex]\int_{\phi(X)} \omega(D_X \phi(X)) dX = \int_{X} \phi^*\omega dX[/tex]

Where [tex]\phi[/tex] is the parametrisation map from [tex]X[/tex] to the surface. Even this is not strictly correct, as the vectors that the pullback [tex]\phi^*\omega[/tex] acts on in the [tex]X[/tex] domain are not specified. You can generally assume that they are the canonical directions, but again it is really too ambiguous, as the pullback need not have pulled back to such a straightforward domain at all. It should really be written as

[tex]\int_{X} \phi^*\omega(\mathbf{e}_1^X, \ldots, \mathbf{e}_n^X ) dX[/tex]
To make clear what you are evaluating.

Honestly, the standard notation of differential forms is like some of the rough-work scribbles you would find in the back of someone's notes! Understandable only by the author, and only at the time, and only in the correct context. It's no wonder people don't use them. They're simply not mature enough for practical application.
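To make the orientation pitfall concrete, here's a quick numerical sketch (my own; ω = x dy on the unit disc is just a convenient test case). Since dω = dx∧dy, the left hand side of Stokes is the area of the disc, π, and the boundary integral only matches for one of the two possible orientations of the circle:
[code]
import numpy as np

N = 100000
t = (np.arange(N) + 0.5) * (2 * np.pi / N)
dt = 2 * np.pi / N

# boundary integral of  omega = x dy  around the unit circle, both orientations;
# d(omega) = dx^dy, so the integral over the disc itself is just its area, pi
I_ccw = np.sum(np.cos(t) * np.cos(t)) * dt        # x = cos t, dy/dt = cos t
I_cw  = np.sum(np.cos(-t) * (-np.cos(-t))) * dt   # the same circle run backwards
print(I_ccw, I_cw, np.pi)   # ~ +pi, ~ -pi: only one orientation satisfies Stokes
[/code]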
 
  • #54
the complicated notation is only used to teach all the details. in practice differential forms are more succinct than what they replace. look at maxwell's equations, e.g., or stokes' thm in form notation as opposed to the old way


as to the exact meaning of the notation in stokes,
it is in the hypothesis of stokes' thm, which mathematicians should always state, that the theorem takes place on an oriented manifold, so the orientation is taken as given. that means the parametrization must use a compatible orientation.

then the theorem as stated says that the two sides of the equation are equal under ANY choice of parametrization compatible with the given orientation, and where the orientation on the boundary is assumed compatible with that of the manifold.

what this means is also specified in the hypotheses, namely that when an oriented basis for the boundary space is given, then supplementing it by an outward (or inward) vector (it must be specified which, and I forget if it matters) gives an oriented basis for the manifold space.

these details are completely given in careful standard treatments such as spivak, calculus on manifolds.

if you are reading only, say, bachman, and he omits a few details, i think it is because his goal was to introduce the main ideas to beginners, undergraduates, as gently as possible, without overburdening them with the level of precision desired by experts.

the students greatly enjoyed the exercise and got a lot out of reading it.

but if you are a professional, you need to read a professional treatment.
 
  • #55
i am also a picky expert, and if you followed the earlier thread on this book you know bachman's imprecision and errors drove me right up the wall.

but his book was a terrific success for its intended audience, namely uncritical undergrads.
 
  • #56
mathwonk said:
if you are reading only, say, bachman, and he omits a few details, i think it is because his goal was to introduce the main ideas to beginners, undergraduates, as gently as possible, without overburdening them with the level of precision desired by experts.

I have at least one other book, Differential Forms and Connections by R.W.R. Darling. This one is, to say the least, unhelpful. To be fair to Bachman, his is the only book I've seen so far which gives a geometric explanation of forms, and the only one so far that has actually explained to me what a form is. The others have various definitions that seem to go nowhere.

I was thinking about getting Spivak's book, but I don't know whether I need just Calculus on Manifolds, or the full blown set of A Comprehensive Introduction to Differential Geometry.

Edit:
The notation I was griping about above isn't at all exclusive to Bachman. It's the standard fare as far as I can tell.
 
  • #57
ObsessiveMathsFreak said:
One is a form, one is a variable of integration. It's a pretty big difference.
But the question is if the difference makes... er... a difference. :wink:


Surfaces don't always come with parameterisations
I'm using surface here as the higher dimensional analog of a curve.

But let's ignore the semantics -- as far as I can tell in Spivak, integrals of forms are only defined where the region of integration is built out of maps from the n-cube into your manifold.

You can generally assume that they are the canonical directions
And in Spivak this is not an assumption -- it is part of the definition of the integral of a form.


Since the study of manifolds is just the globalization of the study of R^n, I see no problem with leaving implicit that we are using the standard structures on R^n.

It's just like how we talk about the ring R, rather than the ring (R, +, *, 0, 1)... and how we talk about the ring (R, +, *, 0, 1) without explicitly specifying what we mean by R, +, *, 0, 1, and by the parentheses notation. :smile:
 
  • #58
Hurkyl said:
But let's ignore the semantics -- as far as I can tell in Spivak, integrals of forms are only defined where the region of integration is built out of maps from the n-cube into your manifold.
...
And in Spivak this is not an assumption -- it is part of the definition of the integral of a form.
...
Since the study of manifolds is just the globalization of the study of R^n, I see no problem with leaving implicit that we are using the standard structures on R^n.

You're absolutely right, and so is Spivak. There is no point in talking about overly general vectors, and manifolds and variables. Ultimately, we have to compute things using the standard basis in R^n, so everything is perfectly well defined using that space.

The terrible truth is, my first introduction to forms, and the main reason I'm studying them, was from Fourier Integral Operators by Duistermaat. I still haven't fully recovered, as you can tell.

mathwonk said:
it is in the hypothesis of stokes' thm, which mathematicians should always state, that the theorem takes place on an oriented manifold, so the orientation is taken as given. that means the parametrization must use a compatible orientation.

By the way, thanks for that. Now I get it. The manifold has to have an orientation. But I still think, in my own mind, that including the [tex]d\sigma[/tex] makes this more explicit.
 
  • #59
well you might want to write up your own account of the stuff. i did that in 1972 or so when i taught advanced calc the first time. i wrote it all out by hand at least 2-3 times, and it began to make sense to me. i had so many copies in fact i could practically give each class member his own original set of notes.

i then applied stokes to prove the brouwer fixed point theorem and the vector fields on spheres theorem of hopf. i learned a lot that way.
 
  • #60
then we had a seminar out of spivak's vol 1 of diff geom, the one giving background on manifolds.

i think calc on manifolds is a good place to start. and it's cheaper. the whole kaboodle is a bit long for me. but volume 2 is a classic. and vol 1 is nice too, especially for the de rham theory. i don't know what's in the rest as I do not own them, but gauss bonnet is appealing sounding.

but i always like to begin on the easiest most elementary version of a thing.

guillemin-pollack is nice but kind of a cheat, as they define things in special ways to make the proofs easier, so as i recall their gauss bonnet theorem is kind of a tautology. i forget, but maybe they define curvature in a "begging the question" kind of way
 
  • #61
garrett said:
This is hard to believe until you play with it, but in differential geometry integration really is nothing but the evaluation of Stokes theorem:
[tex]
\int_{V} \underrightarrow{d} \underbar{\omega} =
\int_{\partial V} \underbar{\omega}
[/tex]
Think about how that works in one dimension and you'll see it's the same as the usual notion of integration. :) First you find the anti-derivative, then evaluate it at the boundary.

This statement was a little opaque, so I'll flesh it out a bit. Integrate an arbitrary 1-form, [itex]f(x)\underrightarrow{dx}[/itex], in one dimension over the region, V, from [itex]x_1[/itex] to [itex]x_2[/itex]. Stokes' theorem says this can be done by finding a 0-form, [itex]\omega[/itex], that is the anti-derivative of f:
[tex]
f(x) \underrightarrow{dx} = \underrightarrow{d} \omega = \underrightarrow{dx} \frac{d}{d x} \omega
[/tex]
and "integrating" it at the boundary, which for a zero dimensional integral is simply evaluation at [itex]x_2[/itex] minus at [itex]x_1[/itex]:
[tex]
\int_{V} f(x) \underrightarrow{dx} =
\int_{V} \underrightarrow{d} \omega =
\int_{\partial V} \omega = \omega(x_2) - \omega(x_1)
[/tex]

This is why integrating over forms is the same as the integrals you're used to from physics problems -- the hard part, as always, is finding the anti-derivative, [itex]\frac{d}{d x} \omega = f(x)[/itex].
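If you want to check this numerically, here's a two-line sanity check (my own sketch; f = 3x² is an arbitrary choice): compare a Riemann sum for the 1-form with the anti-derivative evaluated at the boundary.
[code]
import numpy as np

f = lambda x: 3 * x**2          # the 1-form is f(x) dx
w = lambda x: x**3              # a 0-form anti-derivative: dw = f(x) dx
x1, x2 = 0.5, 2.0

N = 100000
xs = x1 + (np.arange(N) + 0.5) * (x2 - x1) / N     # midpoints of the interval
riemann  = np.sum(f(xs)) * (x2 - x1) / N           # integral of the 1-form over V
boundary = w(x2) - w(x1)                           # "integral" of w over the boundary
print(riemann, boundary)                           # both ~ 7.875
[/code]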
 
  • #62
garrett said:
This statement was a little opaque, so I'll flesh it out a bit. Integrate an arbitrary 1-form, [itex]f(x)\underrightarrow{dx}[/itex], in one dimension over the region, V, from [itex]x_1[/itex] to [itex]x_2[/itex]. Stokes' theorem says this can be done by finding a 0-form, [itex]\omega[/itex], that is the anti-derivative of f:
[tex]
f(x) \underrightarrow{dx} = \underrightarrow{d} \omega = \underrightarrow{dx} \frac{d}{d x} \omega
[/tex]
and "integrating" it at the boundary, which for a zero dimensional integral is simply evaluation at [itex]x_2[/itex] minus at [itex]x_1[/itex]:
[tex]
\int_{V} f(x) \underrightarrow{dx} =
\int_{V} \underrightarrow{d} \omega =
\int_{\partial V} \omega = \omega(x_2) - \omega(x_1)
[/tex]

This is why integrating over forms is the same as the integrals you're used to from physics problems -- the hard part, as always, is finding the anti-derivative, [itex]\frac{d}{d x} \omega = f(x)[/itex].


Since you have a very pedagogical way of explaining things, I can't resist the temptation of asking you to now explain the integral of a two-form over a "surface", say. I have seen this given in several books and discussed here, but I would really appreciate seeing your way of presenting it (and the connection with the usual calculus definition).
I would appreciate it.
 
  • #63
nrqed said:
Since you have a very pedagogical way of explaining things, I can't resist the temptation of asking you to now explain the integral of a two-form over a "surface", say. I have seen this given in several books and discussed here, but I would really appreciate seeing your way of presenting it (and the connection with the usual calculus definition).
I would appreciate it.

Sure. Say we want to integrate a 2-form, [itex]\underrightarrow{\underrightarrow{F}}[/itex] over a little patch, V, of a two dimensional manifold, with two patch coordinates [itex](x^1,x^2)[/itex] each going from 0 to 1 over the extent of the patch. The hard part is guessing a 1-form "anti-derivative" satisfying
[tex]
\underrightarrow{\underrightarrow{F}} = \underrightarrow{d}\underrightarrow{\omega}
[/tex]
I say "a" anti-derivative rather than "the" because you can add a closed form to the anti-derivative and it will still be another good anti-derivative
[tex]
\underrightarrow{\omega} \rightarrow \underrightarrow{\omega'} = \underrightarrow{\omega}
+ \underrightarrow{d} g
[/tex]

Once a good anti-derivative 1-form,
[tex]
\underrightarrow{\omega} = \underrightarrow{dx^1} \omega_1(x^1,x^2) + \underrightarrow{dx^2} \omega_2(x^1,x^2)
[/tex]
is found, Stokes' theorem says you can just integrate it counter-clockwise along the one dimensional patch boundary curve and that will give you the integral of the 2-form over the patch. For the coordinate patch we chose,
[tex]
\int_V \underrightarrow{\underrightarrow{F}} =
\int_{\partial V} \underrightarrow{\omega} =
\int_{(0,0)}^{(1,0)} \underrightarrow{dx^1} \omega_1
+\int_{(1,0)}^{(1,1)} \underrightarrow{dx^2} \omega_2
+\int_{(1,1)}^{(0,1)} \underrightarrow{dx^1} \omega_1
+\int_{(0,1)}^{(0,0)} \underrightarrow{dx^2} \omega_2
[/tex]
which we can evaluate by using Stokes theorem again for each leg around the curve, equivalent to the way we're used to.

For example, take the 2-form to be
[tex]
\underrightarrow{\underrightarrow{F}} =
\frac{1}{2} \underrightarrow{dx^i} \underrightarrow{dx^j} F_{ij} =
\underrightarrow{dx^1} \underrightarrow{dx^2} x^1
[/tex]
A good anti-derivative is
[tex]
\underrightarrow{\omega} =
- \underrightarrow{dx^1} x^1 x^2
[/tex]
And integrating this around the patch gives one non-zero contribution:
[tex]
\int_{(1,1)}^{(0,1)} - \underrightarrow{dx^1} x^1 x^2 = \frac{1}{2}
[/tex]
which equals the integral of our 2-form over our patch.
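If anyone wants to check the arithmetic numerically, here's a throwaway sketch (mine): the direct double integral of F = x^1 dx^1 dx^2 over the patch and the boundary integral of ω = -x^1 x^2 dx^1 both come out to 1/2.
[code]
import numpy as np

N = 1000
h = 1.0 / N
mid = (np.arange(N) + 0.5) * h                  # midpoints in [0, 1]

# direct double integral of  F = x1 dx1^dx2  over the unit patch
X1, _ = np.meshgrid(mid, mid, indexing='ij')    # F happens not to depend on x2
direct = np.sum(X1) * h * h                     # ~ 0.5

# boundary integral of  omega = -x1 x2 dx1, counter-clockwise around the patch;
# only the top leg (x2 = 1, x1 running from 1 down to 0, so dx1 = -h) contributes
top_leg = np.sum(-mid * 1.0) * (-h)             # ~ 0.5
print(direct, top_leg)
[/code]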
 
  • #64
a 2 form assigns an area to a parallelogram. so parametrize your surface by a map from a rectangle. then subdivide the rectangle into little rectangles.

map each little rectangle into the tangent space to your surface by the derivative of your parameter map.

you get a finite family of little rectangles in a finite set of tangent spaces to your surface, which give a piecewise polygonal approximation to your surface.

the 2 form assigns to each of these parallelograms an area. add those up and that approximates the integral over your surface. keep doing it with finer and finer subdivisions of your parametrizing rectangle and it converges to the integral of the 2 form over the surface.
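here is that recipe in a few lines of code (a sketch of my own, using the same 2 form x^1 dx^1 dx^2 on the unit patch as in the previous post): feed the 2 form the pair of edge vectors of each little rectangle and add up the results.
[code]
import numpy as np

def F(p, a, b):
    # the 2-form  F = x1 dx1^dx2  fed a pair of tangent vectors a, b at the point p
    return p[0] * (a[0] * b[1] - a[1] * b[0])

N = 400
h = 1.0 / N
total = 0.0
for i in range(N):
    for j in range(N):
        p = ((i + 0.5) * h, (j + 0.5) * h)   # a point in the little rectangle
        total += F(p, (h, 0.0), (0.0, h))    # its two edge vectors
print(total)                                 # ~ 0.5
[/code]
it comes out to ~0.5, the same as the boundary computation in the previous post.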
 
  • #65
Yep, these two ways of integrating forms are equivalent.
 
  • #66
garrett said:
Sure. Say we want to integrate a 2-form, [itex]\underrightarrow{\underrightarrow{F}}[/itex] over a little patch, V, of a two dimensional manifold, with two patch coordinates [itex](x^1,x^2)[/itex] each going from 0 to 1 over the extent of the patch. The hard part is guessing a 1-form "anti-derivative" satisfying
[tex]
\underrightarrow{\underrightarrow{F}} = \underrightarrow{d}\underrightarrow{\omega}
[/tex]
I say "a" anti-derivative rather than "the" because you can add a closed form to the anti-derivative and it will still be another good anti-derivative
[tex]
\underrightarrow{\omega} \rightarrow \underrightarrow{\omega'} = \underrightarrow{\omega}
+ \underrightarrow{d} g
[/tex]

Once a good anti-derivative 1-form,
[tex]
\underrightarrow{\omega} = \underrightarrow{dx^1} \omega_1(x^1,x^2) + \underrightarrow{dx^2} \omega_2(x^1,x^2)
[/tex]
is found, Stokes' theorem says you can just integrate it counter-clockwise along the one dimensional patch boundary curve and that will give you the integral of the 2-form over the patch. For the coordinate patch we chose,
[tex]
\int_V \underrightarrow{\underrightarrow{F}} =
\int_{\partial V} \underrightarrow{\omega} =
\int_{(0,0)}^{(1,0)} \underrightarrow{dx^1} \omega_1
+\int_{(1,0)}^{(1,1)} \underrightarrow{dx^2} \omega_2
+\int_{(1,1)}^{(0,1)} \underrightarrow{dx^1} \omega_1
+\int_{(0,1)}^{(0,0)} \underrightarrow{dx^2} \omega_2
[/tex]
which we can evaluate by using Stokes theorem again for each leg around the curve, equivalent to the way we're used to.

For example, take the 2-form to be
[tex]
\underrightarrow{\underrightarrow{F}} =
\frac{1}{2} \underrightarrow{dx^i} \underrightarrow{dx^j} F_{ij} =
\underrightarrow{dx^1} \underrightarrow{dx^2} x^1
[/tex]
A good anti-derivative is
[tex]
\underrightarrow{\omega} =
- \underrightarrow{dx^1} x^1 x^2
[/tex]
And integrating this around the patch gives one non-zero contribution:
[tex]
\int_{(1,1)}^{(0,1)} - \underrightarrow{dx^1} x^1 x^2 = \frac{1}{2}
[/tex]
which equals the integral of our 2-form over our patch.
Thank you for taking the time to write this. It makes complete sense, except for the very last step, which I am not sure I follow. It looks as if it simply uses the fact that the antiderivative of [itex] dx_1 \,x_1 x_2 [/itex] is [itex] {1 \over 2} x_1^2 x_2[/itex], and if I was thinking in terms of "dumb physicist calculus", that's what I would do, given that x_2 is kept constant along this "line".

However, if I think in terms of the formalism of forms and the equation
[tex]\int_{V} \underrightarrow{d} \omega =\int_{\partial V} \omega = \omega(x_2) - \omega(x_1) [/tex]
then it's not clear to me how to proceed. I mean that [itex] d( {1 \over 2}\, x_1^2 \,x_2 ) [/itex] does not give [itex] dx_1 \,x_1\, x_2 [/itex].
Am I supposed to use the fact that the value of x_2 is kept fixed to "set" dx_2 equal to zero here?

In other words, could you give me the explicit zero-form "omega" that you use in the last step (before even plugging in the boundary points)?
I know that this is a trivial step but it still confuses me.

I keep thinking that when integrating over differential forms, one actually "feeds" vectors along the region of integration (a single vector along a line for a one-dimensional integration, pairs of vectors for an integration over a two-form, etc.), and I would see why in this case feeding a vector tangent to the line going from (1,1) to (0,1) to the one-form dx_2 would give zero. But I keep being told that one does not feed any vectors to the differential forms when one integrates forms.

Thank you again for your patience!

Patrick
 
  • #67
You are right that [itex]x^2[/itex] is constant, 1, along the relevant curve. That's pretty much all there is to it. Plug in 1 for [itex]x^2[/itex], as you thought, and then it works as you think for a 1D integral.

What you say about [itex]\underrightarrow{dx^2}[/itex] being zero along the curve is fine. A slightly more precise way of saying this is that the integral of the [itex]\underrightarrow{dx^2}[/itex] component of [itex]\underrightarrow{\omega}[/itex] is zero along the curve. I suppose it doesn't hurt to think of it as feeding the curve's tangent vector to the form and getting zero.
 
  • #68
remember too, not only is it hard to find an antiderivative to use in calculating an integral, but sometimes they do not exist.

i.e. not all forms are "exact". exact forms, i.e. those with antiderivatives, are always "closed", i.e. d of them is zero, and the converse holds locally.
but not all forms are even closed.

exact one forms are those such that integration along a path depends only on the endpoints, i.e. these are "conservative". these are the ones stokes' thm applies to.

but for closed forms, path integration is only a homology invariant, i.e. you get the same integral if you change the path by one which is the boundary of a parametrized surface.

but for general one forms, the path integral changes when the path changes in any way. stokes is useless on these. but my description above, involving feeding pairs of vectors into, in that case a 2 form, still applies. in fact it is the definition of the integral.
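a quick numerical illustration of that last distinction (my own sketch; the loops are arbitrary): the winding form w = (-y dx + x dy)/(x^2 + y^2) is closed but not exact on the punctured plane. its integral is 2 pi around any loop that goes once around the origin, and 0 around a loop that doesn't.
[code]
import numpy as np

def omega(p, v):
    # the closed 1-form  (-y dx + x dy) / (x^2 + y^2)  on the punctured plane
    x, y = p
    return (-y * v[0] + x * v[1]) / (x**2 + y**2)

def loop_integral(curve, dcurve, N=20000):
    t = (np.arange(N) + 0.5) * (2 * np.pi / N)
    return sum(omega(curve(s), dcurve(s)) for s in t) * (2 * np.pi / N)

circle   = lambda s: (np.cos(s), np.sin(s))
dcircle  = lambda s: (-np.sin(s), np.cos(s))
ellipse  = lambda s: (3 * np.cos(s), 2 * np.sin(s))
dellipse = lambda s: (-3 * np.sin(s), 2 * np.cos(s))
offset   = lambda s: (5 + np.cos(s), np.sin(s))       # does not enclose the origin
doffset  = lambda s: (-np.sin(s), np.cos(s))

print(loop_integral(circle, dcircle))    # ~ 2*pi
print(loop_integral(ellipse, dellipse))  # ~ 2*pi  (homologous loop, same value)
print(loop_integral(offset, doffset))    # ~ 0
[/code]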
 
  • #69
mathwonk said:
a 2 form assigns an area to a parallelogram. so parametrize your surface by a map from a rectangle. then subdivide the rectangle into little rectangles.

map each little rectangle into the tangent space to your surface by the derivative of your parameter map.

you get a finite family of little rectangles in a finite set of tangent spaces to your surface, which give a piecewise polygonal approximation to your surface.

the 2 form assigns to each of these parallelograms an area. add those up and that approximates the integral over your surface. keep doing it with finer and finer subdivisions of your parametrizing rectangle and it converges to the integral of the 2 form over the surface.

Thanks. Ok, that makes perfect sense to me (and as you pointed out, it works even if the antiderivative does not exist, i.e. the two-form being integrated is not exact).

This is exactly the way I have always pictured the integration of differential forms (i.e. as feeding vectors with smaller and smaller components until the sum converges), but I never understood why books don't seem to ever say this. When they get to the point of actually evaluating integrals over differential forms, they simply state that the integrals are *defined* to be the "usual" expressions of elementary calculus. They need to introduce a *definition*.

That does not seem necessary to me. Proceeding the way Mathwonk did, one is naturally led from the integral of a two-form (say) to the usual expression for the integral as seen in elementary calculus. It follows, it seems to me, without the need to introduce a definition. That has always left me puzzled, when it seems to follow simply from saying that the integral of an n-form corresponds to "feeding" it vectors to evaluate the area (or volume, etc.) spanned by the vectors, and subdividing until the sum converges.




another point: I know that I have been scoffed at for using the expression "infinitesimal", but to me, an infinitesimal quantity is simply the subdivision one gets once one reaches the point where the integral converges. *That*'s what I call an infinitesimal. So the above procedure (feeding tangent vectors corresponding to finer and finer subdivisions until the integral converges) is what I have always meant by doing an integral over a two-form by feeding it vectors with "infinitesimal" components and summing over them. But I have always been told that I was completely wrong in saying this. Now it seems to me that Mathwonk is describing the integration of a two-form exactly the way I was visualizing it.
Maybe it's because people think about something else when using the word "infinitesimals"? I have been trying for months to figure out what was wrong with my reasoning. And books were unhelpful, because when they get to the point of getting a number out of an integral over a differential form, they introduce a definition, without ever explaining the process described by Mathwonk, which is the process I had in mind.

Thanks for the comments.
 
  • #70
When thinking the standard way, I mainly just think of infinitesimals as a lazy way of dealing with tangent vectors, etc.

e.g. to be suggestive, I could use the notation:

P + v

for the tangent vector v at the point P. Then, things formally look like I'm using v as an "infinitesimal" and neglecting things at second order. For example, I can "evaluate" a differentiable map f:

[tex]f(P + v) = f(P) + f'(P) v[/tex]

and in this notation, it looks like an ordinary differential approximation.
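You can even make that formal bookkeeping executable. A small sketch of my own, using "dual numbers", where the square of the infinitesimal part is literally set to zero:
[code]
class Dual:
    """Numbers a + b*eps with eps**2 = 0: the 'infinitesimal' part just carries
    the tangent vector along, exactly as in f(P + v) = f(P) + f'(P) v."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)   # drop the b*b term
    __rmul__ = __mul__
    def __repr__(self):
        return f"{self.a} + {self.b} eps"

def f(x):
    return 3 * x * x + 2 * x          # f'(x) = 6x + 2

P, v = 2.0, 1.0
print(f(Dual(P, v)))                  # 16.0 + 14.0 eps  =  f(P) + f'(P) v eps
[/code]
The second component is just the tangent vector pushed forward by f, which is all the "infinitesimal" was ever doing.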
 
