A Geometric Approach to Differential Forms by David Bachman

In summary, David Bachman has written a book on differential forms that is accessible to beginners. He recommends using them to prove theorems in advanced calculus, and advises starting with Chapter 2. This thread was started at PF for others to ask questions and discuss the material.
  • #141
Chapter 3: Forms


Exercise 3.19

Let [itex]||V_1 \times V_2|| \equiv A[/itex], the area of the parallelogram spanned by [itex]V_1[/itex] and [itex]V_2[/itex].

Now look at [itex]\omega (V_1,V_2)[/itex].

[itex]\omega (V_1,V_2)= w_1(a_1b_2-a_2b_1)+w_2(a_2b_3-a_3b_2)+w_3(a_3b_1-a_1b_3)[/itex]
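Here [itex]V_1 = <a_1,a_2,a_3>[/itex] and [itex]V_2 = <b_1,b_2,b_3>[/itex] (the notation of the Exercise), so the three parenthesized quantities are exactly the components of the cross product:

[itex]V_1 \times V_2 = <a_2b_3-a_3b_2, \; a_3b_1-a_1b_3, \; a_1b_2-a_2b_1>[/itex]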

Recalling that [itex]V_3=<w_2,w_3,w_1>[/itex] we have the following.

[itex]\omega (V_1,V_2)=V_3 \cdot (V_1 \times V_2)[/itex]
[itex]\omega (V_1,V_2)=||V_3|| \, A\cos( \theta )[/itex],

where [itex]\theta[/itex] is the angle between [itex]V_3[/itex] (and therefore [itex]l[/itex]) and the normal [itex]V_1 \times V_2[/itex]. This dot product is maximized when [itex]\theta = 0[/itex], i.e. when [itex]V_3[/itex] makes a 90-degree angle with both [itex]V_1[/itex] and [itex]V_2[/itex], and we have our result.

Exercise 3.20

Let [itex]N \equiv V_1 \times V_2[/itex].

Recalling the action of [itex]\omega[/itex] on [itex]V_1[/itex] and [itex]V_2[/itex] from the last Exercise, we have the following.

[itex]\omega (V_1,V_2)=V_3 \cdot (V_1 \times V_2)[/itex]

Noting the definition of [itex]N[/itex] we see that we can immediately identify [itex]V_3[/itex] with [itex]V_{\omega}[/itex], and the desired result is obtained.

Exercise 3.21

Start by manipulating the expression given in the Exercise.

[itex]\omega= F_x dy \wedge dz - F_y dx \wedge dz + F_z dx \wedge dy[/itex]
[itex]\omega = F_z dx \wedge dy + F_x dy \wedge dz - F_y dx \wedge dz[/itex]
[itex]\omega = F_z dx \wedge dy + F_x dy \wedge dz + F_y dz \wedge dx[/itex]

I used commutativity of 2-forms under addition to get to line 2, and anticommutativity of 1-forms under the wedge product to get to line 3.

Noting that [itex]V_3=<c_1,c_2,c_3>=<w_2,w_3,w_1>[/itex] (Exercise 3.18) and that [itex]V_3=V_{\omega}[/itex] (Exercise 3.20), we see that [itex]V_{\omega}=<F_x,F_y,F_z>[/itex].
 
  • #142
well, on the positive side, some people actually learn more by correcting the errors of an imprecise book than by plodding through one where all the i's are dotted for you. I think that may be the case here. you seem to be learning a lot.
 
  • #143
Too true. I sometimes hand out fallacious arguments to my students and ask them to find the errors.

Notes on Section 3.5 will be forthcoming shortly, and then we can finally get on to differential forms and integration.

Yahoo!
 
  • #144
Is it safe to say this thread is dead? I'm working through Bachman on my own and the discussion here has been pretty helpful.
 
  • #145
Calculation with differential forms

Tom Mattson said:
Hello folks,

I found a lovely little book online called A Geometric Approach to Differential Forms by David Bachman on the LANL arXiv. I've always wanted to learn this subject, and so I did something that would force me to: I've agreed to advise 2 students as they study it in preparation for a presentation at a local mathematics conference. :eek:

Since this was such a popular topic when lethe initially posted his Differential Forms tutorial, and since it is so difficult for me and my advisees to meet at mutually convenient times, I had a stroke of genius: Why not start a thread at PF? :cool:

Here is a link to the book:

http://xxx.lanl.gov/PS_cache/math/pdf/0306/0306194.pdf

As Bachman himself says, the first chapter is not necessary to learn the material, so I'd like to start with Chapter 2 (actually, we're at the end of Chapter 2, so hopefully I can stay 1 step ahead and lead the discussion!)

If anyone is interested, download the book and I'll post some of my notes tomorrow.


I have a question on the example of the integral presented in Example 3.3 (pages 40-41, in the version from the arXiv).

He seems to go from dx^dy directly to dr^dt, where r and t parametrize the upper half of the unit sphere: x = r cos t, y = r sin t, z = sqrt(1 - r^2), with r ranging from 0 to 1 and t from 0 to 2 pi.

I don't understand that; it seems to me that dx^dy = r dr^dt.

Any one can help?

Thanks


Patrick
 
  • #146
The extra r is there.

(z^2) dx^dy was transformed to (1 - r^2) r dr^dt.

Regards,
George
 
  • #147
George Jones said:
The extra r is there.

(z^2) dx^dy was transformed to (1 - r^2) r dr^dt.

Regards,
George

Yes, of course...:redface: Thanks

(I simply made the change of variables x,y -> r,t in dx^dy and got r dr^dt. Now I see that his [itex] \omega_{\phi(x,y)}[/itex] calculates the Jacobian, which is included automatically in the way I did it. So he literally meant to replace dx^dy by dr^dt, without taking derivatives... that is what confused me.)
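Explicitly, with [itex]x = r\cos t[/itex] and [itex]y = r\sin t[/itex]:

[tex]dx \wedge dy = (\cos t \, dr - r\sin t \, dt) \wedge (\sin t \, dr + r\cos t \, dt) = r\cos^2 t \, dr \wedge dt - r\sin^2 t \, dt \wedge dr = r \, dr \wedge dt[/tex]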

Thanks..

On a related note... I know that I will sound stupid, but I still find it very confusing that the symbols "dx" and "dy" are used sometimes to represent infinitesimals and sometimes to represent differential forms. :eek:

Anyway...
 
  • #148
nrqed said:
On a related note... I know that I will sound stupid, but I still find it very confusing that the symbols "dx" and "dy" are used sometimes to represent infinitesimals and sometimes to represent differential forms. :eek:

Umm... that's on purpose, since the one-forms dx and dy are defined so that one can do the calculus without all this infinitesimal nonsense.

BTW, whatever is the obsession with infinitesimals? I thought that Bishop Berkeley firmly nailed the last nail into their coffin way back in the 1700s. And Cauchy showed us how to do all of analysis, and hence calculus, without thinking once about them. Virtually no one that I know of in the research field actually thinks in terms of these. Don't we have enough non-computable numbers to deal with (e.g. the vast majority of irrational numbers) without willfully adding more?
 
  • #149
Doodle Bob said:
I thought that Bishop Berkeley firmly nailed the last nail into their coffin way back in the 1700s.

I'm not sure what you mean, but I'm afraid you mean that using infinitesimals can make no sense! But we've had nonstandard analysis since the 1950s, which can be used to put infinitesimals on a perfectly rigorous foundation.
 
  • #150
Hurkyl said:
I'm not sure what you mean, but I'm afraid you mean that using infinitesimals can make no sense! But we've had nonstandard analysis since the 1950s, which can be used to put infinitesimals on a perfectly rigorous foundation.

I'm not sure, but I think that Doodle Bob was referring to these when he said:

Doodle Bob said:
Don't we have enough non-computable numbers to deal with (e.g. the vast majority of irrational numbers) without willfully adding more?

Regards,
George
 
  • #151
Hurkyl said:
I'm not sure what you mean, but I'm afraid you mean that using infinitesimals can make no sense! But we've had nonstandard analysis since the 1950s, which can be used to put infinitesimals on a perfectly rigorous foundation.

George's hunch was correct -- I just see nonstandard analysis as rather redundant, given that it only adds structure to R that is not really needed for anything. But I am sort of contradicting myself above there: Berkeley was railing against Newton's use of fluxions, which were his version of infinitesimals and which he used very much nonrigorously.

But, I am aware that nonstandard analysis has been rigorously established (I always thought it was much older than the '50s). My general feeling, though, is that it's not all that necessary. Sure, standard analysis can be a pain to learn -- at least, it is for my students right now. But eventually one can get used to it -- and even come to appreciate its utility and elegance.
 
  • #152
Doodle Bob said:
My general feeling, though, is that it's not all that necessary.

That's true -- a statement is true in standard analysis if and only if it's true in nonstandard analysis (that's the transfer principle, at least for first-order statements). So it doesn't provide any extra power. Any NSA proof could be directly translated into a standard proof, but it's messy.

But it's alleged that NSA proofs are shorter, cleaner, and more intuitive. If so, then there is a practical reason to use it. MathWorld even claims that there will eventually be NSA theorems that will never be proven in a standard way because the proof would be too long to ever write down. (I don't know if that's just sensationalism, or if there's actually been work done on complexity analysis of the two approaches.)

I'm told physicists prefer to think in terms of infinitesimals. *shrug*


And who says they have to be uncomputable? :smile: You could just introduce a symbol e for a particular infinitesimal, and do computations with the resulting structure. It's no less "computable" than doing arithmetic with polynomials.
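For instance, here is a minimal sketch in Python of computing with such a symbol e, truncating at e*e = 0 (dual numbers; the class and names are just illustrative):

[code]
# Computing with a formal infinitesimal e satisfying e*e = 0 (dual numbers).
# The arithmetic is as mechanical as polynomial arithmetic truncated at degree 1.
class Dual:
    def __init__(self, a, b=0.0):   # represents a + b*e
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b e)(c + d e) = ac + (ad + bc) e, since e^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

    def __repr__(self):
        return f"{self.a} + {self.b}e"

e = Dual(0.0, 1.0)
x = Dual(3.0)
print((x + e) * (x + e))   # 9.0 + 6.0e -- the e-part is d(x^2)/dx at x = 3
[/code]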
 
  • #153
Hurkyl said:
I'm told physicists prefer to think in terms of infinitesimals. *shrug*

Try to separate physicists from their infinitesimals, and they get downright hostile.

Regards,
George
 
  • #154
George Jones said:
Try to separate physicists from their infinitesimals, and they get downright hostile.

Regards,
George


I admit to not having the mathematical sophistication of all you guys, and I am obviously too stupid to understand advanced maths to start with, but here is what I mean.

When I see an integral [itex]\int dx \, dy \, F(x,y)[/itex], I think of the Riemann sum [itex]\lim_{\Delta x, \Delta y \rightarrow 0} \sum F(x,y) \Delta x \Delta y[/itex]. That's what I mean by infinitesimals. As far as I know, this is the only way to actually compute an integral (if anyone knows how to get from the expression with differential forms to an actual number without using the above formula, I will be happy to be enlightened out of my complete lack of knowledge).
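To be concrete, here is the sort of computation I mean: a crude midpoint Riemann sum in a few lines of Python, for F(x,y) = xy on the unit square (exact answer: 1/4).

[code]
# Crude midpoint Riemann sum for F(x, y) = x*y over [0,1] x [0,1].
# The exact value is 1/4; the sum converges to it as n grows.
import numpy as np

n = 1000
dx = dy = 1.0 / n
x = (np.arange(n) + 0.5) * dx   # midpoints in x
y = (np.arange(n) + 0.5) * dy   # midpoints in y
X, Y = np.meshgrid(x, y)
print((X * Y).sum() * dx * dy)  # -> 0.24999999...
[/code]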

Now, when I see integrals defined in terms of differential forms, I always reach a point where the author says something like ''we *define* [itex]\int F(x,y) \, dx \wedge dy[/itex] to be [itex]\int F(x,y) \, dx \, dy[/itex]'' (e.g. Equation 8.12 of Felsager... I don't have Frankel with me right now, but he does exactly the same thing). On the left side there are differential forms; on the right side there is what I call infinitesimals.

It's this step which bothers my weak and feeble brain. I have always thought that there should be a way to get from the left side to the right side without saying ''well, we have to take this as a *definition*''. This makes it sound as if there is nothing one can do with the left side, so one makes the leap by decree. I have always wondered why one cannot simply ''feed'' two infinitesimal vectors to the dx^dy to get a scalar dx dy. But it is never presented that way in books; it is always presented as a definition. So on one side there are differential forms labeled dx, dy, and on the other side there are the dx and dy that I call infinitesimals (which, in my mind, are defined through the limiting procedure I gave at the very top, and which are something one can actually work with to get a numerical value, let's say on a computer).

If there were an explicit step going from one side to the other, I could understand the connection between the diff forms dx, dy and the ''infinitesimals'' (defined through the limit) dx, dy. But no explicit step is ever shown. This is why I find it confusing to use the same symbols ''dx'' and ''dy'' on both sides: (to my very unsophisticated and slow intellect) they have totally different meanings on the two sides!


I guess that the notation ''dx'' stems from this one-form being the exterior derivative of the coordinate function? And this is why, when we do a change of variables, we can use the usual rules of derivatives to get the corresponding differential form in the new basis?
I was hoping for a discussion along those lines, but I am obviously too unsophisticated and dumb to discuss those things.

(as you have surely guessed by the stupidity of my comments, I am a physicist).


Regards

Patrick

PS: I hope that it's clear that my response is not aimed at George specifically... but I had to reply to one of the posts.
 
  • #155
Well, everything has to have a definition, right? The question is about the motivation of the definitions!


One of the basic ideas of differential geometry is that it's very easy to define things on Euclidean space, and then we extend those definitions to apply to any manifold.


Our picture of a differential form is that it acts like some sort of geometric measure, right? So, say, if we integrate a 3-form in R³, it had better look like an ordinary volume integral. But which 3-form should we take as a "standard"? This one seems most natural:

[tex]
\int_R f \, dx \wedge dy \wedge dz := \int_R f dV
[/tex]

If I write [itex]\omega := f \, dx \wedge dy \wedge dz[/itex], then observe that we also have:

[tex]
f = \omega \left( \frac{\partial}{\partial x} \otimes \frac{\partial}{\partial y} \otimes \frac{\partial}{\partial z} \right)
[/tex]

So we can think of it as evaluating our 3-form on a triple of tangent vectors, and thus producing a scalar function. We then integrate that scalar function with the ordinary volume measure in R³. In other words:

[tex]
\int_R \omega = \int_R \omega(\mathbf{e}_1 \otimes \mathbf{e}_2 \otimes \mathbf{e}_3) \, dV
[/tex]


Ah, but this generalizes!

An n-dimensional surface in your manifold M is really just a map from the Euclidean n-cube [0, 1]^n to your manifold M.

But if we have an n-dimensional surface S into M, we can map differential forms on M back to differential forms on the cube. Recall that for a covector:

[tex]S^*(\omega) =
\omega \left( \frac{\partial S}{\partial x_1} \right) \, dx_1 + \cdots +
\omega \left( \frac{\partial S}{\partial x_n} \right) \, dx_n[/tex]

And this easily extends to an n-form:

[tex]S^*(\omega) =
\omega \left( \frac{\partial S}{\partial x_1} \otimes \cdots
\otimes \frac{\partial S}{\partial x_n} \right) dx_1 \wedge \cdots \wedge dx_n[/tex]

And integration over a surface is defined in terms of an integral over the parameter space:

[tex]
\int_S \omega := \int_{[0, 1]^n} S^*(\omega)
= \int_{[0, 1]^n}
\omega \left( \frac{\partial S}{\partial x_1} \otimes \cdots
\otimes \frac{\partial S}{\partial x_n} \right) \, dV
[/tex]

so, your intuition was right in a sense -- when we integrate our n-form, we are applying it to some vectors and producing a scalar function, to which we apply an ordinary integral. In particular, we apply it to the standard tangent vectors defined by the parametrization!
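And to tie this back to Bachman's hemisphere example from earlier in the thread, here's a quick numerical check -- a SymPy sketch (the 2-form z^2 dx^dy and the parametrization are the ones from Example 3.3):

[code]
# Integrating omega = z^2 dx^dy over the upper unit hemisphere,
# parametrized by S(r, t) = (r cos t, r sin t, sqrt(1 - r^2)).
import sympy as sp

r, t = sp.symbols('r t', positive=True)
x, y, z = r*sp.cos(t), r*sp.sin(t), sp.sqrt(1 - r**2)

# Tangent vectors dS/dr and dS/dt of the parametrization
S_r = [sp.diff(c, r) for c in (x, y, z)]
S_t = [sp.diff(c, t) for c in (x, y, z)]

# dx^dy evaluated on (S_r, S_t) is the 2x2 determinant of the x- and y-components
integrand = z**2 * (S_r[0]*S_t[1] - S_t[0]*S_r[1])
integrand = sp.simplify(integrand)          # -> r*(1 - r**2)

print(sp.integrate(integrand, (r, 0, 1), (t, 0, 2*sp.pi)))   # pi/2
[/code]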
 
  • #156
nrqed said:
This is why I find it confusing to use the same symbols ''dx'' and ''dy'' on both sides: (to my very unsophisticated and slow intellect) they have totally different meanings on the two sides!

This is, in fact, something I really dislike about differential geometry, and it has been an obstacle to my understanding... even when it was still in the context of doing ordinary multivariable calculus.

On the one hand, it's convenient to use the same terminology for things that are "essentially" the same, but on the other hand, when you're learning, it makes it awfully difficult to figure out what you're really doing! :frown:
 
  • #157
Patrick,

I apologize if my brief comment caused offence.

The comment was not directed in any way at you - in fact, quite the opposite.

Nor did I intend that anyone should draw the inference that I think physicists are stupid and slow - they aren't. But I have observed that some physicists show incredible stubbornness with respect to learning new mathematical techniques. Conversely, many mathematicians are not willing to "let down their hair" enough to see the deep ideas in physics behind the informal mathematics used by physicists.

I have commented on these issues before in this forum, and yesterday's comment was meant as a lighthearted continuation of these comments. As such, it was a complete failure.

I do not expect you or anybody else to agree with my views. However, my experience is that you have always been interested in the interplay between physics and mathematics.

The question of what level of mathematical rigor is appropriate for physics is a difficult and context-dependent one. All that can be said for sure is that it lies in the open interval (a , b), where a = no rigor and b = the level of mathematical analysis.

Too much rigor can be a real hindrance to communicating physics, and to doing physics. The same for too little rigor. I certainly do not feel that physicists should give up their informal approach to differentials, which they have used profitably for so long.

I enjoy your well-thought-out posts, both your answers and your questions. You often ask questions to which I don't know the answer, and I greatly appreciate the discussions provoked by your questions and answers.

Regards,
George
 
  • #158
George Jones said:
Patrick,

I apologize if my brief comment caused offence.

The comment was not directed in any way at you - in fact, quite the opposite.

Thank you for your thoughtful reply. Sorry if I misunderstood the tone of your reply. I know that my questions may sound very naive but I assure you that they are not meant to criticize the rigor of mathematicians but are genuine attempts to understand the concepts.

I have a PhD and a postdoc in theoretical particle physics but when it comes to differential geometry I am at the same level as a beginning undergraduate student in maths.

As a professor, I ask only one thing of my students: that they make a genuine effort to understand what I am explaining. If they do, I can forgive them for taking time to understand or for trying to offer counterarguments to explanations I am giving. As long as there is a genuine desire to learn and to understand, I am quite patient.

I cannot demand that of you or anyone else here spending time replying to my posts, of course. But the one thing I ask of anyone taking the time to reply is patience. Any question/comment/counterargument I may offer is purely with the intention of understanding, not of putting down rigor or being argumentative.


Nor did I intend that anyone should draw the inference that I think physicists are stupid and slow - they aren't. But I have observed that some physicists show incredible stubbornness with respect to learning new mathematical techniques.

I agree completely!
But the reason I am posting those questions is that I am trying to learn new mathematical techniques.

In learning a mathematical technique, I see two stages: The first stage is to learn how to *apply* it...how to get results using the technique. The second stage is to understand the deep reasons why it works, to understand the underlying concepts and foundations. (I know that mathematicians prefer to follow the opposite order...and often not to even bother with the applications :wink: but I still think that the best way to learn a new technique is to see how it works before understanding the why.)

I can do simple calculation with differential forms, and I could stop there and not bother about the "why" and deeper foundations. But that does not satisfy me. This leads me to very naive questions and I realize that.



Conversely, many mathematicians are not willing to "let down their hair" enough to see the deep ideas in physics behind the informal mathematics used by physicists.

I have commented on these issues before in this forum, and yesterday's comment was meant as a lighthearted continuation of these comments. As such, it was a complete failure.

I do not expect you or anybody else to agree with my views. However, my experience is that you have always been interested in the interplay between physics and mathematics.

The question of what level of mathematical rigor is appropriate for physics is a difficult and context-dependent one. All that can be said for sure is that it lies in the open interval (a , b), where a = no rigor and b = the level of mathematical analysis.

Too much rigor can be a real hindrance to communicating physics, and to doing physics. The same for too little rigor. I certainly do not feel that physicists should give up their informal approach to differentials, which they have used profitably for so long.

I agree. Except that I was not asking to "lower" the level of rigor. My goal was not to say "why bother with this definition guys when this less rigorous approach works perfectly well". That was not my intent at all and if it came out that way, I apologize!

My comment was more "I am really trying to understand this and I don't see why this step is needed. Why couldn't this other way be possible, even if rigor is to be maintained... why is it necessary to invoke a definition at that step?" I did not want to sacrifice rigor, but to understand why it would be non-rigorous to do it another way.


I enjoy your well-thought-out posts, both your answers and your questions. You often ask questions to which I don't know the answer, and I greatly appreciate the discussions provoked by your questions and answers.

Regards,
George

Thank you. And I certainly have learned a lot from your posts, especially on GR.



Patrick
 
  • #159
If you think you have it bad as a physicist, nrqed, pity us poor engineers. I'm not interested in GR as such but differential forms look as though they might be useful in fluid mechanics and I've been trying to acquire the tools for A Very Long Time. I still have a problem with 'dx' and it doesn't help when the most recent book I bought (on GR) talks of the 'infinitesimal interval dx' and then the 'exterior derivative dx' two chapters later!

Interesting that you say:

nrqed said:
In learning a mathematical technique, I see two stages: The first stage is to learn how to *apply* it...how to get results using the technique. The second stage is to understand the deep reasons why it works, to understand the underlying concepts and foundations. (I know that mathematicians prefer to follow the opposite order...and often not to even bother with the applications but I still think that the best way to learn a new technique is to see how it works before understanding the why.)

Without wishing to insult either group, I'd tended to lump physicists and mathematicians together, thus, from my own notes to students:

This isn’t how engineers approach a new subject. Their training is different from that of physicists and mathematicians and their aim is usually to get to a particular application as quickly as possible, leaving the development of the general theoretical framework until later. For example, ‘stress’ first appears on p.27 of Ashby and Jones’ ‘Engineering Materials’, in the context of simple uniaxial structures, but p.617 of Frankel’s ‘Geometry of Physics’, in the context of a general continuum. Engineers therefore tend to go from the particular to the general rather than vice versa and their ‘worked examples’, preferably taken from familiar engineering fields e.g fluid mechanics or stress analysis, rather than relativity or quantum mechanics, usually start with ‘Calculate…’ rather than with ‘Prove…’. Consequently, many of the otherwise-excellent maths and physics textbooks, including Flanders, aren’t really suitable as engineering texts.

Apologies to both, or all three?

ron.
 
  • #160
Hurkyl said:
Well, everything has to have a definition, right? The question is about the motivation of the definitions!


One of the basic ideas of differential geometry is that it's very easy to define things on Euclidean space, and then we extend those definitions to apply to any manifold.


Our picture of a differential form is that it acts like some sort of geometric measure, right? So, say, if we integrate a 3-form in R³, it had better look like an ordinary volume integral. But which 3-form should we take as a "standard"? This one seems most natural:

[tex]
\int_R f \, dx \wedge dy \wedge dz := \int_R f dV
[/tex]

Yes.

Just a quick comment: then the obvious question is, what is "dV" on the rhs? It is not a form; it is an "infinitesimal". And it is this expression that one can actually use to compute something tangible (a number), using a computer, say. And yet I get beaten up when I talk about d(something) being an infinitesimal! :wink: But they *must* be introduced at some point, differential forms notwithstanding! (again, by infinitesimal I mean what is defined within a Riemann sum)



Hurkyl said:
If I write [itex]\omega := f \, dx \wedge dy \wedge dz[/itex], then observe that we also have:

[tex]
f = \omega \left( \frac{\partial}{\partial x} \otimes \frac{\partial}{\partial y} \otimes \frac{\partial}{\partial z} \right)
[/tex]

So we can think of it as evaluating our 3-form on a triple of tangent vectors, and thus producing a scalar function. We then integrate that scalar function with the ordinary volume measure in R³. In other words:

[tex]
\int_R \omega = \int_R \omega(\mathbf{e}_1 \otimes \mathbf{e}_2 \otimes \mathbf{e}_3) \, dV
[/tex]

Good. But I wonder why this is not done this way in books (as opposed to *defining* the integration, as you wrote in your first equation).

The way I (thought) I understood it, forms and the wedge product provide an elegant way to calculate (signed) areas or volumes (or length element or higher dimensional quantities).

I found it neat, the first time I saw it, how the Jacobian arises naturally and how natural it is to go from one coordinate system to another. This is beautiful. But it is surprising that after seeing all this power for the calculation of areas and volumes, one gets to the point where it would seem *most* natural of all to use this machinery to get an infinitesimal area or volume element, and yet, instead of feeding vectors to the forms, one introduces a definition!

To emphasize my point, let's say that one is working with the boring case of finding the area of a surface in 2D Euclidean space. Let's say one starts with Cartesian coordinates (x,y) and one wants to go to polar coordinates. The way I used to see it, using forms provided a wonderful way to do this. But now it seems that one would have to do the following:

a) Define the integral over the area measure dx dy to be an integral over the wedge product dx^dy

b) do your change of variable to r, theta

c) Now you have an integral over dr ^ d(theta) (times r, but I am just looking at the differential form part). What to do with this? You have to *define* it as an integral over the measure dr d(theta)!
So it seems that every time one does a change of variable, one must introduce a new definition to connect with something that can be integrated over explicitly (in the sense of "put in a computer").
That does not sound very useful!


What I have always hoped to hear is the following (I know this is wrong, I am just trying to show my reasoning):

One is integrating over the usual measure dx dy. But this really comes from the wedge product dx^dy to which two basis vectors of the Cartesian coordinate system have been fed. This implies that this expression is already in a specific coordinate system. As such, it is not in a suitable form to change basis. The "real" expression (before having fed in any basis vectors) is the integral over dx^dy. *Now* one can do a change of variables, which leads to r dr^d(theta). Now, to finally get an expression that is useful numerically, one must feed in two basis vectors of the coordinate system one is working with. Feeding in the basis vectors of the polar coordinate system turns dr^d(theta) into the usual dr d(theta) and voila! We have made a change of coordinate system in which the Jacobian arises very naturally and without having to *define* something new.
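In symbols, what I have in mind is that

[tex]r \, dr \wedge d\theta \left( \frac{\partial}{\partial r}, \frac{\partial}{\partial \theta} \right) = r,[/tex]

which is exactly the usual integrand [itex]r \, dr \, d\theta[/itex] of the Riemann sum.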


Hurkyl said:
Ah, but this generalizes!

An n-dimensional surface in your manifold M is really just a map from the Euclidean n-cube [0, 1]^n to your manifold M.

But if we have an n-dimensional surface S into M, we can map differential forms on M back to differential forms on the cube. Recall that for a covector:

[tex]S^*(\omega) =
\omega \left( \frac{\partial S}{\partial x_1} \right) \, dx_1 + \cdots +
\omega \left( \frac{\partial S}{\partial x_n} \right) \, dx_n[/tex]

And this easily extends to an n-form:

[tex]S^*(\omega) =
\omega \left( \frac{\partial S}{\partial x_1} \otimes \cdots
\otimes \frac{\partial S}{\partial x_n} \right) dx_1 \wedge \cdots \wedge dx_n[/tex]

And integration over a surface is defined in terms of an integral over the parameter space:

[tex]
\int_S \omega := \int_{[0, 1]^n} S^*(\omega)
= \int_{[0, 1]^n}
\omega \left( \frac{\partial S}{\partial x_1} \otimes \cdots
\otimes \frac{\partial S}{\partial x_n} \right) \, dV
[/tex]

so, your intuition was right in a sense -- when we integrate our n-form, we are applying it to some vectors and producing a scalar function, to which we apply an ordinary integral. In particular, we apply it to the standard tangent vectors defined by the parametrization!

Exactly what I was hoping for! But why then the need to define an integral over forms in terms of an integral over an ordinary measure (which is what I call an infinitesimal)? It would seem to me that the definition should sit at a different level: the integral over the differential form should be *defined* to be the integral of the form with the tangent vectors fed into it! *That* would make sense to me. But I have never seen a book describe the connection between integration over forms and integration over infinitesimals that way. They never seem to mention *feeding vectors to the forms* in going through that step. It is always defined as going directly from integration over forms to integration over infinitesimals, period. Why?

Thank you for your help!
 
  • #161
I'm bumping this in the hope that discussions on the book are still ongoing.

I'm finding the book very good so far. It's explaining the meaning and application of differential forms far better than most introductions, which just present rather abstract definitions.

However, I'm having trouble with the presentation of the derivative of a differential form. This is presented in chapter 5 on page 89 in the updated version of the book. Here's the extract.

A Geometric Approach to Differential Forms said:
The goal of this section is to figure out what we mean by the derivative of a differential form. One way to think about a derivative is as a function which measures the variation of some other function. Suppose [tex]\omega[/tex] is a 1-form on R^2. What do we mean by the “variation” of [tex]\omega[/tex]? One thing we can try is to plug in a vector field V. The result is a function from R^2 to R. We can then think about how this function varies near a point p of R^2. But p can vary in lots of ways, so we need to pick one. In Section 6 of Chapter 1 we learned how to take another vector, W, and use it to vary p. Hence, the derivative of [tex]\omega[/tex], which we shall denote [tex]d\omega[/tex], is a function that acts on both V and W. In other words, it must be a 2-form!

Let’s recall how to vary a function f(x, y) in the direction of a vector W at a point p. This was precisely the definition of the directional derivative:

[tex] \nabla_W f(p) = \nabla f(p) \cdot W[/tex]

This all seems fine. The derivative [tex]d\omega (V,W)[/tex] will be the rate of change, in the direction of W, of the one-form applied to a fixed V. I would thus be inclined to think that [tex]d\omega(V,W) = \nabla_W \omega (V) [/tex]. However, the book goes on to say that

[tex]d\omega(V,W) = \nabla_W \omega (V) - \nabla_V \omega(W)[/tex]

What is the reason for this second term, and why is W, a change of the base point in R^2, being used as an input to the one-form alongside the tangent vector V?
 
  • #162
k-forms are antisymmetric in their arguments. So, if we intend the derivative of a 1-form to be a 2-form, we need to define the derivative so that it is antisymmetric. (I'm sure there are other reasons, but this one jumps to mind the quickest)

The derivative you mention is simply [itex]\nabla \omega[/itex], which is not a 2-form, and thus not a candidate to be the exterior derivative.
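For reference, the coordinate-free formula for the exterior derivative of a 1-form (modulo sign conventions, which vary from author to author) is

[tex]d\omega(V,W) = \nabla_V \left( \omega(W) \right) - \nabla_W \left( \omega(V) \right) - \omega([V,W])[/tex]

For constant vector fields on R^2 the Lie bracket term [itex]\omega([V,W])[/itex] vanishes, so the definition reduces to exactly the difference of two directional derivatives that the book gives.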
 
  • #163
But then what is [tex]d\omega(V,W)[/tex]? It is clearly not the rate of change of the form with respect to the evaluation point. Are we just defining things for convenience here?
 
  • #164
i guess the reason you want an alternating object is so that the area of a parallelogram spanned by two copies of the same vector, i.e. a flat parallelogram, will be zero?
 
  • #165
OK. It's become clear to me that in general [tex]d\omega(V_1, \ldots, V_n, W)[/tex] is not the derivative of [tex]\omega(V_1, \ldots, V_n)[/tex] in the direction of [tex]W[/tex]. It appears to be some kind of derivative in multiple directions, but the exact interpretation of what the derivative of a general form is escapes me.

However, having read further on in the text, I think the best I can really say is that the differential of a form is something that satisfies the generalised Stokes equation. This seems a little circular, though. I'd like to be able to say what the differential of a form is in its own right.
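(Explicitly, the equation I mean is [tex]\int_{S} d\omega = \int_{\partial S} \omega[/tex], which pins down [itex]d\omega[/itex] only through what it does under the integral sign.)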
 
  • #166
in mathematics you always get to choose where to begin, i.e. what your definitions are. then the other versions of the same thing become theorems. so if you define the derivative of a form as something that satisfies stokes' theorem, then you are defining it in the same way that the dual of a linear operator is defined in vector space theory.

then you have to prove that its matrix is the transpose of the other one in appropriate coordinates, or that it behaves a certain way wrt the inner product.

if you define the derivative more directly, then you have to prove stokes' theorem.

what a form is, directly, is a measure of volume, as i said in my last post. then to get a measure of 4-volume from a measure of 3-volume, you first take the boundary of the 4-block, and then apply the measure of 3-volume to that.

that is the derivative of the 3-form, and also a local version of the adjoint property mentioned before.

always think of what you want your object to measure. then read the definition.
 
  • #167
George Jones said:
Try to separate physicists from their infinitesimals, and they get downright hostile.

Regards,
George
You know, I have been thinking about this lately and the following came to mind.

It's hard to give up entirely on infinitesimals when they have been used for years and have faithfully led to correct results.

Just a very simple example. Let's say a student is asked to find the electric field at a point produced by an infinite line of charge. Then you set up the integral of [itex] k \, dq / r^2 \, {\hat r} [/itex] and so on. Now, how is one to think about dq here? As a differential form? :eek: Or as an infinitesimal (again, in the sense of writing the integral as a sum and taking the limit, etc.)?

I am probably too simple-minded to see it, but I have a hard time seeing how this can be viewed in terms of differential forms!

Patrick
 
  • #168
patrick, what is a field? i.e. what does it do? how do you detect its presence, say, given the charge?

do you put a particle in there and see if it accelerates? or let something move through there and see how it is diverted?

i am trying to see if the field acts on vectors, i.e. tangent vectors, or moving particles.

if it acts on vectors then it should be represented by a differential form.

i.e. it would be a covector field, as opposed to a vector field.
 
  • #169
mathwonk said:
patrick, what is a field? i.e. what does it do? how do you detect its presence, say, given the charge?

do you put a particle in there and see if it accelerates? or let something move through there and see how it is diverted?

i am trying to see if the field acts on vectors, i.e. tangent vectors, or moving particles.

if it acts on vectors then it should be represented by a differential form.

i.e. it would be a covector field, as opposed to a vector field.

That's a good question.

well, the answer that any physicist would give is the first one you mentioned: you place a "test charge" (i.e. a physicist's idealization of an "infinitesimal" charge) at a point and see if it accelerates.

On the other hand, in mathematical physics books, people usually say that E is a one-form. They say that one should think of grabbing the charge, moving it through the E field, and measuring the work done by the electric "field" on the charge, so that the E field, integrated over the path, gives a number: the work.
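In symbols: the work is [tex]W = \int_{C} E,[/tex] where C is the path; at each point the one-form E eats the tangent vector of the path, and the resulting function is integrated over the parameter interval.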

It's very confusing to start from the conventional physics equations and figure out which quantities are truly differential forms and which are vector fields.

I actually have a few questions about this which I will post soon.

Regards,

Patrick
 
  • #170
The book has been published!

Just a quick note to officially announce the release of my book, "A Geometric Approach to Differential Forms." It has been published by Birkhäuser, and is available via their website, Amazon.com, Barnes & Noble, etc.

I have done what I can to keep the purchase price low; I think it's in the $35-40 range. There have been many significant additions/corrections since the versions that were put up on the web, such as the one on the arXiv.

So please support the author and buy yourself a copy! To make it easy for you, here's a link to the book at Amazon:
https://www.amazon.com/dp/0817644997/?tag=pfamazon01-20

Thanks!
Dave Bachman
 
  • #171
mathwonk said:
patrick, what is a field? i.e. what does it do? how do you detect its presence, say, given the charge?

do you put a particle in there and see if it accelerates? or let something move through there and see how it is diverted?

i am trying to see if the field acts on vectors, i.e. tangent vectors, or moving particles.

if it acts on vectors then it should be represented by a differential form.

i.e. it would be a covector field, as opposed to a vector field.

In general, a field may vary according to some physical properties of the object you're studying. Depending on the setting you're working in (classical mechanics, electromagnetism), one describes a physical object by certain quantities. Examples: mass, charge, spin, ... and the state of motion, which - in a classical context - is described by the momentum, forces and position. In most classical mechanics situations the mass of the object you're observing is constant. That makes things a lot easier.
An electromagnetic field, for example, depends upon tangent vectors, as the Lorentz force depends on the velocity of an object.

But I feel that the issue of whether to take a vector field or a 1-form isn't really deep, as you're dealing with settings in [itex]\mathbb{R}^n[/itex] and you can "convert" quite easily between the two.
 
  • #172
@patrick:
I think the trouble you're having goes much deeper than differential geometry. Your problem, in my eyes, is integration in [itex]\mathbb{R}^n[/itex]. Once you have understood the notion of integration in higher dimensions, the integration part of differential geometry will not be so hard to understand. One way of looking at manifolds is as objects that are locally isomorphic to a Euclidean space, and it is exactly this property one uses to integrate, differentiate and so forth. If you know how to integrate in Euclidean space, you'll simply "pull back" your form on the manifold to the Euclidean space and integrate there. And that, of course, is a definition, but I feel it's quite reasonable.
 
  • #173
Sorry, hate to dig up an old thread but I found it relevant to my question. I'm using this book to teach myself about differential forms and the generalized Stokes' theorem.

On page 32 (2006 edition) Bachman asks to evaluate the following integral:

[tex]\int_{R} f(\phi(r,\theta)) \, \mathrm{Area}(\partial\phi/\partial r, \partial\phi/\partial\theta) \, dr \, d\theta[/tex]

Where Area(.,.) is the area of the parallelogram spanned by the vectors [tex]\partial\phi/\partial r[/tex] and [tex]\partial\phi/\partial\theta[/tex]. The function is f(x,y,z) = z^2 and the region R is the top half of the unit sphere, parametrized by [tex]\phi(r,\theta) = (r\cos\theta, r\sin\theta, \sqrt{1 - r^2})[/tex]. This was part of a discussion of how to generalize the integral (if anyone has read/is reading this book). It would be simple enough to just evaluate the integral [tex]\int_{R} (1 - r^2)\, r \, dr \, d\theta[/tex] using the standard change of variables, but for the sake of the development I took this approach. However, it did not give me the desired answer ([tex]\pi/2[/tex], which is also the answer he got when using differential forms to integrate later in the book, p. 56). This is how I did it:

[tex]\int^{2\pi}_{0}\int^{1}_{0}(1-r^2)\,\left|\partial\phi/\partial r \times \partial\phi/\partial\theta\right| \, dr \, d\theta[/tex]

[tex]\int^{2\pi}_{0}\int^{1}_{0}(1-r^2)\,\left| \left< r^2\cos\theta/\sqrt{1 - r^2},\; r^2\sin\theta/\sqrt{1 - r^2},\; r \right> \right| \, dr \, d\theta[/tex]

[tex]\int^{2\pi}_{0}\int^{1}_{0}(1-r^2)\sqrt{(r^4\cos^2\theta + r^4\sin^2\theta)/(1 - r^2) + r^2} \, dr \, d\theta[/tex]

[tex]\int^{2\pi}_{0}\int^{1}_{0}(1-r^2)\sqrt{r^2/(1 - r^2)} \, dr \, d\theta[/tex]

[tex]\int^{2\pi}_{0}d\theta\int^{1}_{0}r\sqrt{1 - r^2} \, dr[/tex]

[tex]\left(\int^{2\pi}_{0}d\theta\right)\left(-\frac{1}{2}\right)\int^{0}_{1}\sqrt{u} \, du[/tex], letting [tex]u = 1 - r^2[/tex]

[tex]\pi\int^{1}_{0}\sqrt{u} \, du = 2\pi/3[/tex]

There may be something really obvious I'm missing here. I used the magnitude of the cross product of [tex]\partial\phi/\partial r[/tex] and [tex]\partial\phi/\partial\theta[/tex] for the Area function, but it's not giving the right answer. Appreciate any help!
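PS: One thing I notice writing this out: the 2-form [itex]dx \wedge dy[/itex] evaluated on these two vectors picks out only the z-component of the cross product,

[tex]dx \wedge dy \left( \partial\phi/\partial r, \partial\phi/\partial\theta \right) = r,[/tex]

and [tex]\int^{2\pi}_{0}\int^{1}_{0} (1-r^2) \, r \, dr \, d\theta = \pi/2[/tex] does give his answer. So the discrepancy seems to come from my using the full magnitude [itex]|\partial\phi/\partial r \times \partial\phi/\partial\theta| = r/\sqrt{1-r^2}[/itex] (an unsigned area) where the 2-form computes the signed area projected to the xy-plane; the two integrands really are different.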
 
  • #174
Hi Tom Mattson,
I'm reading this book now.
 
  • #175
This is slightly off topic and I know you have a book you're already using, but I thought I'd share this reference. There is a book by William Burke (http://www.ucolick.org/~burke/home.html), and it seems to be a bit of a treasure in this subject. Although I haven't read the book, I read the introduction and it seems at least relevant to what you are studying.
 
