A Geometric Approach to Differential Forms by David Bachman

In summary, David Bachman has written a book on differential forms that is accessible to beginners. He recommends using them to prove theorems in advanced calculus, and advises starting with Chapter 2. He has started a thread at PF for others to ask questions and discuss the material.
  • #71
chap 2, page 39: the same incorrect statement about defining integrals via evenly spaced subdivisions occurs again.

problems with the definition of parametrization raise their head again on page 40. on page 23 a parametrization of a curve was defined as a one to one, onto, differentiable map from (all of) R^1 to the curve (although most examples so far have not been defined on all of R^1, so it might have been better to say from an interval in R^1).

more significantly, the first example given on page 40 is not differentiable at the endpoints of its domain. so again it might be well to say that the parametrization, although continuous on the whole interval, may fail to be differentiable at the endpoints.

this is the beginning of another potential situation where one probably is intending to integrate this derivative even though it is not continuous or even bounded on its whole domain. this problem is often overlooked in calculus courses. i.e. when the "antiderivative" is well defined and continuous on a closed interval, it is often not noticed that the derivative is not actually riemann integrable by virtue of being unbounded.

indeed as i predicted, exercise 2.1 on page 43 asks the reader to integrate the non-integrable function, the derivative of (1-a^2)^(1/2), from -1 to 1.

this function is not defined at the endpoints of that interval and is also unbounded on that interval. interestingly enough it has a bounded continuous "antiderivative" which enables one to "integrate" it, but not by the definition given in the section, since the limit of those riemann sums does not in fact exist.
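to make this concrete, here is a small numerical sketch (mine, not from the book; the function names are just for illustration):

```python
# the integrand f(a) = d/da (1-a^2)^(1/2) = -a / sqrt(1-a^2) is
# unbounded near a = 1 and a = -1, hence not riemann integrable on
# [-1,1]; but its antiderivative F(a) = sqrt(1-a^2) is continuous on
# the closed interval, so the improper integral exists.
import math

def f(a):
    return -a / math.sqrt(1.0 - a * a)

def F(a):
    return math.sqrt(1.0 - a * a)

# f blows up near the endpoints...
print(f(1 - 1e-10))  # a very large negative number

# ...yet F(1-eps) - F(-1+eps) converges as eps -> 0 (here to 0,
# since f is odd):
for eps in (1e-2, 1e-4, 1e-8):
    print(eps, F(1 - eps) - F(-1 + eps))
```

the limit of the riemann sums in the text's definition does not exist for f, but the symmetric limit above does; that is exactly the "improper integral" loophole.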

the polar parametrization of the hemisphere, on page 44, is again not one to one. and again the third coordinate function of the parametrization phi is not differentiable wrt r at r=1, hence the integral written is again not defined by a limit of riemann sums.

it seems worthwhile to face head on this problem about many natural parametrizations often not being one to one, and point out that for questions of integration, there is no harm in non one to one ness occurring on sets of lower dimension, since the integral over those sets will be zero.

Stieltjes is misspelled on page 44, both the t and one e are omitted.

the language at the bottom of page 45 describes regions parametrized by R^1, R^2, and R^n, although what is apparently meant, and what is done, is to parametrize by rectangular blocks in those spaces.
 
Last edited:
  • #72
what about this Gza?

I understand now, thank you. :approve:
 
  • #73
does anyone appreciate my comment about sqrt(1-x^2) not being differentiable at
x= 1?

this is the familiar fact that the tangent line to a circle at the equator is vertical.

it is rather interesting that this derivative function can be "integrated" in some sense (i.e. as an improper integral) in spite of being unbounded.

does anyone agree that the polar parametrizations given are not actually one to one? and does anyone see why that does not matter?

(but that it does call for a new definition of parametrization?)
 
  • #74
My apologies for not having read the text, so I am sure it's already been pointed out.

One endless source of confusion for me when I was learning this stuff was the notion of axial and polar vectors. At first glance it's easy and obvious, but then the terminology starts getting confused, particularly when you learn Clifford algebras and some people's pet projects to reinvent notation via geometric algebra.

People get in endless debates about how to properly distinguish these different types of *things*, e.g. what constitutes active and passive transformations of the system, what is a parity change, do we take Grassmann or Clifford notation, blah blah blah.

Unfortunately, if you want a cutesy picture of what's going on, à la MTW (forms now look like piercing planes), some of this stuff becomes relevant, or else you quickly end up with ambiguities.

Most of the confusion goes away when you get into some of the more abstract and general bundle theory, but then the audience quickly starts getting pushed into late undergrad/early grad material and the point is lost.
 
  • #75
mathwonk said:
does anyone appreciate my comment about sqrt(1-x^2) not being differentiable at
x= 1?

this is the familiar fact that the tangent line to a circle at the equator is vertical.

Yes, but we're not there yet. As I said in the beginning, I want to march through the book sequentially. The purpose of this thread is twofold:

1. To help my advisees for their presentation.
2. To see if a book such as Bachman's could be used as a follow-up course to what is normally called "Calculus III".

It doesn't really help to achieve my primary goal (#1) if we jump all over the place. My advisees are in Chapter 4 (on differentiation), and we are using this thread to nail down any loose ends that we left along the way in our effort to keep moving ahead.

I'll be posting the last of my Chapter 2 notes tonight and tomorrow. Once the discussion has died down I'll start posting notes on Chapter 3, which is about integration. I'll also try to pick up the pace.

Thanks mathwonk and everyone else for your useful comments, especially post #65 by mathwonk.

edit to add:

By the way mathwonk, my copy of Spivak's Calculus on Manifolds is in. Great book, thanks for the tip! One of my advisees (*melinda*) picked up Differential Forms with Applications to the Physical Sciences by Flanders. What do you think of it?
 
Last edited:
  • #76
i like flanders.


i do not understand your remark about the sequential treatment, and not being up to my comment yet.

if you are talking about marching sequentially through bachman, i started on page 1, and those comments are about chapters 1 and 2. how can someone be in chapter 4 and not be sequentially up to chapters 1 and 2 yet?


are you talking about chapter 4 of some other book?

it seems to me you guys are still way ahead of me.
 
  • #77
flanders had a little introductory article in a little MAA book, maybe Studies in Global Geometry and Analysis (ISBN 0883851040), edited by S. S. Chern, that first got me unafraid of differential forms, by just showing how to calculate with them.

i had been frightened off of them by an abstract introduction in college. i had only learned their axioms and flanders showed just how easy it is to multiply them. i liked the little article better than his more detailed books.
 
  • #78
mathwonk said:
i do not understand your remark about the sequential treatment, and not being up to my comment yet.

Never mind my comment. I was looking at the arXiv version of Bachman's book, in which page 39 is in Chapter 3 (the chapter on integrating 1-forms).

To prevent further confusion, I am now going to burn the arXiv version and exclusively use the version from his website. I'll re-do the chapter and section numbers in my notes.
 
  • #79
that's right, there were two versions of the book!
 
  • #80
Flanders is sort of the de facto reference book on differential forms for US math majors. You get some treatment in Spivak, and also some good stuff in various physics books, but it's not quite the same.

A modern book some people liked a lot was Darling's book on Differential forms.

Regardless, I am a little bit wary of placing too much weight on intuitive pictures of the whole affair. Differential forms to me are much more of a formal language that makes calculations tremendously simpler (not to mention the fact that they are much more natural geometric objects, what with being coordinate independent and hence perfect for subjects like cohomology and algebraic geometry). Notation changes from area to area, and I suspect having too rigid a 'geometric' intuition might actually hurt in some cases.

I guess I am just a little bit disenchanted with some of the earlier attempts to 'picture' what's happening, like the piercing-plane idea from MTW (Bachman's text has a good section where they explain why that whole thing doesn't quite work out well in generality).
 
  • #81
Chapter 3: Forms

Section 4: 2-forms on [itex]T_p\mathbb{R}^3[/itex]​

Here is the next set of notes. As always comments, corrections, and questions are warmly invited.


Exercise 3.15

Try as you might, you will not be able to find a 2-form (edit: on [itex]T_p\mathbb{R}^3[/itex]) which is not the product of 1-forms. We in this thread have already argued as much, and indeed in the ensuing text Bachman explains that he has just asked you to do something that is impossible. Nice guy, that Dave. :-p


This brings us to the two Lemmas of this section. I feel that the details of the proofs are straightforward enough to omit, so I am just going to talk about what the lemmas say. If any of our students has any questions about the proofs, go right ahead and ask.

Lemma 3.1 reinforces the idea that was first brought up by Gza: The 1-forms whose wedge product make up a 2-form are not unique.

Lemma 3.2 is really what we want to see: It is the proof that any 2-form is a product of 1-forms. The lemma itself states that if you start with two 2-forms that are the product of 1-forms, then their sum is a 2-form that is the product of 1-forms. That is, any 2-form that can be written as the sum of the product of 1-forms, is itself a product of 1-forms.


Note: There is a typo in Bachman's proof (both versions of the book).

Where it says:

"In this case it must be that [itex]\alpha_1\wedge\beta_1=C\alpha_2\wedge\beta_2[/itex], and hence [itex]\alpha_1\wedge\beta_1+\alpha_2\wedge\beta_2=(1+C)\alpha_1\wedge\beta_1[/itex]",

it should say:

"In this case it must be that [itex]\alpha_1\wedge\beta_1=C\alpha_2\wedge\beta_2[/itex], and hence [itex]\alpha_1\wedge\beta_1+\alpha_2\wedge\beta_2=(1+C)\alpha_2\wedge\beta_2[/itex]".


Bachman goes from the last statement in black above to concluding that "any 2-form is the sum of products of 1-forms."


To explicitly show this, start with the most general 2-form:
[itex]
\omega=c_1dx \wedge dy+c_2dz \wedge dy+c_3dz \wedge dx
[/itex]

Now use the distributive property:
[itex]
\omega=(c_1dx+c_2dz) \wedge dy +c_3dz \wedge dx
[/itex]

And there we have it.
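If you want to check the distributive step numerically, here is a quick sketch of my own (arbitrary sample coefficients; nothing here is from the book):

```python
import random

def wedge(alpha, beta):
    """Wedge product of two 1-forms given as coefficient triples;
    returns the 2-form alpha^beta as a function of two vectors."""
    def form(v, w):
        av = sum(a * x for a, x in zip(alpha, v))
        aw = sum(a * x for a, x in zip(alpha, w))
        bv = sum(b * x for b, x in zip(beta, v))
        bw = sum(b * x for b, x in zip(beta, w))
        return av * bw - aw * bv
    return form

dx, dy, dz = (1, 0, 0), (0, 1, 0), (0, 0, 1)
c1, c2, c3 = 2.0, -1.0, 5.0  # arbitrary sample coefficients

def omega(v, w):
    """c1 dx^dy + c2 dz^dy + c3 dz^dx"""
    return (c1 * wedge(dx, dy)(v, w)
            + c2 * wedge(dz, dy)(v, w)
            + c3 * wedge(dz, dx)(v, w))

def factored(v, w):
    """(c1 dx + c2 dz)^dy + c3 dz^dx"""
    return wedge((c1, 0, c2), dy)(v, w) + c3 * wedge(dz, dx)(v, w)

random.seed(0)
for _ in range(100):
    v = [random.uniform(-1, 1) for _ in range(3)]
    w = [random.uniform(-1, 1) for _ in range(3)]
    assert abs(omega(v, w) - factored(v, w)) < 1e-12
print("distributive step checks out")
```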


This leads us to the following conclusion:

David Bachman said:
Every 2-form on [itex]T_p\mathbb{R}^3[/itex] projects pairs of vectors onto some plane and returns the area of the resulting parallelogram, scaled by some constant.

There is thus no longer any need for the "Caution!" on page 55.

edit: That is, there is no need for it when we are dealing with 2-forms on [itex]T_p\mathbb{R}^3[/itex]. See post #82.


Exercise 3.16

Now that we know that every 2-form on [itex]T_p\mathbb{R}^3[/itex] is a product of 1-forms, this is a piece of cake. Just look at the following 2-form:

[itex]\omega(V_1,V_2)=\alpha\wedge\beta(V_1,V_2)[/itex]
[itex]\omega(V_1,V_2)=\alpha(V_1)\beta(V_2)-\alpha(V_2)\beta(V_1)[/itex]
[itex]\omega (V_1,V_2)=(<\alpha>\cdot V_1)(<\beta>\cdot V_2)-(<\alpha>\cdot V_2)(<\beta>\cdot V_1)[/itex]

This 2-form vanishes identically if either [itex]V_1[/itex] or [itex]V_2[/itex] (doesn't matter which) is orthogonal to both [itex]<\alpha>[/itex] and [itex]<\beta>[/itex].

Exercise 3.17

Incorrect answer edited out:

The above argument does not extend to higher dimensions because not all 2-forms are factorable in higher dimensions.

Counterexample:

Take the following 2-form on [itex]T_p\mathbb{R}^4[/itex]:

[itex]\omega=dx \wedge dy + dz \wedge dy +dz \wedge dw + 2dx \wedge dw[/itex].

Try to factor by grouping:

[itex](dx+dz) \wedge dy + (dz+2dx) \wedge dw[/itex],

and note that we can go no further. It turns out that no grouping of terms will result in a successful factorization.
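To back that up (a sketch of my own, not an exercise from the book): for a 2-form on [itex]T_p\mathbb{R}^4[/itex] with coefficients [itex]a_{ij}[/itex] ([itex]i<j[/itex]) one can compute [itex]\omega\wedge\omega=2(a_{12}a_{34}-a_{13}a_{24}+a_{14}a_{23})\,dx\wedge dy\wedge dz\wedge dw[/itex], and a factorable 2-form [itex]\alpha\wedge\beta[/itex] always satisfies [itex]\omega\wedge\omega=0[/itex]. For the 2-form above the coefficient is nonzero:

```python
# omega = dx^dy + dz^dy + dz^dw + 2 dx^dw, rewritten with sorted
# indices (x,y,z,w) = (1,2,3,4); note dz^dy = -dy^dz.
a12, a13, a14 = 1, 0, 2    # coefficients of dx^dy, dx^dz, dx^dw
a23, a24, a34 = -1, 0, 1   # coefficients of dy^dz, dy^dw, dz^dw

pfaffian = a12 * a34 - a13 * a24 + a14 * a23
print(pfaffian)  # -1, so omega^omega != 0 and omega cannot factor
```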



Exercise 3.18

Maybe I'm just being dense, but I do not see how to solve this one. The hint right after the exercise doesn't help. If [itex]l[/itex] is in the plane spanned by [itex]V_1[/itex] and [itex]V_2[/itex], then of course the vectors that are perpendicular to [itex]V_1[/itex] and [itex]V_2[/itex] will be perpendicular to [itex]l[/itex].

Anyone want to jump in here?
 
Last edited:
  • #82
Hi all,

Sorry I have been silent for a few days. Busy, busy busy...

And even now I do not have time to give proper responses, but here are a quick few...

Mathwonk, please read a bit more carefully if you are going to take on a role as "proofreader":

To your comment about integrating with evenly spaced intervals: there is a discussion of this on page 41.
To your comment on saying that we want an "oriented area": I couldn't use the word "oriented" because at this point students have no idea what an orientation is. In fact, at that point in the text I do not even assume that the student realizes that the determinant can give you a negative answer (although I am sure this seems obvious to you). I do, however, emphasize this by intentionally computing an example where the answer is negative, and then pointing out that we really don't want "area", but rather a "signed area". It's all there.

Next... there is a rather long discussion here about factoring 2-forms into products. Mathwonk has a "proof" in one of his earlier posts, but this was a little bit of wasted effort, since this is the content of Section 4 of Chapter 3.

Also, Tom... be careful! The CAUTION on page 55 is ALWAYS something to look out for. The point of Section 4 of Chap 3 is that dimension 3 is special, because there you can always factor 2-forms. The next edition of the book will have a new section about 2-forms in four dimensions, with particular interest on those that can NOT be factored.

Hopefully more tomorrow... I should give you more of a hint on Exercise 3.18.

Dave.
 
  • #83
Dave I am sorry to see my corrections are not welcomed by you. They are accurate however.

As an expert I probably should have not gotten involved since everyone is having fun, and my corrections are invisible to the average student. But you did ask for comments in your introduction. When you do that, you should expect to get some.

I think this book is nice for a first dip into the topic, but I have a concern that a person learning the subject from this source will be left with a certain amount of confusion, due to the imprecise discussion, and non standard language, which will cause problems in trying to discuss the material with more knowledgeable people.

If followed up with Spivak however it should be fine. And any source that gets people involved and allows them friendly access to a topic is good. This is the strength of Dave's book. I don't know who they sent it to for reviewing, but Dave, I think you might get some comments like mine from other reviewers.
 
Last edited:
  • #84
for tom and students: you can argue that diff forms are useful in the 10 or more dimensions physicists apparently use now for space time, and they are also easily adaptable to the complex structures used there and in string theory (Riemann surfaces, complex "Calabi-Yau" manifolds).
 
  • #85
Bachman said:
Hi all,

Sorry I have been silent for a few days. Busy, busy busy...

Glad to see you back. :smile:

Also, Tom... be careful! The CAUTION on page 55 is ALWAYS something to look out for. The point of Section 4 of Chap 3 is that dimension 3 is special, because there you can always factor 2-forms.

Whoops. I've put in an edit that corrects my remark about the Caution. I've also changed my answer to Exercise 3.17, which was evidently wrong.
 
  • #86
another comment about selling differential forms to your audience. Dave has a nice application in chapter 7 showing that their use reduces Maxwell's equations from 4 to 2.
 
  • #87
The line [itex]l = \{\vec{r}t + \vec{p} : t \in \mathbb{R}\}[/itex] for some [itex]\vec{r},\ \vec{p} \in T_p\mathbb{R}^3[/itex]. Suppose [itex]\vec{v},\ \vec{w} \in T_p\mathbb{R}^3[/itex] such that [itex]l \subseteq Span(\{\vec{v},\ \vec{w}\})[/itex]. Then the set [itex]\{\vec{p},\ \vec{v},\ \vec{w}\}[/itex] is linearly dependent, hence:

[tex]\det (\vec{p}\ \ \vec{v}\ \ \vec{w}) = 0[/tex]

Define [itex]\omega[/itex] such that:

[tex]\omega (\vec{x},\ \vec{y}) = \det (\vec{p}\ \ \vec{x}\ \ \vec{y}) \ \forall \vec{x}, \vec{y} \in T_p\mathbb{R}^3[/tex]

You can easily check, knowing the properties of determinants, that [itex]\omega[/itex] is an alternating bilinear functional, and hence a 2-form. If you want, you can express it as a linear combination of [itex]dx \wedge dy,\ dy \wedge dz,\ dx \wedge dz[/itex], and it shouldn't be hard, but probably not necessary.

EDIT: actually, to answer the question as given, perhaps you will want to write [itex]\omega[/itex] in terms of those wedge products, and determine [itex]\vec{p}[/itex] from there. Then, to find [itex]l[/itex] you just need to choose any line that passes through [itex]\vec{p}[/itex]. Any two vectors spanning a plane containing that line will span a plane containing [itex]\vec{p}[/itex], hence those three vectors must be linearly dependent, hence their determinant will be zero, and since [itex]\omega[/itex] depends only on [itex]\vec{p}[/itex] and not the choice of [itex]\vec{r}[/itex], you're done.
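A small numerical sketch of this construction (the names and the sample direction are my own, just for illustration):

```python
def det3(u, v, w):
    """3x3 determinant with rows u, v, w."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

p = (1.0, 2.0, 3.0)  # an arbitrary sample direction

def omega(x, y):
    """The 2-form omega(x, y) = det(p  x  y)."""
    return det3(p, x, y)

# omega vanishes on any pair of vectors whose span contains p...
assert omega(p, (1.0, 0.0, 0.0)) == 0.0
assert omega((2.0, 4.0, 6.0), (0.0, 1.0, 0.0)) == 0.0  # 2p and e2
# ...but not on a generic pair:
print(omega((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```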
 
Last edited:
  • #88
hi
~Thanks everyone on the feedback to my question. It’s so reassuring to know when you’ve got the right idea!
~For exercise 3.17 (post 81), Tom says:

“The above argument does not extend to higher dimensions because not all 2-forms are factorable in higher dimensions”.

~I can see why this is the case in exercise 3.16, but it seems like there’s a bit more to this than a simple question of factorability. I’m probably way off, but I was thinking that it has more to do with some general property of 3-space that makes it inherently different than say, 4-space or any other space for that matter. Then again, I suppose that not being able to write a 2-form as a product of 1-forms in R^4 could very well be a general property of higher dimensions. Unfortunately these are ideas that I don’t know very much about yet, so please excuse if my questions are a bit silly or obvious.
 
  • #89
For applications, I know of many places in physics where differential forms are useful, even to an undergrad.

First and foremost, the often quoted derivation of Maxwell's equations in a very neat and elegant form.

The fundamental equations of thermodynamics as well are often cast in differential form notation. You instantly get out several relations that are painful to get in other notation.

Finally general relativity/String theory etc

One thing to note though... I really didn't see at the time the advantage of using differential forms in those situations; I often would ask 'why not just use tensor calculus instead?' And I was right in the sense that you will get very compact notation (if you suppress the irritating indices) just as quickly as with differential forms, without the added hassle of learning the new, somewhat unintuitive language.

I was wrong, though, about the deeper meaning of these objects. It wasn't until I learned of Yang-Mills theory, and principal bundles as applied to general relativity, that the full power of differential forms became instantly apparent.

Modern Physics fundamentally wants to be written down in coordinate invariant, read diffeomorphism invariant language. It doesn't necessarily want to know about metrics, and things like that. Indeed there are situations where such concepts stop you from seeing the global topology of the problem, and it is in that sense that differential forms immediately become obvious as THE god given physical language.
 
  • #90
melinda,

pardon me if my posts have been unhelpful. I will try to explain why not every 2 form is a product of one forms in any dimension higher than 3.

Let V be the space of one forms on R^n, and let V^V be the space of 2 forms. Then since V has coordinates dx1,...,dxn, and has dimension n, V^V has coordinates dxi^dxj with i < j, so it has dimension equal to the binomial coefficient "n choose 2".


Now, just look at the product map, VxV-->V^V, taking a pair of 1 forms f,g to their product f^g. The question is when is this map surjective?

Without going into it too much, I claim that this map cannot raise dimension, much as a linear map cannot, so since the domain has dimension 2n and the range has dimension (1/2)(n)(n-1), it follows that as soon as the second number outruns the first, the map cannot be surjective.

In particular for n > 5, the map cannot be surjective, but actually this occurs sooner than that, I claim for n > 3.

The key is to look at the dimension of the fibers of the map. Here there is a principle almost exactly the same as the "rank - dimension" theorem in linear algebra.

i.e. if we can discover the dimension of the set of domain points which map to a given point in the target of the map, then the dimension of the actual image of the map cannot be more than the amount by which the dimension of the domain exceeds this "fiber" dimension. i.e. if (f,g) is a general point of the domain VxV, then the dimension of the set of 2 forms which are products in V^V cannot be more than 2n minus the dimension of the set of pairs of one forms having the same product f^g as f and g.


Now it helps to think geometrically, i.e. of f and g as vectors and f^g as the parallelogram they span. Then two other vectors have the same product if and only if they span a parallelogram in the same plane as f and g, having the same area (and orientation).

So there is a 2 dimensional family of vectors in that plane, hence a 4 dimensional family of pairs of vectors in that plane spanning it, but if we choose only those having the right area, there is only a 3 dimensional family.

Thus the inverse image of a general product f^g is 3 dimensional in VxV. Thus the dimension of the image of the product map in V^V, i.e. the dimension of the family of factorable 2 forms, equals 2n - 3. We see this is less than (1/2)(n)(n-1) as soon as n > 3.

so for n > 3, it never again happens that all 2 forms are a product of two 1 forms.
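here is the dimension count in a few lines of python (just the arithmetic above, nothing deep):

```python
# the image of the product map VxV --> V^V has dimension at most
# 2n - 3, while the space of 2 forms has dimension n(n-1)/2; the
# deficit first appears at n = 4.
for n in range(3, 9):
    image_dim = 2 * n - 3
    total_dim = n * (n - 1) // 2
    surjective_possible = image_dim >= total_dim
    print(n, image_dim, total_dim, surjective_possible)
```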

does that help?

if you look back at some of my free flying posts earlier you will probably see that these ideas are there, but not explained well.
 
  • #91
An apology and some comments:

I apologize for making critical comments no one was interested in and which stemmed from not reading Dave's introduction well enough. He said there he was not interested in "getting it right", whereas "get it right" is my middle name (it was even chosen as the tagline under my photograph in high school, by the yearbook editor, now I know why!) I have always felt this way, even as an undergraduate, but apparently not everyone does. My happiest early moments in college came when the fog of imprecise high school explanations was rolled away by precise definitions and proofs.

On the first day of my beginning calculus class the teacher handed out axioms for the reals and we used them to prove everything. In the subsequent course the teacher began with a precise definition of the tangent space to the uncoordinatized euclidean plane as the vector space of translations on the plane.

E.g. if you are given a translation, and a point p, then you get a tangent vector based at p by letting p be the foot of the vector, then applying the translation to the point p and taking that result as the head of the vector.

This provides the isomorphism between a single vector space and all the spaces Tp(R^n) at once. Then we proceeded to do differential calculus in banach space, and derivatives were defined as (continuous) linear maps from the get go.

So I never experienced the traditional undergraduate calculus environment until trying to teach it. As a result I do not struggle with the basic concepts in this subject, but do struggle to understand attempts to "simplify" them.

I am interested in this material and will attempt to stifle the molecular imbalances which are provoked involuntarily by imprecise statements used as a technique for selling a subject to beginners.

One such point, concerning the use of "variables" will appear below, in answer to a question of hurkyl.

To post #6 from Tom: why does Dave derive the basis of Tp(R^2) the way he does, instead of merely using the fact that that space is isomorphic to R^2, hence has as basis the basis of R^2?

I think the point is that space is not equal to R^2, but only isomorphic to R^2. Hence the basis for that space should be obtained from the basis of R^2 via a given isomorphism.

Now the isomorphism from Tp(R^2) to R^2 proceeds by taking velocity vectors of curves through p, so Dave has chosen two natural curves through p, the horizontal line and the vertical line, and he has computed their velocity vectors, showing them to be <1,0> and <0,1>.

So we get not just two basis vectors for the space but we get a connection between those vectors and curves in the plane P. (Of course we have not proved directly they are a basis of Tp(P), but that is true of the velocity vectors to any two "transverse curves through p").

So if you believe it is natural to prefer those two curves through p, then you have specified a natural isomorphism of Tp(R^2) with R^2. In any case the construction shows how the formal algebraic vector <1,0> corresponds to something geometric associated to the plane and the point p.


In post #18, Hurkyl asks whether dx and dy are being used as vectors or as covectors? This is the key point that puzzled and confused me for so long. Dave has consciously chosen to extend the traditional confusion of x and y as "variables" on R^2 to an analogous confusion of dx and dy as variables on Tp(R^2).

The confusion is that the same letters (x,y) are used traditionally both as functions from R^2 to R, and as the VALUES of those functions, as in "let (x,y) be an arbitrary point of R^2."

In this sense (x,y) can mean either a pair of coordinate functions, or a point of R^2. Similarly, (dx,dy) can mean either a pair of linear functions on Tp(R^2) i.e. a pair of covectors, or as a pair of numbers in R^2, hence a tangent vector in Tp(R^2) via its isomorphism with R^2 described above.

So Dave is finessing the existence of covectors entirely.

This sort of thing is apparently successful in the standard undergraduate environment or Dave would not be using it, but it is not standard practice with mathematicians who tend to take one point of view on the use of a notation, and here it is that x and y are functions, and dx and dy are their differentials.

There is precedent for this type of attempt to popularize differentials as variables and hence render them useful earlier in college. M.E. Munroe tried it in his book, Calculus, in 1970 from Saunders publishers, but it quickly went out of print. Fortunately I think Dave's book is much more user friendly than Munroe's.

(Munroe intended his discussion as calculus I, not calculus III.)

In post #43, Gza asked what a k cycle is, after I said a k form was an animal that gobbles up k cycles and spits out numbers.

I was thinking of a k form as an integrand as Dave does in his introduction, and hence of a k cycle as the domain of integration. Hence it is some kind of k dimensional object over which one can integrate.


Now the simplest version would be a k dimensional parallelepiped, and that is spanned by k vectors in n space, exactly as Gza surmised. A more general such object would be a formal algebraic sum, or linear combination, of such things, and a non linear version would be a piece of k dimensional surface, or a sum or lin. comb. of such.


now to integrate a k form over a k diml surface. one could parametrize the surface via a map from a rectangular block, and then approximate the map by the linear map of that block using the derivative of the parameter map.

Then the k form would see the approximating parametrized parallelepiped and spit out a number approximating the integral.

By subdividing the block we get a family of smaller approximating parallelepipeds and our k form spits out numbers on these that add up to a better approximation to the integral, etc...


so k cycles of the form : "sum of parallelepipeds" do approximate non linear k cycles for the purposes of integration over them by k forms.
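a tiny worked instance of this (my own sketch, in the simplest case k = 1): integrate the 1 form x dy over the unit circle by feeding the form the approximating tangent segments; the sums converge to the enclosed area, pi.

```python
import math

def integrate_x_dy(n):
    """Riemann-style sum for the 1 form x dy over the unit circle,
    parametrized by t |-> (cos t, sin t) on n subintervals."""
    total = 0.0
    dt = 2 * math.pi / n
    for i in range(n):
        t = i * dt
        x = math.cos(t)
        dy = math.cos(t) * dt  # y'(t) dt: the linearized segment
        total += x * dy        # the form eats the segment
    return total

for n in (4, 40, 4000):
    print(n, integrate_x_dy(n))  # -> pi
```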

The whole exercise people are going through trying to "picture" differential forms, may be grounded in the denial of their nature as covectors rather than vectors. I.e. one seldom tries to picture functions on a space geometrically, except perhaps as graphs.

On the other hand I have several times used the technique of discussing parallelepipeds instead of forms. That is because the construction of 2 forms from 1 forms is a formal one, that of taking an alternating product. The same, or analogous, construction that sends pairs of one forms to 2 forms also sends pairs of tangent vectors to (equivalence classes of) parallelograms.

I.e. there is a concept of taking an alternating product. if applied to 1 forms it yields 2 forms, if applied to vectors it yields "alternating 2 - vectors".

In post #81, Tom asked for the proof of the lemma 3.2 that all 2 forms in R^3 are products of 1 forms. I have explicitly proved this in the most concrete way in post #66 by simply writing down the factors in the general case.

In another post in answer to a question of Gza I have written down more than one solution to every factorization, proving the factors are not unique.

Also in post #81, Tom asked about solving ex 3.18. What about something like this?
Intuitively, a 1 form measures the (scaled) length of the projection of a vector onto a line, and a 2 form measures the (scaled) area of the projection of a parallelogram onto a plane. Hence any plane containing the normal vector to that plane will project to a line in that plane. hence any parallelogram lying in such a plane will project to have area zero in that plane.

e.g. dx^dy should vanish on any pair of vectors spanning a plane containing the z axis.

Notice that when brainstorming I allow myself the luxury of being imprecise! there are two sides to the brain, the creative side and the critical side. one should not live exclusively on either one.
 
Last edited:
  • #92
Melinda,

You can also see that in dimensions bigger than three you will not always be able to factor 2-forms by just writing one down. If there are at least four coordinates then consider the following 2-form:

[tex] \omega=dx_1 \wedge dx_2 + dx_3 \wedge dx_4 [/tex]

Now, if this 2-form could be written as [itex] \alpha \wedge \beta [/itex] then

[tex] \omega \wedge \omega=\alpha \wedge \beta \wedge \alpha \wedge \beta=0 [/tex]

But when you compute [itex] \omega \wedge \omega [/itex] for the above 2-form you do not get zero. The conclusion is that this 2-form can never be factored.
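If you want to see this computed mechanically, here is a small generic wedge-product routine (my sketch, not from the book) that represents a form as a dictionary from sorted index tuples to coefficients:

```python
def sign_and_sort(idx):
    """Sign of the permutation sorting idx, or 0 if an index repeats."""
    idx = list(idx)
    if len(set(idx)) < len(idx):
        return 0, ()
    sign = 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                idx[i], idx[j] = idx[j], idx[i]
                sign = -sign
    return sign, tuple(idx)

def wedge(f, g):
    """Wedge product of forms given as {index-tuple: coefficient}."""
    out = {}
    for I, a in f.items():
        for J, b in g.items():
            s, K = sign_and_sort(I + J)
            if s:
                out[K] = out.get(K, 0) + s * a * b
    return out

omega = {(1, 2): 1, (3, 4): 1}  # dx1^dx2 + dx3^dx4
print(wedge(omega, omega))      # {(1, 2, 3, 4): 2}, not zero
```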

Dave.
 
  • #93
Dear all,

I have been going through my book again with my current students and we have found a few errors. I'll post them:

Exercise 1.6 (4) The coefficient should be [itex] \frac{2}{5} [/itex] instead of [itex] \frac{5}{2} [/itex]
Exercise 3.21 ... then [itex] V_{\omega}=\langle F_x, F_y, F_z \rangle [/itex].
Exercise 4.8 The form should be [itex] 2z\ dx \wedge dy + y\ dy \wedge dz -x\ dx \wedge dz [/itex]. The answer should be [itex] \frac{1}{6} [/itex].
Exercise 4.13 Answer should be [itex] \frac{32}{3} [/itex]

If anyone finds any more please let me know!

Dave.
 
  • #94
Dave's example recalls post #60:

"here is a little trick to see that in 4 dimensions not all 2 forms are products of one forms. since the product of a one form with itself is zero, if W is a 2 form which is a product of one forms, then W^W = 0. But note that [dx^dy + dz^dw] ^ [dx^dy + dz^dw] = 2 dx^dy^dz^dw is not zero. so this 2 form is not a product of one forms."

Indeed if n= 4, we have argued above that the subspace of products has codimension one in the space of 2 forms, and it seems the condition w^w = 0 is then necessary and sufficient for a 2 form to be a product.
 
  • #95
Here is another use of the constructions Dave is explaining to us: analyzing the structure of lines in 3 space.

For example what if we consider the old problem of Schubert: how many lines in (projective) 3 space meet 4 general fixed lines? This has been tackled valiantly in another thread by several people, some successfully.

I claim this can be solved using the algebraic tools we are learning.

I am going to try to wing this along the lines of the discussion so far, so Dave, feel free to jump in and correct, clarify, or augment my misstatements.

We have been seeing that a 2 form assigns a number to a pair of vectors. Since every 2 form is a linear combination of basic ones, i.e. of products of one forms, it suffices to know how those behave, and we have been seeing that e.g. the 2 form dx^dy seems to project our two vectors into the x, y plane and then take the oriented area of the parallelogram they span.
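The projected-area description is easy to check in coordinates: dx^dy on a pair of vectors is just the 2x2 minor formed from their x and y components. A quick numeric sketch (the example vectors are my own):

```python
import numpy as np

def dxdy(v, w):
    # dx^dy evaluated on (v, w): oriented area of the parallelogram
    # spanned by the projections of v and w into the x, y plane
    return v[0] * w[1] - v[1] * w[0]

v = np.array([1.0, 2.0, 5.0])
w = np.array([3.0, 4.0, -7.0])

# same answer as explicitly projecting to the x, y plane first
P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(dxdy(v, w))                                          # -2.0
print(np.linalg.det(np.column_stack([P @ v, P @ w])))      # -2 (up to rounding)
```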

Now just as in linear algebra when we "mod out" a domain vector space by the kernel of a linear transformation, to make the new domain space into a space on which the transformation is one to one, we could also try to mod out the space of pairs of vectors, by equating two pairs to which every 2 form assigns the same number.

Now it suffices as remarked above, to equate two pairs of vectors if the basic two forms dxi^dxj all agree on them. From the discussion so far, it seems this means we should equate two pairs of vectors if the parallelogram they span has the same oriented area when projected into every pair of coordinate planes.

Now I claim this just means the two pairs of vectors span the same plane, and the parallelograms they span have the same area, and the same orientation. So this essentially contains the data of the plane they span, plus a real scalar.

We denote the equivalence class of all pairs equivalent in this way to v,w by the symbol v^w. Then we have taken alternating products of vectors, just as before we took alternating products of one forms, i.e. of functionals.

i.e. the same formal rules hold; v^w = - w^v, v^(u+w) = v^u + v^w, v^aw = av^w, etc...

But we again cannot add these except formally, so we consider also formal linear combinations of such guys: v^w + u^z, etc...

Now just as in 4 space and higher, not all 2 forms were products of one forms, so also not all 2-vectors are simple ones of form v^w.

E.g. in 4 space the same condition must hold as remarked above for 2 forms, i.e. that a 2 vector T is a simple product if and only if T^T = 0.

Now we have constructed a linear space of alternating 2 vectors T, in which those that satisfy the property T^T =0 correspond to products v^w. For vectors in R^4, this linear space has dimension "4 choose 2" = 6. So the space of all 2 vectors in R^4 is identifiable with R^6.

I claim this has the following interpretation:

by definition projective 3 space consists of lines through the origin of R^4, so 2 planes in R^4 correspond to lines in projective 3 space.

Now each 2 plane in R^4 is represented by a simple 2 vector, i.e. a product v^w, in fact by a "line" of such 2 vectors, since v^w and av^w represent the same plane, just accompanied by a different oriented area.

so 2 planes in R^4 are represented by the lines through the points of R^6 representing simple 2 vectors. Moreover this subset of R^6 is defined by the quadratic equation T^T = 0, hence 2 planes in R^4 are represented by a quadratic cone of lines in R^6.

If we consider the projective space of lines through the origin of R^6, we have the space of all lines in projective three space, represented as a quadric hypersurface of dimension 4 in the projective 5 space defined by all 2 vectors in R^4.


Now in projective 3 space we ask what it means algebraically for two lines to meet? i.e. when do the two pairs of simple 2 vectors u^v, and z^w represent planes in R^4 that have a line in common? Well it means that u^v^z^w = 0, (since this happens when the 4 diml parallelepiped they span has volume zero in 4 space).

Consequently when u^v is fixed, this is a linear equation in z^w, hence the lines in projective 3 space meeting a given line, correspond to a linear hyperplane section in 5 space, on the quadric of all lines. hence the lines meeting 4 given lines in 3 space, would be the intersection of our quadric of all lines, with 4 linear hyperplanes.

But 4 linear hyperplanes in P^5 meet in a line, so the lines in 3 space meeting 4 given lines, correspond to the points of P^5 where a quadric hypersurface meets a line, i.e. exactly 2 points.
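For readers who want to experiment, the incidence condition u^v^z^w = 0 is just a 4x4 determinant, so it is easy to test numerically. A small sketch (the example vectors are my own):

```python
import numpy as np

def lines_meet(u, v, z, w, tol=1e-9):
    """Two lines in projective 3 space, given as 2-planes in R^4 spanned
    by (u, v) and (z, w), meet iff u^v^z^w = 0, i.e. det[u v z w] = 0."""
    return abs(np.linalg.det(np.column_stack([u, v, z, w]))) < tol

e1, e2, e3, e4 = np.eye(4)

# the two planes share the direction e1, so the corresponding lines meet
print(lines_meet(e1, e2, e1, e3))   # True

# two planes in general position: the corresponding lines are skew
print(lines_meet(e1, e2, e3, e4))   # False
```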


You might ask an audience, consisting of skeptics as to the value of alternating form methods, if they can solve that little geometry problem as neatly using classical vector analysis.
 
Last edited:
  • #96
I guess to make sure that quadric meets that line in 2 points, I should have chosen an algebraically closed field, like the complex numbers, to work over, instead of the reals?
 
  • #97
It finally dawned on me what Dave is doing and why he calls this a geometric approach to differential forms.

given a vector space V, the space of linear functions on V is the dual space V*. But if we define a dot product on V we get an isomorphism between V* and V. I.e. then a linear functional f on V is represented by a vector w in V. The value of f at a vector v is given by projecting v onto the line spanned by w and multiplying the length of the projection by (plus or minus) the length of w.


Now suppose we jack that up by one degree to bilinear functions. I.e. given a dot product, a bilinear alternating functional which is an alternating product of two linear forms, is represented by a parallelogram, such that the action of the function on a pair of vectors becomes projection of those two vectors into the plane of the parallelogram, taking (plus or minus) the area of the image parallelogram, and multiplying by the area of the given parallelogram.

So this approach has more structure than strictly necessary for the concept of differential forms, but allows them to be represented as (a sum of) projection operators.

nice.

In that spirit, one is led to pose geometric versions of the factorization questions asked above in R^3:
1) given two parallelograms in R^3, find one parallelogram such that the bilinear function defined by the sum of those two given parallelograms equals the one given by projection on the one resultant parallelogram.
2) give a geometric proof in R^4 that the bilinear function defined by the sum of dx^dy and dz^dw cannot be equal to the function defined by projection on the plane spanned by any one parallelogram.

In short the use of a dot product, allows one to have an isomorphism between the space V*^V* of 2 forms and the more geometric object V^V I defined above, which I said was analogous to the space of 2 forms.

Dave, you have obviously put a lot of thought into this.
 
Last edited:
  • #98
another in my wildly popular series of commentaries:

towards a more fully geometric view of differential forms.

It seems after reading Dave's section on how [to and] not to picture differential one forms, he does not advocate there the use of the dot product. I.e. he suggests picturing the kernel planes of the field of one forms in R^3, a view point which depends only on the nature of a one form as functional, having a kernel, and not on its nature as a dot product.

I.e. I would have thought one might use the picture of the one form df, for example as a "gradient field", i.e. as a vector field whose vector at each point is given by the coordinate vector of partial derivatives of f in the chosen coordinate directions.

I guess Dave is not doing this because he wants to give us a coordinate invariant view of forms although coordinates seem to be used in the projected area point of view introduced earlier.

If we pursue this, we have an interpretation of every one form as a vector, namely the vector perpendicular to the kernel hyperplane, with length equal to the value of the functional on a unit vector.

Then we truly have a geometric object representing a one form (although it depends on a dot product), and moreover we can add one forms and representing vectors interchangeably. I.e. the vector representing the sum of two one forms, is the geometric vector sum of the vectors representing each of them.

In this same vein, if we represent a 2 form on R^3 as an oriented parallelogram, as suggested above, and in R^4 as a formal sum of oriented parallelograms, then we do get a geometric representation of 2 forms, i.e. as a sum of parallelograms.

But to have a fully geometric interpretation we should have a geometric view also of addition of 2 forms. So as asked before, given two parallelograms in R^3, what is a geometric construction of a parallelogram in R^3 representing their sum as 2 forms?

And since in R^4, we have a 6 dimensional space of 2 forms, and it is one quadratic condition to be represented by just one parallelogram, we ask what is the geometric condition on a pair of parallelograms that their sum be represented by just one parallelogram, and then what is that parallelogram?

Well, we already know part of this, don't we? Dave's condition w^w = 0 says that the two parallelograms have a sum represented by just one parallelogram if and only if together they span only a 3 space in R^4. And then surely the construction is the same as the construction in R^3, whatever that is.

If we try to avoid the choice of dot product, as Dave does in his "kernel plane" interpretation of one forms, what would be the correct interpretation?

If we restrict to factorable 2 forms, is there a geometric kernel plane interpretation?

peace.

More free flowing conjectures: We "know" that a 2 form on R^4, regarded as a point of projective 5 space, is factorable into a product of one forms if and only if it satisfies w^w = 0, i.e. if and only if it lies on the 4 dimensional quadric hypersurface defined by that degree two equation in the coordinates of the 2 form.

Now what is the geometric condition for the sum of two factorable 2 forms to still be factorable? Would it be that the line joining those two points on the quadric still lies wholly in the quadric? I.e. just as a quadric surface in P^3 is doubly ruled by lines, a quadric 4 fold in P^5 also contains a lot of lines.

Just wondering and dreaming. And urging people who want a "geometric" view of the subject to explore further what that would mean.

peace.
 
Last edited:
  • #99
Sorry I've been away for so long. Work gets in the way of what I really want to do, sometimes. :frown:

AKG said:
The line [itex]l = \{\vec{r}t + \vec{p} : t \in \mathbb{R}\}[/itex] for some [itex]\vec{r},\ \vec{p} \in T_p\mathbb{R}^3[/itex]. Suppose [itex]\vec{v},\ \vec{w} \in T_p\mathbb{R}^3[/itex] such that [itex]l \subseteq Span(\{\vec{v},\ \vec{w}\})[/itex]. Then the set [itex]\{\vec{p},\ \vec{v},\ \vec{w}\}[/itex] is linearly dependent, hence:

[tex]\det (\vec{p}\ \ \vec{v}\ \ \vec{w}) = 0[/tex]

Define [itex]\omega[/itex] such that:

[tex]\omega (\vec{x},\ \vec{y}) = \det (\vec{p}\ \ \vec{x}\ \ \vec{y}) \ \forall \vec{x}, \vec{y} \in T_p\mathbb{R}^3[/tex]

You can easily check, knowing the properties of determinants, that [itex]\omega[/itex] is an alternating bilinear functional, and hence a 2-form. If you want, you can express it as a linear combination of [itex]dx \wedge dy,\ dy \wedge dz,\ dx \wedge dz[/itex], and it shouldn't be hard, but probably not necessary.

OK thanks, but as you recognized this is answering the reverse question: Given the line, find the 2-form.

EDIT: actually, to answer the question as given, perhaps you will want to write [itex]\omega[/itex] in terms of those wedge products, and determine [itex]\vec{p}[/itex] from there. Then, to find [itex]l[/itex] you just need to choose any line that passes through [itex]\vec{p}[/itex]. Any two vectors containing that line will have to contain [itex]\vec{p}[/itex], hence those three vectors must be linearly dependent, hence their determinant will be zero, and since [itex]\omega[/itex] depends only on [itex]\vec{p}[/itex] and not the choice of [itex]\vec{r}[/itex], you're done.

Right, this is what I was wondering about. I think I've worked it out correctly. Here goes.

Exercise 3.18
Let [itex]\omega=w_1dx \wedge dy +w_2dy \wedge dz +w_3dz \wedge dx[/itex].
Let [itex]A=<a_1,a_2,a_3>[/itex] and [itex]B=<b_1,b_2,b_3>[/itex] be vectors in [itex]\mathbb{R}^3[/itex].
Let [itex]C=[c_1,c_2,c_3][/itex] be a vector in [itex]T_p\mathbb{R}^3[/itex] such that [itex]C=k_1A+k_2B[/itex]. So the set [itex]\{A,B,C\}[/itex] is dependent. That implies that [itex]det [C A B]=0[/itex].

Explicitly:

[tex]
det [C A B]=\left |\begin{array}{ccc}c_1&c_2&c_3\\a_1&a_2&a_3\\b_1&b_2&b_3\end{array}\right|
[/tex]

[tex]
det [C A B]=c_1(a_2b_3-a_3b_2)-c_2(a_1b_3-a_3b_1)+c_3(a_1b_2-a_2b_1)
[/tex]

Now let [itex]\omega[/itex] act on [itex]A[/itex] and [itex]B[/itex]. We obtain the following:

[tex]
\omega (A,B)=w_1(a_1b_2-a_2b_1)+w_2(a_2b_3-a_3b_2)+w_3(a_3b_1-a_1b_3)
[/tex]

Upon comparing the expressions for [itex]det [C A B][/itex] and [itex]\omega (A,B)[/itex] we find that [itex]\omega (A,B)=0[/itex] if [itex]w_1=c_3[/itex], [itex]w_2=c_1[/itex], and [itex]w_3=c_2[/itex]. So the line [itex]l[/itex] is the line that is parallel to the vector [itex][w_2,w_3,w_1][/itex]. So I can write down parametric equations for [itex]l[/itex] as follows:

[itex]
x=x_0+w_2t
[/itex]
[itex]
y=y_0+w_3t
[/itex]
[itex]
z=z_0+w_1t
[/itex]
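The identification above is easy to spot-check numerically. A minimal sketch (the setup is mine): with [itex]\omega=w_1dx \wedge dy +w_2dy \wedge dz +w_3dz \wedge dx[/itex] and [itex]C=[w_2,w_3,w_1][/itex], the value [itex]\omega(A,B)[/itex] should equal [itex]det [C A B][/itex] for every pair A, B:

```python
import numpy as np

rng = np.random.default_rng(0)
w1, w2, w3 = rng.standard_normal(3)

def omega(A, B):
    # omega = w1 dx^dy + w2 dy^dz + w3 dz^dx evaluated on the pair (A, B)
    return (w1 * (A[0] * B[1] - A[1] * B[0])
          + w2 * (A[1] * B[2] - A[2] * B[1])
          + w3 * (A[2] * B[0] - A[0] * B[2]))

C = np.array([w2, w3, w1])  # the vector read off above

for _ in range(5):
    A, B = rng.standard_normal(3), rng.standard_normal(3)
    assert abs(omega(A, B) - np.linalg.det(np.vstack([C, A, B]))) < 1e-9
print("omega(A, B) = det[C A B] on random pairs")
```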


I'll wait for any corrections on this before continuing. If this is all kosher, then I'll post the last of my Chapter 3 notes and we can finally get to differential forms, and the integration thereof.

mathwonk said:
Also in post #81, Tom asked about solving ex 3.18. What about something like this?
Intuitively, a 1 form measures the (scaled) length of the projection of a vector onto a line, and a 2 form measures the (scaled) area of the projection of a parallelogram onto a plane. Hence any plane containing the normal vector to that plane will project to a line in that plane. hence any parallelogram lying in such a plane will project to have area zero in that plane.

That's helpful. I have to admit I don't really like this geometric approach. But I think that I haven't warmed up to it yet because it still feels uncomfortable. I very much prefer to formalize the antecedent conditions and manipulate expressions or equations until I have my answer, as I've done with all my solutions to the exercises so far. It's my shortcoming, I'm sure.
 
Last edited:
  • #100
have you read post 98?

I apologize if my comments are not of interest. I am stuck between trying to be helpful and just letting my own epiphanies flow as they will.


I appreciate your patience.
 
  • #101
mathwonk said:
have you read post 98?

Not yet, but I will.

I apologize if my comments are not of interest. I am stuck between trying to be helpful and just letting my own epiphanies flow as they will.

No, your comments are very much of interest. I'm glad you're making them, and I'm glad that they will be preserved here so that we can go over them at leisure later. But right now, the clock is ticking for us. We are preparing to present some preliminary results to the faculty at our school. Basically the ladies (Melinda and Brittany, who has been silent in this thread so far, but she has been reading along) will be presenting the rules of the calculus, why it is advantageous, and a physical application (Maxwell's equations). The centerpiece of the presentation will be the same as the centerpiece of the book: the generalized Stokes theorem.

Once the presentation to the faculty is done, we will have 2 weeks until the conference. During that time we will get back to your comments.

I appreciate your patience.

That's what I should be saying to you!
 
  • #102
Tom Mattson said:
So the line [itex]l[/itex] is the line that is parallel to the vector [itex][w_2,w_3,w_1][/itex].
As I said, [itex]l[/itex] is the (or rather, any) line containing [itex][w_2,w_3,w_1][/itex], not parallel to it. Actually, since the plane spanned by two vectors passes through the origin (and since a plane is a subspace if and only if it passes through the origin), you can choose the line parallel to that vector, but this seems like more work.

[tex]\omega = \omega _1 dx \wedge dy + \omega _2 dx \wedge dz + \omega _3 dy \wedge dz[/tex]

[tex]\omega(A, B) = \omega _1 (a_1b_2 - b_1a_2) + \omega _2 (a_1b_3 - b_1a_3) + \omega _3 (a_2b_3 - b_2a_3)[/tex]

[tex]= p_3(a_1b_2 - b_1a_2) - p_2(a_1b_3 - b_1a_3) + p_1(a_2b_3 - b_2a_3)[/tex]

[tex]= \det (P A B)[/tex]

So [itex]P = (p_1, p_2, p_3) = (\omega _3, -\omega _2, \omega _1)[/itex]. (I believe you have the above, or something close, in your post).

If we choose a line containing P, then any pair of vectors A, B that span a plane containing that line will also have to contain P. Then {P, A, B} is dependent, so the determinant is 0. Therefore it is sufficient (and easier) to choose a line containing P. The line parallel to P may not contain P (if the line doesn't pass through the origin), and hence the plane containing the line may not contain P, and hence the set {P, A, B} may not be dependent, so the determinant may not be zero, and so [itex]\omega (A, B)[/itex] may not be zero. The claim for the plane containing the line parallel to P can be made to work, but requires (a very little) more proof. You know that the line parallel to P, parametrized by t, contains points for t=0 (let's call it P0) and t=1 (P1). So the plane contains these two points. Now you know that P1 - P0 = P. Since the plane in question is a subspace, it is closed under addition and scalar multiplication, and since it contains the line, it contains P1 and P0, and hence P1 - P0, and hence P.

So anyways, you have it right, and if you want to choose a line parallel to P, you may want to throw in that extra bit that allows you to claim that P is in the plane. One more remark: You have A and B in R³, and C in the tangent space. It seems as though you should have them all in R³, or all in the tangent space.
 
  • #103
tom, thank you very much!

the one geometric thing i added recently may be too far along to be useful to your students but it addresses the geometry of whether a 2 form is or is not a product of one forms, in R^4.

the answer is that 2 forms in R^4 form a vector space of dimension 6, and in that space the ones which are products of one forms form a quadratic cone of codimension one.

I think I also have the answer to the geometric question of what it means to add two 2 forms in R^3, both of which are products of one forms. i.e. to add two parallelograms.


i.e. take the planes they span, and make them parallelograms in those planes, sharing one side.

then take the diagonal of the third side of the parallelepiped they determine, and pair it with the shared side of the two parallelograms.

maybe that is the parallelogram sum of the two parallelograms? at least if the two parallelograms are rectangles?

ok i know your students do not have time for this investigation, but i am trying to throw in more geometry.

of course i agree with you, the geometry is a little unnatural.

these suggestions are not worked out on paper but just in my head on the commute home from work, but they gave me some pleasure. and i had your students in mind, maybe at some point some will care about these comments.

best,

roy
 
  • #104
Tom, here are a few more comments on how to possibly convince skeptics of the value of differential forms.

These are based on the extreme simplification of the various Stokes, Green, and Gauss theorems as stated in Dave's book.

The point is that when a result is simplified we are better able to understand it, and also to understand how to generalize it, and to understand its consequences.

I also feel that you sell the power of some tool more effectively if you give at least one application of its power. I.e. not just simplifying statements but applying those simpler statements to prove something of interest. Hence in spite of the demands on the reader I will sketch below how the insight provided by differential forms leads to a proof of the fundamental theorem of algebra.

(I actually discovered these standard proofs for myself while teaching differential forms as a young pre PhD teacher over 30 years ago, and taught them in my advanced calc class.)

It is of course true that every form of stokes theorem, in 3 dimensions and fewer, has a classical statement and proof.

But I claim none of those statements clarify the simple dual relationship between forms and parametrized surfaces.

i.e. in each case there is an equation between integrals, one thing integrated over a piece of surface [or curve or threefold], equals something else integrated over the boundary of the surface [or curve or threefold].

But in each case the "something else" looks different, and has a completely different definition. i.e. grad(f) looks nothing like curl(w), nor at all like div(M).

It is only when these objects, functions, one forms, two forms, three forms, are all expressed as differential forms, that the three operations, grad, curl, div, all look the same, i.e. simply exterior derivative "d".

then of course stokes theorem simply says <dS,w> = <S, dw>.


Now that is clear already from what is in the book. But once this is done, then forms begin to have a life of their own, as objects which mirror surfaces, i.e. which mirror geometry.

I.e. this reveals the complete duality or equality between the geometry of parametrized surfaces S, and differential forms w. There is a gain here because even though taking boundary mirrors taking exterior derivative, what mirrors exterior multiplication of forms? I.e. on the face of them, forms have a little more structure than surfaces, which enables calculation a bit better.

Eventually it turns out that multiplication of forms mirrors intersection of surfaces, but this fact only adds to the appeal of forms, since they can then be used to calculate intersections.

Moreover, who would have thought of multiplying expressions like curl(w) and grad(f)? without the formalism of forms?

Already Riemann had used parametrized curves to distinguish between surfaces, and essentially invented "homology". The duality above reveals the existence of a dual construction, "cohomology".

I.e. if we make a "quotient space" from pieces of surfaces, or of curves, we get "kth homology", defined as the vector space of all parametrized pieces of k dimensional surfaces, modulo those which are boundaries.

this object measures the difference between the plane (where it is zero) and the punctured plane (where it is Z), because in the latter there exists a closed curve which is not the boundary of a piece of parametrized surface, namely the unit circle. Then a closed curve represents n if it wraps n times counterclockwise around the origin.

This difference can be used to prove the fundamental theorem of algebra, since a polynomial can be thought of as a parametrizing map. Moreover a globally defined polynomial always maps every closed curve onto a parametrized curve that IS the boundary of a piece of surface: namely, if C is the boundary of the disc D, then the image of C bounds the image of D!


But we know that some potential image curves, like the unit circle, are not boundaries of anything in the complement of the origin. Hence a polynomial without a zero cannot map any circle onto the unit circle one to one, nor onto any closed curve that winds around the origin.

Hence if we could just show that some circle is mapped by our polynomial onto such a curve, a curve that winds around the origin (0,0), it would follow that our polynomial does not map entirely into the complement of (0,0). I.e. that our polynomial must "have a zero"!

So it all boils down to verifying that certain curves in the punctured plane are not boundaries, or to measuring how many times they wind around the origin. How to do this? How to do it even for the simple unit circle? How to prove it winds once around the origin?

Here is where the dual object comes in. i.e. we know from greens theorem or stokes theorem or whatever you want to call it, that if w is a one form with dw = 0, then w must have integral zero over a curve which is a boundary.

Hence the dual object, cohomology, measures the same phenomena: a space of those differential forms w with dw = 0, modulo those forms w which themselves equal dM for some M.

Hence, how to see why the unit circle, does wind around the origin?

Answer: integrate the "angle form" "dtheta" over it. if you do not get 0, then your curve winds around the origin.

here one must realize that "dtheta" is not d of a function, because theta is not a single valued function!

so we have simultaneously proved that fact.

anyway, this is taking too long.

but the solid angle form, integrated over the 2 sphere, also proves that the 2 sphere wraps around the origin in R^3, and proves after some argument, that there can be no never zero smooth vector field on the sphere, i.e. that you cannot comb the hair on a billiard ball.
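The winding-number argument can even be carried out numerically: integrate the angle form (x dy - y dx)/(x^2 + y^2) over a closed curve and divide by 2 pi. A rough sketch (the discretization choices are mine):

```python
import numpy as np

def winding_number(curve, n=20001):
    """Integrate the angle form (x dy - y dx)/(x^2 + y^2) over a closed
    curve, given as t in [0, 1] -> (x(t), y(t)), and divide by 2*pi."""
    t = np.linspace(0.0, 1.0, n)
    x, y = curve(t)
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    f = (x * dy - y * dx) / (x**2 + y**2)
    # trapezoid rule
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)) / (2 * np.pi)

circle = lambda t: (np.cos(2 * np.pi * t), np.sin(2 * np.pi * t))
print(round(winding_number(circle)))    # 1: the unit circle winds once

shifted = lambda t: (3 + np.cos(2 * np.pi * t), np.sin(2 * np.pi * t))
print(round(winding_number(shifted)))   # 0: this loop misses the origin
```

A nonzero answer certifies that the curve is not a boundary in the punctured plane, which is exactly the fact the proof above needs.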
 
  • #105
Hey all,

I have been going through the book and following the very interesting discussion here. David, I definitely fall into the category of people who like to learn things in a visual way, so I am finding your book to be a nice introduction to the subject. (As for my math background, btw, I majored in electrical engineering as an undergrad and graduated in 1993 -- since then I have been in the medical field, so I'm a bit rusty! :smile: )

As time permits I may join in the discussion. For now I thought I'd post something on this:

mathwonk said:
for example if N and M are any one forms at all

N^M = N^(N+M) = N^(cN+M) = (cM+N)^M, for any constant c.

In keeping with the spirit of the geometric interpretation, I was inspired when I got to mathwonk's post to make a powerpoint visualization to demonstrate
N^M = N^(cN+M). You can download it from my briefcase at briefcase.yahoo.com/straycat_md in the "differential forms" folder. It's got animations so you have to view it as a "presentation" and then click on the spacebar to see things move (vectors appearing, etc.). Tell me what you think! :)
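The same identity can also be confirmed by direct computation. A small sketch (the coefficient conventions are my own): represent one forms on R^3 as coefficient vectors, with the wedge returning the 2 form coefficients on dx^dy, dy^dz, dz^dx:

```python
import numpy as np

def wedge1(a, b):
    # coefficients of a^b on the basis dx^dy, dy^dz, dz^dx, where
    # a = a1 dx + a2 dy + a3 dz and similarly for b
    return np.array([a[0] * b[1] - a[1] * b[0],
                     a[1] * b[2] - a[2] * b[1],
                     a[2] * b[0] - a[0] * b[2]])

N = np.array([1.0, 2.0, 3.0])
M = np.array([4.0, 5.0, 6.0])
c = 7.0

# N^(cN + M) = c N^N + N^M = N^M, since N^N = 0
print(np.allclose(wedge1(N, c * N + M), wedge1(N, M)))  # True
```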

Regards,

straycat
 
