Musings on the physicists/mathematicians barrier

  • Thread starter nrqed
  • Tags
    Barrier
In summary, learning differential geometry and topology from a background in physics can be challenging due to the need to connect it with previous knowledge. This can be difficult when there is a perceived contempt from those more well-versed in pure mathematics. Differential forms, as a mathematical tool, may not yet be mature enough for general use and their notation can be confusing. They may ultimately suffer the same fate as quaternions did in physics and be replaced by more applicable methods such as vector calculus.
  • #1
nrqed
Science Advisor
Homework Helper
Gold Member
After having spent some time trying to learn differential geometry and differential topology (my background is in physics phenomenology), I can't help making the following observation.

I think it is harder to learn the maths starting from a background in physics than learning the math from scratch (i.e. being trained as a mathematician). And the reason is that in *addition* to learning the math concepts, someone with my background feels the need to make the connection with everything he/she has learned before. That's a normal thing to do. If the maths are so powerful and more general, everything that was known before should be ''expressible'' in the language of this new and more powerful formalism.

And this is when one hits almost a brick wall. Because a common reaction from the more mathematically inclined and knowledgeable people is to reject off-hand everything the physicist has learned (and has used to make correct calculations!) as being rubbish and almost infantile.
But that just creates frustration. Because the physicist has done thousands of calculations with the less sophisticated concepts, so it's not possible to scrap everything as being wrong and start with a totally independent formalism and never make the connection. That's the main problem: there seems to be almost some contempt from many (surely not all) people more well versed in pure maths toward simple physics. And yet, it feels to me that mathematicians should be very interested in bridging the gap between the pure and more abstract aspects of maths and physics calculations.

I don't mind at all realizing that I get something correct by luck because I am doing something that works only as a special case, for example. That's the kind of thing that I *actually* want to see happening when learning more advanced maths, so that I can see that I was limited to special cases and I can see how the maths allows me to go further.
But if I am told flatly that everything I have used before is plain wrong, this is hard to understand and creates a huge barrier in understanding a new mathematical formalism which then seems completely divorced from any actual practical calculations.

The example that comes to mind first is the physicist's view of infinitesimals.

I am running out of time on a public terminal but will write more about what I mean in a later post, if this one does not get pulled.

I better run for cover
 
  • #2
I have studied the sum and entirety of differential forms, and have thus far found little of use in them. The generalised Stokes' theorem was nice, but only just about worth the effort.

My opinion, for what it's worth, is that differential forms are simply not a mature mathematical topic. The theory is rigorous, complete and solid, but it's not mature. It's like a discovery made by a research scientist that sits, majestic but alone, waiting for another physicist or engineer to turn it into something useful. Differential forms, as a tool, are not ready for general use in their current form.

There's not a lot that can save the topic from obscurity, given its current formulation. Divorced from physics, the study of forms becomes an exercise in fairly pointless abstraction. The whole development of forms was likely meant to formalise concepts that were not entirely clear when using vector calculus alone.

Let me explain. The units of the electric field, E, are volts per metre, V/m. The units of the electric flux density, D, are coulombs per metre squared, C/m^2. E is measured along lengths, lines, paths, etc. D is measured across areas, surfaces, sheets, etc. Using vector calculus with the definition [tex]\mathbf{D}=\epsilon \mathbf{E}[/tex], it's not clear why one should be integrated along lines and the other over surfaces (unless you're a sharp physicist). However, defining E as a one-form, and D as a two-form, makes this explicit. A one-form must be evaluated along lines, and a two-form must be evaluated over surfaces.
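Just to make the "what does each object eat" point concrete, here is a throwaway numerical sketch (uniform made-up fields, nothing physical about the numbers): a one-form takes a single vector, a line element, and returns a number; a two-form takes an oriented pair of vectors, an area element.

[code]
import numpy as np

# Made-up uniform fields; only the *number* of vectors each object eats matters here.
def E_oneform(v):
    """A one-form: eats ONE tangent vector (a line element) and returns a number."""
    E = np.array([1.0, 0.0, 0.0])        # placeholder components
    return np.dot(E, v)

def D_twoform(v1, v2):
    """A two-form: eats TWO vectors (an oriented area element) and returns a number."""
    D = np.array([1.0, 0.0, 0.0])        # placeholder components
    return np.dot(D, np.cross(v1, v2))   # in R^3: the flux through the little parallelogram

print(E_oneform(np.array([0.1, 0.0, 0.0])))       # contribution of a short line segment
print(D_twoform(np.array([0.0, 0.1, 0.0]),
                np.array([0.0, 0.0, 0.1])))       # flux through a small oriented area
[/code]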

Does this reasoning appear anywhere in any differential forms textbook? No. It is not even mentioned that certain vector fields might be restricted to such evaluations. Once the physics is removed, there is little motivation for forms beyond Stokes' theorem, which could probably be proved by other methods anyway. There is, in the main, a dearth of examples, calculations, reasoning and applications, beyond the rather dire presentations of the Faraday and Maxwell forms and the four-current. All that effort to reduce Maxwell's equations from five to three is frankly embarrassing.

In short, the subject is not mature. Certainly not as mature as tensor analysis, and in no possible way as mature as vector calculus. Its lack of use supports this conclusion. Engineers, physicists, and indeed mathematicians, cannot be expected to use a method that is not yet ready to be used. There is no real justification for learning, or applying, this method when the problem can be solved more expediently and more clearly using tensor or vector calculus.

The primary problem is the notation. It just doesn't work. Trying to pass off canonical forms as a replacement for variables of integration is simply not tenable, and proponents do not help their argument by making fast-and-loose conversions between the two, totally unsupported by any formalism. The classic hole the notation digs for itself is the following:
[tex]\iint f(x,y) dxdy = \iint f(x,y)dydx[/tex]
[tex]\iint f(x,y) dx\wedge dy = - \iint f(x,y)dy\wedge dx[/tex]
And the whole supposed isomorphism breaks down. This is not good mathematics.

I don't think differential forms are really going to go places. I see their fate as being that of quaternions. Quaternions were originally proposed as the foremost method of representation in physics, but were eventually superseded by the more applicable vector calculus. They are still used here and there, but nowhere near as much as vector calculus. Forms are likely to quickly go the same way upon the advent of a more applicable method.
 
  • #3
The topics you mention are relatively esoteric, and highly mathematical. The purpose of my post was to emphasise that differential forms have not found their way into the applied mainstream. Electromagnetics, fluid dynamics, etc, are all still dominated by vector calculus. As nrqed mentioned, the expression of physical problems through differential forms is simply not done to any great degree.

As a mathematical tool, forms are not as usable as other methods. There are many pitfalls and potential sources of confusion embedded in the notation and framework. Again, the reluctance of the applied communities to use the method is a testament to its immaturity. We may have different definitions of maturity here, but my own is that the method must be ready for practical use.

I think the trouble stems from the treatment of forms as an integrand and a variable of integration when it is quite clear that they are not. There seems to be a lot of confusion about this point among the community, which again can be traced back to notation. The notation is confused and relies upon the user selecting, sometimes by chance, the correct relationship between canonical forms dx and variables of integration dx. This is a real mess, and isn't ready for mainstream application.
 
  • #4
Can someone explain to me the MATHEMATICAL content of this? If not, I will delete the thread.
 
  • #5
HallsofIvy said:
Can someone explain to me the MATHEMATICAL content of this? If not, I will delete the thread.

Well, it was partly to open up the discussion between the language of physicists and mathematicians, but I was not really expecting a much different reaction. Does anyone know a board/forum on the web where mathematicians are open-minded to relating advanced concepts of maths to the language used by physicists? I would appreciate the information.


Well, I was going to ask how to connect it with physics.

For example, people say that a one-form is something you integrate over a line. And that a two form is something that one integrates over a surface. But things are not so simple!

In E&M, for example, one encounters the line integral of the E field ([itex] \int {\vec E} \cdot d{\vec l} [/itex]) in Faraday's law, but one also encounters the surface integral [itex] \int {\vec E} \cdot d{\vec A} [/itex] in Gauss' law. And the same situation appears with the B field.

Now, I realize that using the Hodge dual, one can go between forms of different degrees, etc. But usually math books will say that the E field is really a one-form and that the B field is a two-form, without explaining why.

This is one type of problem that I was alluding to.


Another one is the use of infinitesimals. It seems to be the consensus that the concept of infinitesimals is a completely useless one and that everything should be thought of as differential forms. (I am still wondering about a comment in the online book by Bachman where he says that not all integrals are over differential forms, btw)


Consider the expression [itex] df = \partial_x f \, dx + \partial_y f\, dy + \partial_z f \, dz[/itex].
The view is usually that this only makes sense as a relation between differential forms. Of course, the way a physicist thinks of this is simply as expanding [itex] f(x+dx, y+dy, z+dz) - f(x,y,z) [/itex] to first order in the "small quantities" dx, dy and dz. I still don't understand what is wrong with this point of view.

At first it might seem that differential geometry has as its goal to eliminate completely the concept of "infinitesimal", but of course they reappear when defining integrals anyway, as Obsessive pointed out.
Not only that, but it seems to me that the concept of infinitesimals is still all over the place, as part of the derivatives. For example, what does one mean by [itex] \partial_x f[/itex]? If not the limit
[tex] \lim_{\Delta x \rightarrow 0} { f(x + \Delta x) - f(x) \over \Delta x}[/tex]
? It is understood that delta x is taken small enough that this expression converges to some value.

So why can't one think of [itex] f(x+dx, y+dy, z+dz) - f(x,y,z) [/itex] in the following way: compute [itex] f(x+\Delta x, y+ \Delta y, z+\Delta z) - f(x,y,z) [/itex] and take the deltas smaller and smaller until the dependence on them is linear. *That* is my definition of infinitesimals. But I know that the "small Delta x" limit in the partial derivatives is well accepted while it is rejected as being totally wrong for something like df.
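Just to be concrete about what I mean by "take the deltas smaller and smaller until the dependence is linear", here is a throwaway numerical check (the function f is made up, chosen only so the partial derivatives are easy to write down):

[code]
import numpy as np

# A made-up test function, purely for illustration.
f = lambda x, y, z: x**2 * y + np.sin(z)

x, y, z = 1.0, 2.0, 0.5
fx, fy, fz = 2*x*y, x**2, np.cos(z)              # its partial derivatives at (x, y, z)

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    dx, dy, dz = h, 2*h, -h                      # shrink all three displacements together
    exact  = f(x+dx, y+dy, z+dz) - f(x, y, z)    # the "physicist's df"
    linear = fx*dx + fy*dy + fz*dz               # the first-order prediction
    print(h, exact, linear, exact - linear)      # the discrepancy shrinks like h**2
[/code]

The difference between the exact change and the linear expression dies off quadratically, which is exactly what I mean by the dependence "becoming linear".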


Anyway, that's the kind of question I wanted to discuss, but I realize that it is not welcome here. Physicists can't understand maths, right??
What I was trying to point out in my first post was that the difficulty is NOT mainly in understanding the maths. I can sit down with a math book and just follow the definitions and learn it as a completely new field. The difficulty, for a physicist, comes when trying to connect with one's previous knowledge.

But, as expected, this is deemed irrelevant and not of much worth here.

So go ahead, erase the thread.

regards

Patrick
 
  • #6
I think the barrier between physicists and mathematicians is more of a language barrier than anything else.

One might think, hey, don't they both speak the language of mathematics, the language of nature? (Some of you may already know what I think of that).

Mathematics is a consistent formal system, so it must be different from the language used to communicate it, because that language is inconsistent.

Notation plays a large role in communicating mathematics. The rules of mathematical notation are inconsistent, not just between groups of people, but within groups of people between different topics in mathematics (even if they may be consistent within topics). For example, tensor analysis uses superscript to distinguish different coordinates, but algebra ordinarily uses the subscript to distinguish different coordinates and superscript to denote exponents. The notation of tensor analysis may be consistent within tensor analysis, but not with the notational conventions of other mathematical topics.

Within the topic of tensors, mathematicians and physicists adopt differing conventions as well. Einstein, who we could say was initially much more physicist than mathematician, adopted the summation convention, or the omission of summation signs in favor of an assumption regarding the positions of a letter in both superscript and subscript. This convention allows the physicist to refer specifically to a coordinate system, whereas the mathematician's notation is independent of a coordinate system. Penrose believes this supposed conflict between mathematicians and physicists is resolved by the convention known as abstract-index notation (and that the conflicts of abstract-index notation are resolved by diagrammatic notation). He talks about all of this in Chapter 12 of "Road to Reality."

I remember a scene from "The Mechanical Universe" videos where Goodstein said that, while struggling with GR, Einstein said that he had a newfound appreciation for mathematicians and what they do. Einstein had to account for all the rules and nagging little exceptions to the rules in order to make everything consistent. Goodstein used the opportunity to say that, although physicists help us understand the universe, mathematicians are the "guardians of purity of thought."

So, when you feel you've hit a brick wall, think of it as learning the language of the guardians.
 
  • #7
nrqed said:
In E&M, for example, one encounters the line integral of the E field ([itex] \int {\vec E} \cdot d{\vec l} [/itex]) in Faraday's law, but one also encounters the surface integral [itex] \int {\vec E} \cdot d{\vec A} [/itex] in Gauss' law. And the same situation appears with the B field.

But, strictly speaking, one should integrate the electric flux density D over surfaces. Forms make this more explicit by enabling you to define E and D in such a way as each can only be integrated over the correct type of manifold, i.e. curve, surface or volume. D is a two-form, and is in fact the Hodge dual of E if you wanted to be more "concise" about things.

nrqed said:
Now, I realize that using the Hodge dual, one can go between forms of different degrees, etc. But usually math books will say that the E field is really a one-form and that the B field is a two-form, without explaining why.

Mathematically there is no explanation whatsoever. E and B are simply vector fields in vector calculus. The reason comes only from the physics. Physically speaking, the reason is that E is the electric field and B is in fact the magnetic flux density. Its units can be measured in webers per metre squared, Wb/m^2, so it must be evaluated as a flow through areas, so strictly speaking, it's a two-form. Its "dual" is the magnetic field H, which is a one-form like the electric field.

This might be considered a matter of extreme pedantry, particularly when the fields and fluxes typically differ only by the constants [tex]\epsilon[/tex] and [tex]\mu[/tex]. But sometimes you need to be pedantic. In my case, this is useful as I am working with materials in which the permeability and permittivity are not constant. Your mileage may vary.

nrqed said:
Another one is the use of infinitesimals. It seems to be the consensus that the concept of infinitesimals is a completely useless one and that everything should be thought of as differential forms. (I am still wondering about a comment in the online book by Bachman where he says that not all integrals are over differential forms, btw)

I understand what you mean by infinitesimals to be variables of integration dx, dy, dz etc. You seem to have been introduced to variables of integration from the point of view of Riemann sums, i.e. [tex]\int f(x) dx = \lim_{\Delta x \rightarrow 0} \sum f(x_i) \Delta x [/tex]. Strictly speaking, dx is not an infinitesimally small [tex]\Delta x[/tex], but is rather an operator applied to a function to obtain an "anti-derivative", i.e. to integrate something. Similarly, strictly speaking, dy/dx is not an infinitesimally small ratio, but is the operator d/dx applied to the function y(x), i.e., [tex]\frac{d}{dx}\left(y(x)\right)[/tex].

However, your view is not entirely wrong, as when it comes down to the final solution of many physical problems, numerical estimates of integration and differentiation are used, and dx and dy do become approximated by [tex] \Delta x[/tex] and [tex]\Delta y[/tex].

As to the point of view that every integration should be thought of as a differential form, or taken over differential forms; this is clearly nonsense. Differential forms are ultimately reduced to integral equations once they are applied to specific manifolds, i.e. curves or surfaces, etc., depending on the form. They are no more a replacement for integration than integration is a replacement for addition.

nrqed said:
But I know that the "small Delta x" limit in the partial derivatives is well accepted but it is rejected as being totally wrong for something like df.

Remember, df is an operator on vectors, and has nothing to do with variables of integration or infinitesimals except that it is written the same way, and that the two are often interchanged in a rather flippant manner to convert a "differential form integral" into an integral proper, but as I've said above, this conversion is fraught with peril. The form dx is not a variable of integration, or an infinitesimal. It's an operator applied to vectors. You have to tack on the "right" variable of integration later.

Variables of integration "dx" are operators applied to integrands, and in fact the integrands in this case are differential forms. The full equation is in fact:

[tex]\int f(x) dx(\vec{V}(x)) dx[/tex]
Here the first dx is a form, and the second is a variable of integration. This is slightly clearer in the following.
[tex]\int f(t) dx(\vec{V}(t)) dt[/tex]
Here the variable of integration "x" has been replaced with a "t".

Forms are operators on vectors. Variables of integration are operators on integrands. The two are not the same, and the only reason people are lead to believe so is due to poor notation.
 
  • #8
ObsessiveMathsFreak said:
But, strictly speaking, one should integrate the electric flux density D over surfaces. Forms make this more explicit by enabling you to define E and D in such a way as each can only be integrated over the correct type of manifold, i.e. curve, surface or volume. D is a two-form, and is in fact the Hodge dual of E if you wanted to be more "concise" about things.




Mathematically there is no explanation whatsoever. E and B are simply vector fields in vector calculus. The reason comes only from the physics. Physically speaking, the reason is that E is the electric field and B is in fact the magnetic flux density. Its units can be measured in webers per metre squared, Wb/m^2, so it must be evaluated as a flow through areas, so strictly speaking, it's a two-form. Its "dual" is the magnetic field H, which is a one-form like the electric field.

This might be considered a matter of extreme pedantry, particularly when the fields and fluxes typically differ only by the constants [tex]\epsilon[/tex] and [tex]\mu[/tex]. But sometimes you need to be pedantic. In my case, this is useful as I am working with materials in which the permeability and permittivity are not constant. Your mileage may vary.
Very interesting.

So from this point of view, one should not think of E and D as being simply proportional to each other; there is truly a deep difference. To do E&M on a curved manifold, for example, the simple proportionality relation that physicists are used to would break down then? Or could one see this even on a flat manifold by going to some arbitrary curvilinear coordinate system? (I know this would be answered by looking at how the Hodge dual depends on a change of coordinate system). *This* would be the kind of insight that would make the differential form approach to E&M much more interesting!

I understand what you mean by infinitesimals to be variables of integration dx, dy, dz etc. You seem to have been introduced to variables of integration from the point of view of Riemann sums, i.e. [tex]\int f(x) dx = \lim_{\Delta x \rightarrow 0} \sum f(x_i) \Delta x [/tex]. Strictly speaking, dx is not an infinitesimally small [tex]\Delta x[/tex], but is rather an operator applied to a function to obtain an "anti-derivative", i.e. to integrate something.

Ok. Interesting. What is the way to formalize this? Is dx (say) the operator or should one think of [itex] \int dx [/itex] as the operator?


Similarly, strictly speaking, dy/dx is not an infinitesimally small ratio, but is the operator d/dx applied to the function y(x), i.e., [tex]\frac{d}{dx}\left(y(x)\right)[/tex].
That makes sense to me, except I am wondering how, in this approach, one goes about finding any derivative. For example, how does one prove that [tex] \frac{d}{dx} \left(x^2 \right) = 2 x [/tex]?
If one defines d/dx as an operator, how does one find how it acts on anything? And if the only way to find an explicit result is to go through the limit definition, then isn't this tantamount to say that the definition of the operator *is* the limit?
However, your view is not entirely wrong, as when it comes down to the final solution of many physical problems, numerical estimates of integration and differentiation are used, and dx and dy do become approximated by [tex] \Delta x[/tex] and [tex]\Delta y[/tex].

As to the point of view that every integration should be thought of as a differential form, or taken over differential forms; this is clearly nonsense. Differential forms are ultimately reduced to integral equations once they are applied to specific manifolds, i.e. curves or surfaces, etc., depending on the form. They are no more a replacement for integration than integration is a replacement for addition.
Ok. That's good to hear. Because books sometimes say (not formally) that differential forms are the things we integrate over!
Remember, df is an operator on vectors, and has nothing to do with variables of integration or infinitesimals except that it is written the same way, and that the two are often interchanged in a rather flippant manner to convert a "differential form integral" into an integral proper, but as I've said above, this conversion is fraught with peril. The form dx is not a variable of integration, or an infinitesimal. It's an operator applied to vectors. You have to tack on the "right" variable of integration later.
Ok. It's nice to hear this said explicitly!
Variables of integration "dx" are operators applied to integrands, and in fact the integrands in this case are differential forms. The full equation is in fact:

[tex]\int f(x) dx(\vec{V}(x)) dx[/tex]
Here the first dx is a form, and the second is a variable of integration. This is slightly clearer in the following.
[tex]\int f(t) dx(\vec{V}(t)) dt[/tex]
Here the variable of integration "x" has been replaced with a "t".

Forms are operators on vectors. Variables of integration are operators on integrands. The two are not the same, and the only reason people are lead to believe so is due to poor notation.
That's clear (and I wish books would say it this way!). The question is then how is the vector chosen? I mean, the way it is usually presented is as if [itex]dx({\vec V})[/itex] is always equal to one (or am I missing something?).


Thank you very much for your comments. They are very appreciated.

Regards

Patrick
 
  • #9
nrqed said:
To do E&M on a curved manifold, for example, the simple proportionality relation that physicists are used to would break down then? Or could one see this even in flat manifold but by going to some arbitrary curvilinear coordinate system?

On a curved manifold embedded in Euclidean space, the proportionality relation is still fine. I'm not sure what happens in curved spacetime.

However, in certain materials, D is not linearly proportional to E, and may not in fact have the same direction. And of course, if the electric permittivity is not constant, for example if the range of your problem encompassed different materials, then the proportionality constant would not be strictly correct either.

In any case, the flux must only be evaluated through surfaces, and the field only along curves. You can get away with this using vector calculus if you are very careful, or if it's not vital to the problem, but differential forms make this more explicit.

D is also known as the electric flux density and B as the magnetic flux density, if that's any help. These are densities per unit area, and so must be "summed" or integrated over areas to get the overall flux through that area. If you go back and examine the SI units of each of the quantities E, D, H, B, [tex]\rho[/tex], J, etc., you will see which are zero-, one-, two- and three-forms, simply by noting which are expressed in metres, metres squared, metres cubed and of course metres^0 (no metres in the units).

nrqed said:
Ok. Interesting. What is the way to formalize this? Is dx (say) the operator or should one think of [itex] \int dx [/itex] as the operator?

[tex]\int dx[/tex] is the operator. The variable and the sign must be taken together. On their own, each is relatively meaningless. It's just the way things are done. The integral sign usually denotes the limits, making the whole thing a definite integral.

nrqed said:
If one defines d/dx as an operator, how does one find how it acts on anything? And if the only way to find an explicit result is to go through the limit definition, then isn't this tantamount to say that the definition of the operator *is* the limit?

Yes, the definition of the d/dx operator is the limit.
[tex]\frac{d}{dx}(f(x)) = \lim_{\Delta x \rightarrow 0}\frac{ f(x+ \Delta x) - f(x)}{\Delta x}[/tex]
But please remember that the dx in d/dx is not at all the same thing as the dx in [tex]\int dx[/tex]. Of course, when people work with differential equations such as dy/dx = g(x) becoming [tex]\int dy = \int g(x) dx[/tex], often the dx is treated like a variable, and appears to be the "same thing", but in reality the two perform totally different operations.

This distinction is often hidden or unstated, but for example, you would never do the following: ln(dy/dx) = ln(dy) - ln(dx). I think you would agree instinctively that this is somehow wrong. Another example might be that [tex]\frac{d^2 y}{dx^2} = g(x) [/tex] and [tex](\frac{dy}{dx})^2 = g(x)[/tex] are two very different equations.
nrqed said:
That's clear (and I wish books would say it this way!). The question is then how is the vector chosen? I mean, the way it is usually presented is as if [itex]dx({\vec V})[/itex] is always equal to one (or am I missing something?).

The vector can be any function of t that you wish. Usually, however, [tex]\vec{V}(t) = \frac{dx}{dt}[/tex], or in other words, dx(V(t)) is the Jacobian. And later on, dx^dy(V1,V2) will be the 2D Jacobian, and dx^dy^dz(V1,V2,V3) the 3D Jacobian, etc.

And of course, usually, [tex]V(x) = dx/dx = 1[/tex]
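As a sanity check of the bookkeeping (a toy example with a parametrization I just made up), pulling the 1-form x dx back along x(t) = t^2 and integrating over the parameter interval reproduces the ordinary integral:

[code]
import numpy as np

# Toy example: the 1-form x dx on [0, 1], pulled back along the parametrization x(t) = t^2.
# Here dx(V(t)) = dx/dt = 2t plays the role of the Jacobian.
f      = lambda x: x
x_of_t = lambda t: t**2
dx_dt  = lambda t: 2.0 * t            # dx evaluated on the velocity vector V(t)

n  = 100_000
t  = (np.arange(n) + 0.5) / n         # midpoints of the parameter interval [0, 1]
dt = 1.0 / n
print(np.sum(f(x_of_t(t)) * dx_dt(t) * dt))   # ~ 0.5, the value of the integral of x dx over [0, 1]
[/code]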
 
  • #10
nrqed said:
At first it might seem that differential geometry has for goal to eliminate completely the concept of "infininitesimal", but of course they reappear when defining integrals anyway, as Obsessive pointed out.
Not only that, but it seems to me that the concept of infnitesimals is still all over the place, as part of the derivatives.
Maybe this is sort of the problem. Infinitesimals simply aren't there in standard analysis -- not even in integrals or derivatives. I think, maybe, you are doing yourself a bit of harm thinking "Oh, it's just using infinitesimals after all."

The point of the formalism is to provide rigorously defined tools that can be used to rigorously achieve the same informal purposes we use infinitesimals for. Because they are intended for the same purposes, they will of course have similarities... but presumably, if you can modify your thinking to pass from the informal infinitesimal approach to more rigorous equivalents, you will be better off.

For example, whenever you think about "infinitesimals", try to mentally substitute the notion of "tangent vectors". So when you would normally think about an "infinitesimal neighborhood around P"... try thinking instead about the "tangent space at P".

Then, once you've done that, you no longer have to think about a cotangent vector as something that tells you how "big" an infinitesimal displacement is... you can now think of it as a linear functional on the tangent space.

In fact, I'm rather fond of using the notation P+e to denote the tangent vector e based at the point P. With this notation, we can actually write things like:

f(P+e) = f(P) + f'(P) e

and be perfectly rigorous. This is even better than infinitesimals -- that is an actual equality! If we were using infinitesimals, it is only approximate, and we have to wave our hands and argue that the error is insignificantly small.
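If you want something executable to play with, one way to realise the P+e bookkeeping literally is with dual numbers, pairs (a, b) with a formal e satisfying e*e = 0. This is only a toy sketch (the class below is my own naming, and it only handles + and *, so polynomial functions):

[code]
class Dual:
    """Dual numbers a + b*e with e*e = 0, mirroring the P + e notation above."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x      # an arbitrary polynomial test function

P, e = 2.0, 1.0
out = f(Dual(P, e))
print(out.a)   # f(P)      = 16.0
print(out.b)   # f'(P) * e = 14.0, exactly, with no limiting procedure anywhere
[/code]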


If one defines d/dx as an operator, how does one find how it acts on anything?
Through axioms! You define d/dx to be an operator that:
(1) is a continuous operator
(2) satisfies (d/dx)(f+g) = df/dx + dg/dx
(3) satisfies (d/dx)(fg) = f dg/dx + df/dx g
(4) satisfies dx/dx = 1

and I think that's all you need.
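For instance, a quick worked example of the axioms in action (nothing deep, just to show that explicit derivatives do come out): applying (3) with f = g = x and then (4),

[tex]\frac{d}{dx}\left(x^2\right) = x\,\frac{dx}{dx} + \frac{dx}{dx}\,x = 2x[/tex]

and repeating the argument gives [itex]\frac{d}{dx}\left(x^n\right) = n x^{n-1}[/itex] for any positive integer n, with no limits in sight.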
 
  • #11
Hurkyl said:
Maybe this is sort of the problem. Infinitesimals simply aren't there in standard analysis -- not even in integrals or derivatives. I think, maybe, you are doing yourself a bit of harm thinking "Oh, it's just using infinitesimals after all."

The point of the formalism is to provide rigorously defined tools that can be used to rigorously achieve the same informal purposes we use infinitesimals for. Because they are intended for the same purposes, they will of course have similarities... but presumably, if you can modify your thinking to pass from the informal infinitesimal approach to more rigorous equivalents, you will be better off.
Yes, I am starting to realize this. I realize at some level that even before thinking in terms of differential forms, in plain old calculus I have to stop thinking in terms of infinitesimals. Your comments and Obsessive's comments are making me realize this and this is helpful.

I also realize that if I were only doing pure maths, that would be very easy for me to do. I would just think in terms of operators and their properties and so on. But the difficulty is in now trying to connect this to years of physics training. I am not closed-minded to seeing things in a new light and I have a strong desire to move beyond the simple-minded picture of maths I have from years of physics training. But the difficulty is in re-expressing everything I know and have worked with over the years in terms of this new language.

For example, just to mention an elementary example, almost at the high school level: given the expression for the E field produced by a point charge, what is the E field at a distance "d" from an infinitely long line of charge with linear charge density [itex] \lambda [/itex]?
The physicist's approach is to separate the line into tiny sections of "infinitesimal" length dl, write the expression for the E field produced by this small section, making the approximation that all the charge in this section, [itex] \lambda dl [/itex], can be assumed to be located at the center (say), and sum the contributions from all the sections. What would I mean by "infinitesimal" in that context? Well, I imagine making the dl smaller and smaller until the sum converges to some value. In some sense, I realize that I always mean a "practical infinitesimal", so maybe that's why infinitesimals don't bother me.

But I am open to enlarging my views on this, if my views are incorrect at some level. But then my first question is obviously: what is the correct (i.e. mathematically sound) way to do the above calculation? How would a mathematician go about finding the E field of the infinite line of charge starting from the expression for a point charge? I know that the expression would end up being the same, but what would be the interpretation of a mathematician?
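Just so it's clear what procedure I have in mind, here is a throwaway numerical version of it (the values of [itex]\lambda[/itex] and d are made up; the point is only the "refine until it settles" step):

[code]
import numpy as np

# Infinite line of charge along the z-axis; field evaluated at perpendicular distance d.
eps0 = 8.854e-12
lam  = 1.0e-9          # linear charge density in C/m (made-up value)
d    = 0.1             # distance from the line in m (made-up value)

# Chop the line into pieces of length dl, treat each piece as a point charge lam*dl,
# sum the perpendicular components of the fields, and refine dl until the sum stops changing.
L = 1000.0                                   # truncate the "infinite" line at +/- L
for n in [10_000, 100_000, 1_000_000]:
    l  = np.linspace(-L, L, n)
    dl = l[1] - l[0]
    dE = (lam * dl / (4 * np.pi * eps0)) * d / (d**2 + l**2) ** 1.5
    print(n, dE.sum())

print("exact:", lam / (2 * np.pi * eps0 * d))   # the textbook answer lam/(2 pi eps0 d)
[/code]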

For example, whenever you think about "infinitesimals", try to mentally substitute the notion of "tangent vectors". So when you would normally think about an "infinitesimal neighborhood around P"... try thinking instead about the "tangent space at P".

Then, once you've done that, you no longer have to think about a cotangent vector as something that tells you how "big" an infinitesimal displacement is... you can now think of it as a linear functional on the tangent space.

In fact, I'm rather fond of using the notation P+e to denote the tangent vector e based at the point P. With this notation, we can actually write things like:

f(P+e) = f(P) + f'(P) e

and be perfectly rigorous. This is even better than infinitesimals -- that is an actual equality! If we were using infinitesimals, it is only approximate, and we have to wave our hands and argue that the error is insignificantly small.
This is very interesting and I do like this way of thinking about things. And I would have no problem if I was focusing on maths only. But then I run into conceptual problems when I try to connect to my physics background, do you see what I mean?
Through axioms! You define d/dx to be an operator that:
(1) is a continuous operator
(2) satisfies (d/dx)(f+g) = df/dx + dg/dx
(3) satisfies (d/dx)(fg) = f dg/dx + df/dx g
(4) satisfies dx/dx = 1

and I think that's all you need.
Ok. I like this.

But then how would you show that dsin(x)/dx = cos(x)? It seems that the above axioms can only be applied to obtain explicit results for powers of x! Of course, maybe the answer is that one must apply the axioms to the Taylor expansion of sin(x). But how does one define the Taylor expansion of sin(x)?? Usually, it's through derivatives, but here this leads to a vicious cycle.


Thank you for your comments, it's very much appreciated.

Patrick
 
  • #12
Well, I imagine making the dl smaller and smaller until the sum converges to some value. In some sense, I realize that I always mean a "practical infinitesimal", so maybe that's why infinitesimals don't bother me.
If you look carefully, you just said "I take the limit of Riemann sums", and we know that the limit of Riemann sums is an integral! :smile:

Another way to think about it is this.

You know the electrostatic field due to a point charge. You know if you add charges, you simply add the fields. The limit of this "operation" to an arbitrary charge distribution is simply a convolution -- i.e. an integral.


But how does one define the Taylor expansion of sin(x)?? Usually, it's through derivatives, but here this leads to a vicious cycle.
It all depends on how you define sin(x). Actually, when building everything up from scratch, I usually see people define sin(x) to be equal to the power series.
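For instance (a toy sketch; the 20-term cutoff is arbitrary and only adequate for smallish x), defining sin directly by its power series and comparing against the library function:

[code]
import math

def sin_series(x, terms=20):
    """sin(x) taken as its power series: sum over k of (-1)^k x^(2k+1) / (2k+1)!"""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

for x in [0.1, 1.0, 3.0]:
    print(x, sin_series(x), math.sin(x))   # the two agree to machine precision here
[/code]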
 
  • #13
Hurkyl said:
If you look carefully, you just said "I take the limit of Riemann sums", and we know that the limit of Riemann sums is an integral! :smile:

Another way to think about it is this.

You know the electrostatic field due to a point charge. You know if you add charges, you simply add the fields. The limit of this "operation" to an arbitrary charge distribution is simply a convolution -- i.e. an integral.
I agree completely. But then infinitesimals are unavoidable (in the sense I described), no? I mean, the only way to do the calculation is to do it the way I described.

My problem is that if I go back to an integral like [itex] \int x \, dx [/itex] or *any* integral, I still think of it exactly in the same way: as breaking it up into small pieces and taking the limit until the sum converges.

But then I am told "no, no, you should not think of the dx there as being something small that is summed over, it is an operator (in standard analysis) or a differential form (in diff geometry)".
So what is wrong in thinking of all integrals as being broken into a large number of small pieces and summing over? That has worked for all situations I have encountered so far, including doing calculations in thermodynamics, E&M, relativity, etc etc. And that works as well for cases for which the integrand is not exact so that the path matters. I just think of the differential (dx, dV, dq or whatever) as being a very small element of length, volume, charge, whatever. Small enough that the sum converges. And then summing over.

Then the question that someone like me obviously encounters when learning about differential forms is "why"? I mean, is it just a neat trick to unify vector calculus identities? Maybe, and that's fine. But the feeling I get is that even when I reach the point of actually carrying out the integration, it is wrong to revert back to thinking of the dx (say) as a small (infinitesimal) element. But that's the only way I know of actually carrying out an integral! Especially if the integrand is not exact!

It all depends on how you define sin(x). Actually, when building everything up from scratch, I usually see people define sin(x) to be equal to the power series.
Ok. Fair enough (so the definition of sin(x) as the opposite side over the hypotenuse in a right-angle triangle becomes secondary in that point of view? Just curious). What about the derivative of ln(x)? How would one show that the derivative is 1/x?

Regards

Patrick
 
  • #14
The thing to remember is that all of these things you do to characterize familiar operations like sines, logarithms, and derivatives work in both directions. For example, from the 4 axioms I provided, you can derive differential approximation (and Taylor series!), and then conclude that derivatives can be computed with limits.

If you're curious, if you defined the trig functions as power series, then you would probably wind up defining angle measure via the inverse trig functions, from which their geometric interpretation follows trivially.

You could even define two sine functions -- one geometrically, and one analytically -- and then eventually prove they are equal.


I mean, the only way to do the calculation is to do it the way I described.
When's the last time you actually calculated an integral that way? I usually calculate it symbolically, and if that doesn't work I'll try to approximate the integrand with something I can calculate symbolically, and make sure the error is tolerable. And, of course, if I use a computer program it will decompose it into small but still finite regions.


So what is wrong in thinking of all integrals as being broken into a large number of small pieces and summing over?
(emphasis mine)

Because you lock yourself into that way of thinking. It keeps you from looking at a problem in a way that might be conceptually simpler. And it doesn't work for problems that don't have a density interpretation.

One good example is the exterior derivative. It's an obvious thing to do from a purely algebraic perspective. It has a wonderful geometric interpretation à la Stokes' theorem. But I'd be at a total loss if you asked me to describe it pointwise.
 
  • #15
Hurkyl said:
Through axioms! You define d/dx to be an operator that:
(1) is a continuous operator
(2) satisfies (d/dx)(f+g) = df/dx + dg/dx
(3) satisfies (d/dx)(fg) = f dg/dx + df/dx g
(4) satisfies dx/dx = 1

and I think that's all you need.
Ok. But then, all the proofs physicists go through to obtain derivatives of functions using [tex] \lim_{\Delta x \rightarrow 0} { f(x + \Delta x) - f(x) \over \Delta x} [/tex]
become completely unnecessary?? A mathematician would look at those proofs and consider them completely unnecessary? Or plain wrong?
And if these proofs are unnecessary, do they work by "chance"? Or are they considered as complete and "convincing" to mathematicians as they are to physicists?

This is again the problem I always find myself facing. Mathematicians have a language which is different but at some level *must* be related to the physicist's approach. But different enough that it feels like there is the physicist's approach over here, and the mathematician's approach over there, and it's really hard to get anyone even interested in bridging the gap. That's what I am hoping to find help with here.

Patrick
 
  • #16
Hurkyl said:
The thing to remember is that all of these things you do to characterize familiar operations like sines, logarithms, and derivatives work in both directions. For example, from the 4 axioms I provided, you can derive differential approximation (and Taylor series!),
That sounds interesting and I would love to see this. It's not obvious to me (I still don't quite see how to obtain that dln(x)/dx = 1/x starting from the 4 axioms). Again, I am not trying to be difficult; I am just saying that seeing a few of the usual results worked out explicitly from the 4 axioms (such as the derivative of ln(x), a differential approximation of some function, one Taylor series) would clarify things greatly for me. I guess I learn a lot by seeing explicit examples.


If you're curious, if you defined the trig functions as power series, then you would probably wind up defining angle measure via the inverse trig functions, from which their geometric interpretation follows trivially.

You could even define two sine functions -- one geometrically, and one analytically -- and then eventually prove they are equal.



When's the last time you actually calculated an integral that way?

I usually calculate it symbolically, and if that doesn't work I'll try to approximate the integrand with something I can calculate symbolically, and make sure the error is tolerable. And, of course, if I use a computer program it will decompose it into small but still finite regions.



(emphasis mine)

Because you lock yourself into that way of thinking. It keeps you from looking at a problem in a way that might be conceptually simpler. And it doesn't work for problems that don't have a density interpretation.

One good example is the exterior derivative. It's an obvious thing to do from a purely algebraic perspective. It has a wonderful geometric interpretation ala Stoke's theorem. But I'd be at a total loss if you asked me to describe it pointwise.
Maybe, but one can also obtain the divergence theorem, Stokes' theorem, etc., completely by simply breaking volumes or surfaces into tiny elements, writing derivatives as limits where higher powers of the "infinitesimals" are neglected, summing, etc. All those theorems then come out without any problem (that's the way they are derived in physicists' E&M classes). Now, maybe there is something deeply wrong with this approach and differential forms are the only really correct way to do it, but that's not completely clear to me.

Consider the integration over something which is not an exact form, now. Let's say the integral of y dx over some given path. I have no problem defining this by breaking the path into "infinitesimals" and adding the contributions over the path. This is in no way more difficult conceptually than any other integral. But how does one think about doing the integral using the language of differential forms if the integrand cannot be written as d(something)?? How does one get the answer?

Thanks!

Patrick
 
  • #17
That sounds interesting and I would love to see this. It's not obvious to me
I did miss something. Unfortunately, I'm more used to algebraic treatments. :frown:

Now that I've thought it over more, I realize what I've missed is that I should postulate the mean value theorem as an axiom. So...

' is an operator on a certain class of continuous functions satisfying:

(1) f' is continuous
(2) If a < b, there exists a c in (a, b) such that:
f(b) - f(a) = f'(c) (b - a)

(The synthetic treatments I've seen for integration use the mean value theorem for integrals as an axiom, that's why I think I need it here)


I suppose with these axioms it's somewhat more clear how to deduce the traditional definition of derivative.

Blah, now I'm going to be spending the next few days trying to figure out how to do it without postulating the MVT. :frown: I know the rules I mentioned before give you the derivatives of anything you can define algebraically... but I haven't yet figured out what continuity condition I need to extend it to arbitrary differentiable functions.


Maybe, but one can also obtain the divergence theorem, Stokes' theorem, etc., completely by simply breaking volumes or surfaces into tiny elements
That only works when you're working with things that can be broken into tiny elements. (e.g. you'll run into trouble with distributions, manifolds without metrics, and more abstract spaces of interest)

But that's not the point I was trying to make. We generally aren't interested in breaking things into tiny pieces so that we can sum them, and the like. That's just one means towards computing the thing in which we're really interested. And there are other means. For example, Eudoxus's method of exhaustion.

Fixating on the integrals as being sums of tiny pieces distracts you from focusing on the things that are really interesting, like what the integral actually computes!


IMHO, it's much more important to focus on what something does, than what it is. (Especially since there are so many different, yet equivalent, ways to define what it "is")
 
  • #18
But how does one think about doing the integral using the language of differential forms if the integrand cannot be written as d(something)?? How does one get the answer?
Well, here's an example.

In the punctured plane (that is, there's a hole at the origin), there is a differential form w that measures angular distance about the origin. This is not an exact form.

So how would I think about integrating this form along a curve? Simple: I compute the angular displacement between the starting and ending points, and adjust it as necessary by counting how many times the curve loops around the origin. That's much simpler than trying to imagine breaking our curve up into little tiny pieces, and then adding up [itex](-y \, \Delta x + x \, \Delta y) / (x^2 + y^2)[/itex] over all of them.

But, unless I was tasked with actually computing something, I wouldn't even put that much effort into thinking about the integral. All I care about is that "integrating this form gives me angular distance about the origin" and I wouldn't think about it any further.
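And if you did want to grind it out by brute force, a rough numerical sketch (the wobbly two-turn loop below is just a made-up example curve) shows the "tiny pieces" sum recovering 2π times the winding number:

[code]
import numpy as np

# The angular form w = (-y dx + x dy) / (x^2 + y^2) on the punctured plane,
# summed over small pieces of a made-up loop that winds twice around the origin.
n = 200_000
t = np.linspace(0.0, 1.0, n)
x = (1.0 + 0.3*np.cos(10*np.pi*t)) * np.cos(4*np.pi*t)
y = (1.0 + 0.3*np.cos(10*np.pi*t)) * np.sin(4*np.pi*t)

dx, dy = np.diff(x), np.diff(y)
xm, ym = x[:-1], y[:-1]
total = np.sum((-ym*dx + xm*dy) / (xm**2 + ym**2))

print(total)            # approximately 4*pi
print(2 * 2*np.pi)      # winding number 2, times 2*pi
[/code]

But, as above, knowing that the answer is "angular distance about the origin" makes the brute-force sum unnecessary.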
 
  • #19
Hurkyl said:
Well, here's an example.

In the punctured plane (that is, there's a hole at the origin), there is a differential form w that measures angular distance about the origin. This is not an exact form.

So how would I think about integrating this form along a curve? Simple: I compute the angular displacement between the starting and ending points, and adjust it as necessary by counting how many times the curve loops around the origin. That's much simpler than trying to imagine breaking our curve up into little tiny pieces, and then adding up [itex](-y \, \Delta x + x \, \Delta y) / (x^2 + y^2)[/itex] over all of them.

But, unless I was tasked with actually computing something, I wouldn't even put that much effort into thinking about the integral. All I care about is that "integrating this form gives me angular distance about the origin" and I wouldn't think about it any further.

Ok. That's an interesting example. But it has a simple interpretation because this happens to be [itex] d \theta [/itex]. I can see that removing the origin makes it not exact.

But let's be more general. Let's say that instead, we consider integrating
[itex](-y^2 \, \Delta x + x \, \Delta y) / (x^2 + y^2)[/itex] along, say, a straight line from a certain point to another point. How would you set about doing this integral, without "breaking" up the trajectory into "small" pieces?

Thank you for the feedback, btw! I really appreciate it.

Patrick
 
  • #20
Hurkyl said:
I did miss something. Unfortunately, I'm more used to algebraic treatments. :frown:

Now that I've thought it over more, I realize what I've missed is that I should postulate the mean value theorem as an axiom. So...

' is an operator on a certain class of continuous functions satisfying:

(1) f' is continuous
(2) If a < b, there exists a c in (a, b) such that:
f(b) - f(a) = f'(c) (b - a)

(The synthetic treatments I've seen for integration use the mean value theorem for integrals as an axiom, that's why I think I need it here)

I suppose with these axioms it's somewhat more clear how to deduce the traditional definition of derivative.

Blah, now I'm going to be spending the next few days trying to figure out how to do it without postulating the MVT. :frown: I know the rules I mentioned before give you the derivatives of anything you can define algebraically... but I haven't yet figured out what continuity condition I need to extend it to arbitrary differentiable functions.
I hope this won't be driving you crazy :smile:

I guess I learn more by specific examples, so just seeing how to get the derivative of sin(x) and ln(x) would clarify things greatly (for sin(x), are you still saying that the infinite expansion must be postulated?)

That only works when you're working with things that can be broken into tiny elements. (e.g. you'll run into trouble with distributions, manifolds without metrics, and more abstract spaces of interest)
I can appreciate this but for now I just wanted to understand integration over differential forms. And since one can always "feed" vectors to differential forms to get numbers, I did not see any problem with this approach (of breaking up into tiny pieces).

But that's not the point I was trying to make. We generally aren't interested in breaking things into tiny pieces so that we can sum them, and the like. That's just one means towards computing the thing in which we're really interested. And there are other means. For example, Eudoxus's method of exhaustion.

By fixating on the integrals as being sums of tiny pieces, it distracts you from focusing on the things that are really interesting, like what the integral actually computes!


IMHO, it's much more important to focus on what something does, than what it is. (Especially since there are so many different, yet equivalent, ways to define what it "is")


Fair enough. But what if you have to integrate something as simple as
"y dx" over a specified path. How to proceed then without breaking into tiny pieces?

After our exchanges, I dug out a book I have: "Advanced Calculus: A Differential Forms Approach" by Harold Edwards.

Maybe his presentation is not standard, but the way he integrates over forms is by doing it exactly the way I would:

In general, an integral is formed from an integrand which is a 1-form, 2-form or 3-form, and a domain of integration which is, respectively, an oriented curve, a surface or a solid. The integral is defined as the limit of approximating sums and an approximating sum is formed by taking a finely divided polygonal approximation to the domain of integration, "evaluating" the integrand on each small oriented polygon by choosing a point P in the vicinity of the polygon, by evaluating the functions A, B, etc at P to obtain a constant form and by evaluating the constant form on the polygon in the usual way"
(p.26, 1994 edition)

Here, when he talks about "evaluating", he means feeding line elements or triangles or cubes to the differential forms. And A, B, etc. are the functions multiplying the basis forms, as in A dx ^ dy + B dx ^ dz...

Again, I don't know if that's common thinking among people more mathematically sophisticated than me.


Now, I agree with what you said in a previous post that this is not the way one usually goes about carrying out integrals! One uses the fundamental theorem of calculus (FTC). I agree, but I see the FTC as a shortcut to get the answer when it is possible this way (I mean that it's not always possible to find a closed-form expression for the antiderivative). Whereas the limit of sums remains the fundamental definition.

Then I read this in the book:
"At this point two questions arise: How can this definition of "integral" be made precise? How can integrals be evaluated in specific cases? It is difficult to decide which of these questions should be considered first. On the one hand, it is hard to comprehend a complicated abstraction such as "integral" without concrete numerical examples; but, on the other hand, it is hard to understand the numerical evaluation of an integral without having a precise definition of what the integral is. Yet, to consider both questions at the same time would confuse the distinction between the *definition* of integrals (a slimits of sums) and the *method* of *evaluating* integrals (using the FTC), This confusion is one of the greatest obstacles to understanding calculus and should be avoided at all cost"

(all emphasis are his).


Then, after discussing integrals as sums in the infinite limit, he gets to the FTC which he states as having two parts:


Part I: Let F(t) be a function for which the derivative F'(t) exists and is a continuous function for t in the interval [a,b]. Then
[tex] \int_a^b F'(t) \, dt = F(b) - F(a) [/tex].

Part II: Let f(t) be a continuous function on [a,b]. Then there exists a differentiable function F(t) on [a,b] such that f(t) = F'(t).

Part I says that in order to evaluate an integral it *suffices to write the integrand as a derivative*.

Part II says that theoretically this procedure always works, that is, theoretically any continuous integrand can be written as a derivative... Anyone who has been confronted with an integrand such as
[itex] f(t) = { 1 \over {\sqrt{ 1 - k^2 \sin^2 t}}} [/itex] with or without a table of integrals knows how deceptive this statement is. In point of fact, II says little more than *the definite integral of a continuous function over an interval converges*.

Emphasis his...

A final quote:

Statement II is confusing to many students because of a misunderstanding about the word "function". When one thinks of a function one unconsciously imagines a simple rule such as F(t)=sin(sqrt(t)) which can be evaluated by simple computation, by consultation of a table or, at worse, by a manageable machine computation. The function defined by [itex] F = \int f(t) dt [/itex] need not be a standard function at all and a priori there is no reason to believe that it can be evaluated by any means other than by forming approximating sums and estimating the error as in the preceding chapter.



I know you know all that, but this fits perfectly with my conception of doing integrals (using differential forms or not).

I am wondering if you have criticisms for what he is saying.


Regards

Patrick
 
  • #21
nrqed said:
are you still saying that the infinite expansion must be postulated?)

May I jump in for a moment on this question? The postulate or axiom you need to do limits is called a Completeness Axiom. It says any bounded infinite set of (whatever you are talking about) has a cluster point, that is, a point (value, whatever) in the topological space of whatevers where any neighborhood of the cluster point contains infinitely many members of the set.

So it is assumed in Real Variables Theory that the real line is Complete and hence any set of partial sums that stays between two limiting numbers has a cluster point. And on the real line of course every point represents a real number. So that number is the sum of the series. You can do all this with epsilons and deltas, but you still need that Completeness Axiom.

There are other ways of stating the completeness axiom for the reals, and Dedekind's Cut is a famous one. He was the first mathematician to realize that this needs to be axiomatized.
 
  • #22
nrqed said:
Iguess I learn more by specific examples, so just seeing how to get the derivative of sin(x) and ln(x)
As I said, it all depends on how you define it. (e.g. I often see ln effectively defined as the antiderivative of 1/x)

If you defined the trig functions in a way that allowed you to prove the trig identities, then using the product rule for derivatives yields:

sin² x + cos² x = 1
2 sin x sin' x + 2 cos x cos' x = 0
2 sin 0 sin' 0 + 2 cos 0 cos' 0 = 0
cos' 0 = 0

sin(a + x) = sin a cos x + cos a sin x
sin'(a + x) = sin a cos' x + cos a sin' x
sin' a = sin a cos' 0 + cos a sin' 0 = cos a sin' 0

And that's the best we can do -- remember that there's an ambiguity in scale for trig functions. (Is the angle measured in radians? Degrees? Gradians? Some other esoteric system?)
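(If you want a quick numeric sanity check of that relation -- my own throwaway example, using the "degrees" convention: sin'(0) comes out as pi/180 rather than 1, and sin'(a) = cos(a) sin'(0) still holds.)

[code]
import math

def numerical_derivative(f, x, h=1e-6):
    # symmetric difference quotient
    return (f(x + h) - f(x - h)) / (2 * h)

sin_deg = lambda x: math.sin(math.radians(x))  # "sine" with the angle in degrees

a = 30.0  # degrees
lhs = numerical_derivative(sin_deg, a)
rhs = math.cos(math.radians(a)) * numerical_derivative(sin_deg, 0.0)

print(lhs, rhs)        # both ~ 0.015115: sin'(a) = cos(a) * sin'(0)
print(math.pi / 180)   # sin'(0) in this convention ~ 0.017453, not 1
[/code]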



Fair enough. But what if you have to integrate something as simple as
"y dx" over a specified path. How to proceed then without breaking into tiny pieces?
If I actually had to calculate it, and I didn't have a clever method... I would first "pullback" the integral to an ordinary integral over [0, 1], and compute that with my favorite method.

(Of course, if I didn't have to compute it but merely perform some manipulations with it, I wouldn't even do that much if I could avoid it.)
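(Here is a minimal sketch of what I mean by "pulling back" -- the particular path is just something I made up: integrate y dx along the quarter circle (x, y) = (cos t, sin t), t in [0, pi/2]. The pullback turns y dx into sin(t) * (-sin(t)) dt, an ordinary integral over an interval.)

[code]
import math
from scipy.integrate import quad

# gamma(t) = (cos t, sin t); the pullback of y dx is y(t) * x'(t) dt
value, _ = quad(lambda t: math.sin(t) * (-math.sin(t)), 0.0, math.pi / 2)
print(value)   # -pi/4, about -0.7854
[/code]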


I know you know all that, but this fits perfectly with my conception of doing integrals (using differential forms or not).

I am wondering if you have criticisms for what he is saying.
No -- in fact I want to re-emphasize one of his points. There is (often) a distinction between the definition of an integral, and how you actually work with them.


When you're actually using the concept, it doesn't matter which facts about that concept are "definitions" and which ones are "theorems".

By focusing in on the definition of the integral, you strongly bias yourself towards the facts that are more closely related to the definition. I'm not saying that by doing this you cannot use other facts... just that you are mentally boxing yourself in. By doing this, you will become better at working problems closely related to breaking things into tiny pieces and adding them together -- but you will not become as proficient in exploiting the other aspects of the integral.


Also the choice of which ones are called definitions is not unique. (e.g. integrals can be defined as an operator satisfying the mean value theorems and that the integral over a whole is the sum of the integrals over the parts)
 
  • #23
I don't want to interrupt the discussion on differential forms, but I'd like to give my own thoughts on the topic of the physicists/mathematicians barrier.

I'll give an example from an area of maths I've been recently studying.

Take the calculating theorem for the fundamental homotopy group. (the one which involves triangulation)

How I learned to "understand" this concept was to take the definition of the calculating theorem and then attempt to understand the definition by looking at it in tandem with examples of its use.

I looked at it being used on [tex]S^{1}[/tex], [tex]B^{2}[/tex], the Möbius band and [tex]T^{2}[/tex].
Only after seeing it calculated for the torus did I understand the definition of the calculating theorem.

Then I tried several more examples, after which I went back and attempted to understand the derivation of the theorem.

However my friend who does pure mathematics, read the derivation first and then did examples.

Perhaps this difference in pedagogical order is part of what separates mathematicians and physicists. You'll notice that my way of learning (not that I own it) is often used in texts on mathematics for physicists.
(e.g. Shlomo Sternberg's Group Theory for Physicists)
 
  • #24
ObsessiveMathsFreak said:
But, strictly speaking, one should integrate the electric flux D over surfaces. Forms make this more explicit by enabling you to define E and D in such a way that each can only be integrated over the correct type of manifold, i.e. curve, surface or volume. D is a two-form, and is in fact the Hodge dual of E if you wanted to be more "concise" about things.



Mathematically there is no explanation whatsoever. E and B are simply vector fields in vector calculus. The reason comes only from the physics. Physically speaking, the reason is that E is the electric field and B is in fact the magnetic flux density. Its units can be measured in webers per metre squared, Wb/m^2, so it must be evaluated as a flow through areas, so strictly speaking, it's a two-form. Its "dual" is the magnetic field H, which is a one-form like the electric field.

This might be considered a matter of extreme pedantry, particularly when the fields and fluxes typically differ only by the constants [tex]\epsilon[/tex] and [tex]\mu[/tex]. But sometimes you need to be pedantic. In my case, this is useful as I am working with materials in which the permeability and permittivity are not constant. Your mileage may vary.
I have to think some more about all this in order to have clearer questions. E&M is surely a good example to focus on to see the usefulness of the differential form approach in physics.

I am still bothered by this last point about D vs E. It is strange that multiplying a one-form by a constant (or even if the material is not isotropic/homogeneous and so on, epsilon is still a function, so a 0-form, no?) would give a two-form! How is that possible? Is that something added on top of the usual axioms of differential geometry in order to apply to physics? I hope you can see why this is unsettling!

I understand what you mean by infinitesimals to be variables of integration dx, dy, dz etc. You seem to have been introduced to variables of integration from the point of view of Riemann sums, i.e. [tex]\int_a^b f(x)\,dx = \lim_{\Delta x \rightarrow 0} \sum_i f(x_i) \Delta x [/tex].
Right. And I know I need to get away from this, but it's hard when it has been so fruitful and effective in solving thousands of physics problems. I am open-minded about new concepts, but the first step is to see how everything one has learned before fits within the new structure. And there are things that I just cannot see as being integrals over differential forms!
Strictly speaking, dx is not an infinitesimally small [tex]\Delta x[/tex], but is rather an operator applied to a function to obtain an "anti-derivative", i.e. to integrate something. Similarly, strictly speaking, dy/dx is not an infinitesimally small ratio, but is the operator d/dx applied to the function y(x), i.e., [tex]\frac{d}{dx}\left(y(x)\right)[/tex].
Ok, that's a very good point. And, if I understand correctly, your comment is within "standard analysis", right? I mean it's even before starting to think of the "dx, etc." as differential forms, right?

So maybe I need to first absorb this view before moving on to differential forms instead of mixing everything together.
Let me see if I get this straight. I should really think of [itex] \int dx [/itex] as an operator that gives the antiderivative. Ok. *OR* I can think of it as a shorthand for a Riemann sum. The fact that the two views are equivalent is established by the fundamental theorem of calculus.
Is that a good way to put it?

But then the obvious question I have is about the cases when the integral can *not* be written in terms of elementary functions, i.e. an antiderivative cannot be written in closed form. How should I think about this then? Should one say that the definition as an operator fails, so that one must go back to the sum definition? You see, this is one of my psychological blocks. The two definitions are then not completely equivalent, and the one in terms of a Riemann sum is more powerful.


Is that a fair assessment or am I completely wrong in thinking this way?
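(To make my own question concrete, here is the simplest check I can think of, for f(x) = x^2 on [0, 1]: the Riemann sum and the difference of antiderivative values agree, which is exactly the equivalence I have in mind.)

[code]
# Riemann sum for the integral of x^2 over [0, 1]
n = 1_000_000
dx = 1.0 / n
riemann = sum(((i * dx) ** 2) * dx for i in range(n))

# Antiderivative route: F(x) = x^3 / 3, answer F(1) - F(0)
F = lambda x: x**3 / 3
print(riemann, F(1.0) - F(0.0))   # both about 0.3333
[/code]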
However, your view is not entirely wrong, as when it comes down to the final solution of many physical problems, numerical estimates of integration and differentiation are used, and dx and dy do become approximated by [tex] \Delta x[/tex] and [tex]\Delta y[/tex].

As to the point of view that every integration should be thought of as a differential form, or taken over differential forms; this is clearly nonsense.
I am glad to hear that!
I have seen more than once "what are differential forms? Those are the things we are integrating over!"

So I started trying to interpret every single integral I do in terms of differential forms but that just did not make sense sometimes.

The obvious question is: do you have a few examples where the integration is not over a differential form? And, more importantly, how can one tell?


I want to go back to other very interesting points you made in this post but i will do that a bit later after thinking a bit more.

Thank you for your help!

Patrick
 
  • #25
nrqed said:
I am still bothered by this last point about D vs E. It is strange that multiplying a one-form by a constant (or even if the material is not isotropic/homogeneous and so on, epsilon is still a function, so a 0-form, no?) would give a two-form!

No, no. The difference in multiplying by a constant only occurs with the vector field versions of D and E. With the forms version this isn't the case. From here on, I'll denote form by using an accent, and good old fashioned vector fields with the regular boldface.

So the vector fields are [tex]\mathbf{D}[/tex] and [tex]\mathbf{E}[/tex], and the forms are [tex]\acute{D}[/tex] and [tex]\acute{E}[/tex], which are a two- and a one-form respectively. [tex]\mathbf{D}[/tex] and [tex]\acute{D}[/tex] both represent the same physical quantity, which I'll denote as just plain D, albeit in a different mathematical fashion. The same goes for [tex]\mathbf{E}[/tex] and [tex]\acute{E}[/tex], representing E.

Take a look at these physical quantities. E is the electric field. D is the electric flux, or sometimes the electric displacement. E represents the gradient of the potential difference (voltage). D, on the other hand, represents...? To be very honest, I'm not entirely sure what it is supposed to represent. Its units are coulombs per metre squared (C/m^2), so it seems to be measuring an amount of charge over a surface, but there are no "real" charges on these surfaces. But I digress.

The point is that E is concerned with the potential difference as you travel along lines. D is concerned with the charge, or perhaps flux, over or through surfaces. To ask the question, what is the amount of charge through a line, doesn't make much sense.

Now with the vector representation, this isn't very clear. We have [tex]\mathbf{D}=\epsilon \mathbf{E}[/tex], and it seems that what goes for one will go for the other. This is a result of our formulation using vector calculus, which is more concerned with the individual representation of a quantity at a point than it is with the integral of quantities over manifolds. In this case, vector calculus can't see the wood for the trees. It doesn't know that we should only perform certain integrations over certain types of manifold, curves or surfaces.

We can make this explicit by defining the one-form [tex]\acute{E}[/tex] and the two-form [tex]\acute{D}[/tex]. Now, the one-form can only be integrated over lines, and the two-form only over surfaces. The disadvantage here is that we lose the interpretation of the value of a field at any one specific point, with the benefit of ensuring our interpretations are correct for our integrals. Forms cannot pick out trees from the wood. Incidentally [tex]\acute{D} = \epsilon \star \acute{E}[/tex], where [tex]\star[/tex] is the maddeningly defined Hodge dual operator, so the two are still related as in the vector calculus case.
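For concreteness (a sketch of my own, assuming flat 3-space with the Euclidean metric), the star acts on the basis one-forms as

[tex]
\star dx = dy \wedge dz, \qquad \star dy = dz \wedge dx, \qquad \star dz = dx \wedge dy
[/tex]

so that

[tex]
\acute{D} = \epsilon \star \acute{E} = \epsilon \left( E_x \, dy \wedge dz + E_y \, dz \wedge dx + E_z \, dx \wedge dy \right).
[/tex]

Multiplying by [itex]\epsilon[/itex] (a 0-form) does not change the degree at all; it is the [itex]\star[/itex] that turns the one-form into a two-form.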

As to why E is integrated along lines, and D over surfaces, I'm afraid you'll have to consult the physics on the deeper meaning behind that. I'm just a mathematician.

nrqed said:
And, if I understand correctly, your comment is within "standard analysis", right? I mean it's even before starting to think of the "dx, etc." as differential forms, right?

Yes, I'm not talking about differential forms at all there. Just regular integration. To avoid confusion, I'll mark out forms in some way, such as with an accent. Thus dx is a variable of integration, and [tex]d\acute{x}[/tex] is a form. The two are, of course, totally different things, despite what anyone might be led to believe by their similarity.


nrqed said:
Let me see if I get this straight. I should really think of [itex] \int dx [/itex] as an operator that gives the antiderivative. Ok. *OR* I can think of it as a shorthand for a Riemann sum. The fact that the two views are equivalent is established by the fundamental theorem of calculus.
Is that a good way to put it?

But then the obvious question I have is about the cases when the integral can *not* be written in terms of elementary functions, i.e. an antiderivative cannot be written in closed form. How should I think about this then? Should one say that the definition as an operator fails, so that one must go back to the sum definition? You see, this is one of my psychological blocks. The two definitions are then not completely equivalent, and the one in terms of a Riemann sum is more powerful.

But hold on. What if no limits of integration are given at all? Suppose I simply ask for [tex]\int x^2 dx[/tex]. What is the answer according to Riemann sums? None can be given, as there are no limits or places between which to compute the sum. Some attempt to bypass this by equating the last integral with [tex]\int_0^x x^2 dx[/tex], but this is not strictly correct for all integrals as the antiderivative may not exist at 0. For example [tex]\int_0^x \ln(x-1) dx[/tex]

When you write down [tex]\int f(x) dx[/tex], you are asking, "What is the antiderivative of f(x)?", or "What function when differentiated with respect to x gives f(x)?". When you write [tex]\int_a^b f(x) dx[/tex], you are asking the previous, but now you are also asking, what is the value of that function at b minus its value at a. It is this second question that can be approximated by a Riemann sum.

Though it is true that certain antiderivatives cannot be found in closed form, it is still often far more preferable to perform the operation of integration than it is to approximate a Riemann sum. An integration over an interval only requires the computation of two values and a subtraction. A Riemann sum requires a huge number of computations and additions.

I learned integration before I was introduced to Riemann sums, or indeed any kind of sums, so I don't really see integration as requiring infinite additions. I must try and fish out my old notes on this.

nrqed said:
I have seen more than once "what are differential forms? Those are the things we are integrating over!"

That is an incorrect assessment. Differential forms are simply operators on vectors. Often, however, in fact almost always, differential forms are themselves integrated over manifolds. You no more integrate "over" a form than you would integrate "over" a function. You must integrate a function over an interval, and you must similarly integrate a differential form over a manifold.

nrqed said:
So I started trying to interpret every single integral I do in terms of differential forms but that just did not make sense sometimes.

No. This isn't the case. The notation has laid a trap for you. Forms are operators on vectors, or multi variable functions if you will.

nrqed said:
The obvious question is: do you have a few examples where the integration is not over a differential form? And, more importantly, how can one tell?

Of course. Indefinite integration is not performed over any manifold. It is an operation on functions.
 
  • #26
ObsessiveMathsFreak said:
But hold on. What if no limits of integration are given at all? Suppose I simply ask for [tex]\int x^2 dx[/tex]. What is the answer according to Riemann sums? None can be given, as there are no limits or places between which to compute the sum. Some attempt to bypass this by equating the last integral with [tex]\int_0^x x^2 dx[/tex], but this is not strictly correct for all integrals as the antiderivative may not exist at 0. For example [tex]\int_0^x \ln(x-1) dx[/tex]

When you write down [tex]\int f(x) dx[/tex], you are asking, "What is the antiderivative of f(x)?", or "What function when differentiated with respect to x gives f(x)?". When you write [tex]\int_a^b f(x) dx[/tex], you are asking the previous, but now you are also asking, what is the value of that function at b minus its value at a. It is this second question that can be approximated by a Riemann sum.

Though it is true that certain antiderivatives cannot be found in closed form, it is still often far more preferable to perform the operation of integration than it is to approximate a Riemann sum. An integration over an interval only requires the computation of two values and a subtraction. A Riemann sum requires a huge number of computations and additions.

I learned integration before I was introduced to Riemann sums, or indeed any kind of sums, so I don't really see integration as requiring infinite additions. I must try and fish out my old notes on this.

A couple of comments:

1. one should continually remember that [tex]\int f(x) dx[/tex] is not necessarily *the* antiderivative of f(x) but the family of anti-derivatives of f(x), i.e. don't forget that +C. And the anti-derivatives don't always exist, i.e. there are functions that are not Riemann-integrable.

2. I'm a little surprised that no one in this discussion has pointed out the "standard analysis" schedule of events regarding the definition of integrals:

In analysis, the first definition is that of Riemann integrability over an interval. That is, the definition of [tex]\int_a^b f(x)dx[/tex] is given as the limit of Riemann sums (if the limit exists) over a given interval [a,b]. This gives us a definition of Riemann integrability over a given interval [a,b]. Here the dx is a convention that is there just to indicate what variable is getting integrated over. Some analysis texts actually don't even bother putting it there.

From here, it is noted that we can now form new functions given by
[tex]F(t)=\int_a^t f(x)dx[/tex]. The FTC then allows us to calculate the integrals of some Riemann-integrable functions quite nicely. This is quite important since, as pointed out, calculating the integral from Riemann sums can be quite difficult. Furthermore, it is at this point that the various familiar calculus results can be proven rigourously.

From there, we can now form a concept of [tex]\int f(x)dx[/tex] as the family of functions [tex]F_a(t)=\int_a^t f(x)dx[/tex] indexed by the choice of lower limit a -- equivalently, a family of antiderivatives differing by constants C.

That is, rigourously, the Riemann integral is derived from the sums, not from the antiderivatives.

I realize that this doesn't quite fit in with the discussion, but I think it's important to note how many, if not most, mathematicians view the theoretical background of Riemann-integrability. Of course, in practice, they simply put everything they can in terms of the good old calculus they learned in college.

3. Incidentally, Hurkyl, I think you need to put in the Chain Rule as one of your axioms of the derivative.
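(A small symbolic illustration of the "new functions" point above -- my own example, using sympy: F(t) = ∫_0^t exp(-x^2) dx exists since the integrand is continuous, but it is not an elementary function.)

[code]
import sympy as sp

x, t = sp.symbols('x t')

# The definite integral from 0 to t defines a perfectly good function of t,
# but sympy can only express it through the non-elementary function erf.
F = sp.integrate(sp.exp(-x**2), (x, 0, t))
print(F)   # sqrt(pi)*erf(t)/2
[/code]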
 
  • #27
I'm a little surprised that no one in this discussion has pointed out the "standard analysis" schedule of events regarding the definition of integrals:
It's sort of contrary to the point I'm trying to make (that "how you use it" is more important than "what it is"), so I've avoided it. :smile:


From here, it is noted that we can now form new functions given by [itex]F(t)=\int_a^t f(x)dx[/itex]
I wanted to clarify that this doesn't give us new functions -- antidifferentiation can only give us functions that already existed.

It does, however, allow us to write some functions that cannot be written through "elementary" means -- only in that sense does it give us something "new".


3. Incidentally, Hurkyl, I think you need to put in the Chain Rule as one of your axioms of the derivative.
Which axiom set are you referring to?

The latter one (derivatives are continuous, and the mean value theorem) is sufficient to derive the limit expression for derivatives.

For the former one (continuity + algebraic manipulations), my idea had been approximating arbitrary functions with polynomials... and since the chain rule for polynomials follows from the other algebraic rules, I didn't think I needed it as an axiom. I know this axiom set isn't enough, but I don't think adding the chain rule is enough either.
 
  • #28
Hurkyl said:
It's sort of contrary to the point I'm trying to make (that "how you use it" is more important than "what it is"), so I've avoided it. :smile:

I understand now. However, it seems to me that one needs to use the limit definition of (and subsequent theorems about) the Riemann integral to prove the existence of the antiderivative of a given function (except of course "easy" functions like polynomials).

Hurkyl said:
I wanted to clarify that this doesn't give us new functions -- antidifferentiation can only give us functions that already existed.

It does, however, allow us to write some functions that cannot be written through "elementary" means -- only in that sense does it give us something "new".

It seems to me that, if we are using the limit definition of Riemann integration, they will give you new functions, which we later find out via FTC are antiderivatives (in some circumstances) of the original functions.

Hurkyl said:
Which axiom set are you referring to?

The latter one (derivatives are continuous, and the mean value theorem) is sufficient to derive the limit expression for derivatives.

For the former one (continuity + algebraic manipulations), my idea had been approximating arbitrary functions with polynomials... and since the chain rule for polynomials follows from the other algebraic rules, I didn't think I needed it as an axiom. I know this axiom set isn't enough, but I don't think adding the chain rule is enough either.

I see. Never mind. Seems to me that, in order to figure out whether you have the correct number of axioms, you just need to prove that the derivative of a given function at a given point according to the axioms is equal to the usual limit definition.
 
  • #29
ObsessiveMathsFreak said:
No, no. The difference in multiplying by a constant only occurs with the vector field versions of D and E. With the forms version this isn't the case. From here on, I'll denote form by using an accent, and good old fashioned vector fields with the regular boldface.

So the vector fields are [tex]\mathbf{D}[/tex] and [tex]\mathbf{E}[/tex], and the forms are [tex]\acute{D}[/tex] and [tex]\acute{E}[/tex], which are a two- and a one-form respectively. [tex]\mathbf{D}[/tex] and [tex]\acute{D}[/tex] both represent the same physical quantity, which I'll denote as just plain D, albeit in a different mathematical fashion. The same goes for [tex]\mathbf{E}[/tex] and [tex]\acute{E}[/tex], representing E.

Take a look at these physical quantities. E is the electric field. D is the electric flux, or sometimes the electric displacement. E represents the gradient of the potential difference (voltage). D, on the other hand, represents...? To be very honest, I'm not entirely sure what it is supposed to represent. Its units are coulombs per metre squared (C/m^2), so it seems to be measuring an amount of charge over a surface, but there are no "real" charges on these surfaces. But I digress.

The point is that E is concerned with the potential difference as you travel along lines. D is concerned with the charge, or perhaps flux, over or through surfaces. To ask the question, what is the amount of charge through a line, doesn't make much sense.

Now with the vector representation, this isn't very clear. We have [tex]\mathbf{D}=\epsilon \mathbf{E}[/tex], and it seems that what goes for one will go for the other. This is a result of our formulation using vector calculus, which is more concerned with the individual representation of a quantity at a point than it is with the integral of quantities over manifolds. In this case, vector calculus can't see the wood for the trees. It doesn't know that we should only perform certain integrations over certain types of manifold, curves or surfaces.

This is extremely interesting.
I think it's a very good example to focus on.

The big problem, for me, is that I still have a hard time seeing the wood for the trees.

For example, the work done by the electric field as a particle is moved from a point A to a point B is
[tex] q \int_A^B \vec{E} \cdot d\vec{l} [/tex]

Then this should be seen as a one-form, right? Of course, I am used to thinking of this as a vector field with components E_x, E_y and E_z, dotted with the line element "dl".

Now, going to a one-form picture, this would be a one-form [itex] E_x dx + E_y dy + E_z dz [/itex]. So in that case, the components of the vector field are the same as the components of the one-form.

What if we are in, say, spherical coordinates? Since the metric is not the identity matrix, the components of the one-form and of the vector field should be different. Let's say that the components of the vector field are called [itex] E^r, E^\theta, E^\phi[/itex]. Then, what are the components of the one-form?

Well, I would write [itex] d\vec{l} [/itex] in spherical coordinates and write the integrand as [itex] E_r dr +E_\theta d\theta + E_\phi d\phi [/itex], right? So the components of the one-form would be different from the components of the vector field, and I guess that this would be equivalent to using the metric to transfer from one to the other, [itex] E_i = g_{ij} E^j [/itex], right?
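If I write it out myself (just my own check, assuming the standard coordinate-basis spherical metric):

[tex]
g = \mathrm{diag}(1, \, r^2, \, r^2 \sin^2\theta), \qquad
E_r = E^r, \quad E_\theta = r^2 E^\theta, \quad E_\phi = r^2 \sin^2\theta \, E^\phi
[/tex]

so the one-form would be [itex] E_r \, dr + E_\theta \, d\theta + E_\phi \, d\phi [/itex] with the components lowered by the metric, exactly as in [itex] E_i = g_{ij} E^j [/itex].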





Thank you for your comments, they are very stimulating.



We can make this explicit by defining the one-form [tex]\acute{E}[/tex] and the two-form [tex]\acute{D}[/tex]. Now, the one-form can only be integrated over lines, and the two-form only over surfaces. The disadvantage here is that we lose the interpretation of the value of a field at any one specific point, with the benefit of ensuring our interpretations are correct for our integrals. Forms cannot pick out trees from the wood. Incidentally [tex]\acute{D} = \epsilon \star \acute{E}[/tex], where [tex]\star[/tex] is the maddeningly defined Hodge dual operator, so the two are still related as in the vector calculus case.
Ok, that makes great sense. I am reading "Advanced Calculus" by Edwards, and when he discusses E&M he still writes that the two-form [itex] {\acute D} [/itex] is [itex]\epsilon {\acute E} [/itex], and when he writes D as a two-form he just says that D is defined that way without further ado. But now I realize that he never defines the Hodge dual, so this had to come out of nowhere.

As to why E is integrated along lines, and D over surfaces, I'm afraid you'll have to consult the physics on the deeper meaning behind that. I'm just a mathematician.
I am not concerned about that level of understanding for now.

Yes, I'm not talking about differential forms at all there. Just regular integration. To avoid confusion, I'll mark out forms in some way, such as with an accent. Thus dx is a variable of integration, and [tex]d\acute{x}[/tex] is a form. The two are, of course, totally different things, despite what anyone might be led to believe by their similarity.

Ok. It's good for me to hear it said very explicitly!

But hold on. What if no limits of integration are given at all? Suppose I simply ask for [tex]\int x^2 dx[/tex]. What is the answer according to Riemann sums? None can be given, as there are no limits or places between which to compute the sum. Some attempt to bypass this by equating the last integral with [tex]\int_0^x x^2 dx[/tex], but this is not strictly correct for all integrals as the antiderivative may not exist at 0. For example [tex]\int_0^x \ln(x-1) dx[/tex]

When you write down [tex]\int f(x) dx[/tex], you are asking, "What is the antiderivative of f(x)?", or "What function when differentiated with respect to x gives f(x)?".
Good point. Yes, indefinite integrals have a separate status in my mind since any actual physical application would always involve definite integrals. But I see now that your way of viewing integrals unifies the two cases (definite and indefinite) better.
When you write [tex]\int_a^b f(x) dx[/tex], you are asking the previous, but now you are also asking, what is the value of that function at b minus its value at a. It is this second question that can be approximated by a Riemann sum.
oh!



You are saying that the correct way to think of the Riemann sum is really as an approximation of F(B) - F(A) (where F is the antiderivative), right? Is that what you are saying?

Here's where I am running into a mental block with all this. If you could help me clear this up, I would be grateful!

Ok, I am willing to go along with the view that [itex] \int dx [/itex] must be seen as an operator giving the antiderivative and that a definite integral just amounts to taking the difference F(B) - F(A).

But what about integrating, say, y dx in the x-y plane from a point A to a point B? Then it is impossible to write the result as something evaluated at the final point minus something evaluated at the initial point! The result depends on the path. So how does this fit in? Does one then extend the meaning of the antiderivative "F" to something which is a function of the path?
It's not even sensible anymore to use the notation F(B) - F(A). You know what I mean.

You see my mental block now. This is the reason why I have trouble letting go of the Riemann sum approach as the fundamental definition. It is easy to define the integral in this case using my naive view: by breaking up the path into tiny segments, small enough that the integrand can be approximated as being constant, summing, taking the limit, etc.

It's true that in practice, if the path is, say, along the line y=x, I would simply integrate [itex]\int_A^B x dx [/itex], so yes, it could be seen as an ordinary antiderivative calculation. But if I write this as F(B) - F(A), and then write that my initial integral is given by this, it gives the misleading impression that the result depends only on the final and initial points.

So saying that we found an antiderivative in this case and that the result is F(B) - F(A) is difficult for me to understand.
What is the correct way to think of this?
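(Here is the little check that convinces me the path matters -- my own example: integrating y dx from (0,0) to (1,1) along two different paths gives two different answers, so no single F(B) - F(A) can account for both.)

[code]
from scipy.integrate import quad

# Path 1: y = x, parameterised as (t, t); integrand is y(t) * x'(t) = t
I1, _ = quad(lambda t: t, 0.0, 1.0)

# Path 2: y = x^2, parameterised as (t, t^2); integrand is t^2
I2, _ = quad(lambda t: t**2, 0.0, 1.0)

print(I1, I2)   # 0.5 versus 0.3333...: the value depends on the path
[/code]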




Though it is true that certain antiderivatives cannot be found in closed form, it is still often far more preferable to perform the operation of integration than it is to approximate a Riemann sum. An integration over an interval only requires the computation of two values and a subtraction. A Riemann sum requires a huge number of computations and additions
Of course. But my questions were not about ease of use but about the fundamental meaning of integration.

I learned integration before I was introduced to Riemann sums, or indeed any kind of sums, so I don't really see integration as requiring infinite additions. I must try and fish out my old notes on this.

This is an eye opener for me!

My training as a physicist has given me the feeling that Riemann sums are the fundamental definition!

Actually, when I think of almost any physical application, the starting point is always a Riemann sum! Again, an example is: given the equation for the E field produced by a point charge, calculate the E field produced by an infinite line of charge. Or, if you know the linear charge density lambda(x) of a line of charge, what is the total charge on the rod? Or, if you know how the current varies with time in a circuit with a capacitor, how much total charge crosses a point in a certain time interval? If the volume of a gas at fixed temperature is changed while the pressure follows a certain function, what is the total work done by the external force?

And on and on. In all those cases, the way to set up an integral is to break up the problem as a Riemann sum!

For example, how does one even set up the calculation for finding the total charge crossing a wire in the circuit problem? Well, if the current were constant, one would simply calculate [itex] I \Delta t [/itex]. But now the current is varying, so one imagines taking a time interval small enough that the current during that interval can be approximated as being constant (of course, this will get closer and closer to being true as delta t is taken to zero), so that the charge is [itex] I(t) \Delta t [/itex]. One sums these values from t_A to t_B and takes the limit. And then we say that this limit is the definition of an integral. And *then*, by the fundamental theorem of calculus, one can find the answer from the antiderivative.
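(A minimal sketch of that recipe, with a made-up current profile, just to show both routes giving the same number:)

[code]
import math

I0, tau = 2.0, 0.5                       # amperes and seconds, hypothetical values
I = lambda t: I0 * math.exp(-t / tau)    # a made-up decaying current

tA, tB, n = 0.0, 1.0, 100000
dt = (tB - tA) / n
Q_sum = sum(I(tA + k * dt) * dt for k in range(n))   # Riemann sum of I(t) dt

# Antiderivative route: Q = -I0*tau*exp(-t/tau) evaluated between tA and tB
Q_ftc = -I0 * tau * (math.exp(-tB / tau) - math.exp(-tA / tau))

print(Q_sum, Q_ftc)   # both about 0.8647 coulombs
[/code]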

So now that I am writing all this, I realize that for a physicist, the thinking process is really

Physical problem -> riemannian sum -> antiderivative

In fact, the symbol [itex] \int dt [/itex], etc., is really not required at all. It is only written as an intermediate step to represent the Riemann sum, basically as shorthand notation.

On the other hand, I realize now that for you, as a mathematician, the operator [itex] \int dt [/itex] becomes the starting point; it takes on a life of its own (and that allows one to describe indefinite integrals in the same breath as definite integrals). You are used to thinking of it that way, so you see the integral as a formal operation that, applied to a function f(t), produces its antiderivative.

This has been very illuminating for me!

I still have to think about many of these points, though. So more questions will come.


That is an incorrect assessment. Differential forms are simply operators on vectors. Often, however, in fact almost always, differential forms are themselves integrated over manifolds. You no more integrate "over" a form than you would integrate "over" a function. You must integrate a function over an interval, and you must similarly integrate a differential form over a manifold.



No. This isn't the case. The notation has laid a trap for you. Forms are operators on vectors, or multi variable functions if you will.



Of course. Indefinite integration is not performed over any manifold. It is an operation on functions.
Ok. This is what I thought, but it's good to hear. The problem is related to the way books jump from integration over differential forms to integration in the usual sense, as you have pointed out. I wish books would show that one is actually feeding a vector to the differential form, as you showed in one of your posts! Why they don't do that (and why they don't use a different symbol, with an acute accent or, as Garrett does, an underlying arrow) is a mystery to me!
 
  • #30
I wish books would show that one is actually feeding a vector to the differential form, as you showed in one of your posts!
It turns out that's not the only way to look at it. :smile:

I talked about "pulling back" the integral -- let me explain that a little better.

Suppose you have a differentiable map [itex]f : M \rightarrow N[/itex]. How do you think of it? One reasonable way is that f provides us a way to imagine the manifold M as living inside of the manifold N.

For example, we can imagine a curve [itex]\gamma : [0, 1] \rightarrow M[/itex] as actually providing a way for us to view the unit interval as lying inside of M.


So, if we can imagine M lying inside of N, then surely there should be some relationship to their geometry!

To me, it seems intuitively obvious that if we're traveling around M and know which way we're going, then f should provide a way for us to know which way we're going along N.

Similarly, if we know how to measure things on N, then f should provide a way for us to measure things on M.

In particular, if I'm integrating over a curve [itex]\gamma : [0, 1] \rightarrow M[/itex], there ought to be some obvious way to view it as an integral over the unit interval!


Well, it turns out my intuition is correct -- associated with f are two maps: the map [itex]f_* : TM \rightarrow TN[/itex] on the tangent bundle, and the map [itex]f^* : T^*N \rightarrow T^*M[/itex] on the cotangent bundle. The first one tells us how to push out derivatives from M to N (via operating on tangent vectors), and the second tells us how to pull back integrals from N to M (via operating on cotangent vectors).


That the geometry can be related by looking at tangent and cotangent vectors is, IMHO simply incidental: a means to an end, not the end itself!

As you might imagine, the pushforward and pullback maps satisfy, for a tangent vector v on M and differential form w on N:

[tex](f^*\omega) (\vec{v}) = \omega(f_*\vec{v})[/tex]

which is why you can think of integrating your form by eating tangent vectors.
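(A tiny symbolic version of the same story -- the curve is just an example of mine: pull the one-form y dx back along gamma(t) = (t, t^2) and you get an honest form on the interval.)

[code]
import sympy as sp

t = sp.symbols('t')
x_of_t, y_of_t = t, t**2              # the map gamma : [0, 1] -> M

# gamma^*(y dx) = y(t) * x'(t) dt
pullback_coefficient = y_of_t * sp.diff(x_of_t, t)
print(pullback_coefficient)                          # t**2
print(sp.integrate(pullback_coefficient, (t, 0, 1))) # 1/3
[/code]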
 
  • #31
some comments: Differential forms are a part of tensor calculus. To be precise, differential forms are what are called "alternating" tensors. this is made extremely clear in Spivak's little book Calculus on Manifolds, which is recommended to everyone.

as to the common usefulness of differential forms:

Two (three?) words: "de Rham cohomology" may suffice. this is explained in Guillemin and Pollack, or at a more advanced level in Bott and Tu.
 
  • #32
i agree completely that it is frustrating, maybe hopeless, to try to learn mathematics that was created to express physical concepts, with no link to the physics that gave it life.

Most of us mathematicians do not write such books out of ignorance i guess.

but there is hardly any subject more firmly settled in the mathematical and physical landscape than differential forms. Some of the most basic phenomena of mirror symmetry are expressed in the relations between Hodge numbers, i.e. dimensions of cohomology spaces whose elements are represented by harmonic differential forms.

as to distinguished users among physicists, think Ed Witten, or look at the book by John Archibald Wheeler, and others; and the great Raoul Bott, who wrote the book on differential forms with Loring Tu, was an engineer who did applied mathematics as well as topology.


since the use of differential forms is not restricted to physics it may be unfair to expect math books to explain the link, as that would seem the domain of physics books, or books on mathematical physics.

i have also been frustrated in trying to learn how manifolds and forms are used in physics, and to have been lectured at solely about mathematics rather than in how the math expresses the physics. But these were physicists doing the lecturing.

they seemed to take the physics for granted and assumed that what was interesting was learning the mathematical formalism. i wanted to know how it expressed physical phenomena and what those phenomena were.

i sat through a week of a summer course in quantum cohomology and mirror symmetry in this state once.

congratulations for trying to create a dialogue.
 
  • #33
as to the confusion (which may have been explained already here) in such notations as the double integral of f(x,y)dxdy, and whether it does or does not equal the "same" double integral of f(x,y)dydx, you must always be aware of the definitions.

i.e. different people use this same notation for different things. for some it is a limit of sums of products of values of f times areas of rectangles; then it does not matter which way you write it, dxdy or dydx.

but for other people, using differential forms, it is a limit of sums of products of values of f times oriented areas of rectangles, measured by the differential form dxdy or dydx. one gives minus the other for oriented area.

this is actually an advantage, as you will see if you look at the formula for change of variables in double integrals in most books. i.e. those people who say that dxdy and dydx are the same will tell you that when changing variables, you must use oriented changes of variables only, i.e. changes (u(x,y),v(x,y)) such that the Jacobian determinant is positive.

this is unnecessary when using the forms version, as the orientation is built into the sign change from dxdy to dydx. i.e. you get a correct change of variables formula in all cases when using the forms version, but not when using the old-fashioned version we learned in school.

so you might think of forms that way: they are the same as the old way, but they also include an enhancement to take care of all changes of variables, including those changing orientation. so they are more mature than the simpler, less sophisticated version.
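spelled out in symbols (my own addition): under a substitution (x,y) = (x(u,v), y(u,v)) one has

[tex]
dx \wedge dy \;=\; \det\begin{pmatrix} \partial x/\partial u & \partial x/\partial v \\ \partial y/\partial u & \partial y/\partial v \end{pmatrix} \, du \wedge dv,
[/tex]

with the actual determinant appearing, not its absolute value. e.g. for the orientation-reversing swap u = y, v = x this gives dx∧dy = -du∧dv, so the sign that the classical formula must impose by hand is carried automatically by the form.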
 
  • #34
I hadn't heard of one-forms until I took GR, and hadn't heard of two-forms (am assuming n-forms are defined now) until this forum. I think that physicists tend to focus on calculational ability rather than mathematical formalism.

Treating [itex] \frac{dy}{dx} [/itex] as simple division will give correct results as long as the derivatives are total (and one has to be careful, can't do it with [itex] \frac{d^2 y}{dx^2} [/itex], that's why the 2 is between the d and the y, not after the y). If you have an expression like
[itex] \frac{dy}{dx} = x^2 [/itex], you can multiply both sides by dx and integrate, because that operation is equivalent to applying [itex] \int dx[/itex] to both sides and applying FTC. One can (in fact must) use the latter approach for higher order derivatives.
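(A quick check of that manipulation with sympy -- my own toy example, not anything deep: both the informal "multiply by dx and integrate" and a proper dsolve give the same family of solutions for dy/dx = x^2.)

[code]
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# proper ODE solve: dy/dx = x^2
print(sp.dsolve(sp.Eq(y(x).diff(x), x**2), y(x)))   # Eq(y(x), C1 + x**3/3)

# informal route: "integrate both sides with respect to x"
print(sp.integrate(x**2, x))                        # x**3/3 (plus an arbitrary constant)
[/code]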

In regards to the [itex] \vec{D}= \epsilon \vec{E}[/itex] issue, learning a bit of higher order physics can help. If [itex]\epsilon[/itex] is a function of position(interface between glass and air for instance), it should be written [itex]\epsilon(\vec{r}) [/itex] but it's still a 0-form. If, on the other hand, a medium is non-isotropic (crystals), it becomes a rank 2 tensor. This would make it a 2-form, or "dual" to a 2-form (right?).

I have a question about forms. They're linear maps, no? I was told that a one-form is a linear map from vectors to scalars. Would that make a 2-form a map from vectors (or one-forms) to vectors (or one-forms)? If that were the case I don't see quite why D would be a 2-form, and E a 1-form.

As for the mathematicians v. physicists issue in general, I think it all depends on where you start. Physicists try to model physical reality, and use mathematics to do that. Being rigorous isn't necessary all the time, and often obscures understanding. Starting from first physical principles and often empirically derived laws, physicists try to make predictions. Mathematicians don't have empirically derived laws, only axioms. A physicist can always do an experiment to test his result. If they agree, he must've made an even number of mistakes in the derivation, and the result is still good. A mathematician often can't test things by experiment.
 
  • #35
2-forms are alternating bilinear maps on pairs of vectors. see David Bachman's book on the geometry of differential forms, posted elsewhere here on the forum.

such alternating conventions make sure one gets zero for the area of a "rectangle" of height zero, i.e. one spanned by two dependent vectors.
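a tiny numeric illustration of my own: the 2-form dx∧dy acts on a pair of vectors as the determinant of their components, so it is alternating and vanishes on dependent vectors.

[code]
def dx_wedge_dy(u, v):
    # (dx ^ dy)(u, v) = u_x * v_y - u_y * v_x, the signed area they span
    return u[0] * v[1] - u[1] * v[0]

u, v = (1.0, 2.0), (3.0, 4.0)
print(dx_wedge_dy(u, v))            # -2.0
print(dx_wedge_dy(v, u))            #  2.0  (alternating: swapping flips the sign)
print(dx_wedge_dy(u, (2.0, 4.0)))   #  0.0  (dependent vectors give zero area)
[/code]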
 
