Musings on the physicists/mathematicians barrier

  • Thread starter nrqed
In summary, learning differential geometry and topology from a background in physics can be challenging due to the need to connect it with previous knowledge. This can be difficult when there is a perceived contempt from those more well-versed in pure mathematics. Differential forms, as a mathematical tool, may not yet be mature enough for general use and their notation can be confusing. They may ultimately suffer the same fate as quaternions did in physics and be replaced by more applicable methods such as vector calculus.
  • #36
nrqed said:
This is an eye opener for me!

My training as a physicist has given me the feeling that Riemann sums are the fundamental definition!

I couldn't find my old notes, but I believe this was the way I was introduced to the concept of definite integration. I wish I could draw pictures better, but http://img337.imageshack.us/img337/4536/fundamentalhi1.png .

Let the operation [tex]\int f(x) dx[/tex] be that which finds the family of antiderivatives to f(x), i.e. [tex]\int f(x) dx = F(x) + C \Rightarrow \frac{dF(x)}{dx} = f(x)[/tex], F(x) is the "principal" antiderivative and C is an arbitrary constant.

OK. We want to find the area under the curve of f(x) between the points a and b, with a<b. Denote the function that gives the area between the point a and an arbitrary point x as [tex]A_a(x)[/tex]. Note x>a. We seek [tex]A_a(b)[/tex] as our final answer. Note that since the area under the function at any single point is zero, we automatically have [tex]A_a(a) = 0[/tex]

OK, now to examine the function [tex]A_a(x)[/tex] at an arbitrary point. In particular we want to examine its derivative. Consider the value of the area [tex]A_a(x)[/tex]. How will the area change as we change x? Let [tex]\Delta x[/tex] be our change in x. Then the area between a and [tex]x + \Delta x[/tex] is given by [tex]A_a(x+\Delta x)[/tex]. The difference between these is the shaded area on the graph, [tex]\Delta A[/tex]. Specifically, [tex]A_a(x+\Delta x) - A_a(x) = \Delta A[/tex]

Now, look at the area [tex]\Delta A[/tex]. As [tex]\Delta x \rightarrow 0[/tex], we can approximate this area using the area of the trapezium formed by [tex](x,0),(x+\Delta x,0),(x+\Delta x,f(x+\Delta x)),(x,f(x))[/tex]. By the area of a trapezium formula, we obtain [tex]\Delta A \cong \frac{f(x+\Delta x) + f(x)}{2} ((x+\Delta x) - x) \cong \frac{f(x+\Delta x) + f(x)}{2} \Delta x[/tex].

So equating our representations for [tex]\Delta A[/tex], we have;
[tex]A_a(x+\Delta x) - A_a(x) \cong \frac{f(x+\Delta x) + f(x)}{2} \Delta x[/tex]
Dividing by [tex]\Delta x[/tex]
[tex]\frac{A_a(x+\Delta x) - A_a(x)}{\Delta x} \cong \frac{f(x+\Delta x) + f(x)}{2}[/tex]

Now take the limit as [tex]\Delta x \rightarrow 0[/tex] of both sides.
[tex]\lim_{\Delta x \rightarrow 0}\frac{A_a(x+\Delta x) - A_a(x)}{\Delta x} = \lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x) + f(x)}{2}[/tex]

We can see that the left hand side is the definition of [tex]\frac{d A_a(x)}{dx}[/tex]

It can be seen that, for continuous f, the limit of the average becomes;
[tex]\lim_{\Delta x \rightarrow 0} \frac{f(x+\Delta x) + f(x)}{2} = f(x)[/tex]

Therefore we have that:
[tex]\frac{d A_a(x)}{dx} = f(x)[/tex]

And that means that [tex]A_a(x)[/tex] must be an antiderivative of f(x), [tex]\int f(x) dx[/tex]. i.e.

[tex]A_a(x) = F(x) + C[/tex]

But which C? Well, from above we know that [tex]A_a(a) = 0[/tex]. So that means;

[tex]A_a(a) = F(a) + C[/tex]
[tex]0 = F(a) + C[/tex]
[tex]\Rightarrow C = - F(a)[/tex]

So we have that the area under the curve f(x) between a and x is given by;
[tex]A_a(x) = F(x) - F(a)[/tex]
Where F(x) is the "principal" antiderivative of f(x). In fact, F(x) can be any antiderivative as the constant differences will cancel. Thus we have that;
[tex]A_a(b) = F(b) - F(a)[/tex]

We traditionally denote [tex]A_a(b)[/tex] as [tex]\int_a^b f(x) dx[/tex] to emphasise that
[tex]F(b) - F(a) = \int f(x) dx \vert_b - \int f(x) dx \vert_a[/tex]. Where [tex]\vert_{d}[/tex] stands for "evaluation at x=d".

Anyway, that was how I learned that the area under a curve between a and b is [tex]\int_a^b f(x) dx[/tex]. I only saw the Riemann sum method later, and was initially quite dubious of it. Hopefully this long-winded post will be of some use to anyone who gets through it all.
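The derivation above can be checked numerically. The following sketch is my own illustration (the names `f`, `F`, and `trapezium_area` are just illustrative choices): it sums the same trapezium strips the argument is built on and compares the total with F(b) - F(a), for the example f(x) = x^2.

```python
def f(x):
    return x * x

def F(x):                      # an antiderivative of f
    return x ** 3 / 3.0

def trapezium_area(f, a, b, n):
    """Sum the areas of the n trapezium strips used in the derivation."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + i * dx
        # area of the strip between x and x + dx, as in the post
        total += (f(x) + f(x + dx)) / 2.0 * dx
    return total

a, b = 1.0, 3.0
exact = F(b) - F(a)                        # 26/3 by the antiderivative route
approx = trapezium_area(f, a, b, 10000)    # the trapezium route
print(exact, approx, abs(exact - approx))
```

As the number of strips grows, the trapezium total converges to F(b) - F(a), which is exactly the agreement the post derives.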
 
  • #37
The above is a pretty nice synopsis of how integration was thought of by many people pre-Riemann, although it should be noted that integration has always been associated with limits of sums (hence, the elongated "S" symbol, standing for "Sum," that Leibniz -- and everyone since -- used).

Riemann (and Cauchy) were worried about several aspects of this way of thinking about integration:

1. How does one actually define the area underneath a given curve? For lines and circles, the area comes right from Euclidean geometry, but how can one rigorously define area for other curves? If there is no such definition, then one can't even define the function A_a.

This is where the limit of sums of the areas of rectangles comes from. It had been in the lore since Newton (heck, even Archimedes used a 3d version of this idea to find the volume formulae for some solids), but Riemann is the one who formulated the definition of the Riemann sums and the limits of their areas rigorously.

2. How can one actually tell when a function has an antiderivative? For polynomials and other such nice functions it's obvious. But for most functions -- particularly, discontinuous and/or nondifferentiable ones -- it's a bit of a tricky question.

This is where the Riemann sums come in handy. Using the Riemann sum definition of area and then proving the FTC, one can show that any continuous function is Riemann integrable and does in fact have an antiderivative, namely its own integral function.

3. Most importantly, Riemann was interested in expanding the then-current definition of integration so that one could rigorously define integration over a larger class of functions than was possible under the existing state of calculus.
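To make point 1 concrete, here is a small illustration (my addition, not the poster's): for the increasing function f(x) = x^2 on [0, 1], the left- and right-endpoint Riemann sums squeeze the integral 1/3 between them, and both converge to it as the partition is refined.

```python
def riemann_sum(f, a, b, n, rule):
    """Riemann sum of f over [a, b] with n equal subintervals."""
    dx = (b - a) / n
    if rule == "left":
        points = (a + i * dx for i in range(n))
    else:  # right endpoints
        points = (a + (i + 1) * dx for i in range(n))
    return sum(f(x) for x in points) * dx

f = lambda x: x * x
for n in (10, 100, 1000):
    lo = riemann_sum(f, 0.0, 1.0, n, "left")    # underestimate for increasing f
    hi = riemann_sum(f, 0.0, 1.0, n, "right")   # overestimate for increasing f
    print(n, lo, hi)    # both tend to 1/3 as n grows
```

The gap between the two sums is (f(b) - f(a)) * dx here, so it shrinks linearly with the mesh, which is the squeezing that forces a unique value for the area.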
 
  • #38
i concur. what you have proved is roughly that: if there is an area function for f>0 such that the area between c and d, divided by d-c, is always between the max and min value of f, then the derivative of that area function is f.

but you must define the area function and show it has that property.

of course that property itself forces the definition. i.e. if the area is always squeezed between the areas of upper and lower rectangles, which is what the property says, then the only possible definition is the riemann definition.
 
  • #39
what in the world is going on with my browser here today? what i am seeing is nothing like what you are seeing.
 
  • #40
ObsessiveMathsFreak said:
I couldn't find my old notes, but I believe this was the way I was introduced to the concept of definite integration. I wish I could better draw pictures, but http://img337.imageshack.us/img337/4536/fundamentalhi1.png .

Let the operation [tex]\int f(x) dx[/tex] be that which finds the family of antiderivatives to f(x), i.e. [tex]\int f(x) dx = F(x) + C \Rightarrow \frac{dF(x)}{dx} = f(x)[/tex], F(x) is the "principal" antiderivative and C is an arbitrary constant.

OK. We want to find the area under the curve of f(x) between the points a and b, with a<b. Denote the function that gives the area between the points a and an arbitrary point x, as [tex]A_a(x)[/tex].

Hi again. Thanks for your input. I did read through it all and it is very beneficial to me to have this kind of discussion with mathematicians (as opposed to staying within the circle of non-mathematical physicists).

I guess the question is: what is the starting point one chooses for the definition of the integral. As you know, I am used to seeing it defined as a Riemann sum as a starting point (and then proving the interpretation as an area under the curve, or proving the fundamental theorem of calculus, starting from that).

You have a different starting point, but I am a bit confused in this post because first you define the integration as being an operator giving the antiderivative and then you seem to *define* it as the operator that gives the area under the curve. I know that one can show one from the other but I am not clear about what you see as being the true starting point.

I thought that the operation "integration gives the antiderivative" was your starting point.

My problem with this is that, it seems to me, it is less general than the definition as a Riemann sum. I mean, many integrals can be expressed as infinite sums that can be written down starting from the Riemann sum approach, but for which there is no simple closed-form expression for the antiderivative. So if one definition (the Riemann sum) works all the time and the other does not, I would think that the first would be used as the fundamental definition.

Of course, as others have pointed out, in *practice* one does not use the summation definition to calculate most integrals. I agree with this, but the fact that one usually uses antiderivative to evaluate integrals does not mean that it is necessarily a more fundamental definition.

The way I think about this is a bit similar to the rule concerning the differentiation of, say, x^n. I think of the definition of a derivative as being the usual limit as delta x ->0 of (f(x+ delta x) - f(x))/(delta x).

Now, of course, if I differentiate 40x^7 + 6 x^18 - x^31, I do NOT apply the limit definition, I use the usual trick for powers of x.
So when it comes to doing explicit calculations, the limit definition is almost never used. But still, it is the fundamental definition. The fact that the derivative of x^n is n x^(n-1) is just a consequence.
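This relationship between the fundamental definition and the shortcut can be checked directly. A small sketch of my own (the names `p`, `p_prime`, and `difference_quotient` are illustrative): the difference quotient from the limit definition approaches the value the power rule gives, for the very polynomial mentioned above.

```python
def p(x):
    return 40 * x**7 + 6 * x**18 - x**31

def p_prime(x):
    # the power-rule "shortcut", applied term by term
    return 280 * x**6 + 108 * x**17 - 31 * x**30

def difference_quotient(f, x, dx):
    """The quantity whose limit as dx -> 0 defines the derivative."""
    return (f(x + dx) - f(x)) / dx

x = 0.9
for dx in (1e-2, 1e-4, 1e-6):
    print(dx, difference_quotient(p, x, dx), p_prime(x))
```

As dx shrinks, the quotient settles onto p_prime(x): the power rule is recovered as a consequence of the limit definition, which is exactly the point being made.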

Similarly, the fact that the integral can be shown to correspond to the antiderivative is something that I see *following* (in a simple way) from the definition in terms of a Riemann sum. So in practice I of course find the antiderivative when I evaluate simple (= doable in terms of elementary functions) integrals, but in my mind I keep thinking that it's something that can be proven starting from the Riemann sum definition and that it is a useful "shortcut" (like bringing down the exponent and decreasing it by 1 in the case of the derivative of x^n...). But I realize from this thread that mathematicians may be thinking very differently from the way I do.


Now, what seems to me is that mathematicians prefer to *define* integration as giving the antiderivative and then to see the Riemann sum as something secondary (and maybe not even necessary).

Hurkyl has even started to show me how *derivatives* could be defined in terms of axioms (such as the chain rule, linearity, etc.) without introducing the definition as a limit.

My mental block with all this is twofold.

First, there are many things about differentiation and integration that are fairly easy to understand using the Riemann sum approach, or the limit approach (for derivatives), that are not that obvious without them (for example, it's not clear to me how to get from a purely "integration = finding antiderivative" view to the area-under-the-curve view, and many other things). Now, I am not saying that it's not possible to get all the results I know about proceeding that way, but it's not clear to me, and it seems that maybe more and more axioms need to be added to cover everything?! (like in proving that the derivative of ln(x) is 1/x...)



The second problem is that, considering for example integration, I simply do not see at all (even in principle) how to use the more formal approach of integration = finding the antiderivative in even the simplest type of physical application. For example, as I have mentioned, finding the E field produced by an infinite line of charge, starting from the knowledge of the E field produced by a single point charge.
If someone could show me how to do this without *starting* from a Riemann sum, I would be grateful. However, it seems to me that it is impossible to do without starting from a Riemann sum.
I think that anybody having done even introductory calculus-based physics would agree that the Riemann sum (and the idea of very very small "pieces", which I have called infinitesimals and for which I have been ridiculed :smile:) is the only way to think in the context of any physical application.
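For concreteness, here is exactly the "sum over small pieces" setup being described, as a sketch of my own (units chosen so that Coulomb's constant k = 1; `lambda_` is the linear charge density, and all names are illustrative): the perpendicular field of a long line of charge, built as a Riemann sum of point-charge contributions, compared against the closed-form result of the same integral.

```python
import math

def E_perp_riemann(lambda_, L, d, n):
    """Perpendicular E-field at distance d from the midpoint of a charged
    line on [-L, L], summed over n small segments (k = 1 units)."""
    dz = 2 * L / n
    E = 0.0
    for i in range(n):
        z = -L + (i + 0.5) * dz          # midpoint of segment i
        dq = lambda_ * dz                # charge of the small "piece"
        r2 = z * z + d * d
        # by symmetry only the component perpendicular to the line survives
        E += dq / r2 * (d / math.sqrt(r2))
    return E

def E_perp_exact(lambda_, L, d):
    # closed form of the same integral: 2*lambda*L / (d*sqrt(L^2 + d^2))
    return lambda_ * 2 * L / (d * math.sqrt(L * L + d * d))

print(E_perp_riemann(1.0, 100.0, 1.0, 200000))
print(E_perp_exact(1.0, 100.0, 1.0))   # tends to 2*lambda/d as L -> infinity
```

The sum and the closed form agree, and for L much larger than d both approach the familiar infinite-line result 2*lambda/d (in k = 1 units); the Riemann sum is what gives the integral its physical meaning here.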


I would be curious about how a mathematician would go about *setting up the integral* representing, say, the total mass of a sphere with some mass density [itex] \rho(r) [/itex]. How do mathematicians show how to do this calculation without starting from a Riemann sum and thinking in terms of an "infinitesimal" volume element, small enough that one can approximate the density in that element as constant (which is what I call an infinitesimal volume element), and then summing over all the volume elements, i.e. doing a Riemann sum? How do mathematicians do the calculation otherwise?
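One way to make this question concrete (my sketch, with an illustrative density rho(r) = rho0*(1 - r/R)): slice the sphere into thin shells, treat rho as constant on each shell, and sum. This is precisely a Riemann sum for M = ∫_0^R rho(r) 4 pi r^2 dr.

```python
import math

R, rho0 = 1.0, 1.0   # illustrative radius and central density

def rho(r):
    return rho0 * (1.0 - r / R)

def mass_riemann(n):
    """Total mass as a sum over n thin spherical shells of thickness dr."""
    dr = R / n
    M = 0.0
    for i in range(n):
        r = (i + 0.5) * dr                          # representative radius
        M += rho(r) * 4.0 * math.pi * r * r * dr    # "infinitesimal" shell
    return M

# Analytic answer for this rho: 4*pi*rho0*(R^3/3 - R^3/4) = pi*rho0*R^3/3
print(mass_riemann(100000), math.pi * rho0 * R**3 / 3.0)
```

The shell sum converges to the analytic value, so the physicist's "constant density on each small element" picture and the mathematician's limit of Riemann sums are the same calculation.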


So in the end the questions I have are:

A) Is it possible to simply define differentiation as the usual limit and integrals as Riemann sums? Is there any problem with that?

B) Is it then just a matter of taste to define integration instead as an operation that gives the antiderivative? But then how does one define the antiderivative in the case of integrals which do not lead to expressions that can be written in closed form (without, of course, getting into circular reasoning)? Can someone show me a general procedure that would define the antiderivative without involving Riemann sums in such a case?

And can one also work out everything about derivatives without using the limit definition?

C) In actual (physical) applications, such as finding the E field of continuous charge distributions, etc., is there any alternative to the Riemann sum approach?



Thanks again for the very stimulating exchanges...


Patrick
 
  • #41
nrqed said:
Now, what seems to me is that mathematicians prefer to *define* the integration as giving the antiderivative and then to see the riemannian sum as something secondary (and maybe not even necessary).

Actually, this is exactly the opposite of the way most mathematicians see integration.

Although integration in high school and entry-level college is often introduced this way, it is not very rigorous (as I pointed out above). The easiest rigorous method is via Riemann sums, which virtually all math majors learn about rigorously in their first real analysis course. Later on, one can also explore Lebesgue integration, but that's another story.

The fact that integration acts as anti-differentiation is a consequence of the definition of definite integrals as limits of Riemann sums. This is exactly analogous to the situation with derivatives: one defines them using limits, then proves theorems about them such as the power rule, then often restricts oneself to the *proven* rules (rather than the direct definition) when computing derivatives in practice.

So, I would say that, in the case of integrals, mathematicians and physicists are not particularly different from each other in outlook.
 
  • #42
the point of my post 38 was that if you define integrals as antiderivatives then you must give some conditions under which antiderivatives exist. this is usually via riemann sums.
 
  • #43
Here's a geometric take on differentiation -- all a derivative is is the slope of a tangent line. So if you can define tangent lines, you can get derivatives.


One way to define a tangent line is through secant lines. Secant lines are easy -- given two distinct points P and Q on a curve, there's a unique line through them. That line is the secant line to your curve through P and Q.

If we take the limit as P and Q both approach some point R, then the secant line through P and Q might converge to some line. That line is nothing more than the tangent line at R.
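A small numeric illustration of this secant-to-tangent picture (my addition, with `f` and `secant_slope` as illustrative names): on the curve y = x^3, fix P = (x0, f(x0)) and slide Q = (x0 + h, f(x0 + h)) toward P; the secant slopes settle onto the tangent slope 3*x0^2.

```python
def f(x):
    return x ** 3

def secant_slope(x0, h):
    """Slope of the secant line through P = (x0, f(x0)) and Q = (x0+h, f(x0+h))."""
    return (f(x0 + h) - f(x0)) / h

x0 = 2.0
for h in (0.1, 0.01, 0.001):
    print(h, secant_slope(x0, h))   # approaches 12.0, the tangent slope at x0 = 2
```

The limiting slope is exactly the derivative, which is the content of the geometric definition above.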



There's another intuitive idea -- that of a "multiple point". A tangent line to a curve is nothing more than a line that intersects your curve multiple times at a single point. Unfortunately, I don't see at the moment a direct way to rigorously define a multiple point. Though in the purely algebraic context, a multiple point is simply a multiple root of the equation "line = curve".
 
  • #44
Doodle Bob said:
Actually, this is exactly the opposite of the way most mathematicians see integration.

Although integration in high school and entry-level college is often introduced this way, it is not very rigorous (as I pointed out above). The easiest rigorous method is via Riemann sums, which virtually all math majors learn about rigorously in their first real analysis course. Later on, one can also explore Lebesgue integration, but that's another story.

The fact that integration acts as anti-differentiation is a consequence of the definition of definite integrals as limits of Riemann sums. This is exactly analogous to the situation with derivatives: one defines them using limits, then proves theorems about them such as the power rule, then often restricts oneself to the *proven* rules (rather than the direct definition) when computing derivatives in practice.

So, I would say that, in the case of integrals, mathematicians and physicists are not particularly different from each other in outlook.


Ok. Good. That's pretty clear. And that corresponds *exactly* to the view I have always had of integration (defined as a Riemann sum, which can then be used to relate to antidifferentiation, which is then used as a tool to carry out integrals explicitly in most cases).

You have expressed my "philosophy" very clearly. I had been led to think, by reading several posts, that mathematicians considered the definition as Riemann sums secondary and even maybe superfluous, which confused me greatly! But I probably had misinterpreted, simply. Thanks for setting the record straight.

Mathwonk said:
the point of my post 38 was that if you define integrals as antiderivatives then you must give some conditions under which antiderivatives exist. this is usually via riemaNN SUMS.

Ok, that makes sense. So one ends being led back to Riemann sums anyway in order to formalize viewing integration as a way to obtain an antiderivative. That's good to hear.

I am used to thinking of the integration process as being defined in terms of Riemann sums and *then* "uncovering" that the result is, lo and behold, associated with finding antiderivatives (so the "duality" integration-differentiation comes out as a neat *consequence* of the definition of the integration process).
I had started to feel from this thread (and others) that maybe mathematicians view integration as being more fundamentally *defined* as an "antidifferentiation" process (which, in other words, would turn the Fundamental Theorem of Calculus into an identity), with the "Riemann summation" point of view being a consequence only, not the fundamental starting point.

Thanks to both of you for your comments!

Patrick
 
  • #45
I know I haven't been clear on this, so let me try it again.


If someone said to me: "Develop everything rigorously from scratch", the first thing I would think of for definite integrals would be a limit of Riemann sums. (Unless I went down the Lebesgue route, or decided to try to be more creative.)


But if someone said to me: "apply definite integration to solve problems", Riemann sums would not commonly be something I think of.

The latter is the point I'm trying to make.
 
  • #46
here is a selection from the introduction to a book on tensors for science students written by professors of mechanical engineering and math.
i found this from the thread on free math books. the book seems very clear and connects the new point of view with the old.

[Bowen and Wang]

"In preparing this two volume work our intention is to present to Engineering and Science
students a modern introduction to vectors and tensors. Traditional courses on applied mathematics
have emphasized problem solving techniques rather than the systematic development of concepts.
As a result, it is possible for such courses to become terminal mathematics courses rather than
courses which equip the student to develop his or her understanding further.

As Engineering students our courses on vectors and tensors were taught in the traditional
way. We learned to identify vectors and tensors by formal transformation rules rather than by their
common mathematical structure. The subject seemed to consist of nothing but a collection of
mathematical manipulations of long equations decorated by a multitude of subscripts and
superscripts. Prior to our applying vector and tensor analysis to our research area of modern
continuum mechanics, we almost had to relearn the subject. Therefore, one of our objectives in
writing this book is to make available a modern introductory textbook suitable for the first in-depth
exposure to vectors and tensors. Because of our interest in applications, it is our hope that this
book will aid students in their efforts to use vectors and tensors in applied areas. "
 
  • #47
in practice, e.g. in diff eq, one usually encounters functions whose antiderivatives are completely unknown. thus one needs a procedure which will not only show they exist, but also give a way to construct or approximate the antiderivative [e.g. of cos(x^2)]. one is again led back to riemann sums.
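This point can be made concrete (my sketch, not mathwonk's): cos(x^2) has no elementary antiderivative, yet Riemann sums construct one anyway. Below, `F` is a numerical antiderivative with F(0) = 0 (a Fresnel-type integral), and its difference quotient gives back cos(x^2), as the FTC demands.

```python
import math

def F(x, n=10000):
    """Approximate the integral of cos(t^2) from 0 to x by a midpoint Riemann sum."""
    dt = x / n
    return sum(math.cos(((i + 0.5) * dt) ** 2) for i in range(n)) * dt

# F'(x) should equal cos(x^2); check with a small difference quotient at x = 1.
x, h = 1.0, 1e-4
print((F(x + h) - F(x)) / h, math.cos(x * x))
```

So even when no closed form exists, the Riemann sum definition both proves the antiderivative exists and provides a way to compute it to any desired accuracy.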
 
  • #48
When I've read traditional approaches, you can get through the whole text and still not gain a functional knowledge. For example, I found that knowing that something is a tensor if its components transform in a certain way still left me wondering
'what exactly is a tensor?'

Luckily a few better texts and reading the posts on these boards (esp. mathwonk's!) have made me confident in my knowledge of what exactly a tensor is (and the difference between the components of a tensor, tensor fields, etc.), even if my knowledge of tensor calculus is still incomplete. Once you've got a good understanding of what a tensor is, it becomes ten times easier to advance your knowledge on the subject.
 
  • #49
Another slightly tangential thing I'd say is: how many times do you see physics texts say that 'X applies locally', and how many times do those texts explain, in more than a handwaving way, what it means that 'X applies locally'?
 
  • #50
Physics is shortsightedly application driven & math is abstract past meaninglessness.

So mix them? No. It depends on the person. Judging by the number of approaches, I don't think it's possible to be all things to all people.
 
  • #51
Thrice said:
Physics is shortsightedly application driven & math is abstract past meaninglessness.
I'd like to see you justify both of those claims.
 
  • #52
Son Goku said:
I'd like to see you justify both of those claims.
Well it was a caricature. I'm just saying I believe the topics allow for many differences & there's no right approach that everyone should converge to. Even in math you'll find discrete vs analysis people or in physics there's theoretical & experimental types.
 
  • #53
son goku, i also like elementary hands-on calculations to begin to understand what a concept means. that's how the subjects began and how their discoverers often found them. but after a while one wants to pass to using their properties both to understand and to calculate with them.

fundamental groups for instance have a basic property, their homotopy invariance. this shows immediately that a mobius strip and a circle have the same fundamental group, so there is no reason to calculate it again for a mobius strip. as for a circle, the best calculation is to notice that the exponential map is a contractible covering space. hence the fundamental group of the circle is essentially the group of lifts of arcs based at one point of the circle. such lifts are classified by their endpoint, which must be an integer. hence the fundamental group is the integers.

similarly the fundamental group of a product is the product of the fundamental groups. so since the torus is a product of two circles, its fundamental group is a product of two copies of the integers.

or one could use the contractible covering map showing the torus is the quotient space of the plane modulo the integer lattice points in the plane, hence that lattice is the fundamental group.

etc etc
 
  • #54
mathwonk said:
son goku, i also like elementary hands-on calculations to begin to understand what a concept means. that's how the subjects began and how their discoverers often found them. but after a while one wants to pass to using their properties both to understand and to calculate with them.
Interesting; it's probably due to my limited experience, but most of the mathematicians at my university generally learn things from the definitions first, an ability I always found very impressive.

Although as you said, either way of doing it (learning by calculating first and then moving to definition or vice-versa) is just a way of moving on into the interesting stuff.

As a mathematician, what would you say, in general, separates the way mathematics is presented in theoretical physics from the way it is presented in maths?
 
  • #55
Son Goku said:
Interesting; it's probably due to my limited experience, but most of the mathematicians at my university generally learn things from the definitions first, an ability I always found very impressive.
No one learns anything from a definition.

A mathematical definition is a thing austere and insurmountable. Its form comes only into focus from shelves above it, reached by winding and circuitous paths that loop around its sheer and unforgiving slopes. None can scale its glassy surface; no crack or foothold exists upon it. It is a cliff unmeant for climbing.

Do not accept ropes of rote let down by those on the definition's tip! To understand mathematics, one must muddy one's boots on the longer, less grandiose routes. For if you rely on dangling ropes to ascend this noble peak, then the time will come when your path leads you to a facade as yet unmastered, and no ropes will come. There you will stand awaiting one, surrounded by muddy but fruitful treks to the summit.
 
  • #56
Differential forms not mature?

Hi, OMF,

ObsessiveMathsFreak said:
My opinion, for what it's worth, is that differential forms are simply not a mature mathematical topic. Now, the theory is rigorous, complete and solid, but it's not mature. It's like a discovery made by a research scientist that sits, majestic but alone, waiting for another physicist or engineer to turn it into something useful. Differential forms, as a tool, are not ready for general use in their current form.

Wow! That's quite an impassioned indictment. Did you not read Harley Flanders, Differential Forms, with Applications to the Physical Sciences?

I am quite confident that you are quite wrong about forms. Not only is the theory of differential forms highly developed as a mathematical theory, it is highly applicable and greatly increases conceptual and computational efficiency in many practical engineering and physics tasks. The elementary aspects of forms and their applications have been taught to undergraduate applied math students at leading universities with great success for many years. (At my undergraduate school, the terminal course for applied math majors was based entirely on differential forms; all engineering students were also required to take this course, as I recall.) I am a big fan of differential forms and feel they are easy to use to great effect in mathematical physics; see for example http://www.math.ucr.edu/home/baez/PUB/joy for my modest attempt to describe a few of the applications I myself use most often.

ObsessiveMathsFreak said:
The whole development of forms was likely meant to formalise concepts that were not entirely clear when using vector calculus alone.

Not really, according to Elie Cartan himself (who introduced the concept of a differential form and was their greatest champion in the first half of the 20th century), the main impetus included considerations like these:

1. the need for a suitable formalism to express his generalized Stokes theorem,

2. the natural desire to express a differential equation (or system of same) in a way which would be naturally diffeomorphism invariant (this is precisely the property which makes them so useful in electromagnetism).

ObsessiveMathsFreak said:
A one-form must be evaluated along lines, and a two-form must be evaluated over surfaces.

Does this reasoning appear anywhere in any differential form textbook? No.

This claim seems very contrary to my own reading experience.

ObsessiveMathsFreak said:
Nor is it even mentioned that certain vector fields might be restricted to such evaluations. Once the physics is removed, there is little motivation for forms beyond Stokes' theorem,

Not true at all. I hardly know where to begin, but perhaps it suffices to mention just one counterexample: the well-known recipe of Wahlquist and Estabrook for attacking nonlinear systems of PDEs is based upon reformulating said system in terms of forms and then applying ideas from differential rings analogous to Gaussian reduction in linear algebra. I can hardly imagine anything more practical than a general approach which has been widely applied with great success upon specific PDEs.

http://www.google.com/advanced_search?q=Wahlquist+Estabrook&hl=en

ObsessiveMathsFreak said:
I don't think differential forms are really going to go places. I see their fate as being that of quaternions. Quaternions were originally proposed as the foremost method of representation in physics, but were eventually superseded by the more applicable vector calculus. They are still used here and there, but nowhere near as much as vector calculus. Forms are likely to quickly go the same way upon the advent of a more applicable method.

I am sorry that you have apparently had such a miserable experience trying to learn how to compute with differential forms! I hope you will try again with a fresh outlook, say with a book like the one I cited above.

Chris Hillman
 
  • #57
I've just come back to the forum after almost a year away and found this thread stimulating. The following quotes show why even a mechanical engineer is interested in differential forms:

'The important concept of the Lie derivative occurs throughout elasticity theory in computations such as stress rates. Nowadays such things are well-known to many workers in elasticity but it was not so long ago that the Lie derivative was first recognized to be relevant to elasticity (two early references are Kondo [1955] and Guo Zhong-Heng [1963]). Marsden and Hughes, 1983, Mathematical Foundations of Elasticity.'

'Define the strain tensor to be ½ of the Lie derivative of the metric with respect to the deformation'. Mike Stone, 2003, Illinois. http://w3.physics.uiuc.edu/~m-stone5/mmb/notes/bmaster.pdf

'…objective stress rates can be derived in terms of the Lie derivative of the Cauchy stress…' Bonet and Wood, 1997, Nonlinear continuum mechanics for finite element analysis.

'The concept of the Lie time derivatives occurs throughout constitutive theories in computing stress rates.' Holzapfel, 2000, Nonlinear solid mechanics.

'Cartan’s calculus of p-forms is slowly supplanting traditional vector calculus, much as Willard Gibbs’ vector calculus supplanted the tedious component-by-component formulae you find in Maxwell’s Treatise on Electricity and Magnetism' – Mike Stone again.

'The objective of this paper is to present…the benefits of using differential geometry (DG) instead of the classical vector analysis (VA) for the finite element (FE) modelling of a continuous medium (CM).' Henrotte and Hameyer, Leuven.

'The fundamental significance of the vector derivative is revealed by Stokes’ theorem. Incidentally, I think the only virtue of attaching Stokes’ name to the theory is brevity and custom. His only role in originating the theorem was setting it as a problem in a Cambridge exam after learning about it in a letter from Kelvin. He may, however, have been the first person to demonstrate that he did not fully understand the theorem in a published article: where he made the blunder of assuming that the double cross product ∇ × (∇ × v) vanishes for any vector-valued function v = v(x).' Hestenes, 1993, Differential Forms in Geometric Calculus. http://modelingnts.la.asu.edu/pdf/DIF_FORM.pdf

Several people on this thread have mentioned Flanders’ Differential Forms with Applications to the Physical Sciences (Dover 1989 ISBN 0486661695) and Flanders himself notes that:

'There is generally a time lag of some fifty years between mathematical theories and their applications…(exterior calculus) has greatly contributed to the rebirth of differential geometry…(and) physicists are beginning to realize its usefulness; perhaps it will soon make its way into engineering.'

However, the formation of engineers is different from that of mathematicians and perhaps even physicists and their aim is usually to get a numerical answer to a _design_ problem as quickly as possible. For example, 'stress' first appears on p.27 of Ashby and Jones’ Engineering Materials, in the context of simple uniaxial structures, but p.617 of Frankel’s Geometry of Physics, in the context of a general continuum. Engineering examples, taken from fluid mechanics and stress analysis rather than relativity or quantum mechanics, usually start with 'Calculate…' rather than 'Prove…'. So many otherwise-excellent books, including Flanders, aren’t suitable for most engineering students. However, what I'm learning here is of great help in trying to put together lecture notes for engineers. So I'd like to add my thanks to those here who've contributed to my limited understanding in this area.

Ron Thomson,
Glasgow.
 
  • #58
Hi, Ron,

rdt2 said:
Several people on this thread have mentioned Flanders’ Differential Forms with Applications to the Physical Sciences (Dover 1989 ISBN 0486661695) and Flanders himself notes that:

'There is generally a time lag of some fifty years between mathematical theories and their applications…(exterior calculus) has greatly contributed to the rebirth of differential geometry…(and) physicists are beginning to realize its usefulness; perhaps it will soon make its way into engineering.'

Which he wrote in the 1960s, right? Referring to Cartan's work during the 1920s and 1930s? Indeed, by the 1980s, leading engineering schools such as Cornell were restructuring their undergraduate curricula to expose their students to differential forms.

rdt2 said:
However, the formation of engineers is different from that of mathematicians and perhaps even physicists and their aim is usually to get a numerical answer to a _design_ problem as quickly as possible. For example, 'stress' first appears on p.27 of Ashby and Jones’ Engineering Materials, in the context of simple uniaxial structures, but p.617 of Frankel’s Geometry of Physics, in the context of a general continuum. Engineering examples, taken from fluid mechanics and stress analysis rather than relativity or quantum mechanics, usually start with 'Calculate…' rather than 'Prove…'. So many otherwise-excellent books, including Flanders, aren’t suitable for most engineering students. However, what I'm learning here is of great help in trying to put together lecture notes for engineers. So I'd like to add my thanks to those here who've contributed to my limited understanding in this area.

Interesting. I entirely agree with you about the need to emphasize computational techniques, adding the need to offer plenty of simple but nontrivial examples. I mentioned Flanders because of the books I've seen (yeah, mostly in math libraries, not engineering libraries!), it comes closest to this spirit. In his introduction, he actually makes the same complaint: most students want to see some interesting applications presented in detail more than they want a lengthy exposition of "dry" theory.

In 1999, about the time I wrote the "Joy of Forms" stuff I linked to above, I actually was briefly involved in trying to teach differential geometry in general and forms in particular to graduate engineering students, so "Joy" is no doubt based in part upon that experience. This project resulted in disaster, in great part (I think) because I was directed to plunge in without having prepared a curriculum in advance and without knowing anything about the background of my students (this is certainly not a procedure which I advocated at the time, nor one which I would ever advise anyone else to adopt under any circumstances!).

Despite this failure, I remain entirely convinced that the world would be a much better place if engineering schools were more successful at teaching their students more sophisticated mathematics, *as tools for practical daily use in their engineering work*. Certainly exterior calculus and Groebner basis methods would top the list, but I'd also add combinatorics/graph theory, perturbation theory, and symmetry analysis of PDEs/ODEs. So I hope you persevere with your lecture notes.

Chris Hillman
 
  • #59
Chris Hillman said:
Wow! That's quite an impassioned indictment. Did you not read Harley Flanders, Differential Forms, with Applications to the Physical Sciences?
I've read a lot of books on differential forms. Not that one, but still many others. Many of which purport to have applications to physical sciences, but usually just throw down the differential forms version of Maxwell's equations by diktat with little or nothing in the way of semantics. Worked examples are few, probably for the reason that the worked out question would be longer than the route taken by regular vector calculus.

Chris Hillman said:
Not really, according to Elie Cartan himself (who introduced the concept of a differential form and was their greatest champion in the first half of the 20th century), the main impetus included considerations like these:

1. the need for a suitable formalism to express his generalized Stokes theorem,

2. the natural desire to express a differential equation (or system of same) in a way which would be naturally diffeomorphism invariant (this is precisely the property which makes them so useful in electromagnetism).

I'm skeptical. I feel the main impetus for differential forms was to formalise something that was never really valid in the first place, namely concepts like df, or equations like
[tex]df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy[/tex]
instead of the actual equation
[tex]\frac{df}{dt} = \frac{\partial f}{\partial x} \frac{dx}{dt} + \frac{\partial f}{\partial y} \frac{dy}{dt}[/tex]
This was always a precarious point of view, and in my own view the theory of forms does not legitimise the concept. Even Spivak acknowledges that there is some debate in Calculus on Manifolds at the end of Chapter 2:
Calculus on Manifolds said:
It is a touchy question whether or not these modern definitions represent a real improvement over classical formalism; this the reader must decide for himself.
I have decided for myself. I don't approve of differential forms. At least, not as a replacement or improvement for vector calculus. That's just my own opinion, but I would ask others to consider this point of view before imposing forms arbitrarily on undergraduate courses.
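
For what it's worth, the two equations above are not rivals: pulling the differential back along a parametrised path reproduces the chain-rule equation exactly. A quick sympy check (my own illustration; f and the path are arbitrary choices):

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.symbols('x y')

f = x**2 * sp.sin(y)          # sample f(x, y)
xt, yt = sp.cos(t), t**2      # sample path (x(t), y(t))

# left-hand side: d/dt of f along the path
lhs = sp.diff(f.subs({x: xt, y: yt}), t)

# right-hand side: the chain rule f_x x'(t) + f_y y'(t)
rhs = (sp.diff(f, x).subs({x: xt, y: yt}) * sp.diff(xt, t)
       + sp.diff(f, y).subs({x: xt, y: yt}) * sp.diff(yt, t))

assert sp.simplify(lhs - rhs) == 0  # the two agree identically
```

In forms language, the 1-form [tex]df = f_x\,dx + f_y\,dy[/tex] is precisely the object whose pullback along the path yields the left-hand side, which is the semantics the textbooks too often leave implicit.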


Chris Hillman said:
ObsessiveMathsFreak said:
A one-form must be evaluated along lines, and a two-form must be evaluated over surfaces.

Does this reasoning appear anywhere in any differential form textbook? No.
This claim seems very contrary to my own reading experience.
That is what is technically referred to as a contextomy. I will simply refer back to the entirety of the original post.

Chris Hillman said:
I hardly know where to begin, but perhaps it suffices to mention just one counterexample: the well-known recipe of Wahlquist and Estabrook for attacking nonlinear systems of PDEs is based upon reformulating said system in terms of forms and then applying ideas from differential rings analogous to Gaussian reduction in linear algebra. I can hardly imagine anything more practical than a general approach which has been widely applied with great success upon specific PDEs.
All very well, but this discussion is in the context of differential forms being a replacement for vector calculus for ordinary physicists and engineers. As per my original point, I believe forms to be unsuited to this task. Whether by design or immaturity, they are not a suitable topic of study for most physicists involved in the study of either electromagnetism or, especially, fluid dynamics. They may, like other advanced mathematical topics, be of use in describing new theories or methods, but this thread is about their promotion for more basic studies, as per nrqed's initial post.

If I remember correctly, nrqed's initial post was in the context of several other threads on the topic of differential forms and possibly topology, where the supposed benefits of forms were being lauded to nrqed who, quite rightly, simply didn't see the benefit in the frankly massive amount of formalism required to study these topics. He's absolutely right. Topology in particular is now a disaster area for the newcomer. 100+ years of investigations, disproofs, counterexamples, theorems and revisions have led to the axioms and definitions of topology being completely unparsable.

A great many topology books offer nothing but syntax with no semantics at all. Differential forms texts fare little better. To a good physicist, semantics is everything, and hence the subject will appear to the great majority of them to be devoid of use. That's actually a problem with a lot of mathematics, and modern mathematics in particular. Syntax is presented, but semantics is frequently absent.
 
  • #60
ObsessiveMathsFreak said:
I've read a lot of books on differential forms. Not that one, but still many others. Many of which purport to have applications to physical sciences, but usually just throw down the differential forms version of Maxwell's equations by diktat with little or nothing in the way of semantics. Worked examples are few, probably for the reason that the worked out question would be longer than the route taken by regular vector calculus.

My point exactly. A couple of authors try to give examples from mechanics but they always appear very contrived - suggesting that forms may be fundamentally unsuitable in some areas. If you want to read Marsden and Hughes' 'Mathematical Foundations of Elasticity' or suchlike, then knowledge of forms is required. The question is, how many engineers and physicists want to read Marsden and Hughes?

I have decided for myself. I don't approve of differential forms. At least, not as a replacement or improvement for vector calculus. That's just my own opinion, but I would ask others to consider this point of view before imposing forms arbitrarily on undergraduate courses.

I'm less certain. I want to expose students to forms as a complement rather than a replacement for vector calculus. They'll judge in later life whether they're useful or whether, like most of their lecture notes, they can be consigned to the little round filing cabinet.

All very well, but this discussion is in the context of differential forms being a replacement for vector calculus for ordinary physicists and engineers. As per my original point, I believe forms to be unsuited to this task. Whether by design or immaturity, they are not a suitable topic of study for most physicists involved in the study of either electromagnetism or, especially, fluid dynamics.

Oddly enough, fluid dynamics was one of the areas where I thought that differential forms might have most application. I'm less sure about stress analysis, where the tensors are all symmetric.

Ron.
 
  • #61
ObsessiveMathsFreak said:
All very well, but this discussion is in the context of differential forms being a replacement for vector calculus for ordinary physicists and engineers. As per my original point, I believe forms to be unsuited to this task. Whether by design or immaturity, they are not a suitable topic of study for most physicists involved in the study of either electromagnetism or, especially, fluid dynamics.

This may very well be true. This topic might indeed be a bit abstruse for the average physics undergrad. However, one should keep in mind that a good century and a half ago, the very same thing could have been said about the relationship between linear transformations and matrices. There was at the time not much use for them among the physicists until quantum physics came around.

I must say, though, having just recently read The Large Scale Structure of Space-Time by Hawking and Ellis, that a good knowledge of forms (and other elements of differential geometry) is essential to the understanding of GR.

ObsessiveMathsFreak said:
If I remember correctly, nrqed's initial post was in the context of several other threads on the topic of differential forms and possibly topology, where the supposed benefits of forms were being lauded to nrqed who, quite rightly, simply didn't see the benefit in the frankly massive amount of formalism required to study these topics. He's absolutely right. Topology in particular is now a disaster area for the newcomer. 100+ years of investigations, disproofs, counterexamples, theorems and revisions have led to the axioms and definitions of topology being completely unparsable.

A great many topology books offer nothing but syntax with no semantics at all. Differential forms texts fare little better. To a good physicist, semantics is everything, and hence the subject will appear to the great majority of them to be devoid of use. That's actually a problem with a lot of mathematics, and modern mathematics in particular. Syntax is presented, but semantics is frequently absent.

I've always considered it very bad manners to criticize someone else's discipline as worthless, and the above seems to me very bad manners.

Modern mathematics is indeed very complex, and a very wild field to start out in. However, formalism and logic are the mortar that keeps it all together. Without proofs and rigorous thinking, math is just magic. Admittedly, a great deal of research seems to be more fancy window-dressing than anything substantial. But every so often a big theorem comes into view; I'm thinking of two within my mathematical career: the Fermat-Wiles Theorem and the solution of the Poincaré conjecture (and hence of Thurston's Geometrization Conjecture).

These results probably don't mean much to you (they are, after all, worthless to electromagnetism), but they mean a great deal to me and other mathematicians.
 
  • #62
Hurkyl said:
Through axioms! You define d/dx to be an operator that:
(1) is a continuous operator
(2) satisfies (d/dx)(f+g) = df/dx + dg/dx
(3) satisfies (d/dx)(fg) = f dg/dx + df/dx g
(4) satisfies dx/dx = 1

and I think that's all you need.
Why like that? Why not:

[tex]\frac{d}{dx} : f \mapsto \left( x \mapsto \lim _{h \to 0}\frac{f(x+h) - f(x)}{h}\right )[/tex]
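
The two definitions can at least be reconciled numerically: a finite-difference implementation of the limit definition satisfies axioms (2)-(4) up to discretisation error. A rough sketch of my own (the step size and tolerances are ad hoc):

```python
def ddx(f, h=1e-6):
    """Numerical derivative via the limit definition (forward difference)."""
    return lambda x: (f(x + h) - f(x)) / h

f = lambda x: x**2
g = lambda x: 3*x

# axiom (2): linearity  d(f+g)/dx = df/dx + dg/dx
add = ddx(lambda x: f(x) + g(x))
assert abs(add(1.5) - (ddx(f)(1.5) + ddx(g)(1.5))) < 1e-6

# axiom (3): product rule  d(fg)/dx = f dg/dx + df/dx g
prod = ddx(lambda x: f(x)*g(x))
assert abs(prod(1.5) - (f(1.5)*ddx(g)(1.5) + g(1.5)*ddx(f)(1.5))) < 1e-4

# axiom (4): dx/dx = 1
assert abs(ddx(lambda x: x)(1.5) - 1.0) < 1e-6
```

Continuity (axiom 1) is what the finite h smuggles in; it is also the axiom doing the real work in Hurkyl's characterisation.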
 
  • #63
Doodle Bob said:
I've always considered it very bad manners to criticize someone else's discipline as worthless, and the above seems to me very bad manners.
I was speaking from an andragogical standpoint.
 
  • #64
ObsessiveMathsFreak said:
I was speaking from an andragogical standpoint.

that may be the case. but i do not see any mention of "adult learners" in this post or any of the others.
 
  • #65
Doodle Bob said:
that may be the case. but i do not see any mention of "adult learners" in this post or any of the others.
I think it's safe to say not many children would be learning differential geometry from textbooks.
 
  • #66
ObsessiveMathsFreak said:
I think it's safe to say not many children would be learning differential geometry from textbooks.

Well, that's a very slippery way of avoiding the essence of my assertion: you spend a great deal of time knocking modern mathematics (and topology in particular) as insignificant technobabble and very little talking about curriculum and undergraduate pedagogy.
 
  • #67
as my 8th grade teacher used to say about our reaction to the class clown: "you're only encouraging him."
 
  • #68
The shortcomings of a technical education: who is to blame?

Oh dear, I wrote a long reply to OMF, then belatedly noticed a crucial remark:

ObsessiveMathsFreak said:
I was speaking from an andragogical standpoint.

Sigh... Oh well, here's the longish post I wrote predicated on the (mistaken?) assumption that OMF is a twenty-something recent college graduate:

ObsessiveMathsFreak said:
I've read a lot of books on differential forms. Not that one, but still many others.

I take it that one of them was Spivak's book, Calculus on Manifolds? You do realize that this book was not intended to do what you ask? I will go out on a limb and guess (from your username and the context of this thread) that your undergrad major was math, not physics or engineering. If so, I wonder if you might not have been in the wrong major.

ObsessiveMathsFreak said:
Many of which purport to have applications to physical sciences, but usually just throw down the differential forms version of Maxwell's equations by diktat with little or nothing in the way of semantics. Worked examples are few, probably for the reason that the worked out question would be longer than the route taken by regular vector calculus.

Well, if a worked example was the first thing you wanted, it is certainly too bad that you didn't start with the book by Flanders...

ObsessiveMathsFreak said:
I feel the main impetus for differential forms was to formalise something that was never really valid in the first place, namely concepts like; df or equations like
[tex]df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy[/tex]
instead of the actual equation
[tex]\frac{df}{dt} = \frac{\partial f}{\partial x} \frac{dx}{dt} + \frac{\partial f}{\partial y} \frac{dy}{dt}[/tex]

1. Well, I guess this depends upon what you mean by "valid". Is a "linear approximation" invalid simply because it is not an identity?

2. Trust me. While historians of mathematics have apparently not yet tackled the career of Elie Cartan (despite his extraordinary influence on the development of modern mathematics), I probably know more about his interests than you do. In particular, I know something about his interests in Lie algebras, differential equations and general relativity, as well as integration.

For Cartan's work on the central problem in Riemannian geometry (in fact a whole class of problems involving differential equations), try Peter J. Olver, Equivalence, Invariants, and Symmetry, Cambridge University Press, 1995. Notice that this work lies at the heart of the Karlhede algorithm in gtr. For more about Cartan's involvement in the early development of gtr, see Elie Cartan-Albert Einstein : letters on absolute parallelism, 1929-1932. English translation by Jules Leroy and Jim Ritter ; edited by Robert Debever, Princeton University Press, 1979. For more about Cartanian geometry (common generalization of Riemannian and Kleinian geometry), try R. W. Sharpe, Differential geometry : Cartan's generalization of Klein's Erlangen program, Springer, 1997. For "Newtonian spacetime", see the chapter in Misner, Thorne, and Wheeler, Gravitation, Freeman 1973.

It is, or IMO should be, very striking that these sources are almost completely independent of each other. Cartan's work is characterized by a remarkable coherence of purpose and scope, yet adds up to so much that even whole committees of authors can attempt to explain only bits and pieces.

For an attempted overview of Cartan's influence on modern mathematics, Francophones can try Elie Cartan et les mathématiques d'aujourd'hui, Lyon, 25-29 juin 1984 : the mathematical heritage of Elie Cartan, Société mathématique de France, 1985. For anglophones, an important textbook on mathematical physics, which is contemporary with Cartan's career, which emphasizes the utility of differential forms, and which might provide a few hints about why these techniques should be mastered by any serious student of mathematics, is Courant and Hilbert, Methoden der mathematischen Physik. This book went through various German language editions beginning in 1924. It has been translated into English (Interscience Publishers, 1953-62), and IMO remains valuable to this day!

ObsessiveMathsFreak said:
I have decided for myself. I don't approve of differential forms. At least, not as a replacement or improvement for vector calculus. That's just my own opinion, but I would ask others to consider this point of view before imposing forms arbitrarily on undergraduate courses.

Gosh. You certainly seem to be embittered. That is especially unfortunate since this really is such a lovely subject.

About your experience in school, I'd just comment that I think it is very unfair to assume that faculty make arbitrary decisions when designing curricula. I have spent enough time as a math student (and teacher) that I think I can confidently assure you that decisions of this kind, while never easy, are not made lightly.

ObsessiveMathsFreak said:
this discussion is in the context of differential forms being a replacement for vector calculus for ordinary physicists and engineers. As per my original point, I believe froms to be unsuited to this task. Whether by design or immaturity, they are not a suitable topic of study for most physicists involved in the study of either electromagnetism and especially fluid dymanics. They may, like other advanced mathematical topics, be of use in describing new theories or methods, but this thread is about their promotion for more basic studies, as per nrqed's initial post.

Well, I happened to know two of the mathematicians (John Hubbard and Beverly West) who redesigned the undergraduate curriculum at Cornell two decades ago, and I know that they did not take this responsibility lightly! And decades later, I see that the math courses have been redesigned again (good, these decisions should be revisited at least five times per century), but differential forms remain firmly at the heart of the applied mathematics background for the engineering major. See http://www.engineering.cornell.edu/programs/undergraduate-education/minors/applied-mathematics.cfm:
and note these two courses:
MATH 321 Manifolds and Differential Forms II
MATH 420 Differential Equations and Dynamical Systems

Quite frankly, I feel that this demanding curriculum is one reason why the Cornell Engineering School is one of the best: it ensures that graduates have mastered the techniques they will need to work as engineers (or to go on to graduate work in engineering).

ObsessiveMathsFreak said:
If I remember correctly, nrqed's initial post was in the context of several other threads on the topic of differential forms and possibly topology, where the supposed benefits of forms were being lauded to nrqed who, quite rightly, simply didn't see the benefit in the frankly massive amount of formalism required to study these topics. He's absolutely right. Topology in particular is now a disaster area for the newcomer. 100+ years of investigations, disproofs, counterexamples, theorems and revisions have led to the axioms and definitions of topology being completely unparsable.

For those whose minds are not made up, I would offer an alternative take on the question of why math courses are so demanding. New math builds on old math. New ideas which rest upon old ideas are not necessarily any harder to learn, as long as the student masters the older context first. A mathematically trained intuition is a very different thing from what a random process (natural selection) has equipped most humans with. Humans are adapted to learn, and do so very well, and many humans are probably quite capable of retraining their intuition to the point of being able to apply powerful theories like topology and the theory of manifolds in applications in physics, engineering, and other areas. But this retraining takes time.

Unfortunately, larger social issues force universities to try to churn out their graduates in four years, rather than the six to ten years which in my view would be more reasonable for most undergraduate students. This is really a problem too big for the universities, but I feel that it would be more intelligent to adjust upwards both the standard age when an educated youngish person is expected to enter the workforce, and the standard age when an oldish person is expected to retire.

ObsessiveMathsFreak said:
A great many topology books offer nothing but syntax with no semantics at all. Differential forms texts fare little better. To a good physicist, semantics is everything, and hence the subject will appear to the great majority of them to be devoid of use. That's actually a problem with a lot of mathematics, and modern mathematics in particular. Syntax is presented, but semantics is frequently absent.

I think that if you accept what I said just above, it may be that our positions are not so different after all. Perhaps our real difference is over whether you should blame the math faculty at your school, or the politicians who consistently fail to tackle important long range social issues in the country where you were (mis?)-educated.
 
  • #69
ObsessiveMathsFreak said:
To a good physicist, semantics is everything
That's wrong, of course. Without syntax, you are incapable of doing calculations, or communicating with others.

And besides, one can define semantics for a formal system in terms of the syntax itself, so you can't say that any formalism is inherently devoid of semantics.

But that's not the main reason I'm responding...

A great many topology books offer nothing but syntax with no semantics at all.
I'm going to have to call you on this one. Most of the terms one would learn in elementary topology can evoke an immediate geometric picture: open set, closed set, compact set, interior, exterior, boundary, connected set, path, pathwise-connected, sequence, sequentially compact, continuous function... Feynman even tells a story of how his (mathematical) colleagues would come to him and describe whatever scenario they had been working on, and Feynman would build a mental picture of what they were describing, and was generally quite accurate at guessing the result of their analysis.

Of course, one of the strengths of the axiomatic method is that it is syntactic, allowing the reader to interpret it in whatever context he desires. I guess, though, that causes a problem for a reader uninterested in forming those interpretations for himself, despite demanding they exist. :-p
 
  • #70
Doodle Bob said:
Well, that's a very slippery way of avoiding the essence of my assertion: you spend a great deal of time knocking modern mathematics (and topology in particular) as insignificant technobabble and very little talking about curriculum and undergraduate pedagogy.
I was not knocking modern mathematics. I was knocking the way modern mathematics is being taught. It is of course a problem throughout mathematics, where we have many people who are very good at it, but very few people who are very good at teaching it.

Chris Hillman said:
Sigh... Oh well, here's the longish post I wrote predicated on the (mistaken?) assumption that OMF is a twenty-something recent college graduate:
Just on this and the previous post. There's no reason to assume from my statement about andragogy that I am not a recent graduate. Nor is it correct to refer to the teaching of undergraduates as pedagogy. Everything from undergraduate up is andragogy. You are teaching people who are adults, and who only learn topics they feel are relevant to them.

Chris Hillman said:
I take it that one of them was Spivak's book, Calculus on Manifolds? You do realize that the goal of this book was not intended to do what you ask? I will go out on a limb and guess (from your username and the context of this thread) that your undergrad major was math, not physics or engineering. If so, I wonder if you might not have been in the wrong major.
Spivak's book was by far the best book on forms that I read. By far the best book on calculus, for that matter. I consider it as having completed a lot of things left out or papered over in my calculus education so far. Spivak was good because he was what so many other authors were not. That is, precise. He fully explained, in the required mathematical detail, what a form was, what it did, etc., etc. He did fall down a bit on tensors, but I think without the physics behind them, tensors remain too up in the air for full conceptual understanding.

My undergraduate degree was in applied mathematics, and I consider myself an applied mathematician. I see mathematics as a discipline to be learned, studied and indeed advanced in the context of problems, be they from physics, chemistry, statistics, or even philosophical questions. In retrospect I see my degree choice as being a very good one over physics, engineering or even theoretical physics.

Chris Hillman said:
Well, if a worked example was the first thing you wanted, it is certainly too bad that you didn't start with the book by Flanders...
I've taken your recommendation and ordered it. If I see forms working well on a problem, perhaps I'll see them in a new light. But I must mention that I have seen them at work on a good many problems, and I have not yet seen any great advantage in the method.

Chris Hillman said:
Gosh. You certainly seem to be embittered. That is especially unfortunate since this really is such a lovely subject.
I would describe differential forms as many things. Formal, certainly. Interesting, there is no doubt. They can even be useful when one moves into higher dimensions. But lovely is not a word I would use for a topic that allows old ghosts of maths classes past, like [tex]dx + dy[/tex], to rise up and walk the Earth once more.

Chris Hillman said:
Unfortunately, larger social issues force universities to try to churn out their graduates in four years, rather than the six to ten years which in my view would be more reasonable for most undergraduate students.
I would strongly think otherwise. Four years is quite a reasonable enough amount of time to spend in any undergraduate degree. Anything more would be far too much.

I understand that in the United States, when people finish their degree, they go on to do six years of coursework to obtain a PhD! I would strongly disagree with this. This is far too much to ask anyone to do. Where I am, the regime is that PhDs are granted through research. Your research could take, usually, between three and five years. In that time, you truly do learn the skills of your trade, and I can personally say I learned a lot faster, and a lot more, by researching than I ever did taking classes.

Most topics would only really require a good solid week in a workshop anyway. Differential forms for example. I spent about a month dipping in and out of it. To be honest I don't think a huge amount more is required in most fields, especially if you may not end up using the topic much. Not just differential forms, any topic. I don't agree with spending ten years in classes. I think you learn more out of them than in, on your own initiative of course.

Chris Hillman said:
I think that if you accept what I said just above, it may be that our positions are not so different after all. Perhaps our real difference is over whether you should blame the math faculty at your school, or the politicians who consistently fail to tackle important long range social issues in the country where you were (mis?)-educated.
I blame the mathematicians. They're not precise enough.

Hurkyl said:
That's wrong, of course. Without syntax, you are incapable of doing calculations, or communicating with others.

And besides, one can define sematics for a formal system in terms of the syntax itself, so you can't say that any formalism is inherently devoid of semantics.
I think syntax and semantics should come together. In synergy. One cannot understand one without the other. I'm a big believer in introducing every new mathematical theory or concept via a problem, because that is invariably where it originated.


Hurkyl said:
I'm going to have to call you on this one. Most of the terms one would learn in elementary topology can evoke an immediate geometric picture: open set, closed set, compact set, interior, exterior, boundary, connected set, path, pathwise-connected, sequence, sequentially compact, continuous function...
The terms might evoke intuitive ideas and pictures. The definitions certainly do not. Despite any impressions I may be giving off, I still consider myself a mathematician, and precision and exactness are important to me.

In this regard, even though the concept of an open set is perhaps intuitive, I need a precise and clear definition to move on. Most topology books, in fact every topology book I have ever read, fail to meet this criterion. While the definitions are probably precise, they are as far from clear and intuitive as it is possible to be.
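For concreteness, here is one standard phrasing of the metric-space definition (my own wording, not quoted from any particular book):

```latex
% One standard definition of an open set in a metric space (X, d).
% A subset U of X is open if every point of U has some breathing room:
% a whole epsilon-ball about the point stays inside U.
\[
U \subseteq X \text{ is open} \iff
\forall x \in U,\ \exists \varepsilon > 0 :
B_\varepsilon(x) = \{\, y \in X : d(x,y) < \varepsilon \,\} \subseteq U.
\]
% General topology abstracts this: the open sets are simply the members
% of a chosen family (the topology) containing X and the empty set and
% closed under arbitrary unions and finite intersections.
```

The metric version at least draws the picture the abstract axioms leave out.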

For quite a while, I took a compact set to be a single point or element, because the definition given was: "A set is compact if every open cover has a finite sub-cover". Seeing this in the context of the support of the delta distribution, I took the definition straightforwardly as describing a point, since the author had used the strict subset notation when describing a subcover. The author sacrificed clarity for terseness, unnecessarily in my opinion. Simply stating "A set is compact if every open cover is either finite or has a finite sub-cover" would be a perfectly clear definition, one where the compactness of particular sets could be inferred immediately without invoking subcovers, and where notational laxity would not cause problems later down the road.
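To spell out the convention that tripped me up (my own phrasing; in the usual reading a subcover is allowed to be the whole cover, i.e. it need not be a proper subfamily):

```latex
% Standard definition of compactness, with the subcover allowed
% to be improper:
\[
K \text{ is compact} \iff
\forall \{U_\alpha\}_{\alpha \in A} \text{ open with }
K \subseteq \bigcup_{\alpha \in A} U_\alpha,\quad
\exists \alpha_1, \dots, \alpha_n \in A :
K \subseteq U_{\alpha_1} \cup \dots \cup U_{\alpha_n}.
\]
% Under this reading any finite set is compact (pick one U_alpha per
% point), while (0,1) is not: the cover {(1/n, 1) : n >= 2} has no
% finite subcover, since any finite subfamily unions to (1/N, 1) for
% its largest N and so misses (0, 1/N].
```

Had the book said "not necessarily proper" in one parenthetical, the misreading would never have arisen.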

This is only one of many examples where topology books resemble a house of mirrors more than what they should resemble, namely "Calculus on Manifolds": definitions, examples and exercises. Explanations wouldn't go amiss either.
 
