What Makes Differential Forms Click?

In summary, this author explains differential 1-forms as maps from directed line segments to the real numbers, and differential 2-forms as maps from oriented triangles to the real numbers. The pullback, as far as the author can see, just "pulls" an integral in the x variables "back" to one parametrized variable.
  • #1
sponsoredwalk
*Bit of reading involved here, worth it if you have any interest in, or
knowledge of, differential forms*.

It took me quite a while to find a good explanation of differential forms & I
finally found something that made sense, in a sense. Most of what I've written
below is just asking you to judge its general correctness, the notation, etc., as
some of it is based on intuition built from the material I read. At the bottom you'll
see I have an issue with projections & a concern about throwing around minus signs.
Also I can't find a second source that describes forms this way, so hopefully someone
will learn something :cool: If anyone finds a source with a comparable
explanation please let me know :cool:

A single variable differential 1-form is a map of the form:

[itex] f(x) \ dx \ : \ [a,b] \rightarrow \ \int_a^b \ f(x) \ dx[/itex]

When you take the constant 1-form it becomes clearer:

[itex] dx \ : \ [a,b] \rightarrow \ \int_a^b \ \ dx \ = \ \Delta x \ = \ b \ - \ a[/itex]

Okay, didn't know that's what a form was :blushing: Beautiful stuff! In my
favourite kind of notation too!

This looks an awful lot like the linear algebra idea of a linear functional in
a vector space (V,F,σ,I):

[itex] f \ : \ V \ \rightarrow \ F [/itex]

where you satisfy the linearity property.

In more than one variable you can have:

[itex] dx \ : \ [a_1,b_1] \rightarrow \ \int_{(a_1,a_2)}^{(b_1,b_2)} \ \ dx \ = \ b_1 \ - \ a_1[/itex]

[itex] dy \ : \ [a_2,b_2] \rightarrow \ \int_{(a_1,a_2)}^{(b_1,b_2)} \ \ dy \ = \ b_2 \ - \ a_2[/itex]

which leads me to think that the following notation makes sense:

[itex] dx \ + \ dy \ : \ [a_1,b_1]\times [a_2,b_2] \rightarrow \ \int_{(a_1,a_2)}^{(b_1,b_2)} \ \ dx \ + \ dy \ = \ \int_{(a_1,a_2)}^{(b_1,b_2)} \ \ dx \ + \ \int_{(a_1,a_2)}^{(b_1,b_2)} \ dy = \ (b_1 \ - \ a_1) \ + \ (b_2 \ - \ a_2) \ = \ \Delta x \ + \ \Delta y[/itex]

The pull back stuff just "pulls" an integral in x variables "back" to 1
parametrized variable as far as I can see.

If [itex] \overline{a} \ =\ (a_1,a_2)[/itex] & [itex] \overline{b} \ =\ (b_1,b_2)[/itex] then:

[itex] \lambda_1 dx \ + \ \lambda_2 dy \ : \ [a_1,b_1]\times [a_2,b_2] \rightarrow \ \int_{ \overline{a}}^{ \overline{b}} \ \ \lambda_1 dx \ + \ \lambda_2 dy \ = \ \lambda_1 \int_{(a_1,a_2)}^{(b_1,b_2)} dx \ + \ \lambda_2 \int_{(a_1,a_2)}^{(b_1,b_2)} dy = \ \lambda_1 (b_1 \ - \ a_1) \ + \ \lambda_2 (b_2 \ - \ a_2) \ = \ \lambda_1 \Delta x \ + \ \lambda_2 \ \Delta y[/itex]
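
Just to make that concrete, here's a quick Python sketch (my own illustration, not from the book) of a constant 1-form λ₁dx + λ₂dy viewed as a machine that eats the directed segment from (a₁,a₂) to (b₁,b₂) and returns λ₁Δx + λ₂Δy; reversing the segment flips the sign:

[code]
# A constant 1-form l1*dx + l2*dy evaluated on the directed segment
# from a = (a1, a2) to b = (b1, b2): it returns l1*(b1 - a1) + l2*(b2 - a2).

def constant_one_form(l1, l2):
    def omega(a, b):
        return l1 * (b[0] - a[0]) + l2 * (b[1] - a[1])
    return omega

omega = constant_one_form(3.0, -2.0)    # the 1-form 3 dx - 2 dy
print(omega((1.0, 1.0), (4.0, 5.0)))    # 3*3 + (-2)*4 = 1.0
print(omega((4.0, 5.0), (1.0, 1.0)))    # reversed orientation flips the sign: -1.0
[/code]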

That's the notable stuff for 1-forms - also that they can be extended to
n dimensions very explicitly with this notation, & the coefficients don't have to be
constant. The vector parallels (notably Work!) are just jumping out
already!

I'd like to quote the book now:

"Differential 1-forms are mappings from directed line segments to
the real numbers. Differential 2-forms are mappings from oriented
triangles to the real numbers
".

So, by this comment, what we're doing with a differential 2-form is
finding the area of a triangle. What do you do when you find areas?
Use the cross product! How does the cross product work? It works
by finding the area contained within (n - 1) vectors & expressing it
via a vector in n-space! :cool: Furthermore, from what I gather the whole
theory is integration via simplices - p-dimensional triangles - or at least
that's the general idea.

So if we have a positively oriented triangle:

[Attached figure: a positively oriented triangle with vertices [itex]\overline{a}, \overline{b}, \overline{c}[/itex].]



which we denote by [itex] T \ = \ [ \overline{a},\overline{b},\overline{c}][/itex] (this is all done in ℝ² for now).
What is the area of the triangle?

[itex] A \ = \ \frac{1}{2} \cdot b \cdot h \ = \ \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/itex]

Just extend it to 3 dimensions for the calculation & you get the result.

If you go from [itex] \overline{a}[/itex] to [itex] \overline{b}[/itex] to [itex] \overline{c}[/itex] you have

[itex] T \ = \ [ \overline{a},\overline{b},\overline{c}][/itex]

which is defined as a positive orientation & if you go from [itex] \overline{a}[/itex] to [itex] \overline{c}[/itex] to [itex] \overline{b}[/itex] you have

[itex] T \ = \ [ \overline{a},\overline{c},\overline{b}][/itex]

which is defined as a negative orientation.

For the particular triangle in the attached figure (which has area 6):

[itex] dx \ dy \ : \ [ \overline{a},\overline{b},\overline{c}] \ \rightarrow \ 6[/itex]

[itex] dx \ dy \ : \ [ \overline{a},\overline{c},\overline{b}] \ \rightarrow \ - 6[/itex]


This is made clearer with the notation:

[itex] dx \ dy \ : \ T \ \rightarrow \ \int_T \ dx \ dy \ = \ \ \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/itex]

All of this I think I understand.
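
As a numerical sanity check on the signed-area picture, here's a small Python sketch (my own, not from Bressoud; the overall sign for a given vertex order depends on which cross-product convention is used, but the key point - swapping two vertices flips the sign - comes out either way):

[code]
# Signed value assigned to an oriented triangle T = [a, b, c] in R^2,
# computed as in the text via the z-component of the cross product.
# The absolute value is the triangle's area; swapping two vertices,
# i.e. reversing the orientation, flips the sign.

def cross_z(u, v):
    # z-component of the 3D cross product of two vectors lying in the xy-plane
    return u[0] * v[1] - u[1] * v[0]

def two_form_dxdy(a, b, c):
    return 0.5 * cross_z((c[0] - a[0], c[1] - a[1]),
                         (b[0] - a[0], b[1] - a[1]))

a, b, c = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
print(two_form_dxdy(a, b, c))   # -6.0 with this ordering convention
print(two_form_dxdy(a, c, b))   # swapping b and c reverses orientation: +6.0
[/code]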

The next issue is defining 2-forms in 3 dimensional space. There is
talk of projections and such, I don't quite understand what's going on
though.

The projection of a point (x,y,z) onto the x-y plane is (x,y,0).

The projection of the triangle

[itex] T \ = \ [ \overline{a},\overline{b},\overline{c}] \ = \ [(a_1,a_2,a_3),(b_1,b_2,b_3),(c_1,c_2,c_3)][/itex]

onto the x-y plane is

[itex] T \ = \ [ \overline{a},\overline{b},\overline{c}] \ = \ [(a_1,a_2,0),(b_1,b_2,0),(c_1,c_2,0)][/itex].

They say that they will define the differential form [itex]dx \ dy[/itex]
to be the mapping from the oriented triangle [itex]T[/itex] to the
signed area of its projection onto the x-y plane,
which is the z coordinate of [itex] \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/itex] :confused:

That doesn't make much sense, but I read on & see that

[itex] \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a})) \ = \ ( \int_T dy \ dz, \int_T dz \ dx, \int_T dx \ dy)[/itex]

Now, this makes sense in that what is orthogonal to dx dy is something
in the z coordinate, and what is orthogonal to dy dz is in the x
coordinate etc... But what does that justification paragraph actually say?
I get the feeling it's an insight I should know about, I don't understand
what's going on with the projections. I think that if I did I would have
predicted the integrals in the coordinates the way it's set up there!
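
Here's a small numerical check of that displayed identity, i.e. that the three components of [itex] \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/itex] really are the signed areas of the projections of T onto the yz-, zx- and xy-planes (my own sketch; the sample triangle is an arbitrary choice):

[code]
# Check: the components of (1/2)*((c - a) x (b - a)) equal the signed areas
# of the projections of T = [a, b, c] onto the yz-, zx- and xy-planes.

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def signed_proj_area(a, b, c, i, j):
    # signed area of the projection of [a, b, c] onto the (i, j)-coordinate
    # plane, using the same (c - a), (b - a) ordering as the text
    u, v = sub(c, a), sub(b, a)
    return 0.5 * (u[i]*v[j] - u[j]*v[i])

a, b, c = (1.0, 0.0, 2.0), (3.0, 1.0, 0.0), (0.0, 4.0, 1.0)
half_cross = tuple(0.5 * x for x in cross(sub(c, a), sub(b, a)))
projections = (signed_proj_area(a, b, c, 1, 2),   # dy dz  <-> x-component
               signed_proj_area(a, b, c, 2, 0),   # dz dx  <-> y-component
               signed_proj_area(a, b, c, 0, 1))   # dx dy  <-> z-component
print(half_cross)    # (-3.5, -2.0, -4.5)
print(projections)   # the same triple
[/code]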

Let me quote the actual paragraph in its entirety just in case:

We define the differential 2-form [itex]dx \ dy[/itex] in 3 dimensional
space to be the mapping from an oriented triangle [itex] T \ = \ [ \overline{a},\overline{b},\overline{c}][/itex]
to the signed area of its projection onto the x,y plane, which is the z
coordinate of
[itex] \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/itex].
Similarly, [itex]dz \ dx [/itex] maps this triangle to the signed area
of its projection onto the z,x plane, which is the y coordinate of
[itex] \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/itex].
The 2-form [itex]dy \ dz [/itex] maps this triangle to the signed area
of its projection onto the y,z plane, the x coordinate of
[itex] \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/itex].
"Second Year Calculus - D. Bressoud".

Also, based on all of that writing I still don't know why dx dy = - dy dx :frown:

What I mean by this is that after all that good work he just defines
dx dy to be the area of that triangle; I think you can see my problem
lies with the projection issues. I think if I understand what's going on
with the projections I'll get it. Please don't just resort to telling me that
[itex] \hat{k} \ \times \ \hat{j} \ = \ -\hat{i}[/itex] :smile:,
I mean I can justify things as they stand in a sense because I know dy dx
goes to the z dimension with the negative of what happens when dx dy goes
to the z-axis (from the algebra involved in the cross product derivation
via the orthogonality of the dot product
) but I still feel like something
is missing or suspect. I don't feel very confident about this because of orientation:


[itex] dx \ dy \ : \ [ \overline{a},\overline{b},\overline{c}] \ \rightarrow \ 6[/itex]

[itex] dx \ dy * \ : \ [ \overline{a},\overline{c},\overline{b}] \ \rightarrow \ - 6[/itex]

I mean dx dy = 6 = - (-6) = - dx dy *, I just feel a little iffy about
throwing out minus signs to justify anti-commutativity issues! I don't
think that dx dy = 6 = - dy dx = dx dy* = - (-6), but that could just be
confusion.

So, the question is just about the general correctness of what I wrote
& then the issue of projections. I couldn't just post a question about
projections because I'm not 100% sure my take on the theory that
leads up to this is 100% accurate (I think it is though!). To be quite
honest, seeing as I have spent ages trying to find someone who would
explain the theory in this way, and have been unable to find anyone
who would, I think very few people view the subject this way,
& as such I'd love to see how different it is for someone who takes
the axiomatic, anti-commutative definitions that are found in nearly all
of the books on Google. If this is/isn't new please let me know anyway
(and help if possible :redface:)! :biggrin:
 
  • #2
If your 1-form maps "directed line segments to the real numbers", maybe

[tex]f(x) \ dx \ : \ [a,b] \rightarrow \ \int_a^b \ f(x) \ dx[/tex]

could be written like this:

[tex]f(x) \, dx: \mathbb{R}^2 \rightarrow \mathbb{R}[/tex]

such that

[tex]f(x) \, dx (a,b)=\int_{a}^{b}f(x) \, dx.[/tex]

Then the choice of inputs determines the bounds of the line segment, and whether the first is less than the second determines the orientation. Seems like differential 1-form means the integral of a specific function, but with limits of integration yet to be specified? Like an indefinite integral, except with its two hands held out and a pleading look, as if to say, "Don't forget to give me two numbers! I'm not finished here."
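
A rough Python sketch of that reading, where the 1-form f(x) dx is literally a function still waiting for the two numbers a and b (the trapezoidal quadrature is just my own choice for the illustration):

[code]
# Represent "f(x) dx" as a function of the endpoints (a, b) of a directed
# line segment: it returns the integral of f from a to b (approximated
# here with a plain trapezoidal sum).

def one_form(f, n=10_000):
    def omega(a, b):
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
        return total * h
    return omega

omega = one_form(lambda x: 3 * x**2)   # the 1-form 3x^2 dx
print(omega(0.0, 2.0))                 # ~ 8.0  (= 2^3 - 0^3)
print(omega(2.0, 0.0))                 # ~ -8.0: reversing the segment flips the sign
[/code]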
 
  • #3
Rasalhague said:
If your 1-form maps "directed line segments to the real numbers", maybe

[tex]f(x) \ dx \ : \ [a,b] \rightarrow \ \int_a^b \ f(x) \ dx[/tex]

could be written like this:

[tex]f(x) \, dx: \mathbb{R}^2 \rightarrow \mathbb{R}[/tex]

such that

[tex]f(x) \, dx (a,b)=\int_{a}^{b}f(x) \, dx.[/tex]

Then the choice of inputs determines the bounds of the line segment, and whether the first is less than the second determines the orientation. Seems like differential 1-form means the integral of a specific function, but with limits of integration yet to be specified? Like an indefinite integral, except with its two hands held out and a pleading look, as if to say, "Don't forget to give me two numbers! I'm not finished here."

I know what you're trying to say but I think you made a mistake:

[tex]f(x) \, dx: \mathbb{R}^2 \rightarrow \mathbb{R}[/tex]

This describes a two-dimensional domain whereas:

[tex]f(x) \ dx \ : \ [a,b] \rightarrow \ \int_a^b \ f(x) \ dx[/tex]

Is a closed one-dimensional interval on ℝ. I was thinking of some notation of the form:

[itex] f(x) \ dx \ : \ \mathbb{R} \ \rightarrow \ \mathbb{R} \ | \ x \ \mapsto \ \int_a^b \ f(x) \ dx [/itex]

(Note that this notation [itex] f \ : \ \mathbb{R} \ \rightarrow \ \mathbb{R} \ | \ x \ \mapsto \ f(x)[/itex] is just shorthand for [itex] f \ : \ \mathbb{R} \ \rightarrow \ \mathbb{R} \ \text{such that} \ f : \ x \ \mapsto \ f(x)[/itex]), and for:

[itex] dx \ + \ dy \ : \ [a_1,b_1]\times [a_2,b_2] \rightarrow \ \int_{(a_1,a_2)}^{(b_1,b_2)} \ \ dx \ + \ dy \ = \ (b_1 \ - \ a_1) \ + \ (b_2 \ - \ a_2) \ = \ \Delta x \ + \ \Delta y [/itex]

some notation of the form:

[itex] dx \ + \ dy \ : \ \mathbb{R}^2 \ \rightarrow \ \mathbb{R} \ | \ (x,y) \ \mapsto \
\int_{(a_1,a_2)}^{(b_1,b_2)} \ \ dx \ + \ dy \ = \ \Delta x \ + \ \Delta y [/itex]

You know? The notation dx : [a,b] → ∫_a^b dx = b - a = Δx
implicitly encodes the limits of integration as far as I can tell.
 
  • #4
The standard notation for defining a function goes like this

[tex]\phi : A \rightarrow B \; | \; a \mapsto b[/tex]

where [itex]A[/itex] and [itex]B[/itex] are sets, with [itex]a \in A[/itex] and [itex]b \in B[/itex]. [itex]b[/itex] is the unique element of [itex]B[/itex] which the function associates with [itex]a[/itex]. Or

[tex]\phi : A \rightarrow B \; | \; \phi(a) = b.[/tex]

So the verbal description, a map from the set of directed line segments to the set of real numbers, made me think you might mean

[tex]f(x) \; dx : \left \{ [a,b]: a,b \in \mathbb{R} \right \} \rightarrow \mathbb{R} \; | \; [a,b] \mapsto \int_{a}^{b}f(x) \, dx,[/tex]

or equivalently

[tex]f(x) \; dx : \mathbb{R}^2 \rightarrow \mathbb{R} \; | \; (a,b) \mapsto \int_{a}^{b}f(x) \, dx.[/tex]

I'm fairly sure that's grammatical. Whether it's right is another matter : )

A problem with

[tex] f(x) \ dx \ : \ \mathbb{R} \ \rightarrow \ \mathbb{R} \ | \ x \ \mapsto \ \int_a^b \ f(x) \ dx [/tex]

is that it seems to make the input not an interval of the form [a,b], or the two numbers, a and b, needed to specify it, but a single number, x, in which case, since the integral on the right doesn't actually depend on x, a 1-form would be a rather trivial function (of necessity constant). It would raise the question: given that

[tex]f(x) \; dx (x) = \int_{a}^{b}f(x) \, dx,[/tex]

what would be a good letter to denote the 1-form defined by

[tex]\int_{c}^{d}f(x) \, dx \enspace ?[/tex]
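
To put that objection in concrete terms (again just my own illustration): if the input really were a single number x, the map would have to ignore it, since the value doesn't depend on x at all:

[code]
# If a "1-form" took a single number x as input, it would ignore it:
# the integral below is computed once from the fixed limits a and b,
# so the returned function of x is necessarily constant.

def bad_one_form(f, a, b, n=10_000):
    h = (b - a) / n
    value = (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))) * h
    return lambda x: value              # x is never used

w = bad_one_form(lambda t: 3 * t**2, 0.0, 2.0)
print(w(-5.0), w(0.0), w(100.0))        # the same ~8.0 for every x
[/code]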
 
  • #5
Made a better post; my old one is at the bottom in small writing, probably best ignored,
but maybe browse it if what my new post says isn't correct.

New post:

There's a serious problem with the notation as it stands & I've been
trying to figure it out over the week but haven't gotten very far.

sponsoredwalk said:
A single variable differential 1-form is a map of the form:

[itex] f(x) \ dx \ : \ [a,b] \rightarrow \ \int_a^b \ f(x) \ dx[/itex]

When you take the constant 1-form it becomes clearer:

[itex] dx \ : \ [a,b] \rightarrow \ \int_a^b \ \ dx \ = \ \Delta x \ = \ b \ - \ a[/itex]

Now, if we write a function in standard notation:

[itex] f \ : \ \mathbb{R} \ \rightarrow \ \mathbb{R} [/itex]

where

[itex] f \ : \ x \ \mapsto \ f(x) \ = \ y[/itex];

All concatenated into:

[itex] f \ : \ \mathbb{R} \ \rightarrow \ \mathbb{R} \ | \ x \ \mapsto \ f(x) \ = \ y[/itex];

we see it's very different from notation of the form:

[itex] f(x) \ dx \ : \ [a,b] \rightarrow \ \int_a^b \ f(x) \ dx[/itex].

Now, the notation for a linear functional is of the form:

[itex] f \ : \ \mathbb{V} \ \rightarrow \mathbb{R} \ | \ \overline{v} \ \mapsto \ f( \overline{v}) \ = \ v[/itex]

where [itex] \mathbb{V}[/itex] is a vector space, [itex] \mathbb{R}[/itex] is the reals, [itex] \overline{v} [/itex]
is a vector & v is a scalar. I'm trying to put it all together, i.e. the
interval [a,b], functions of the form Pdx + Qdy + Rdz etc... .

This is going to have to be done carefully, I don't want to lose this!
I have an idea of maybe taking the interval [a,b] = I and mapping it
to a subset of Rⁿ to create functions of the form Pdx + Qdy + Rdz
because:

Definition 5.1.1 A 1−form φ on U ⊆ Rⁿ (either n = 2 or n = 3) assigns, for every
p ∈ U ⊆ Rⁿ, a linear map φ|p : Rⁿ → ℝ.
link: http://math.uh.edu/~minru/Riemann09/diffformn.pdf
by this definition I think we can justify the appearance of weird terms
that need to be integrated. Then since the integral of a differential form
is so like a linear functional we can create a map like φ : ℝⁿ → ℝ to get
the scalar value for the integral. What I'm trying to hint at is a
composition of maps going from I → ℝⁿ → ℝ. The following passage
kind of gave me this idea:

Here, where [itex] \alpha \ = \ \sum_i \ f_i dx_i[/itex], is one discussion:

Let U be an open subset of Rⁿ. A parametrized curve in U is a smooth
mapping c : I → U from an interval I into U. We want to integrate over I.
To avoid problems with improper integrals we assume I to be closed and
bounded, I = [a,b]. (Strictly speaking we have not defined what we mean
by a smooth map c : [a,b] → U. The easiest definition is that c should be
the restriction of a smooth map c⁰ : (a - ε, b + ε) → U defined on a
slightly larger open interval.) Let α be a 1-form on U. The pullback c*α is
a 1-form on [a,b], and can therefore be written as c*α = g dt
(where t is the coordinate on ℝ). The integral of α over c is now defined by

[itex] \int_c \alpha \ = \int_{[a,b]} \ c^* \alpha \ = \ \int_a^b g(t)dt [/itex]
http://www2.bc.cc.ca.us/resperic/mathb6c/DifferentialForms.pdf
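
Here's a rough numerical sketch of that recipe: pull a 1-form α = P dx + Q dy back along a parametrized curve c(t) to get g(t) dt with g(t) = P(c(t))x'(t) + Q(c(t))y'(t), then integrate g over [a,b]. (The particular α, curve and quadrature below are my own choices, just to watch the machinery work.)

[code]
import math

def integrate_pullback(P, Q, c, a, b, n=20_000):
    # Integrate the 1-form P dx + Q dy over the curve c : [a, b] -> R^2
    # by integrating the pullback g(t) dt, with c'(t) estimated by a
    # central finite difference and the integral done by the midpoint rule.
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        x0, y0 = c(t - 1e-6)
        x1, y1 = c(t + 1e-6)
        dxdt, dydt = (x1 - x0) / 2e-6, (y1 - y0) / 2e-6
        x, y = c(t)
        total += (P(x, y) * dxdt + Q(x, y) * dydt) * h   # g(t) dt
    return total

# alpha = -y dx + x dy around the unit circle should give 2*pi
P = lambda x, y: -y
Q = lambda x, y: x
circle = lambda t: (math.cos(t), math.sin(t))
print(integrate_pullback(P, Q, circle, 0.0, 2 * math.pi))   # ~ 6.28318...
[/code]
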
By using this idea I think we can create a more rigorous idea, if
c : I → U | t ↦ c(t) & if φ : ℝⁿ → ℝ then we can create φ o c : I → ℝ.
Now I know that's wrong & there's no dx's anywhere & that I think
the c is supposed to be dot-product'ed with dx or something but you
get the gist of what I'm saying.

I don't see how

[itex] f(x) \ dx \ : \ [a,b] \rightarrow \ \int_a^b \ f(x) \ dx[/itex]

makes any sense though, a map takes an element of an interval (set)
and maps it to an element of the other set but here it seems to be
mapping an interval to a scalar number & it only makes sense to me
if you make the interval (b - a) into a vector v (an idea which can
extend this stuff into higher dimensions!) & then view dx as a function
taking in v & spitting out a scalar value, i.e.

[itex] dx \ : \ \mathbb{V} \ \rightarrow \mathbb{R} \ | \ \overline{v} \ \mapsto \ dx( \overline{v}) \ = \ b \ - \ a \ = \ \Delta x[/itex]

which is a scalar. So if you are talking about functions like Pdx + Qdy + Rdz
& maps like f(x)dx : ... I mean aren't you going to need an intermediate set,
something like I → ℝⁿ → ℝ? Hopefully that makes sense, let me know what you think!
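
In code, that reading of dx as a linear functional eating a displacement vector looks like this (my own toy sketch):

[code]
# dx and dy as linear maps that take a displacement vector v = b - a
# and return one of its components.

def dx(v):
    return v[0]

def dy(v):
    return v[1]

v = (4.0 - 1.0, 5.0 - 1.0)      # displacement from (1,1) to (4,5)
print(dx(v), dy(v))             # 3.0 4.0, i.e. Delta x and Delta y
print(3 * dx(v) - 2 * dy(v))    # the constant 1-form 3 dx - 2 dy applied to v: 1.0
[/code]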

Btw, this is an aside from the other perplexing question about minus
signs, anti-commutativity & orientation. Tough stuff!
--------

Old post:

It's very important to sort this notation out; what's going on here?

dx : [a,b] → ∫_a^b dx = b - a = Δx

This notation is half like the notation f : ℝ → ℝ & half like the notation
f : x ↦ f(x) = y but I mean you're right it's taking an interval as a single
number or something, it doesn't make sense really as it stands.

I read around & collected a few quotes discussing this topic at an
intelligible level:

Here, where [itex] \alpha \ = \ \sum_i \ f_i dx_i[/itex], is one discussion - the same passage from
http://www2.bc.cc.ca.us/resperic/mathb6c/DifferentialForms.pdf that I quoted in the new post above.
Here is a discussion from one book:

"Let ω(x,y) = P(x,y) dx + Q(x,y) dy be a differential form in a domain D of ℝ² endowed
with coordinates (x,y)...

... For a 1-form ω = P dx + Q dy + R dz defined in a domain D of ℝ³ we obtain...

link

Here is the discussion from a link online:

A differential form (also denoted as exterior1 differential form) is, informally, an
integrand, i.e., a quantity that can be integrated. It is the dx in ∫dx and the dx dy in
∫∫ dx dy.

More precisely, consider a smooth function F(x) over an interval in R. Now, define
f(x) to be its derivative, that is, f(x) = dF/dx

Rewriting this last equation (with slight abuse of notation for simplicity)
yields dF = f(x)dx, which leads to:

∫_a^b dF = ∫_a^b f(x) dx = F(b) - F(a).

This last equation is known as the Newton-Leibniz formula, or the first fundamental
theorem of calculus. The integrand f(x) dx is called a 1-form, because it can only be
integrated over a 1-dimensional (1D) real interval. Similarly, for a function
G(x; y; z), we have:
[itex]dG \ = \ \frac{ \partial G}{\partial x} \ dx \ + \ \frac{ \partial G}{\partial y} \ dy \ + \ \frac{ \partial G}{\partial z} \ dz[/itex]

which can be integrated over any 1D curve in ℝ³, and is also a 1-form.

link

http://www.math.lsa.umich.edu/~idolga/285stokes.pdf is worth reading
as well (too much tex to re-write!).

Here is another:
Definition 5.1.1 A 1−form φ on U ⊆ Rⁿ (either n = 2 or n = 3) assigns, for every
p ∈ U ⊆ Rⁿ, a linear map φ|p : Rⁿ → ℝ.
http://math.uh.edu/~minru/Riemann09/diffformn.pdf

So everything is all over the place. Sometimes it's f : I ⊆ ℝ → ℝⁿ & other times it's
f : ℝⁿ → ℝ. Maybe you can piece some of the above together with the stuff described
in the first post? I think the first thing I quoted justifies the I = [a,b] → ℝⁿ stuff as
being a parametrization of the interval [a,b] into something like ω = Pdx + Qdy,
but wouldn't it have to be ω(t) = P(t) dx + Q(t) dy or something? In any case this then
has to go something like f : ℝⁿ → ℝ | t ↦ ∫f dx = b - a = Δx (which doesn't
make much sense as I wrote it, but you get the gist!). It has to be something like that
because it's the form of a linear functional. That's confused though, because my first post
had dx : [a,b] → ∫_a^b dx = b - a = Δx ! What does it even mean?

Just putting this out there as I figure it out, let me know if you come up with anything!
 
  • #6
This is pretty good:

3. What do forms do?
But what does a 1-form do? For example, a function f can be applied to
a number x to produce another number, f(x). What can a form be applied
to? The answer is: a k-form, where 0 ≤ k ≤ 3, acts on k-tuples of
vectors (v¹, . . . , vᵏ) and the output it produces is a real number.
For example, if ω is a 1-form and v ∈ ℝ³ a vector, then ω(v) is defined
and is just a real number. Similarly, if Φ is a 2-form, then for any two
vectors v,w, Φ(v,w) is a real number, and so on.

Furthermore, ω(v + w) = ω(v) + ω(w) and ω(tv) = tω(v), for any scalar
t. In other words, ω is a linear function on ℝ³.

It’s a little more complicated with 2-forms Φ, since they depend on two
vectors. Now, if we fix w, then v ↦ Φ(v,w) is a linear function on ℝ³ and
if we fix v, then w ↦ Φ(v,w) is also a linear function. This can be
summarized by saying that Φ is bilinear. Moreover, Φ(w,v) = −Φ(v,w),
i.e., Φ is anti-symmetric. (Note that Φ(v,v) is always zero.)

But how do we compute ω(v)? For that, we need to know what dx(v),
dy(v), and dz(v) are. The answer is easy: dx(v) is just the x- (i.e., first)
component of v, dy(v) is the y-component of v, etc.

For example, if ω = y dx + z dy − π dz and v = (−1, 0, 2), then
ω(v) = y dx(v) + z dy(v) − π dz(v) = −y − 2π.

...

6. Integration of forms
A k-form can be integrated over a (piecewise smooth) k-dimensional
object. For instance, 1-forms are integrated over curves, 2-forms over
surfaces, and 3-forms over 3-dimensional solids. We will only define the
integral of a 1-form ω over a curve C. If C is parametrized by
γ : [a, b] → ℝ³, i.e., C = {γ(t) : a ≤ t ≤ b}, then

[itex] \int_C \ \omega \ = \ \int_a^b \ \omega( \gamma ' (t))dt [/itex]

Observe that ω(γ'(t)) is just a scalar function of t and the integral on the
right-hand side is just the ordinary Riemann integral. It can be shown that
[itex]\int_C \ \omega [/itex] does not depend on the choice of the
parametrization of C.
http://www.math.sjsu.edu/%7Esimic/Fall10/Whatis/diff-forms.pdf seems to answer the question about
projections & areas (I think; I have to do it all properly).


So there's a looming question about

dx : [a,b] → ∫_a^b dx = b - a = Δx

or rather

f(x)dx : [a,b] → ∫_a^b f(x) dx

and why my first post on this has the author defining differential forms
to be integrals as just described here while the last few posts have the
authors defining forms without mentioning integrals :confused: I think [a,b] is
just the x-axis vector whose length is (b - a) & so:

ω = y dx + x dy : [a,b] × [c,d] → ℝ | v = (b - a, d - c) ↦ y dx(v) + x dy(v).

But since
f(x)dx : [a,b] → ∫_a^b f(x)dx, it could be something like:

ω = y dx + x dy : [a,b] × [c,d] → ℝ | v = (b - a, d - c) ↦ y dx(v) + x dy(v).

I don't know.

/tired...
 
  • #7
Thanks for the links. Currently reading Garrity, as he's describing clearly some things that have confused me before, and I've just dipped into a few of the others so far. I have high hopes we'll crack this... eventually!

Just a small observation to be going on with. (I don't know if this is relevant.) I notice that the "k-form" defined in section 3 of the passage you quoted in your last post is not the kind of "k-form" which is the entity being integrated in section 6. The first kind of k-form is an element of Λᵏ(ℝⁿ), that is, a covariant alternating tensor. The second kind of k-form is a covariant alternating tensor field, conceived as a function from ℝⁿ to Λᵏ.

For example, a 0-form (in this context, a 0-CAT) is a real number, whereas a 0-CAT field is a real-valued function on a subset of ℝⁿ. It only makes sense to integrate the latter.

It seems to be pretty commonplace to use the same name for both ideas. Garrity begins by defining a k-form as a CAT, and a differential k-form as a CAT field, as does Bachman, if I remember rightly. But once he gets into talking about CAT fields, he tends to omit the "differential" and just talk about CAT fields as k-forms too. So, given that his differential k-form is not a kind of k-form, the logic of this made me chuckle:

each elementary k-form dxI is in fact a k-form. (Of course this would have to be the case, or we wouldn't have called them elementary k-forms in the first place.)
 
  • #8
A couple more passing observations on Garrity's book:

Although he defines Ai as the rows of A, on p. 117, and doesn't explicitly redefine them as columns till p. 121, already by the foot of p. 118, in his abstract definition of a k-form (on this occasion, definition 6.2.1, literally a covariant alternating tensor), he's using Ai to mean a column, since there are k of them, whereas there are n rows.

Definition 6.2.1, on p. 118, characterises a k-form simply as a kth order tensor. The definition needs to include one more property: the value of omega(A) changes sign under any interchange of columns of A.
 
  • #9
I don't really understand your issue with tensors &, tbh, I don't understand tensors. I'm
really hoping to do all of this without recourse to tensors or charts or alternating maps
unless it's absolutely essential; a lot of it doesn't seem to be essential at the moment.
I think we can do it! :cool:

Here is a definition I've been hoping to find:

Definition: Let Ω be an open set of ℝⁿ.

(i) A vector field in Ω is a map F : Ω → ℝⁿ.

(ii) A differential form ω in Ω is a map ω : Ω → L(ℝⁿ,ℝ) that associates to every x ∈ Ω a linear map ω(x) : ℝⁿ → ℝ.

Hence, in coordinates a differential form can be written as

ω(x) = ∑ᵢⁿ ωᵢ(x) dxᵢ, ωᵢ(x) := < ω(x), eᵢ >, x ∈ Ω,

and a vector field as F(x) = (F(x)¹, F(x)², . . . , F(x)ⁿ).

If ω is a differential form on Ω, and F is the (unique) vector field on Ω such that

< ω(x) , h >= F(x) • h (∀h ∈ ℝⁿ) (∀x ∈ Ω),

we say that F is the vector field associated to ω or that ω is the differential form associated
to F. We say that a differential form is of class Cˣ if its components in a basis are of class
Cˣ. Notice that a differential form is of class Cˣ if and only if its associated vector field is
of class Cˣ.
link

So taking "A differential form ω in Ω is a map ω : Ω → L(ℝⁿ,ℝ) that associates to every
x ∈ Ω a linear map ω(x) : ℝⁿ → ℝ." we can form something like:

ω : Ω → L(ℝⁿ,ℝ) | x ↦ ω(x) = ∑ᵢⁿ ωᵢ(x) dxᵢ

Note that ωᵢ(x) := < ω(x), eᵢ >, x ∈ Ω.
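
A small sketch of how I read that definition, with a 1-form on ℝ² given by two component functions and its associated vector field producing the same numbers via the dot product (the component functions are just my own example):

[code]
# A 1-form omega on (an open set of) R^2: at each point x it gives a linear
# map omega(x) : R^2 -> R with components omega_i(x) = <omega(x), e_i>.
# Its associated vector field F satisfies <omega(x), h> = F(x) . h.

def omega(x):
    # example: omega = (x1*x2) dx1 + (x1**2) dx2
    w1, w2 = x[0] * x[1], x[0] ** 2
    return lambda h: w1 * h[0] + w2 * h[1]

def F(x):
    # the associated vector field has the same component functions
    return (x[0] * x[1], x[0] ** 2)

x, h = (2.0, 3.0), (0.5, -1.0)
Fx = F(x)
print(omega(x)(h))                      # 6*0.5 + 4*(-1) = -1.0
print(Fx[0] * h[0] + Fx[1] * h[1])      # F(x) . h gives the same -1.0
e1, e2 = (1.0, 0.0), (0.0, 1.0)
print(omega(x)(e1), omega(x)(e2))       # recovers the components: 6.0 4.0
[/code]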

But there's still an issue here that becomes apparent when you look at the next definition:

Definition 11.2.1 Differential form:
A differential p-form on a set S ⊂ ℝⁿ is a function ω : S → Aₚ, i.e. ω(x) = ∑_{i₁ < ... < iₚ} ω_{i₁ ... iₚ}(x) dx_{i₁} ⋀ ... ⋀ dx_{iₚ},
where ω_{i₁ ... iₚ} : ℝⁿ → ℝ.
http://www.southalabama.edu/mathstat/personal_pages/windham/AC-2.pdf

So ω : S → Aₚ | x ↦ ω(x) = ∑ ωᵢ(x) dxᵢ ⋀ ... ⋀ dxₚ (not filling in all the subscripts!).

Both definitions have good and bad points as far as I can judge. I'd be interested in seeing
what you can make of these two; maybe you could explain how this all fits together,
because I feel the first definition doesn't include the k-th dimension (what dimension
does a linear map L(ℝⁿ,ℝ) have? Maybe it becomes L(ℝⁿ,ℝᵖ) when more
dxᵢ⋀...⋀dxₚ forms are involved? But I honestly don't know whether this is just describing
the < ω(x), eᵢ > part or what).

Notice none of this even mentions the tangent space though; I mean, if there are issues as
big as the ones in this thread that need to be dealt with, I just don't think adding things
that are based on very complex ideas into the mix is a recipe for success.

I think the issue with the first definition is about L(ℝⁿ,ℝ): this is, I presume, just ℝ
ultimately, & it becomes ℝᵖ if L(ℝⁿ,ℝᵖ) is actually valid, as I think
the second definition I gave justifies.

Also, note that none of this includes an integral sign :confused: The reason I started
this thread was because one book was throwing around integral signs (my first post)
& making things look justifiable but it's coming into conflict with about 6 other definitions
in this thread :smile:

In any case the main breakthrough is notation of the form

ω : Ω → L(ℝⁿ,ℝ) | x ↦ ω(x)

even if it is still fraught with a little ambiguity it's getting there. It still carries this
composition-of-maps Ω → ℝⁿ → ℝ (or Ω → ℝⁿ → ℝᵖ?) idea though,
but the notation could just be describing the ωᵢ(x) := < ω(x), eᵢ >, which clearly is
something like L(ℝⁿ,ℝ) (i.e. a linear functional), so, err... wtf...?
 
  • #10
Hi sponsoredwalk, after getting a bit stuck with Garrity, I checked out some of your other links, and eventually went back to Bachman. I think I might actually be getting somewhere at last! Thanks for starting this thread, which spurred me to take up this subject where I left off a few months ago. I want to finish Bachman and try a whole bunch of exercises to make sure I understand, then I'll write up what I've managed to glean so far. Bachman deals with the simple case of differential forms on ℝⁿ; I still have some work to do to get the full machinery for integrating over general manifolds - but it feels like it's within reach!

sponsoredwalk said:
Both definitions have good and bad points as far as I can judge, I'd be interested in seeing what you can make of these two, maybe you could explain how this all fits together because I feel the first definition doesn't include the k-th dimension (what dimension does a linear map L(ℝⁿ,ℝ) have?

I think the first definition in your previous post is specifically defining a differential 1-form. The second is just a generalisation of this definition to differential k-forms. Both are talking specifically about differential forms on (an open set of) ℝⁿ; eventually we'll want to get to the even more general business of differential forms on a manifold not necessarily embedded/imbedded/immersed/included in ℝⁿ (memo to self: check the difference between these terms). But ℝⁿ is the place to start. We'll need what we learn here to make sense of the generalisation.

L(ℝⁿ,ℝ) has dimension n. It's the dual space to ℝⁿ. (See below.)

sponsoredwalk said:
Maybe it becomes L(ℝⁿ,ℝp) when more
dxᵢ⋀...⋀dx forms are involved? But I honestly don't know whether this is just describing
the < ω(x), eᵢ > part or what).

No, when ω is a differential p-form, ω(x) belongs to the set of multilinear functions from the Cartesian product of p copies of ℝⁿ to ℝ. The p-forms (the alternating, i.e. antisymmetric, ones) comprise a subspace of that set.

sponsoredwalk said:
I don't really understand your issue with tensors & tbh I don't understand tensors

Basically my "issue with tensors" was just a grumble at the sloppiness of authors who write "vector" when they mean "vector field", or "scalar" when they mean "scalar field". Your latest definition talks about a "differential [1-]form ω" and its value at x, "ω(x)". In his definitions, Garrity calls an object like ω a "differential 1-form", and an object like ω(x) a "1-form". But he's soon using the term "1-form" to refer also to ω. It seems common for authors to blur the distinction in this way, which I find adds to the struggle of trying to understand what they mean - even if it's not so ambiguous in practice, still a portion of my brain that could be concentrating on learning has to be diverted to standing guard, holding two possible meanings in mind till it's quite sure. But the terminology "differential 1-form" versus "1-form" lends itself to such blurring, especially when other authors make "1-form" refer to an object like ω even in their formal definitions. (Grumble, grumble, rant, grumble...)

Short intro to tensors: Say we have a vector space, V. Its dual space is another vector space, V*, whose vectors are scalar-valued linear functions of vectors. The elements of this second vector space are called dual vectors or covectors. A type (p,q) tensor over V is just a multilinear, scalar-valued function of some number, p, of covectors and some number, q, of vectors. If the underlying vector space is ℝⁿ, then the multilinear scalar-valued functions of q vectors in ℝⁿ are the type (0,q) tensors. Scalars themselves are defined as type (0,0) tensors. Say we have a vector, a, in V, and a covector, b, in V*; then the covector, b, is a type (0,1) tensor, and we can regard the vector, a, as a type (1,0) tensor by defining a(b) = b(a).
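
Tiny toy version of that last paragraph, just to see a(b) := b(a) in action (the representations are my own choice):

[code]
# A covector b is a linear map V -> R; a vector a can be regarded as a
# (1,0) tensor by letting it act on covectors via a(b) := b(a).

def covector(components):
    return lambda v: sum(c * vi for c, vi in zip(components, v))

def as_tensor(v):
    return lambda b: b(v)            # a(b) := b(a)

b = covector((1.0, -2.0, 0.5))       # a (0,1) tensor on R^3
a = (4.0, 1.0, 2.0)                  # a vector in R^3
print(b(a))                          # 4 - 2 + 1 = 3.0
print(as_tensor(a)(b))               # same value, viewing a as a (1,0) tensor
[/code]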
 
  • #11
sponsoredwalk said:
Notice none of this even mentions the tangent space though, I mean if there issues as big as the ones in this thread that need to be dealt with I just don't think adding things that are based on very complex ideas into the mix is a recipe for success.

These definitions identify ℝⁿ with each of its own tangent spaces. Here the ℝⁿ which x belongs to, and which Ω, the domain of ω, is a subset of, is the underlying manifold. The ℝⁿ which is the domain of ω(x) is the tangent space at x.

This proliferation of identifications where the lovely ℝⁿ is involved is one reason why it can sometimes make things clearer to consider how some of these things are defined for a general manifold. Spivak has a nice quip in Differential Geometry, Vol. 1, that goes something like, "The only time this will be confusing - and it will be confusing - is when the manifold itself is ℝⁿ."
 
  • #12
Rasalhague said:
Bachman deals with the simple case of differential forms on ℝⁿ

Oh, actually he does move on to the more general theory in Chapter 7.
 
  • #13
Got a question:

Given a differential 1-form and a metric 'g', how the hell can I define the Laplace-de Rham operator

[tex] \nabla = (*d)(d*) [/tex] for it?

Take for example [tex] A= fdx + gdy [/tex]
 

FAQ: What Makes Differential Forms Click?

What are differential forms?

Differential forms are mathematical objects that are used to represent geometric concepts, such as curves, surfaces, and volumes, in a way that is independent of coordinates. They provide a powerful and elegant way to describe and manipulate multivariable functions.

Why is it important to study differential forms?

Studying differential forms is important because they provide a unified framework for many areas of mathematics, including calculus, geometry, and topology. They also have practical applications in physics, engineering, and other sciences.

How are differential forms different from traditional calculus?

Differential forms are different from traditional calculus in that they are defined without reference to a specific coordinate system. This makes them more flexible and easier to work with, since they do not depend on a particular choice of coordinates.

What is the role of exterior calculus in differential forms?

Exterior calculus is a mathematical tool used to study differential forms. It allows for the manipulation and calculation of differential forms using operations such as differentiation, integration, and wedge products. It also provides a geometric interpretation of differential forms.

How can differential forms be applied in real-world problems?

Differential forms have numerous applications in real-world problems, such as in physics, engineering, and computer graphics. They can be used to describe the motion of fluids, the behavior of electric and magnetic fields, and the shape of objects. They also play a crucial role in the development of algorithms for computer graphics and animation.
