Momentum operator as the generator of translations.

  • #36
alexepascual,
If you have access to such resources, you may find a systems engineering text helpful for understanding these Fourier relationships. Specifically, what I have in mind is the treatment given at the junior or senior level to electrical engineering majors. The book I have is called "Circuits, Signals, and Systems," and I'm sure there are hundreds of other good books as well. I just think that considering a concrete system as the thing doing the transform may help.

A good systems engineering text should derive the time and frequency shifts, as well as many other standard relationships. In engineering, this is done in such a concrete and straightforward way that it is worth looking into, even from the theoretical standpoint of an aspiring physicist. Basically, it should go through what Eye has done, but with diagrams and things to go along with it in a very symbolic way.
 
  • #37
alexepascual said:
... I would like to hear more about the "simple polynomials". Are we talking about polynomials of the form
a_0 x^0 + a_1 x^1 + a_2 x^2 + ... + a_n x^n + ... ,
where each x^n is a basis element for the space?
Would the operator d/dx in that case consist of a line of numbers (n-1) parallel to the diagonal? Am I on the right track?

Eye_in_the_Sky said:
"Yes" (where the full set {xn|n=0,1,2,...} is a basis), "yes", and "yes". At the next opportunity, I will attempt to post more on this matter.
Let b_i be a basis. Then (using the "summation convention" for repeated indices) any vector v can be written as

v = v_i b_i .

In this way, we can think of the v_i as the components of a column matrix v which represents v in the b_i basis. For example, the vector b_k relative to its own basis is represented by a column matrix which has a 1 in the k-th position and 0's everywhere else.

Now, let L be a linear operator. Let L act on one of the basis vectors b_j; the result is another vector in the space, which is itself a linear combination of the b_i's. That is, for each b_j, we have

[1] L b_j = L_{ij} b_i .

In a moment, we shall see that this definition of the "components" L_{ij} is precisely what we need to define the matrix L corresponding to L in the b_i basis.

Let us apply L to an arbitrary vector v = v_j b_j, and let the result be w = w_i b_i. We then have

w_i b_i

= w

= L v

= L(v_j b_j)

= v_j (L b_j)

= v_j (L_{ij} b_i) ... (from [1])

= (L_{ij} v_j) b_i .

If we compare the first and last lines of this sequence of equalities, we are forced to conclude that

[2] w_i = L_{ij} v_j ,

where L_{ij} is, of course, given by [1].

Now, relation [2] is precisely what we want for the component form of a matrix equation

w = L v .

We therefore conclude that [1] is the correct "rule" for giving us the matrix representation of a linear operator L relative to a basis b_i.

-----------------------------------------

Now, let L = d/dx, and b_n = x^(n-1), n = 1, 2, 3, ... .

In this context, rule [1] above becomes

[1'] L x^(n-1) = L_{mn} x^(m-1) .

But

L x^(n-1)

= (d/dx) x^(n-1)

= (n-1) x^(n-2)

= (n-1) delta_{m,n-1} x^(m-1) ,

so that

L_{mn} = (n-1) delta_{m,n-1} .

This is equivalent to (no summation)

L_{n,n+1} = n , with all other components equal to 0.
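
To make rule [1'] concrete, here is a minimal numerical sketch (my own illustration, not part of the original post): truncating the monomial basis at degree N-1, the rule L_{n,n+1} = n puts 1, 2, 3, ... on the first superdiagonal, and multiplying that matrix into a coefficient column really does differentiate the polynomial.

[code]
import numpy as np

N = 6  # truncate the basis {1, x, x^2, ..., x^(N-1)}

# L_{mn} = (n-1) delta_{m,n-1} (1-based indices, as in the post):
# in 0-based numpy indexing this is 1, 2, 3, ... on the first superdiagonal.
L = np.diag(np.arange(1.0, N), k=1)

# p(x) = 2 + 3x + 5x^3, stored as a coefficient column in the monomial basis
v = np.array([2.0, 3.0, 0.0, 5.0, 0.0, 0.0])

w = L @ v          # rule [2]: w_i = L_{ij} v_j
print(w)           # [ 3.  0. 15.  0.  0.  0.]  i.e. dp/dx = 3 + 15x^2
[/code]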
 
  • #38
I have been away from the forum the last three days and haven't had time to think about the topic.
Turin:
Thanks for your suggestion. I'll try to get hold of the systems engineering book you mention. On another occasion I was having trouble with a topic in thermodynamics and found that an engineering book explained things in a way that was clearer to me.
Eye:
I'll be printing out your last two posts and probably will have a chance to read them and think about what you say sometime today or tomorrow.
I'll let you know as soon as I do so.
 
  • #39
Turin,
I got the book you suggested from a library. Thanks for the suggestion; I'll tell you later whether it helped.
Eye,
First I would like to apologize for my lack of familiarity with the Fourier transform.
Yesterday I thought I had understood your derivation. But today I looked at it again and found that the way I was making it work (the intermediate steps I was filling in) makes an assumption that may not be warranted.
The way I chose to demonstrate that (d/dx)f(x) <--> ik F(k) was to multiply both sides of the correspondence [1] (my eq. number) by ik. If ik were a constant, I guess this would be legal.

The sequence would be: (my equation numbers)
By definition:
[1] f(x) <--> F(k)
[2] f(x) = 1/sqrt{2 pi} Integral { e^(ikx) F(k) dk }
[3] F(k) = 1/sqrt{2 pi} Integral { e^(ikx) f(x) dx }

Now make:
[4] f'(x) = ik f(x)
Then there is some F'(k) such that:
[5] f'(x) <--> F'(k)
where (by [3]): F'(k) = 1/sqrt{2 pi} Integral { e^(ikx) f'(x) dx }
Using [4]: F'(k) = 1/sqrt{2 pi} Integral { e^(ikx) ik f(x) dx }
Pulling ik out: F'(k) = ik 1/sqrt{2 pi} Integral { e^(ikx) f(x) dx }
By [3]: f'(x) <--> F'(k) = ik F(k)
Which can also be expressed:
[5] ik f(x) <--> ik F(k)
Taking the derivative of [2]:
[6] (d/dx)f(x) = ik f(x)
Substituting [6] in [5]:
[7] (d/dx)f(x) <--> ik F(k) (your equation [3])

The problem I see is that f(x) <--> F(k) makes a correspondence between two functions. This means that all values of k are used in the correspondence. But which value of k do we use on the left side of [5]?
In order to see this better, I thought that the Fourier transform should be amenable to being visualized as a matrix. I searched Google for "Fourier transform in matrix form" and found a few results. Interestingly, two of those entries had to do with image processing. One was a description of a book, "Image Processing: The Fundamentals". I looked for it on Amazon.com and, besides having a positive review, the table of contents appeared very interesting. I have ordered this book as a loan from another university.
I'll be waiting for your comments on the above equations.
 
  • #40
I will let Eye take care of the other details, but for now:

alexepascual said:
... I thought that the Fourier transform should be amenable to being visualized as a matrix.
ABSOLUTELY! In fact, there is a discrete Fourier transform (DFT) that is exactly a numerical matrix, and a fast Fourier transform (FFT), which is a radix-based algorithmic optimization of the DFT. You may find the DFT to hold the specific explanation that you're looking for. Even the straight-up continuous-time Fourier transform is basically a matrix, T, in the sense that you could find the (ω,t) component as

T_{ω,t} = <ω|T|t> ,

the ω-component of the transform of the basis element |t>. Basically, the components turn out to be:

T_{ω,t} = e^(-iωt)

(the kernel of the transformation).
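
For what it's worth, here is a minimal sketch of the DFT-as-a-matrix idea (my own illustration; the unitary 1/sqrt(N) normalization is a choice, matching numpy's norm="ortho" option):

[code]
import numpy as np

N = 8
n = np.arange(N)

# DFT matrix: T[m, n] = exp(-2*pi*i*m*n/N) / sqrt(N), the discrete
# analogue of the kernel e^(-i omega t)
T = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

x = np.random.randn(N)
print(np.allclose(T @ x, np.fft.fft(x, norm="ortho")))  # True
[/code]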
 
  • #41
I have kept thinking about this problem and can't find a solution.
But I noticed the following:
I arrived (through a dubious path) at the following:
[5] ik f(x) <--> ik F(k)
Now, if I start off with { ik F(k) } and plug that into the definition of the inverse FT, I should get f'(x) = ik f(x):
f'(x) = 1/sqrt(2 pi) Integral { e^(ikx) F'(k) dk }
f'(x) = 1/sqrt(2 pi) Integral { e^(ikx) ik F(k) dk }
But here is where the problem shows up, because I can't pull ik out of the integral sign: in this case k is the integration variable, not a constant.
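
This point can be probed numerically. A minimal sketch (not from the thread; numpy's discrete FFT stands in for the continuous transform): inverse-transforming ik F(k) yields f'(x), not ik f(x), because the k's are integrated out.

[code]
import numpy as np

N = 256
x = np.linspace(-10, 10, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)  # k grid matching np.fft's layout

f = np.exp(-x**2)                 # test function
F = np.fft.fft(f)                 # its (discrete) transform

g = np.fft.ifft(1j * k * F).real  # inverse transform of ik F(k)

print(np.allclose(g, -2 * x * f))  # True: g is f'(x) = -2x exp(-x^2)
[/code]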
 
  • #42
Turin
I had not read your post when I sent my last one (7 minutes later).
Thanks for your info about the discrete Fourier transform. I'll try to learn more about it.
I was browsing this morning through the Circuits, Signals and Systems book. I found it very interesting, but it'll involve learning a lot about electronics. Although I have read about the subject in the past on my own, I have never taken an electronics course.
In the short term, I'll try to get the most out of this book by reading some of the chapter introductions and by looking directly at the sections on the different transforms.
Thanks again,
Alex
 
  • #43
Concerning the required knowledge of electronics:
Perhaps there is another book by that same name. The one that I have (though not with me right now) has an entire chapter almost entirely devoted to the Fourier transform (and called "The Fourier Transform" if I remember correctly).


alexepascual said:
I arrived (through a dubious path) at the following
[5] ik f(x) <--> ik F(k)
I suppose I may be unclear on the meaning of "<-->". But, if it is supposed to mean "transforms into," then I don't agree with this statement. The way to get the transformation of the derivative is simply to take the derivative of the inverse transform of some function and then infer the transform. To start out with some definitions:

f(t) = T_inv{F(ω)}[t]
= (√(2π))^(-1) Integral { dω e^(iωt) F(ω) }

=>

df/dt = (d/dt) T_inv{F(ω)}[t]
= (√(2π))^(-1) (d/dt) Integral { dω e^(iωt) F(ω) }

Since the integration is over ω, the derivative wrt t can be taken inside the integral (the integration and differentiation commute):

= (√(2π))^(-1) Integral { dω (d/dt)( e^(iωt) F(ω) ) }

Then, since only the Kernel depends on t, the differentiation only operates on the Kernel:

= (√(2π))^(-1) Integral { dω (d/dt)( e^(iωt) ) F(ω) }
= (√(2π))^(-1) Integral { dω ( iω e^(iωt) ) F(ω) }
= (√(2π))^(-1) Integral { dω e^(iωt) ( iω F(ω) ) }

Let (iωF(ω)) = G(ω) (some other function of ω):

= (√(2π))^(-1) Integral { dω e^(iωt) G(ω) }
= T_inv{G(ω)}[t]

Taking the Fourier transform of both sides:

T{df/dt}[ω] = T{ T_inv{G(ω)}[t] }[ω]
= G(ω)
= (iω)F(ω)

The result:

T{df/dt}[ω] = (iω)F(ω)

This shows that the Fourier transform of the time derivative of a function is equal to (iω) times the transform of the function. In other words, differentiation in the time (or position) domain becomes multiplication by iω (or ik) in the frequency (or momentum) domain.
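
A quick numerical check of this result (my own sketch; the discrete transform again stands in for the continuous one):

[code]
import numpy as np

N = 256
t = np.linspace(-10, 10, N, endpoint=False)
dt = t[1] - t[0]
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)  # angular frequency grid

f = np.exp(-t**2)   # a smooth, well-localized test function
df = -2 * t * f     # its exact derivative

# T{df/dt}[omega] agrees with (i omega) F(omega):
print(np.allclose(np.fft.fft(df), 1j * omega * np.fft.fft(f)))  # True
[/code]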
 
  • #44
Thanks Turin,
I'll have to go over your post and think about it. But I think it'll probably answer my question.
With respect to the book, I think it is the only one with that title. The author is Siebert and it is published by MIT Press / McGraw-Hill. Chapter 13 is titled "Fourier Transforms and Fourier's Theorem."
When I posted my comment, I had just browsed through the book, and I got the impression that most of it was intimately tied to electronic systems. Now that I have examined some of the chapters in more detail, I see that I can probably go directly to the chapter on the Fourier transform and understand it with my present (little) knowledge of electronics. On the other hand, there is always a bit of culture shock in going from physics books to electronics books, which is in a way good because it forces you to look at the same topics from a different angle.
I see that one of the differences from the physics books is the inclusion of discrete transforms, which I don't remember seeing in physics. Part of the reason is probably their use in digital systems, digital signal processing, etc. Perhaps discrete math becomes more important in physics at the Planck scale, but that is just my speculation (I don't know anything about loop quantum gravity) (a long way to go before I get there).
 
  • #45
Turin,
Your notation is a little different from what I am used to.

In your first post:
(1) When you write F{t}[ω], I wonder what you mean. What is the difference between the square brackets and the curly brackets?
(2) I looked at my linear algebra book and it says that the kernel of a transformation is the portion of the domain that is mapped to zero. Is this a different use of the word "kernel", or is it connected with your comment that e^(-iωt) is the kernel of the transformation?

In your second post:
(3) Now you write {F(ω)}[t], slightly different notation, but the square brackets seem to have the same function. Probably if you just tell me how you read it aloud, I'll understand.
 
  • #46
OK, I ignored the square brackets and was able to follow your reasoning.
I still would appreciate your explanation about the notation and the "kernel".
Thanks again Turin,
Alex
 
  • #47
That nasty double arrow

The intended meaning of "f(x) <--> F(k)" is "f(x) and F(k) are a Fourier transform pair". The x-space function is written on the left and the k-space function on the right [1]. So, reading the arrow from left to right gives

f(x) --> F(k) ..... f(x) goes to F(k) via a Fourier transform ,

and reading from right to left gives

f(x) <-- F(k) ..... F(k) goes to f(x) via an inverse Fourier transform .

If you know that the arrow holds true for one direction, then it must also hold for the other.

The corresponding Fourier integrals are [2]

f(x) --> F(k) ..... F(k) = 1/sqrt{2 pi} Integral { e^(-ikx) f(x) dx } ,

f(x) <-- F(k) ..... f(x) = 1/sqrt{2 pi} Integral { e^(+ikx) F(k) dk } .

Note that one of the transforms has a "+ikx" in the exponential (the inverse transform, according to my definition), while the other has a "-ikx" [3],[4].
_________________________
[1] Thus it would not make sense to multiply both sides of the double arrow by the same thing ... like, for example, ik.

[2] Some books use an asymmetrical convention with regard to the numerical constant in front of the integral, putting a 1/(2 pi) at the front of one of the transforms and just a 1 at the front of the other.

[3] Some books use the opposite convention with regard to (+/-)ikx in the exponential.

[4] In an earlier post, such a sign was missed out. Both transforms were written with a "+" sign:
By definition:
[1] f(x) <--> F(k)
[2] f(x) = 1/sqrt{2 pi} Integral { e^(ikx) F(k) dk }
[3] F(k) = 1/sqrt{2 pi} Integral { e^(ikx) f(x) dx }
-------------------------------------------------------------
-------------------------------------------------------------

The above explains what I meant by the double arrow. Instead of providing a means to express Fourier-type relationships between objects in a clear and compact way, it only added confusion to an already uncertain situation. ... Sorry about that.
 
  • #48
Eye:
I didn't have trouble with the double arrow. Turin asked in order to be sure he was interpreting it correctly (which he was). But he preferred to use the operator notation. My only difficulty was in deriving your correspondence [3].
You said: "...and from this [3] follows" (I didn't see how it followed and tried to explain it using wrong arguments). Turin made me see my error and now everything is clear.
I did have a little trouble with Turin's notation but managed to follow his argument anyway. So now I think the whole topic of momentum as the generator of translations is quite clear to me.

Eye and Turin:
Thanks a lot for all your help.
Alex
 
  • #49
Regarding what 'kernel' means: it actually is all related, but let's stay clear of the linear algebra meaning for the moment and keep it simple.

In signal processing, and in other theories (like Sturm-Liouville theory), the idea is to find a set of functions that form a sort of basis. But I digress...

Let's say we are interested in defining what an integral transform is. Generically, it is a map that takes one function, say g(t), into another function, say f(x):

f(x) = Integral (a..b) K(x,t) g(t) dt .

The function K(x,t) is called the kernel of the transformation and, along with the limits of integration, uniquely defines how one function maps into the other.

In the case of Fourier analysis, that kernel is exp(-i x t); for other transforms it's something else (there are many transforms with interesting properties, like the Laplace transform, the wavelet transform, etc.).
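
As a sketch of this definition (my addition, not Haelfix's; the Laplace kernel e^(-st) is used because the oscillatory Fourier kernel is awkward for naive quadrature over an infinite range):

[code]
import numpy as np
from scipy.integrate import quad

def integral_transform(kernel, g, a, b):
    """Return f(x) = Integral(a..b) K(x,t) g(t) dt as a function of x."""
    return lambda x: quad(lambda t: kernel(x, t) * g(t), a, b)[0]

# Laplace transform: K(s,t) = exp(-s t); for g(t) = exp(-t) the
# transform is 1/(s + 1), so at s = 2 we expect 1/3.
laplace = integral_transform(lambda s, t: np.exp(-s * t),
                             lambda t: np.exp(-t), 0.0, np.inf)
print(laplace(2.0))  # ~0.3333
[/code]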
 
  • #50
Alex,
You should probably ignore my posts if you are comfortable with your understanding. They are very confusing. And shame on me for not defining anything; that was sure sloppy. And, out of habit, I used the frequency-time transformation rather than the momentum-position transformation (the same, but different notation).

If you're interested, though, in my most recent post:

T{f(t)}[ω] represents the transformation from the function f(t) in the time domain to a function in the frequency domain. To complete the statement:

F(ω) = T{f(t)}[ω]

The "T" is usually a capital cursive "F" that denotes "Fourier transform." I used a "T" to spare some confusion that would have arisen per the rest of my notational scheme.

The curly braces contain the function to be transformed (as well as the variable of integration implied by the argument of that function).

The square braces display the domain into which the transformation takes the function. Quite often this is left out, as it is obvious from the context or from the more explicit expression in terms of the integral.

f(t) is the function in the time domain. F(ω) is the corresponding function in the frequency domain (the transform of f(t)).

Haelfix provides an explanation of my use of "kernel." Though, I am unclear how one can say this is not a linear algebraic application. I was always under the impression that that is exactly what the kernel is: the elements of the transformation matrix.
 
  • #51
Turin,
Your posts are not confusing. I just was unfamiliar with the notation you used. As a matter of fact, I think the only thing I believe I didn't understand was the square brackets. But your post made very clear where I was erring and how to fix it. The fact that you used time domain instead of space domain didn't confuse me at all either. Your explanation about the notation in your last post makes everything absolutely clear.
With respect to the kernel, I understand very well your definition. This definition coincides with the one given by Haelfix but I can't make a connection with the one given in my linear algebra book. I did a Google search on the "kernel of a transformation" and the definitions I saw coincided with the linear algebra book.
According to this definition, the kernel is the subset of the domain that is mapped by the transformation to zero. If we think of the transformation as represented by a matrix, and the domain and range as composed of vectors (functions), then this other definition would make the kernel a set of vectors (functions), while your definition would make it an element of the transformation matrix. Maybe these are two completely different meanings of the same word, or maybe they are somehow connected or equivalent, but I can't see the relationship.
 
  • #52
turin said:
Haelfix provides an explanation of my use of "kernel." Though, I am unclear how one can say this is not a linear algebraic application. I was always under the impression that that is exactly what the kernel is: the elements of the transformation matrix.
Haelfix isn't saying that T is not a linear transformation. Rather, there is a concept in linear algebra which is designated by the same term "kernel" (also "null space") but refers to something else. For that context, the definition is:

Definition: Let A be a linear transformation from a vector space V into a vector space W. Then the "kernel" (or "null space") of A is the set
K_A = { v ∈ V | A(v) = 0 } .

It turns out that K_A is a linear subspace of V, and A is invertible iff A is "onto" and K_A = {0}.

Thus, in your sense of "kernel", the "kernel" of T is e^(-iωt), whereas in the other sense, the "kernel" ("null space") is the set consisting of [the equivalence class of functions corresponding to] the zero vector. (Note: You can ignore the preceding square-bracket remark if it troubles you (or, better yet, ask (if it troubles you)).)
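
In the finite-dimensional case, this null-space sense of "kernel" can be computed directly. A quick sketch (my addition), using scipy:

[code]
import numpy as np
from scipy.linalg import null_space

# A singular 3x3 matrix: the third column is the sum of the first two,
# so A has rank 2 and a one-dimensional null space.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 3.0]])

K = null_space(A)             # orthonormal basis for K_A = {v | A v = 0}
print(K.shape)                # (3, 1)
print(np.allclose(A @ K, 0))  # True
[/code]

An invertible matrix would instead give K.shape == (3, 0), the analogue of K_A = {0} for the (invertible) Fourier transform.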
 
  • #53
OK, I think I get it. The null-space version of "kernel" must be the set consisting of only the zero vector, since there is an inverse Fourier transform? It doesn't "trouble" me, but I don't know what an "equivalence class" is. Care to enlighten us?
 
  • #54
turin said:
OK, I think I get it.
You got it.

----------

turin said:
... I don't know what an "equivalence class" is.
When we choose to represent the "vectors" of our Hilbert space by (square-integrable) "functions", a potential difficulty may arise.

Suppose we have two functions, f and g, which are equal "almost everywhere" (i.e. the set of points for which f(x) ≠ g(x) has "measure zero"). Then, as far as all [of the "tame"] integrals are concerned, f(x) and g(x), under the integral, will give the same result. Thus, for all practical purposes, f and g are considered to represent the same "vector" ... yet, at the same time, they may be distinct "functions" (i.e. we may not have f(x) = g(x) at every single point x).

In simple terms, the condition "f is almost everywhere equal to g" can be expressed as

[1] Integral { |f(x) - g(x)| dx } = 0 .

This condition puts our "functions" into "groups", or "classes", of "equivalent functions". These "groups" are the "equivalence classes" corresponding to the "equivalence relation" defined by [1]. In this way, a formal "vector" of the Hilbert space is represented by a particular "equivalence class", and when we wish to do a calculation, we can simply pick any "function" in the given "equivalence class" (... and it doesn't matter which one we pick).

[Note: In the above, a formal definition of "equivalence relation" has not been given. Nor has a formal demonstration been given that condition [1] satisfies such a definition. Neither has a formal definition of "measure" and "measure zero" been given, nor a formal demonstration that condition [1] is equivalent to "the set of points for which f(x) ≠ g(x) has measure zero".
... But these things are basically trivial to do, and yet may be the cause of a "small headache" to some, as well as being construed (by some) to be "a complete waste of time".]
 
  • #55
I don't want to deem the issue of equivalence classes "a waste of time," but it seems to me that there can only be one of the functions out of the equivalence class that could survive the further restriction of being physically meaningful. Am I incorrect to interpret the distinctions as removable discontinuities?
 
  • #56
turin said:
Am I incorrect to interpret the distinctions as removable discontinuities?
It would be more accurate to describe the distinctions as occurring at "isolated points". Suppose that x = x_0 is such a point of distinction. Then, if f(x_0^-) = f(x_0^+), yes, we are dealing with a "removable discontinuity". If, however, f(x_0^-) ≠ f(x_0^+), then, even though we have a rule for removing the "distinction", we cannot apply the term "removable discontinuity".


... it seems to me that there can only be one of the functions out of the equivalence class that could survive the further restriction of being physically meaningful.
Yes, I think that your statement is basically true. In a physical situation which demands that a function be continuous on some interval, all members of the corresponding "equivalence class" will differ on that interval only by "removable discontinuities". On the other hand, if a physical situation (probably involving an idealization of some kind) calls for some function to have a "step discontinuity", the physical context would probably tell us that the value of the function at the "step", say x = x_0, is quite irrelevant; we would probably just take f(x_0) = f(x_0^-) or f(x_0^+).

So ... it sounds to me like you may be suggesting that, in a general sort of way, the "physically distinct solutions" are, so to speak, in a one-to-one correspondence with the "equivalence classes". Yeah, this makes sense ... never thought of it, though.
 
  • #57
Perhaps this issue comes to its climactic import when decomposing a function that is physical into a basis of functions that may not themselves be physical (or the other way around).
 
  • #58
Quite independently of whether or not such a climax - or one of similar import - can/cannot or will/will not occur, the real question (I would say) is: Why does the mathematician feel compelled to speak of "equivalence classes" instead of the "functions" themselves?

... Any idea?
 
  • #59
Excuse me, Eye, but it seemed to me that you yourself answered this question in post #54, earlier in this thread. You have ONE thing to represent, and several functions do it, and you want to ignore the differences between them, so you form equivalence classes. You were so clear back there that I can't understand what your difficulty is here.
 
  • #60
I apologize. I didn't realize that my mention of "equivalence classes" and initial egging-on was going to be the spearhead of such a lengthy tangent. I'm not having a difficulty here. I posed the question to Turin, because I thought, somehow, the main point was being lost. So, in order to return, once again, to the main point I posed the question:

Why does the mathematician feel compelled to speak of "equivalence classes" instead of the "functions" themselves?

I posed the question, not because I didn't have an answer, but rather, to give Turin something more to think about.

As for this question having already been answered in post #54, I see only two (distinct) statements there, each of which (only) appears to provide an answer for why the mathematician feels "compelled":

(i) without equivalence classes "a potential difficulty may arise";

(ii) "for all practical purposes" a pair of almost-everywhere equal functions are considered to be the same.

But (i) only hints at an answer, while (ii) opts out of giving that answer.

That answer is: If we speak only of "functions", then a linear transformation, such as the Fourier transform, when construed as a mapping of "functions" to "functions" will be MANY-to-ONE, and therefore, have no well-defined inverse. By speaking of "equivalence classes" instead, this difficulty is removed.

This then explains why I felt compelled to [parenthetically] mention "equivalence classes" in the context of the "kernel" (i.e. "null space") of the Fourier transform back in post #52.
 
  • #61
Great discussion. I must embarrassingly admit that I did not see this answer, which now seems so obvious. This uncovers another concern of mine, though, which may or may not be related. I guess the best way to pose the issue is in terms of the null vector, but it is really a concern about the feasibility of referring to functions as vectors in the first place.

If I understand the null vector to be the function f(x) = 0, yet this is only one member of the equivalence class of functions f_i(x) such that:

Integral { |f_i(x)|^2 dx } = 0 ,

then this does make me uncomfortable (especially when I try to throw the Dirac delta into the mix). The main source of my concern, I suppose, is the permission for this null vector to have several (an arbitrarily large number of large) nonzero components (something that I don't imagine agrees with the notion of discrete vectors). If I just squint my eyes until the integral has done its job, then everything seems OK. But something just doesn't seem quite right with this in my gut.

OK, so we really mean that a particular equivalence class of functions is a vector. I think I just answered my own concern (after Eye put the words into my mouth, that is).
 
  • #62
turin said:
The main source of my concern, I suppose, is the permission for this null vector to have several (an arbitrarily large number of large) nonzero components (something that I don't imagine agrees with the notion of discrete vectors).

The null vector continues to have all-zero components. There may be in its equivalence class other vectors that do not share this characteristic, which is not essential to the equivalence relation. Consider that the man you know has a beard, but he is in a group labeled MEN with other men, and they don't all have beards. Likewise the number 2 is prime, but it is in an equivalence class, the even numbers, and none of the other numbers in that class are prime.
 
  • #63
It just seems strange to me that a vector with zero magnitude could be anything but a null vector.
 
  • #64
A lightlike four-vector in relativity has a time component equal and opposite to its spatial part, so they cancel out in the metric, even though neither of them is zero. This is the reason lightlike trajectories are called null trajectories.
 
  • #65
Now I feel like an idiot. I was completely thinking in terms of Euclidean 3-vectors when I made my comparison.
 
  • #66
turin said:
This uncovers another concern of mine, though, which may or may not be related.
It is related.


I guess the best way to pose the issue is in terms of the null vector, but it is really a concern about the feasibility of referring to functions as vectors in the first place.
Consider the set S = { f | f : R → C } (i.e. S is the set of (arbitrary) "functions" from "real numbers" to "complex numbers"). Let C be the "scalars". Then S is a vector space over C. (Review http://www.ncrg.aston.ac.uk/~jamescj/Personal/Downloads/AM20LM/AM20LM_Handout_A_2003.pdf.) This example shows quite simply and unambiguously that there should be no concern with regard to the feasibility of referring to "functions" as "vectors".

Next, consider our vector space of square-integrable functions, but without the modification induced by the equivalence relation. Let's call this space F (to emphasize that each "function" corresponds to a distinct "vector").

Now ... in F, what kind of sense can be made out of a statement like [1] below?

[1] f(x) = Σ_n a_n φ_n(x) , φ_n a basis .

This statement has a serious problem. For suppose we have a candidate basis (say, for example, that the φ_n are the energy eigenfunctions for a simple harmonic oscillator). We can then set some of the a_n ≠ 0 in order to obtain some function f ∈ F. And now ... we take this function f and change its value at exactly one point. This gives us a new function, and if the φ_n are really a basis on F, then we must be able to get this new function by merely changing the values of the a_n without "touching" any of the φ_n's.
... Is that possible? ... How can we possibly cause the function f to change at one - and only one - point, merely by changing the a_n's? That is impossible. And from this, we see that a statement like [1] has no meaning in F ... because F has no such basis.

But once we modify F, by means of our equivalence relation, a statement like [1] can then make perfect sense. (... And this is yet another (related) reason why the mathematician is compelled to speak of "equivalence classes" instead of the "functions" themselves.)

Let us refer to this modified space as E (to emphasize that each distinct "equivalence class" corresponds to a distinct "vector").

--------------------

Next.

Here is how you expressed your concern in terms of the null vector:


The main source of my concern, I suppose, is the permission for this null vector to have several (an arbitrarily large number of large) nonzero components (something that I don't imagine agrees with the notion of discrete vectors).
You have now introduced another concept, that of "component". You have mentioned it in two distinct senses:

(c) relative to a continuous parameter "x" ;

(d) relative to a discrete index "n" .

In alluding to (d), you imply that a statement like [1] above makes sense. In that case, you definitely cannot be thinking of your vector space along the lines of F, but rather, more along the lines of E. ... Now, what about (c)? Were you thinking along the lines of F, or did you mean E?

Let's go one level deeper:

- "component" in the sense of (c) can live in F and can live in E ;

- "component" in the sense of (d) can live only in E, but not in F.

Let's go one more level deeper. Consider the following statement:

[2] "equal components" is a necessary and sufficient condition for "equal vectors"

(i.e. two "vectors" are the same iff their corresponding "components" are the same).

With regard to "components" in the sense of (c) (i.e. relative to a continuous parameter "x"), where does statement [2] hold? ... In F, or in E? Well, statement [2] is true only in F ... but not in E. And that is no surprise - for, in going from F to E, we decided to consider entire groups of "vectors" to be a single "vector". That is to say, in going from F to E, statement [2] has become

[2'] "equal components" is a sufficient, but not necessary, condition for "equal vectors".

... These remarks should be sufficient to clear up all levels of confusion to be found in the last quoted passage above. Specifically:

The null vector is granted permission to have (an arbitrarily large number of large) nonzero "x"-type "components" only in E, the space where those components don't matter ... and in F, where those components do matter, the concept of discrete "n"-type "components" has no meaning.

--------------------

I now leave you with a question:

What is a suitable redefinition of "component" in the sense of (c), whereby statement [2] does hold in E?

--------------------
 
  • #67
turin said:
It just seems strange to me that a vector with zero magnitude could be anything but a null vector.
That seems strange to me, too! Because, by definition, a vector with zero magnitude is nothing but a null vector.

Definition: u is a "null vector" iff ║u║ = 0 .

Turin ... you meant to say something different from what what you said meant. That is to say, what you meant to say was:

It just seems strange to me that a vector with zero magnitude could have anything but all of its "components" equal to zero.

So, selfAdjoint came along and gave you an example of both (what you meant to say as well as what what you said meant).


selfAdjoint said:
A lightlike four-vector in relativity has a time component equal and opposite to its spatial part, so they cancel out in the metric, even though neither of them is zero. This is the reason lightlike trajectories are called null trajectories.
... But in order to accomplish that task, selfAdjoint was forced to depart from a Euclidean metric, to which you, Turin, responded ...


turin said:
Now I feel like an idiot. I was completely thinking in terms of Euclidean 3-vectors when I made my comparison.
... and that just sends my head spinning, because the Hilbert-space metric is Euclidean ... implying that you were thinking in the right terms but just in the wrong "dimension" and "measure", and that what you really wanted answered wasn't. :devil:
 
  • #68
Simultaneously I have been clarified and confused by orders of magnitude. It is quite a strange mental sensation.

Eye,
In answer to the question with which you left me, I would suggest defining "components" as integrals over regions of the domain. That way the integration could eliminate the discrepancies between functions in an equivalence class as vectors (though the functions themselves would still remain distinct). Even if this is appropriate, though, I still have no clear picture in my mind of the specific details of such a definition.
 
  • #69
turin said:
Simultaneously I have been clarified and confused by orders of magnitude.
... :biggrin:

--------------------------


I would suggest defining "components" as integrals over regions of the domain. That way the integration could eliminate the discrepancies between functions in an equivalence class as vectors (though the functions themselves would still remain distinct).
Since an integral is defined over an interval, using that would eliminate more than just the "discrepancies". ... What we really need is an "infinitesimally small" interval - i.e. limits. So, I would suggest the following definition:

(c') the "x-component" of f is given by a joint specification of f(x-) and
f(x+) .

--------------------------

Now, regarding:


It just seems strange to me that a vector with zero magnitude could be anything but a null vector.
I think what you really wanted to say was:

It just seems strange to me that a vector with zero magnitude could have some nonzero "components".

According to the above definition (c'), we now have that every function in the equivalence class of the null vector does have all of its "x-components" equal to zero. ... But, of course, we didn't need (c') to realize that the "isolated discontinuities" in each of those functions were of no consequence. We already knew that they contribute nothing to an integral, implying, for each f in the "null class",

║f║ ≡ sqrt ( Integral { |f(x)|^2 dx } ) = 0 ;

i.e. those functions f really do have zero "length".

--------------------------

I have only one more (concluding) remark to make, and it relates to something you pointed out earlier:


... it seems to me that there can only be one of the functions out of the equivalence class that could survive the further restriction of being physically meaningful.
I return to this point, not because of the part about "physically meaningful", but because of the part about "restriction". Perhaps we could have "redefined" our space by means of specific "rules" telling us which functions are "in" and which functions are "out". A good start would have been to say, "Well, we only want piecewise continuous functions which have no removable discontinuities." Then we would have had to try to figure out what kind of "rule" to use at a step discontinuity ... which I won't even try to think about.

In short, equivalence classes are able to fix the problem with very little effort.


(Note that equivalence classes have an import and utility over and above that (of our case here) of "removing" annoyances.)

--------------------------
 
  • #70
Doing spacetime symmetries through group theory: I completely understand how to derive, say, SO(2) from its generators.

And I understand how, since so(3) obeys the same commutation relations as angular momentum, angular momentum is just the generator of rotations.

Now, I wish to use the same logic to show that the generator of spatial translations in spacetime is just linear momentum. I tried the usual method:

Start with the group of all spatial translations, [tex]\mathrm{G}[/tex], with [tex]x^i \in \mathbb{R}[/tex], [tex]\{x^0=t,x^1=x,x^2=y,x^3=z\}[/tex], and let [tex]p^i \in \mathfrak{g}[/tex] form the Lie algebra.

A quick way would be to use the infinitesimal group element way:

[tex]g_i(x^i)=e^{ix^i p^i}=1+x^i \frac{ (i p^i)}{1!}+(x^i)^2 \frac{ (i p^i)^2}{2!}+ \cdots[/tex]

and so if [tex]e^{ix^i p^i}[/tex] acts on some [tex]f(X)[/tex], and we wish to translate it by [tex]x^i[/tex] to [tex]f(X+x^i)[/tex], we would have to make [tex]p^i[/tex] take the value:

[tex]ip^i = \partial_{x^i}[/tex]

as per the Taylor series of [tex]f(X+x^i)[/tex].

Is this sloppy/wrong?

Is there a nicer way to show this that follows the rotational transformations and group theory more closely?

I'm looking for something analogous to the way infinitesimal generators are defined:

[tex]\tau^i = \frac{1}{i}\partial_{x^i}(F(x^i))\Big|_{\{x^i=0\}}[/tex]

where the [tex]F(x^i)[/tex] become the rotation matrices, [tex]R_i(\theta)[/tex], of SO(3) in the angular theory.
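
A numerical sanity check of this logic (my own sketch, reusing the truncated d/dx matrix from earlier in the thread): exponentiating a·D really does translate a polynomial by a, which is the sense in which momentum (here d/dx = ip) generates translations.

[code]
import numpy as np
from scipy.linalg import expm

N = 6
D = np.diag(np.arange(1.0, N), k=1)  # d/dx on polynomials of degree < N

a = 2.0
T = expm(a * D)  # exp(a d/dx): the operator translating by a

p = np.array([1.0, -1.0, 0.0, 1.0, 0.0, 0.0])  # p(x) = 1 - x + x^3
q = T @ p                                       # should represent p(x + a)

x0 = 0.7
print(np.polyval(p[::-1], x0 + a))  # p(x0 + a) = 17.983
print(np.polyval(q[::-1], x0))      # same value
[/code]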
 
