Understanding Projectors in Quantum Mechanics: A Mathematical Approach

  • #71
Because { [tex] u_{ni} [/tex] } spans the Hilbert space, any u can be expressed as:
[tex] u = \sum_n \sum_i c_{ni} u_{ni} [/tex]

[tex] \sum_n P_n ( \sum_j \sum_i c_{ji} u_{ji} ) = [/tex]
[tex] \sum_n \sum_j \sum_i c_{ji} P_n ( u_{ji} ) = [/tex]
( because
[tex] P_n ( u_{ji} ) = u_{ji} [/tex]
when n = j, and
[tex] P_n ( u_{ji} ) = 0 [/tex]
when n ≠ j )
[tex] \sum_n \sum_i c_{ni} u_{ni} = u [/tex]

That takes care of
[tex] \sum_n P_n = I [/tex] .
 
  • #72
About Q:
1) Our universe can be regarded as a system that has been observed for years.
2) We have noted that every "position" in this system can be described by three real values by setting up a 3-dim reference "coordinate" system or frame.
3) These three real values [tex] (q_x, q_y, q_z ) [/tex] have to be regarded as eigenvalues of 3 different observables ( i.e. operators [tex] Q_x, Q_y, Q_z [/tex] ) if this approach of Hilbert space and states is to be used to investigate the system.
4) These real values [tex] (q_x, q_y, q_z ) [/tex] have been observed to form three continuous real lines ( three continuous spectra ).
5) By the arguments above and the expansion postulate, the underlying space of our system shall be a Hilbert space spanned by these three sets of eigenvectors, which have a minimal one-to-one relationship with the eigenvalues by assumption.
6) To simplify our analysis, we can just look at any one of the three observables, say Q_x.
To be continued ...
7). For
 
  • #73
Back in post #67, I "messed up"! There I wrote:

Now, let's look at your first "solution".
One solution to it is:
[tex] a = a_1 = a_2 [/tex]
i.e. they are multiplied by the same phase factor.
Can we say that the statement is correct in this case? ...
-----

The continuation of what I wrote there is wrong! The kets |φ1'> and |φ2'> are specific kets, not a general "family" of kets. This means that as long as a1 = a2, then there exists a value of a (namely, a = a1 (= a2)) such that |ψ'> = a|ψ>. Thus, the overall conclusion should have been:

The statement is incorrect in all cases, except: (i) when |φ1> and |φ2> represent the same state, or (ii) a1 = a2 in the relations |φk'> = ak|φk> (k = 1,2).

This means that your first "solution" is OK. I would only have added another sentence something like this:

So, we see that if a1 = a2 , then |ψ'> and |ψ> represent the same state.

... As you can see, the general "structure" of our question is like this: given "a1" and "a2", does there exist an "a"?

--------------

Now, however, this means that there is still a slight difficulty with your second "solution":
If
[tex] a \neq a_1 [/tex]
, then
[tex] | \varphi_1 > = \frac{ ( a_2 - a ) c_2 }{ ( a - a_1 ) c_1 } | \varphi_2 > [/tex]
; i.e. [tex] | \varphi_1 > [/tex] and [tex] | \varphi_2 > [/tex] have to be the same state.
In the first "solution", you have covered the case a1 = a2. So, now you need to cover the case a1 ≠ a2. This must be the starting point for the second "solution". ... How can we continue?

Well ... suppose there exists an a. Since, a1 ≠ a2, then a must be different from one of the ak. Suppose for definiteness – without loss of generality – that a ≠ a1. Then, ... [the rest of your "solution" is fine].

(Do you understand the meaning of the expression "suppose for definiteness – without loss of generality"?)

-----
... Sorry about the confusion on my part.

Sometime soon I hope to fix that post (#67).
-----------
But now I see that "editing privileges" have changed, so it will just have to stay that way!
______________
 
  • #74
I).
I think you have a typo here. Did you miss a subscript n in this paragraph?

N.2.2) We now enumerate three special cases for A:

(1) A has a discrete, nondegenerate spectrum; then

A = ∑n an|un><un| .
Yes, I missed a subscript. (I would go put that in now, but there's no more "edit" for me for that post!)
__________________
II)
Also, For acontinuous spectrum, why are the eigenvectors called " generalized"? Does this have anythings to do with the fact that most likely they will be "generalized" function such as dirac-delta function isntead of regular functions?
Yes, that is the basic idea. More generally, we can say that the resulting "eigenvectors" will not be square-integrable; we will have <a|a> = ∞, and so, technically the "function" a(x) ≡ <x|a> will not belong to the Hilbert space of "square-integrable" functions. Nevertheless, this difficulty is not in any way serious. In a self-consistent way, we find that we are able write <a|a'> = δ(a - a') .
__________________
III). Answer to the Exercise. It's actually a little difficult because you did not define eigenprojectors.
Yes, some details have been omitted.

Here are some (but not all) of those details (for the discrete case). [Note: By assumption, A is a self-adjoint operator according to a technical definition which has not been stated in full (however, in post #20 of this thread, part of such a definition was given). Thus, in the following, certain related subtleties will not be given full explicit attention.]

Recall (b) and (c) from N.2.1):
(b) the eigenvectors corresponding to distinct eigenvalues are orthogonal;

(c) the eigenvectors of A are complete (i.e. they span HS).
Define En = { |ψ> Є HS │ A|ψ> = an|ψ> } . From (b), En ⊥ En' , for n ≠ n'; so,

(b') En ⊥ En' , for n ≠ n' .

From (c), it follows that

(c') the vectors of ⋃nEn span HS .

Now, from (b') and (c'), it follows that for any |ψ> Є HS there exist unique |ψn> Є En such that

|ψ> = ∑nn> .

From the uniqueness of the |ψn>, we can define the "eigenprojectors" Pn by

Pn|ψ> = |ψn> .

... Alternatively, we can do it this way. Each En itself is a (closed (in the sense of "limits")) linear subspace of HS, and, therefore, has a basis, which we can set up to be orthonormal. Let |unk> be such a basis, where k = 1, ... , g(n) (where g(n) is the degeneracy (possibly infinite) of the eigenvalue an). We then define

Pn = ∑k=1,...,g(n) |unk><unk| .

You can then convince yourself that this definition of Pn is independent of choice of basis.
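As a side check, here is a small numerical sketch with numpy (my own illustration, not part of the exercise; the matrix A and the rounding tolerance are arbitrary choices): build the Pn exactly as defined above from the eigenvectors of a Hermitian matrix with a degenerate eigenvalue, and verify Pn² = Pn, PiPj = 0 for i ≠ j, ∑n Pn = I, and A = ∑n anPn.

[code]
import numpy as np

# Illustrative sketch: P_n = sum_k |u_nk><u_nk| from the eigenvectors of a
# Hermitian matrix with a degenerate eigenvalue, then check the projector properties.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(M)                          # a random unitary
A = U @ np.diag([2.0, 2.0, 5.0]) @ U.conj().T   # Hermitian; eigenvalue 2 is doubly degenerate

vals, vecs = np.linalg.eigh(A)                  # columns of vecs: orthonormal eigenvectors

projectors = {}                                 # group eigenvectors by (rounded) eigenvalue
for a, u in zip(np.round(vals, 8), vecs.T):
    projectors[a] = projectors.get(a, 0) + np.outer(u, u.conj())

Ps = list(projectors.values())
print(all(np.allclose(P @ P, P) for P in Ps))                       # P_n^2 = P_n
print(np.allclose(Ps[0] @ Ps[1], 0))                                # P_i P_j = 0 (i != j)
print(np.allclose(sum(Ps), np.eye(3)))                              # sum_n P_n = I
print(np.allclose(sum(a * P for a, P in projectors.items()), A))    # A = sum_n a_n P_n
[/code]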
__________________

So, let's continue.
So, I just gave a guess on this:

[tex] P_n ( \sum_i c_i u_{ni} + b u_\bot ) = \sum_i c_i u_{ni} [/tex]
where
[tex] u_{ni} [/tex] are eigenvectors of [tex] a_n [/tex] and [tex] u_\bot \; \bot [/tex] all [tex] u_{ni} [/tex] .
This "guess" was fine.

Next:
If i does not equal j, then
[tex] P_i P_j ( \sum_k c_k u_{ik} + \sum_l c_l u_{jl} + b u_\bot ) = [/tex]
[tex] P_i ( \sum_l c_l u_{jl} ) = 0 [/tex]
; so [tex] P_i P_j = 0 [/tex] .
Just one small problem: you already used "c" in the first sum over "k"; in the sum over "l" you need to use a different letter, say, "cl" → "dl". Then, your answer is fine.

Next:
The question that could be asked is how I can prove that any u can be decomposed as
[tex] \sum_k c_k u_{ik} + \sum_l c_l u_{jl} + b u_\bot [/tex]
.

I think setting
[tex] c_k = < u_{ik} | u > [/tex]
[tex] c_l = < u_{jl} | u > [/tex]
and
[tex] b u_\bot = u - \sum_k c_k u_{ik} - \sum_l c_l u_{jl} [/tex]
could take care of that.
Yes. Your idea works just fine. (Note: You need to make the change "cl" → "dl", or something like it.)
__________________

And now for the next part.
Because { [tex] u_{ni} [/tex] } spans the Hilbert space, any u can be expressed as:
[tex] u = \sum_n \sum_i c_{ni} u_{ni} [/tex]

[tex] \sum_n P_n ( \sum_j \sum_i c_{ji} u_{ji} ) = [/tex]
[tex] \sum_n \sum_j \sum_i c_{ji} P_n ( u_{ji} ) = [/tex]
( because
[tex] P_n ( u_{ji} ) = u_{ji} [/tex]
when n = j, and
[tex] P_n ( u_{ji} ) = 0 [/tex]
when n ≠ j )
[tex] \sum_n \sum_i c_{ni} u_{ni} = u [/tex]

That takes care of
[tex] \sum_n P_n = I [/tex] .
Yes!
__________________
 
  • #75
About Q:
1) Our universe can be regarded as a system that has been observed for years.
2) We have noted that every "position" in this system can be described by three real values by setting up a 3-dim reference "coordinate" system or frame.
3) These three real values [tex] (q_x, q_y, q_z ) [/tex] have to be regarded as eigenvalues of 3 different observables ( i.e. operators [tex] Q_x, Q_y, Q_z [/tex] ) if this approach of Hilbert space and states is to be used to investigate the system.
4) These real values [tex] (q_x, q_y, q_z ) [/tex] have been observed to form three continuous real lines ( three continuous spectra ).
5) By the arguments above and the expansion postulate, the underlying space of our system shall be a Hilbert space spanned by these three sets of eigenvectors, which have a minimal one-to-one relationship with the eigenvalues by assumption.
6) To simplify our analysis, we can just look at any one of the three observables, say Q_x.
To be continued ...
7). For
1) All we really care about (for the moment, at least) is whether or not our "model" will "explain" the "observed phenomena".

2) This is an essential part of our "model". It doesn't necessarily have to apply to the universe as a whole, but only some part of it (e.g. the "laboratory").

3) Yes. But we still have to define the Hilbert space and set up an "eigenvalue equation".

4) Yes ... at least to some (very good) approximation. We will use this hypothesis in our "model".

5) This is curious. I was expecting to define the Hilbert space first, and then define Q on it afterwards. I was expecting something like: let H be the space of all square-integrable functions R3 → C ... and then we set up an "eigenvalue equation"

Q f(q) = q f(q) .

Afterwards, we would then "find out" (or "show") that the components of Q are self-adjoint, etc ... .

6) This is actually the "starting point" of E.2.2). You have taken some time to consider its 'justification' – that is good.

--------------------------

I think I'm just going to give the answer I had in mind. (I basically said it already in the previous post.) When we set up the eigenvalue equation for Q, we find that there are no "square-integrable" functions f such that [Qf](q) =qf(q) . So, strictly speaking, with respect to our Hilbert space, Q has no eigenfunctions. But then again, we can still make 'sense' of the "eigenvalue equation" ... and so, we say it has been "generalized", and that the solutions f are "generalized" eigenfunctions.
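To see this numerically, here is a rough sketch of my own (the grid and the Gaussian widths are arbitrary choices): normalized Gaussians of shrinking width come ever closer to satisfying Q f = q0 f, yet their peak value blows up, so the limiting object cannot be a square-integrable function.

[code]
import numpy as np

# Sketch: narrow normalized Gaussians as approximate "eigenfunctions" of Q.
q = np.linspace(-10, 10, 200001)
dq = q[1] - q[0]
q0 = 1.5

for sigma in [1.0, 0.1, 0.01]:
    f = np.exp(-(q - q0) ** 2 / (2 * sigma ** 2))
    f /= np.sqrt(np.sum(np.abs(f) ** 2) * dq)                    # enforce integral |f|^2 dq = 1
    residual = np.sqrt(np.sum(np.abs((q - q0) * f) ** 2) * dq)   # || (Q - q0) f ||
    print(sigma, residual, f.max())   # residual shrinks like sigma, peak grows like 1/sqrt(sigma)
[/code]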
 
  • #76
Eye,

Yes, it's strange. We can no longer edit our previous responses.

I agree that you caught me. I shall use a different letter for it.

Actually, I have no idea what you mean by this --
"suppose for definiteness – without loss of generality"?
Would you mind elaborating it?

About Q, I guess I actually went back to justify the Hilbert space. There could be more; I can explore it a little later; you have already noticed that. Basically, what will the Hilbert space look like if I build it with the eigenvectors of the Q operator? Since we know its eigenvectors are not square integrable on the real line, this Hilbert space might be bigger than the space of square integrable functions. Some thorough knowledge of functional analysis and probability theory might be needed here.

Actually, that also can lead to another question. That justification brings me a Hilbert space with a probability decomposition but not necessarily complex-valued coefficients. I think the need for complex-valued coefficients seems to be explainable by the scattering of electron diffraction or its interference.

My first response to this question was like this:
I did not submit it because I am seeing some holes there, but it's worth showing as a different perspective in approaching this question.
I was not sure of the scope of your question, so I decided to go back to discuss what the observed "positions" are ( real-valued eigenvalues, as we can see ) and the assumed Q operator associated with them.

--->
In order to define [tex] Q | \psi > [/tex], you have to have a background manifold, then you can say [tex] Q | \psi > = q | \psi > [/tex], where q is not a constant.

In order to look for an answer of
[tex] Q | \psi_n > = q | \psi_n > = q_n | \psi_n > [/tex]
where [tex] q_n [/tex] needs to be a constant.

If | \psi_n > is a function f(q) of the coordinate q of the manifold M, then [tex] q*f(q) = q_n*f(q) [/tex] ; [tex] | \psi_n > [/tex] will have to be a function f(q) that is one when q equals to [tex] q_n [/tex] and zero elsewhere.

---> Then this will lead to f(q) is not a square integrable function.
 
  • #77
This is what I was going to continue on the Hilbert space built by the Q eigenvectors:

7). We can see two approaches to expand the space now.
a. Discrete approach : [tex] | u > = \sum_n a_n | u_n > [/tex] or
[tex] A = \sum_n a_n | u_n > < u_n | [/tex]
b. Continuous approach : [tex] | u > = \int a_n | u_n > [/tex] or
[tex] A = \int a | u > < u| du [/tex]

Note I used a for the coefficient instead of u. It sounds better to me in that it shows it's a function of u but dependent on A. It's more comparable to the discrete notation too. Hope you agree.

8). Of course, by the postulate N.2.2) (3), we only need to continue with 7.b.

I have to pause here before I can continue.
 
  • #78
9). I paused to ponder what this integration means. I think it's a path integral over a single parameter of a family of operators | q > < q | ( u ) ( I now use q for the ket instead of u, and use u as the parameter to denote this family of operators ), and a(u) is a certain coefficient of A for the subcomponent of the operator | q > < q |.

Now, we will have to think what is the integration and differential of operators.

In a completely abstract setup with an infinite and possibly uncountable basis, defining an operator's integration will have to deal with something like examining the change [tex] | q > < q | ( u ) | \psi > [/tex] of the family of operators | q > < q | ( u ) for all [tex] \psi [/tex] in H. I will assume I do not need to go so far.

10). Bottom line here is that we will face a problem that the eigenvector [tex] | q_0 > [/tex] of the eigenvalue [tex] q_0 [/tex] is unable to be represented by such an integral except when setting the a(u) as a function such that its value is [tex] \infty [/tex] when u = q_0 and zero elsewhere, but its integration will be one.

11). In a way, if we already assume only square integrable functions are legitimate coefficients, then this is basically admitting there is no true eigenvalues or eigenvectors for operator Q. The eigenvalue we have observed as a point q_0 might be actually [tex] q_0+\triangle q [/tex].

12). Note I am able to separate q and u in the integration. The equation with q and u equated is basically a special case where the parameter u was set to be the coordinate itself.

13). Now back to 7.a, I will try to show whether there is a possibility that we can set up a Hilbert space that includes the square integrable functions and the generalized functions in a different way.

Pause...
 
  • #79
14). Back to 9), I have said that using
[tex] A = \int a(u) | q > < q | ( u ) du [/tex]
, we can give a general representation of "mixed" states.
We will also find out that if we transform the parameter to another parameter v, then
[tex] A = \int a(v) | q > < q | ( v ) (du/dv) dv [/tex] .
So the coefficient for a new parameter v will be a(v)*(du/dv).
The coefficient will change with the introduction of a different parameter.
If we intend to standardize the coefficient, the easiest choice will be using q as the standard parameter, and so a(q) could be used to represent a state, and
[tex] A = \int a(q) | q > < q | ( q ) dq [/tex]
or
[tex] A = \int a(v) | q > < q | ( v ) (dq/dv) dv [/tex] .

15. If we compare this to a discrete case in which
[tex] A = \sum_n a_n | q_n > < q_n | [/tex]
, we can note it's like we place [tex] q_n [/tex] on a real line and
[tex] \int_{-\infty}^{q_0} a(q) | q > < q | ( q ) dq \cong [/tex]
[tex] \sum_{q_n <= q_0} a_n | q_n > < q_n | [/tex]

16. Back to 7.a, if we want to build a Hilbert space in which the state can be a discrete sum of eigenvectors of the "position" eigenvalues, we will write
[tex] \sum_n a_n | q_n > < q_n | [/tex] .
Looking at 15, is there a way we can have both forms of summation and integration coexist? I believe I have seen this in probability theory: you can set a probability distribution like this, [tex] P([- \infty, a ]) [/tex], where [tex] P([- \infty, - \infty ]) = 0 [/tex] and [tex] P([- \infty, \infty ]) = 1 [/tex]; also, it shall be an increasing function. This probability distribution is not necessarily continuous or differentiable everywhere; where it's differentiable the derivative will be square integrable, and where it's not differentiable it will have "jump" points whose "generalized" derivatives work just like delta functions.

Pause
 
  • #80
17) So, we can see the derivative of [tex] P([ - \infty, q]) [/tex], denoted as f(q), is related to a(q). To clarify their relationship, I need to add the conditions for a(q) that might otherwise be forgotten. For a mixed state, in a discrete case, [tex] \sum_n a_n = 1 [/tex] ; so for a continuous case, I would say [tex] \int a(q) dq = 1 [/tex] is needed.
By that, we can see a(q) is f(q).
Note, a(q) is not the wavefunction [tex] \psi(q) [/tex] then, because
[tex] \int \overline{\psi(q)} \psi(q) dq = 1 [/tex]
If we want to relate them, then
[tex] \overline{\psi(q)} \psi(q) = a(q) [/tex]
seems to be a possible solution.
Actually, there is an issue here, which is related to the exercise you showed as [tex] c_1 \psi_1 + c_2 \psi_2 = c_1 \psi_1 \prime + c_2 \psi_2 \prime [/tex] .
 
  • #81
19). To illustrate this, I need to distinguish [tex] | q_n \prime > [/tex] from [tex] | q_n > [/tex] in that [tex] | q_n \prime > = a_n | q_n > [/tex] where [tex] | a_n | = 1 [/tex] but [tex] a_n \neq 1 [/tex] .
First, [tex] | q_n \prime > < q_n \prime | = | q_n > < q_n | [/tex] .
so
[tex] \int a(q \prime ) | q \prime > < q \prime | dq \prime = \int a(q) | q > < q | dq [/tex] .
No way to distinguish two "mixed" states from this point of view.
If we look from the perspective of a ket, compare
[tex] \int c(q \prime ) | q \prime > dq \prime [/tex]
to
[tex] \int c(q) | q > dq [/tex]
, even if c is the same function, they could be two different kets, by the exercise we have shown, in that even if [tex] | q \prime > [/tex] and [tex] |q > [/tex] are the "same", their complex linear combinations are not the "same", and the integration here can be viewed as a continuous linear combination of infinitely many "same" kets. Note these kets are associated with a "pure" state though.
 
  • #82
Yes, it's strange. We can no longer edit our previous responses.
For me, it is not only "strange", but also, "too bad". This means that (apart from any 'embarrassment' that incorrect posts will remain "permanently" on line) the data base, as a whole, as a 'resource' for someone who just "surfs-in" (looking for information) will no longer be as reliable as it could have been. This is unfortunate. Someone "surfing" the net may arrive at a post in some thread and think that what is written there is correct without realizing that several posts later on a comment has been made explaining how that post was in fact incorrect.

I was envisioning that this website would become a real reliable "source" of accurate information. Now, I see that as far as my own posting is concerned, this will only be possible with additional 'care', over and above the usual amount, to make sure that posts are placed "correctly" at the onset (or shortly thereafter). Given my own limits of "time" and "knowledge", such a constraint may prove to be too demanding.
_______________
Actually, I have no idea what you mean by this --
"suppose for definiteness – without loss of generality"?
Would you mind elaborating it?
Sometimes, in the midst of a mathematical proof, one reaches a stage where a certain proposition P(k) will hold for at least one value of k. This particular value of k, however, is 'unknown' but nevertheless 'definite'. (For example, in the case of your "solution" above, the proposition P(k) was simply "a ≠ ak", and this had to be true for at least one of k =1 or k =2.)

Moreover, it is sometimes the case that the continuation of the proof proceeds in 'identical' fashion regardless of the particular value of k for which P(k) is true. (This was indeed the case for your "solution".) So, instead of saying that P(k) is true for some 'definite' value of k, say k = ko, where ko is 'unspecified', one says "suppose for definiteness that P(1) is true", and since the proof is the 'same' for any other 'choice' of k, one adds the remark "... without loss of generality".

The statement is, therefore, a sort of 'shorthand' which allows one to bypass certain 'mechanical' details and go straight to the essential idea behind the proof.
_______________
Basically, what will the Hilbert space look like if I build it with the eigenvectors of the Q operator? Since we know its eigenvectors are not square integrable on the real line, this Hilbert space might be bigger than the space of square integrable functions. Some thorough knowledge of functional analysis and probability theory might be needed here.
In the "functional analysis" approach, one begins with a Hilbert space of square-integrable functions RC. The 'justification' for this comes about from the Schrödinger equation (in "x-space") coupled with the Born probability rule that ψ*(x)ψ(x) is the "probability density", where the latter of these implies that the (physical) wavefunctions are all square-integrable. Thus, the probability P(I) of finding the particle in the (non-infinitesimal) interval I is given by

P(I) = (ψ, PIψ) ,

where PI is the "projector" defined by

[PIψ](x) ≡
ψ(x) , x Є I
0 , otherwise

and we have defined an "inner product"

(φ, ψ) = ∫φ*(x)ψ(x) dx .

This 'family' of projectors PI already contains in it the idea of |q><q|, since they are connected by the simple relation

P(a,b) = ∫_a^b |q><q| dq .
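Here is a small discretized sketch of my own (the Gaussian ψ and the interval (0, 1] are arbitrary) showing PI in action and the two equivalent ways of computing P(I):

[code]
import numpy as np

# Sketch: the interval projector P_I acting on a sampled wavefunction,
# and P(I) = (psi, P_I psi) compared with the direct integral of |psi|^2 over I.
q = np.linspace(-10, 10, 20001)
dq = q[1] - q[0]
psi = np.pi ** -0.25 * np.exp(-q ** 2 / 2)          # a normalized Gaussian

a, b = 0.0, 1.0
in_I = (q > a) & (q <= b)
P_I_psi = np.where(in_I, psi, 0)                     # [P_I psi](x) = psi(x) for x in I, else 0
print(np.sum(np.conj(psi) * P_I_psi) * dq)           # (psi, P_I psi)
print(np.sum(np.abs(psi[in_I]) ** 2) * dq)           # integral over I of |psi|^2
[/code]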

... Now, let's look more closely at what you say:
This hilbert space might be bigger than the space of square integrable functions.
Here, you are suggesting the idea of "building" a space from the |q>'s in such a way that those objects themselves are included in the space. I have never thought about such a proposition in any detail. Nevertheless, the original Hilbert space would then be seen as "embedded" in a larger 'extended' vector space which would include the |q>'s (and whatever else).

For the record, you may want to know the 'technical' definition of a "Hilbert space" H:

(i) H is a "vector space";

(ii) H has an "inner product" ( , );

(iii) H is "complete" in the "induced norm" ║ ║ ≡ √( , );

(iv) H is "separable".

The last of these is usually not included in the definition. I have put it in here, since the Hilbert spaces of QM are always "separable". You may want to 'Google' some of these terms or check at mathworld or Wikipedia, or the like.

Note that such a notion of an "extended" space is used in what is called a "rigged" Hilbert space. I do not know much about such a construction and, in particular, I am unsure as to what its 'utility' is from a 'practical' point of view.

There is also the "Theory of Distributions" (or "Distribution Theory"), which deals with this idea of "generalized" functions (i.e. "distributions") in a formally rigorous way.
_______________
Actually, that also can lead to another question. That justification brings me a Hilbert space with a probability decomposition but not necessarily complex-valued coefficients. I think the need for complex-valued coefficients seems to be explainable by the scattering of electron diffraction or its interference.
So far, we have been viewing the situation from a "static" perspective. As soon as we admit "motion" into the picture, then complex-valued coefficients come into play by way of necessity.

Think of a (time-independent) Hamiltonian, and the Schrödinger equation

i hbar ∂t|ψ(t)> = H|ψ(t)> .

With |φn> a basis of eigenkets such that H|φn> = Enn> , we then have general solutions of the form

|ψ(t)> = ∑n exp{ -iEnt / hbar } cnn> .

There is no way 'around' this. The coefficients must be complex-valued.
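For concreteness, here is a toy two-level sketch of that formula (my own numbers, with hbar set to 1): even if the cn are real at t = 0, the phases exp(-iEnt/hbar) make them complex at later times, while the norm stays 1.

[code]
import numpy as np

# Sketch: |psi(t)> = sum_n exp(-i E_n t / hbar) c_n |phi_n> in a two-level toy model.
hbar = 1.0
E = np.array([1.0, 2.0])                            # eigenvalues of H
c0 = np.array([np.sqrt(0.5), np.sqrt(0.5)])         # real coefficients at t = 0

t = 0.7
ct = np.exp(-1j * E * t / hbar) * c0                # coefficients at time t
print(ct)                                           # genuinely complex
print(np.isclose(np.sum(np.abs(ct) ** 2), 1.0))     # the state stays normalized
[/code]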

Your example of "diffraction" or "interference" appears (to me) to be a special case of this general fact. On the other hand, we know that such problems can be 'treated' by the formalism of "classical optics", in which case the use of complex-valued coefficients is merely a matter of 'convenience', and not one of 'necessity' (so, I'm not so sure that this is in fact a 'good' example).
_______________
In order to define [tex] Q | \psi > [/tex], you have to have a background manifold, then you can say [tex] Q | \psi > = q | \psi > [/tex], where q is not a constant.
You mean: Q|ψq> = q|ψq>, where q is not a constant.
---> Then this will lead to f(q) is not a square integrable function.
Yes. ... And as I mentioned above, "Distribution Theory" handles this 'difficulty' in a perfectly rigorous way.
_______________
 
  • #83
7). We can see two approaches to expand the space now.
a. Discrete approach : [tex] | u > = \sum_n a_n | u_n > [/tex] or
[tex] A = \sum_n a_n | u_n > < u_n | [/tex]
b. Continuous approach : [tex] | u > = \int a_n | u_n > [/tex] or
[tex] A = \int a | u > < u| du [/tex]

Note I used a for the coefficient instead of u. It sounds better to me in that it shows it's a function of u but dependent on A. It's more comparable to the discrete notation too. Hope you agree.
Here are some "notational" details:

The 'continuous analogue' of the notation for the 'discrete case'

[1] A = ∑n an|un><un|

is

[1'] A = ∫ a(s)|u(s)><u(s)| ds .

What you wrote, i.e. (note: I have put "a" → "a(u)")

[2'] A = ∫ a(u)|u><u| du ,

is the analogue of

[2] A = ∑n an|n><n| .

Finally, the analogue of

[3'] A = ∫ a |a><a| da

is

[3] A = ∑a_n an|an><an| .
_______________
9). I paused to ponder what this integration means. I think it's a path integral over a single parameter of a family of operators | q > < q | ( u ) ( I now use q for the ket instead of u, and use u as the parameter to denote this family of operators ), and a(u) is a certain coefficient of A for the subcomponent of the operator | q > < q |.
I don't see how it can be construed as a "path integral". In a path-integral formulation of the problem for a particle moving in one dimension, the single parameter q is construed as a function of time, i.e. q(t), where that function is varied over all 'possible' functions on t Є [t1, t2] subject to the constraint δq(t1) = δq(t2) = 0. We would then have

<q(t2)|q(t1)> = a path integral .

But here, the 'closest' thing I can see is

<q'|q> = δ(q' - q) .

In short, a "path integral" can come into play once we consider the "time evolution" of the quantum system. Right now, we are only concerned with the situation at a single 'given' time.
_______________
Now, we will have to think what is the integration and differential of operators.

In a completely abstract setup with an infinite and possibly uncountable basis, defining an operator's integration will have to deal with something like examining the change [tex] | q > < q | ( u ) | \psi > [/tex] of the family of operators | q > < q | ( u ) for all [tex] \psi [/tex] in H. I will assume I do not need to go so far.
Now, the "family" of operators you are considering, what I will call |q><q|, is very much like a 'derivative' of the projector PI which I mentioned before; i.e.

[PIψ](x) ≡
ψ(x) , x Є I
0 , otherwise .

Let us define E(q) ≡ P(-∞,q). Then, 'formally' we have

dE(q) = |q><q| dq .

The LHS is the 'formal' expression for the "differential of the spectral family" in the context of "functional analysis"; the RHS is the "Dirac" equivalent.
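A rough discretized sketch of that correspondence (my own construction, with an arbitrary Gaussian ψ): <ψ|E(q)|ψ> is a cumulative probability in q, and its numerical derivative reproduces |ψ(q)|² = |<q|ψ>|², which is the content of <ψ|dE(q)|ψ> = |<q|ψ>|² dq.

[code]
import numpy as np

# Sketch: E(q0) = P_(-inf, q0] truncates the wavefunction; <psi|E(q)|psi> is a CDF
# whose derivative is |psi(q)|^2, mirroring  dE(q) = |q><q| dq.
q = np.linspace(-5, 5, 10001)
dq = q[1] - q[0]
psi = np.pi ** -0.25 * np.exp(-q ** 2 / 2)

def E_on_psi(q0):                        # E(q0) acting on psi
    return np.where(q <= q0, psi, 0)

sample = q[::100]                        # coarser points at which to evaluate the CDF
cdf = np.array([np.sum(np.conj(psi) * E_on_psi(q0)).real * dq for q0 in sample])
pdf = np.gradient(cdf, sample)           # numerical derivative of <psi|E(q)|psi>
print(np.allclose(pdf, np.abs(psi[::100]) ** 2, atol=1e-2))   # matches |psi(q)|^2
[/code]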
_______________
10). Bottom line here is that we will face a problem that the eigenvector [tex] | q_0 > [/tex] of the eigenvalue [tex] q_0 [/tex] is unable to be represented by such an integral except when setting the a(u) as a function such that its value is [tex] \infty [/tex] when u = q_0 and zero elsewhere, but its integration will be one.

11). In a way, if we already assume only square integrable functions are legitimate coefficients, then this is basically admitting there is no true eigenvalues or eigenvectors for operator Q.
Yes. And this is where "Distribution Theory" comes in.


... The eigenvalue we have observed as a point q_0 might be actually [tex] q_0+\triangle q [/tex].
I don't see this (... unless we take the limit Δq → 0).
_______________
_______________

... As it turns out, unfortunately, starting this week and continuing on for the next several months(!), I will become very busy. Consequently, I will have little time for any significant activity in the Forum here. I have already reduced my posting to only this thread alone (over the last few weeks).

This week, however, I still do hope to at least get to the next two postulates and connect them to the original issue which was of concern – "expectation values", "mixed states", and the "Trace" operation. If you recall, it was matters of this kind which caused me to ask you if you had gone over the postulates in a clear, concise way.

... After that, there will be only one more postulate, that of "time evolution". If we deal with that here, I must tell you in advance that my input into this thread will 'evolve' only very slowly.
_______________
 
  • #84
Eye,

Thanks for your reply.

I think the concept of the projector of an interval is more straightforward and better than my approach, even though I think the way I approached it can eventually be proved to be the same. There is just a misunderstanding here: maybe I shall not use the word "path integral"; I did not mean to associate that integration with any time parameter. The parameter is just any real line in this case. Of course, in this case, I will have to be able to define how to integrate an operator function over a real line. The idea of the projector of an interval, and so of the point projector being its derivative, takes care of the issue of what the integration is here.
 
  • #85
Eye,

Sorry about this stupid question.

But what does LHS and RHS stand for? I can't find it in mathworld.

Thanks
 
  • #86
Actually, I did a little bit of verification here to see how this is derived.

The probability P(I) of finding the particle in the (non-infinitesimal) interval I is given by

P(I) = (ψ, PIψ) ,
-----------------------------------------

First, in a discrete case,
[tex]P(I) = \sum_n (\psi, q_n) (q_n , \psi) [/tex]
Take [tex] P_I \psi = \sum_n (\psi, q_n) q_n [/tex] ,
[tex] (\psi, P_I \psi ) = \sum_n \overline{( \psi, q_n )} ( \psi , q_n) = [/tex]
[tex] \sum_n ( q_n , \psi ) ( \psi , q_n) = P(I) [/tex]

Then, translated into the continuous case,
[tex]P(I) = \int_a^b (\psi, q) (q ,\psi) dq [/tex]
Take [tex] P_I \psi = \int_a^b (\psi, q) | q > dq [/tex] ,
[tex] (\psi, P_I \psi ) = ( \psi, \int_a^b ( \psi, q ) | q > dq ) = [/tex]
[tex] \int_a^b \overline{( \psi, q )} ( \psi , q) dq = [/tex]
[tex] \int_a^b ( q , \psi ) ( \psi , q) dq = P(I) [/tex]

Now, this looks better.
 
  • #87
Sammywu said:
Eye,

Sorry about this stupid question.

But what does LHS and RHS stand for? I can't find it in mathworld.

Thanks
"LHS" stands for "left-hand-side"; "RHS" stands for "right-hand-side". :smile:
 
  • #88
Eye,

Thanks. I actually thought they could stand for some special Hilbert spaces.

Any way, your mention of "rigged" Hilbert space probably is what I was led to do with a ket defined as a function series { f_n } and
[tex] \lim_{ n \rightarrow \infty } \int_{-\infty}^\infty f_n = 1 [/tex]
. So all kets can be treated as a function series. Just as you said, it might not be of any practical use. I guess there is no need to continue.

Any way, I have gone thru an exercise showing me that I can construct a "wavefunction" space with any observed continuous eigenvalues.

Note the arguments applied are not specific to "position" but are applicable to any continuous eigenvalues.
 
  • #89
I am not sure whether this is too much, but I found I can go even further; something is interesting here.

21). I can represent a ket in such a way:
[tex] \int \psi(q) | q > dq [/tex]
This shows that the wavefunction is actually an abbreviated way of writing this ket.

The eigenvector of an eigenvalue q_0 can then be written as
[tex] \int \delta(q - q_0) | q > dq [/tex] .

Or in general, I can extend this into a sample such as a function series { f_n }
in that:

[tex] \lim_{ n \rightarrow \infty } \lim_{ q \rightarrow q_1 , q_2 } f_n ( q) = \infty [/tex]
and
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^{q_1} f_n ( q) dq = a_1 [/tex]
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^{q_2} f_n ( q) dq = 1 [/tex]

22). I can even check what the inner product of two kets shall be, without a clear prior definition of inner products:

[tex] < \psi_1 | \psi_2 > = [/tex]
[tex] < \int \psi_1(q) |q > dq | \int \psi_2(q \prime) | q \prime > dq \prime > = [/tex]
[tex] \int \overline{\psi_1(q)} < q | \int \psi_2(q \prime) | q \prime> dq \prime > dq = [/tex]
[tex] \int \overline{\psi_1(q)} \int \psi_2(q \prime) < q | q \prime > dq \prime dq = [/tex]
 
  • #90
Response to posts #79-81

14). ... I have said that using
[tex] A = \int a(u) | q > < q | ( u ) du [/tex]
, we can give a general representation of "mixed" states.
Now, wait just a moment! How did we get onto the subject of "states" in a decomposition like that of above? Up until now, we have been talking about "observables". ... "Mixed states" will come soon.
______________
... If we intend to standardize the coefficient, the easiest choice will be using q as the standard parameter ... and
[tex] A = \int a(q) | q > < q | ( q ) dq [/tex]
Yes, the easiest choice of "notation" is

A = ∫ a(q) |q><q| dq .

Note, however, that such an operator is merely a function of Q. Specifically, A = a(Q). In other words, the matrix elements of A, in the "generalized" |q>-basis, are given by

<q|A|q'> = a(q) δ(q – q') .

(It turns out that: any linear operator L is a 'function' of Q iff [L,Q] = 0. (This, of course, applies to a spinless particle moving in one dimension.))

BUT ...

In all of this, I am getting the feeling that each of us is misunderstanding what the other means. In the above, if you 'meant' that A is some self-adjoint operator whose spectrum is (simple) continuous, then 'automatically' we can write

[1] A = ∫ a |a><a| da

with no difficulty whatsoever. There is no reason to write it any other way, because by 'hypothesis'

[2] A|a> = a|a> .

The exact analogue of these expressions in the corresponding (nondegenerate) discrete case is

[1'] A = ∑a a |a><a| ,

and

[2'] A|a> = a|a> .

In the discrete case, however, we modify the notation by introducing an index like "n" because somehow 'it is more pleasing to the eye'. But to do an analogous thing in the continuous case is completely uncalled for, since doing so will introduce a new element of "complexity" which provides no advantage whatsoever. ... Why should we write "a" as a function of some parameter "s", say a = w(s), and then have da = w'(s)ds? ... We will get nothing in return for this action except additional "complexity"! (Note that changing the "label" for the generalized ket |a> → |u(a)> introduces no such difficulties.)
______________
16. ... we will write
[tex] \sum_n a_n | q_n > < q_n | [/tex] .
Looking into 15, is there a way we can have both forms of summation and integration coexist.
Yes. Given a self-adjoint operator A, then "the spectrum of A" (i.e. "the set of all eigenvalues ('generalized' or otherwise) of A") can have both discrete and continuous parts. A simple example of such an observable is the Hamiltonian for a finite square-well potential. The "bound states" are discrete (i.e. "quantized" energy levels), whereas the "unbound states" are continuous (i.e. a "continuum" of possible energies).
______________
17) ... For a mixed state, in a discrete case, [tex] \sum_n a_n = 1 [/tex] ; so for a continuous case, I would say [tex] \int a(q) dq = 1 [/tex] is needed.
Hopefully, soon we will be able to talk 'sensibly' about "mixed states". Once we do that, you will see that a 'state' like

ρ = ∫p(q)|q><q|dq (with, of course, p(q) ≥0 (for all q) and ∫ p(q) dq = 1) ,

is not 'physically reasonable'.

So far, we have explained only "pure states", as given by our postulate P1. Recall:
P0: To a quantum system S there corresponds an associated Hilbert space HS.

P1: A pure state of S is represented by a ray (i.e. a one-dimensional subspace) of HS.
When we get to discussing "mixed states", we will not explain them in terms of a "postulate", but rather, those objects will be introduced by way of a 'construction' in terms of "pure states". I have already alluded to such a "construction" in post #46 of this thread. There I wrote:
A pure state is represented by a unit vector |φ>, or equivalently, by a density operator ρ = |φ><φ|. In that case, ρ² = ρ.

Suppose we are unsure whether or not the state is |φ1> or |φ2>, but know enough to say that the state is |φi> with probability pi. Then the corresponding density operator is given by

ρ = p1|φ1><φ1| + p2|φ2><φ2| .

In that case ρ² ≠ ρ, and the state is said to be mixed. Note that the two states |φ1> and |φ2> need not be orthogonal (however, if they are parallel (i.e. differ only by a phase factor), then we don't have a mixed case but rather a pure case).
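A quick numerical sketch of that distinction (my own two-dimensional example; the vectors and weights are arbitrary):

[code]
import numpy as np

# Sketch: rho^2 = rho for a pure state, rho^2 != rho for a genuine mixture; Tr(rho) = 1 in both.
phi1 = np.array([1.0, 0.0])
phi2 = np.array([1.0, 1.0]) / np.sqrt(2)        # not orthogonal to phi1 (which is allowed)

rho_pure = np.outer(phi1, phi1.conj())
rho_mixed = 0.5 * np.outer(phi1, phi1.conj()) + 0.5 * np.outer(phi2, phi2.conj())

print(np.allclose(rho_pure @ rho_pure, rho_pure))      # True  -> pure
print(np.allclose(rho_mixed @ rho_mixed, rho_mixed))   # False -> mixed
print(np.trace(rho_pure), np.trace(rho_mixed))         # both equal 1
[/code]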
______________
______________

Sammy, I am hoping to post a response to your posts #84,86,88,89 by Monday. After that I hope to get at least one more postulate out. (There are also (at least) two items from our previous exchanges which I wanted to address.)
 
  • #91
23). In trying to evaluate 22), I found I need something clearer about how to represent all vectors. Let me put all eigenvalues on one real line; for each q in this real line, we associate an eigenvector [tex] \vec{q} [/tex] with it. I want to avoid using | q > for now, because | q > is actually a ray. Also, remember there are many vectors [tex] c \, \vec{q} [/tex] with | c | = 1 that could be placed here; let's just pick any one of them.

So, now with a function c(q), we can do a vector integration over the q real line as:
[tex] \int_{ - \infty }^\infty c(q) \, \vec{q} \, dq [/tex]
Note that q in c(q) and dq is just a parameter, and [tex] \vec{q} [/tex] is a vector, also viewed as a vector function parameterized by q.

24). Referring back to 21), all vectors can now be represented by:
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty c_n(q) \, \vec{q} \, dq [/tex]

25). In particular, let
[tex] \delta_n( q - q_0 ) = n [/tex] for [tex] q_0 - 1/2n \leq q \leq q_0 + 1/2n [/tex] and 0 elsewhere;
the eigenvector for q_0 can be represented as:
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(q - q_0) \, \vec{q} \, dq [/tex]

26). And, for other vectors, c_n(q) can be set to the fixed function c(q) for all n;
we can verify its consistency with the normal representation:

[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty c_n(q) \, \vec{q} \, dq = [/tex]
[tex] \int_{ - \infty }^\infty c(q) \, \vec{q} \, dq = [/tex]
[tex] \int_{ - \infty }^\infty c(q) \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(q \prime - q) \, \vec{q \prime} \, dq \prime \, dq = [/tex]
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \int_{ - \infty }^\infty c(q) \delta_n(q \prime - q) \, \vec{q \prime} \, dq \, dq \prime = [/tex]
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \left( \int_{ - \infty }^\infty c(q) \delta_n(q \prime - q) \, dq \right) \vec{q \prime} \, dq \prime = [/tex]
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty c(q \prime) \, \vec{q \prime} \, dq \prime [/tex]

27). For the inner product of c and d,
[tex] \left( \int_{ - \infty }^\infty c(q) \, \vec{q} \, dq \; , \int_{ - \infty }^\infty d(q \prime ) \, \vec{q \prime } \, dq \prime \right) = [/tex]
[tex] \int_{ - \infty }^\infty \overline{d(q \prime)} \left( \int_{ - \infty }^\infty c(q) \, \vec{q} \, dq \; , \vec{q \prime} \right) dq \prime [/tex]

28). Now, I have to discuss what it shall be for
[tex] \left( \int_{ - \infty }^\infty c(q) \, \vec{q} \, dq \; , \vec{q \prime } \right) [/tex]

First, if we look into the inner product of two eigenvectors [tex] \vec{q \prime } [/tex] and [tex] \vec{q} [/tex], we can first think about what the inner product shall be between
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(u - q) \, \vec{u} \, du [/tex]
and
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(v - q \prime) \, \vec{v} \, dv [/tex]
.

Comparing it to a discrete case, I guess this could be
[tex] | ( q , q \prime) | = \lim_{ i \rightarrow \infty } \lim_{ j \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \delta_j( u - q ) du [/tex]

So, in general,
[tex] ( q , q \prime) = e^{ia} \lim_{ i \rightarrow \infty } \lim_{ j \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \delta_j( u - q ) du [/tex]

The phase factor is put in to show the possibility of two out-of-phase eigenvectors. For now, we can assume our standard basis vectors are in phase.

With this, we can further translate the inside part of 27) to

[tex] \left( \int_{ - \infty }^\infty c(q) \, \vec{q} \, dq \; , \vec{q \prime} \right) = [/tex]
[tex] \int_{ - \infty }^\infty c(q) \, ( \vec{q} , \vec{q \prime} ) \, dq = [/tex]
[tex] \int_{ - \infty }^\infty c(q) \lim_{ i \rightarrow \infty } \lim_{ j \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \delta_j( u - q ) \, du \, dq = [/tex]
[tex] \lim_{ i \rightarrow \infty } \lim_{ j \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \int_{ - \infty }^\infty c(q) \delta_j( u - q ) \, dq \, du = [/tex]
[tex] \lim_{ i \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) c(u) \, du = [/tex]
[tex] c(q \prime) [/tex]

Placing that into 27), I got
[tex] \left( \int_{ - \infty }^\infty c(q) \, \vec{q} \, dq \; , \int_{ - \infty }^\infty d(q \prime ) \, \vec{q \prime } \, dq \prime \right) = [/tex]
[tex] \int_{ - \infty }^\infty \overline{d(q \prime)} \, c ( q \prime) \, dq \prime [/tex]
 
  • #92
Eye,

Thanks for the reply. It definitely caught my misunderstandings and stimulated my thoughts too.
 
  • #93
Eye,

Now I really know what you were showing me. I definitely went in a different direction. You are showing me that a self-adjoint operator can be decomposed into an integration of its eigenvalues multiplied by its eigenprojectors.

So, [tex] Q = \int q | q> < q | dq [/tex] . Definitely correct.

And can I do this?
[tex] Q | \psi > = ( \int q | q> < q | dq ) ( \int \psi(q) |q > dq ) = \int q \psi(q)| q > dq [/tex]
By the above EQ., if we see [tex] \psi(q) [/tex] representing [tex] | \psi >[/tex] , then
[tex] Q | \psi > = q * \psi ( q ) = q * | \psi > [/tex].

This is of course due to the fact that [tex] \psi ( q ) [/tex] is the coefficient when choosing the eigenvectors of Q as basis.

If the energy eigenvectors are chosen as the basis, then we can write
[tex] H | \psi > = E * | \psi > [/tex]
, because Hamiltonian's eigenvalues are energies.

While I use
[tex] | \psi > = \int c(q) |q > dq [/tex]
because I treat q as a parameter here, but I think I saw another notation in this way
[tex] | \psi > = \int c(q) d |q > [/tex]
Do you have any comments on that?
 
  • #94
Just make some conclusions on my deduction:

I. After including the phase factor consideration, [tex] | q_0 > [/tex] as the normal eigenvector of the eigenvalue [tex] q_0 [/tex] can be denoted as:

[tex] lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(q - q_0) e^{ik(q)} | q> dq [/tex]
, or
[tex] \int_{ - \infty }^\infty \delta(q - q_0) e^{ik_0} | q> dq [/tex]

This can be checked against the requirement that it shall be representable as
[tex] \sum_n c_n \psi_n = [/tex]
[tex] \sum_n c_n \int_{ - \infty }^\infty \psi_n( q ) | q> dq [/tex]
where [tex] \psi_n [/tex] is the normal eigenvector of the Hamiltonian, because this infinite summation can be viewed as the limit of a function sequence ( to use the correct math term, I think I probably shall say sequence instead of series; series is reserved for infinite summation, correct? ) as well.

II. If we do not simplify the 3-dim eigenvalues observed into the discussion of any one of them, then we will find out that the three measurables form a 3-dim vector, and we will need to think about what a "vector operator" or a "vector observable" is.

We will have more to explore, such as: what does the rotation of the vector operator mean, and what shall two vector operators' inner or scalar product and their outer product be?
 
  • #95
Responses to posts #84,86,88,89

About the object "|q><q|" which you referred to as a "point projector". Do you realize that "|q><q|" is not a "projector"? ... A necessary condition for an object P to be a projector is P² = P. But

(|q><q|)² = |q><q| δ(0) = ∞ .

On the other hand, PI defined by

[PIψ](x) ≡
ψ(x) , x Є I
0 , otherwise ,

or equivalently,

PI ≡ ∫I |q><q| dq ,

does satisfy PI² = PI.
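One can see both statements in a crude lattice regularization (my own construction; the grid spacing dq plays the role of the divergent δ(0) ~ 1/dq):

[code]
import numpy as np

# Sketch: with |q_k> represented by e_k / sqrt(dq) (so <q_k|q_l> ~ delta(q_k - q_l)),
# the "point projector" squares to (1/dq) times itself, while the interval projector is idempotent.
N, dq = 100, 0.1
k = 40
ket = np.zeros(N)
ket[k] = 1 / np.sqrt(dq)                         # delta-normalized "position ket"
point = np.outer(ket, ket)                       # |q><q|
print(np.allclose(point @ point, point / dq))    # (|q><q|)^2 = (1/dq)|q><q|, diverges as dq -> 0

P_I = np.diag((np.arange(N) < 50).astype(float)) # = sum over I of |q_k><q_k| dq
print(np.allclose(P_I @ P_I, P_I))               # P_I^2 = P_I
[/code]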
______________

According to the "Born rule", the probability for finding the particle in the interval I is given by

P(I) = ∫I ψ*(q)ψ(q) dq .

But ψ(q) ≡ <q|ψ>, so that

P(I) = ∫I <ψ|q><q|ψ> dq

= <ψ| { ∫I|q><q|dq } |ψ>

= <ψ|PI|ψ> .

The last expression corresponds to (ψ, PIψ) in the notation of "functional analysis".

Apart from some 'notational difficulties', you have performed this verification correctly:
[tex]P(I) = \int_a^b (\psi, q) (q ,\psi) dq [/tex]
Take [tex] P_I \psi = \int_a^b (\psi, q) | q > dq [/tex] ,
[tex] (\psi, P_I \psi ) = ( \psi, \int_a^b ( \psi, q ) | q > dq ) = [/tex]
[tex] \int_a^b \overline{( \psi, q )} ( \psi , q) dq = [/tex]
[tex] \int_a^b ( q , \psi ) ( \psi , q) dq [/tex]
____

As for what you say regarding a discrete case:
[tex]P(I) = \sum_n (\psi, q_n) (q_n , \psi) [/tex]
Take [tex] P_I \psi = \sum_n (\psi, q_n) q_n [/tex] ,
[tex] (\psi, P_I \psi ) = \sum_n \overline{( \psi, q_n )} ( \psi , q_n) = [/tex]
[tex] \sum_n ( q_n , \psi ) ( \psi , q_n) [/tex]
... the picture you are presenting of a "discrete" position observable in terms of "discrete" eigenkets does not make sense.

To make the position observable "discrete" we want to have a small "interval" In corresponding to each "point" qn, say

qn = n∙∆q , and In = (qn-∆q/2 , qn+∆q/2] .

Then, our "discrete" position observable, call it Q∆q, will be a degenerate observable. It will have eigenvalues qn and corresponding eigenprojectors (not kets!) PI_n. That is,

Q∆q = ∑n qnPI_n .
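As a sketch of that construction (my own discretization on a finite grid; the bin width ∆q = 0.25 is arbitrary): each bin projector PI_n is idempotent, and Q∆q acts as the bin-center eigenvalue on the corresponding (degenerate) eigenspace.

[code]
import numpy as np

# Sketch: a coarse-grained position observable  Q_dq = sum_n q_n P_{I_n},
# built from interval projectors over bins of width dq, on a fine underlying grid.
x = np.linspace(-1, 1, 201)                      # fine grid
dq = 0.25                                        # bin width of the "discrete" observable
q_n = dq * np.round(x / dq)                      # each fine point mapped to its bin center
Q_dq = np.diag(q_n)                              # = sum_n q_n P_{I_n}

P_bin = np.diag((q_n == 0.25).astype(float))     # projector onto one bin: an eigenprojector, not a ket
print(np.allclose(P_bin @ P_bin, P_bin))         # idempotent
print(np.allclose(Q_dq @ P_bin, 0.25 * P_bin))   # Q_dq has eigenvalue 0.25 on that eigenspace
[/code]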
______________
21). I can represent a ket in such a way:
[tex] \int \psi(q) | q > dq [/tex]
This shows that the wavefunction is actually an abbreviated way of writing this ket.
Yes. In general,

[1] |ψ> = ∫ ψ(q) |q> dq ,

which is equivalent to

[2] ψ(q) = <q|ψ> .

Thus, ψ(q) is the "component" of |ψ> in the ("generalized") |q>-basis.

Relative to an 'ordinary' basis we have two similar expressions:

[1'] |ψ> = ∑n cnn> ,

[2'] cn = <φn|ψ> .

The requirement that |ψ> Є H, i.e. <ψ|ψ> < ∞ , in terms of [1] (and [2]) means

∫ |ψ(q)|² dq < ∞ ,

whereas in terms of [1'] (and [2']) it means

∑n |cn|² < ∞ .

Everything is the 'same' in both cases, except for the fact that <q|q> = ∞ , whereas <φnn> = 1. That is, each |q> is not a member of H, whereas each |φn> is. Indeed, as you say:
The eigenvector of an eigenvalue q_0 can then be written as
[tex] \int \delta(q - q_0) | q > dq [/tex] .
... and <q0|q0> = ∫ |δ(q - q0)|² dq = δ(0) = ∞ .
______________
Any way, I have gone thru an exercise showing me that I can construct a "wavefunction" space with any observed continuous eigenvalues.
Yes. The position observable Q is used the most frequently for this purpose. The next most frequently used is the momentum observable P.
______________
22). I can even check what the inner product of two kets shall be, without a clear prior definition of inner products:

[tex] < \psi_1 | \psi_2 > = [/tex]
[tex] < \int \psi_1(q) |q > dq | \int \psi_2(q \prime) | q \prime > dq \prime > = [/tex]
[tex] \int \overline{\psi_1(q)} < q | \int \psi_2(q \prime) | q \prime> dq \prime > dq = [/tex]
[tex] \int \overline{\psi_1(q)} \int \psi_2(q \prime) < q | q \prime > dq \prime dq = [/tex]
In the above, the internal consistency of our formulation is brought out as soon as we write

<q|q'> = δ(q - q') .

Then, the last integral becomes

∫ψ1*(q) ∫ψ2(q') δ(q - q') dq' dq

= ∫ ψ1*(q) ψ2(q) dq ,

which is just what we want for <ψ12>.
______________
 
  • #96
Response to post #93

(Note: I am deferring a response to post #91 until later.)

Sammywu said:
Now I really know what you were showing me. I definitely went in a different direction. You are showing me that a self-adjoint operator can be decomposed into an integration of its eigenvalues multiplied by its eigenprojectors.

So, [tex] Q = \int q | q> < q | dq [/tex] . Definitely correct.
Yes ... when Q is a self-adjoint operator with pure continuous (nondegenerate) spectrum.
____________
And can I do this?
[tex] Q | \psi > = ( \int q | q> < q | dq ) ( \int \psi(q) |q > dq ) = \int q \psi(q)| q > dq [/tex]
Yes, but use distinct integration variables in each of the integrals, say q in the first and q' in the second, so you can then show the 'computation' explicitly, like this:

Q|ψ>

= (∫ q|q><q| dq) (∫ ψ(q')|q'> dq')

= ∫dq q|q> ∫ψ(q')<q|q'> dq'

= ∫dq q|q> ∫ψ(q') δ(q - q') dq'

= ∫ qψ(q)|q> dq [E1] .
____________
By the above EQ., if we see [tex] \psi(q) [/tex] representing [tex] | \psi >[/tex] , then
[tex] Q | \psi > = q * \psi ( q ) = q * | \psi > [/tex].
No. The relation Q|ψ> = q|ψ> would mean that |ψ> is an eigenket of Q, something you do not wish to imply. In words, what you want to express is this: "the action of Q on |ψ> when depicted in the q-space of functions is multiplication by q".

That is easy to do. Given any ket |φ>, its q-space representation is just <q|φ>, which we write as φ(q). Now, we want |φ> = Q|ψ> in the q-representation, which is therefore just

<q|(Q|ψ>) = <q| (∫ q'ψ(q')|q'> dq') , using [E1] above

= ∫ q'ψ(q') <q |q'> dq'

= ∫ q'ψ(q') δ(q - q') dq'

= qψ(q) .

Alternatively, from Q|q> = q|q>, we have (Q|q>)† = (q|q>)†, which becomes <q|Q† = <q|q*. But Q† = Q and q* = q, so <q|Q = q<q|. Therefore,

<q|Q|ψ> = q<q|ψ> = qψ(q) .
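A discretized numerical sketch of that statement (my own construction, with an arbitrary ψ): representing Q as a diagonal matrix on a q-grid, its action on the sampled ψ(q) is just multiplication by q, and ψ is clearly not an eigenvector of Q.

[code]
import numpy as np

# Sketch: in the q-representation, Q acts as multiplication by q:  <q|Q|psi> = q psi(q).
q = np.linspace(-5, 5, 1001)
psi = np.pi ** -0.25 * np.exp(-(q - 1.0) ** 2 / 2)     # some normalized wavefunction

Q = np.diag(q)                                          # Q in the (discretized) |q>-basis
print(np.allclose(Q @ psi, q * psi))                    # True: action of Q = multiplication by q

ratio = (Q @ psi) / psi                                 # Q psi is NOT proportional to psi ...
print(np.allclose(ratio, ratio[0]))                     # False: psi is not an eigenket of Q
[/code]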

This is of course due to the fact that [tex] \psi ( q ) [/tex] is the coefficient when choosing the eigenvectors of Q as basis.
Yes,

ψ(q) is just the q-component of |ψ> in the generalized |q>-basis.
____________

Compare this last statement with the case of a (non-"generalized") discrete basis.

In a discrete basis |φn>, what is the φn-representation of |ψ>? ... It is just <φn|ψ>. And if we write |ψ> = ∑n cnn>, we then have <φn|ψ> = cn. So,

cn is just the n-component of |ψ> in the |φn>-basis.

... In the discrete case, this is 'obvious'. The continuous case should now be 'obvious' too.

Perhaps a 'connection' to "matrices" may offer further insight. So here we go!
____________

Note that, in what follows, no assumption is made concerning the existence of an "inner product" on the vector space in question. It is therefore quite general. (Note: I am just 'cutting and pasting' from an old post.)
_____

Let bi be a basis. Then, (using the "summation convention" for repeated indices) any vector v can be written as

v = vibi .

In this way, we can think of the vi as the components of a column matrix v which represents v in the bi basis. For example, in particular, the vector bk relative to its own basis is represented by a column matrix which has a 1 in the kth position and 0's everywhere else.

Now, let L be a linear operator. Let L act on one of the basis vectors bj; the result is another vector in the space which itself is a linear combination of the bi's. That is, for each bj, we have

[1] Lbj = Lijbi .

In a moment, we shall see that this definition of the "components" Lij is precisely what we need to define the matrix L corresponding to L in the bi basis.

Let us apply L to an arbitrary vector v = vjbj, and let the result be
w = wibi. We then have

wibi

= w

= Lv

= L(vjbj)

= vj(Lbj)

= vj(Lijbi) ... (from [1])

= (Lijvj)bi .

If we compare the first and last lines of this sequence of equalities, we are forced to conclude that

[2] wi = Lijvj ,

where, Lij was, of course, given by [1].

Now, relation [2] is precisely what we want for the component form of a matrix equation

w = L v .

We, therefore, conclude that [1] is the correct "rule" for giving us the matrix representation of a linear operator L relative to a basis bi.
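A brief numerical sketch of rules [1] and [2] (my own example; the basis and the operator are arbitrary, and no inner product is used):

[code]
import numpy as np

# Sketch: build the matrix L_ij of a linear operator from  L b_j = L_ij b_i,
# then check that  w_i = L_ij v_j  reproduces  w = L v.
b = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]          # a basis of R^2 (no inner product needed)
L_op = lambda x: np.array([[0.0, 2.0], [1.0, 0.0]]) @ x     # some linear operator

B = np.column_stack(b)                                      # columns are the basis vectors
L = np.column_stack([np.linalg.solve(B, L_op(bj)) for bj in b])   # j-th column: components of L b_j

v_comp = np.array([3.0, -2.0])                              # components of v in the b_i basis
v = B @ v_comp
w_comp = L @ v_comp                                         # w_i = L_ij v_j
print(np.allclose(B @ w_comp, L_op(v)))                     # same vector as applying L directly
[/code]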
_____

The above description of components is quite general. It relies on the following two "facts" concerning a "basis" bi:

(i) any vector can be written as a linear combination of the bi,

(ii) the coefficients in such a linear combination are unique.

Now, here is an exercise:

Draw the 'connection' between what was just described above to that of our Hilbert space.

Your answer should be short and straight 'to the point'. To show you what I mean, I will get you started:

bi = |bi>

v = |v>

vi = <bi|v>

etc ...
___

... What about a continuous basis, say |q>?
____________

Now getting back to your post:
If the energy eigenvectors are chosen as the basis, then we can write
[tex] H | \psi > = E * | \psi > [/tex]
, because Hamiltonian's eigenvalues are energies.
This is the same mistake you made above with "Q|ψ> = q|ψ>", which you now know is wrong ... right?
____________
While I use
[tex] | \psi > = \int c(q) |q > dq [/tex]
because I treat q as a parameter here, but I think I saw another notation in this way
[tex] | \psi > = \int c(q) d |q > [/tex]
Do you have any comments on that?
The object "|q>" in each formula is obviously not the same.

In the first formula, we have "|q>dq" in an integral which produces a vector of the Hilbert space (provided that ∫|c(q)|2dq < ∞). The interpretation of "|q>" is, therefore, that of a "vector density" in the Hilbert space, while "dq" is the associated "measure". Their product, "|q>dq", then has the interpretation of an "infinitesimal vector".

In the second formula, we see "d|q>". Its interpretation is that of an "infinitesimal vector". I will change the notation to avoid confusion and write "d|q>" as "d|q)". An appropriate definition of "|q)" in terms of the usual "|q>" is then

|q) = ∫_{-∞}^q |q'> dq' .

Thus, |q) is also in a class of "generalized vector". If we now take |q) as the "given", then from it we can define |q> ≡ d|q)/dq.

From the perspective of any calculation I have ever performed in quantum mechanics, the "|q>" notation of Dirac is superior.
 
  • #97
Eye,

I am still digesting your response. So it's going to take me a while to answer that exercise.

Just respond to some points you made:

1) I did not know | q > < q | is not a projector. I have to think about that.

2). I did hesitate to write [tex] Q | \psi > = q | \psi > [/tex] for the same reasons you mentioned, but in both Leon's Ebook and another place I did see them mention that Q's definition is [tex] Q | \psi > = q | \psi > [/tex]. Just as I mentioned, the only way I could see this "make sense" is by either
[tex] Q | \psi > = \int q |q > < q | \psi > dq [/tex] or
[tex] Q | \psi > = q \psi ( q ) [/tex] in the form of wavefunctions.

3). I think your defining [tex] \psi ( q ) = < q | \psi > [/tex] actually will make many calculations I did in showing, in general,
[tex] < \psi \prime | \psi > = \int \overline{\psi \prime ( q ) } \psi ( q ) dq [/tex]
much more straightforward than my cumbersome calculations.

But one problem is then: what is < q | q >? The discrete answer would be that it needs to be one, which of course will lead to some kind of conflict with the general requirement of
[tex] \int \overline{\psi ( q ) } \psi ( q ) dq = 1 [/tex] .

This is probably related to the eigenprojector [tex] P_{I_n} [/tex] you mentioned.

4). Actually, I noticed my deduction has an unresolved contradiction.

There is an issue to be resolved in my eigenfunction for "position" [tex] q_0 [/tex] as
[tex] \lim_{n \rightarrow \infty } \int \delta_n(q - q_0) | q> dq [/tex]
. The problem here is whether the norm of \delta_n( q - q_0 ) shall be one or its direct integration shall be one.
If the norm is to be one, then \delta_n shall be replaced by its square root.
 
  • #98
Eye,

Answer to the exercise:

What you show here is a vector space with a basis, and the Hilbert space is a vector space with an inner product, so I think what lies behind this is how to establish the relationship between an arbitrary basis and an inner product.

I). Discrete case:

I.1) From an arbitray basis to an inner product:

For two vectors v and w written as [tex] v = \sum_i v_i b_i [/tex] and [tex] w = \sum_i w_i b_i [/tex]
with any basis [tex] b_i [/tex], we can define an inner product as [tex] ( v , b_i ) = v_i [/tex] and we can deduce from there that
[tex] ( v , w ) = \sum_i v_i \overline{w_i} [/tex].
This inner product will satisfy all conditions required for an inner product and { [tex] b_i [/tex] } becomes an orthonormal basis automatically.

If
[tex] b_i \prime = L b_i = \sum_j L_{ij} b_j[/tex]
transforms a basis [tex] b_i [/tex] to [tex] b_i \prime [/tex] and [tex] b_i \prime [/tex] happens to be orthonormal in the inner product we defined, L shall be a unitary transformation. ( I haven't proved this yet, but I think this shall be right ).

I.2) From an inner product to a basis:

Take any two [tex] \psi_1 , \psi_2 \in H [/tex] and set
[tex] b_1 = \psi_1 / \sqrt{ ( \psi_1, \psi_1 ) } [/tex].
Set
[tex] \psi_2 \prime = \psi_2 - ( \psi_2 , b_1 ) b_1 [/tex]
.

If [tex] \psi_2 \prime [/tex] is not zero, then set
[tex] b_2 = \psi_2 \prime / \sqrt{ ( \psi_2 \prime , \psi_2 \prime ) } [/tex].
This establishes an orthonormal basis { [tex] b_1 , b_2 [/tex] } for the space spanned by [tex] \psi_1 , \psi_2 [/tex] .

Bringing in a [tex] \psi_3 [/tex] with
[tex] \psi_3 \prime = \psi_3 - ( \psi_3 , b_1 ) b_1 - ( \psi_3 , b_2 ) b_2 [/tex]
not zero, we can set
[tex] b_3 = \psi_3 \prime / \sqrt{ ( \psi_3 \prime , \psi_3 \prime ) } [/tex]
and extend the spanned space further.

Continuing this process, we can establish an orthonormal basis as long as the Hilbert space has finite or countably infinite dimension. ( A small sketch of the procedure follows. )
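Here is a minimal numpy sketch of the procedure described above (my own illustration; the function name, the tolerance for dropping linearly dependent vectors, and the toy input vectors are all arbitrary choices):

[code]
import numpy as np

def orthonormalize(vectors, tol=1e-12):
    """Orthonormalize a list of vectors by the procedure sketched above.

    Each new vector has its projections onto the earlier b's subtracted, and is
    kept (divided by its norm, i.e. by sqrt((psi', psi'))) only if what remains
    is nonzero.
    """
    basis = []
    for psi in vectors:
        psi_prime = psi.astype(complex)
        for b in basis:
            psi_prime = psi_prime - np.vdot(b, psi_prime) * b   # subtract (psi', b) b
        norm = np.sqrt(np.vdot(psi_prime, psi_prime).real)
        if norm > tol:                                           # skip dependent inputs
            basis.append(psi_prime / norm)
    return basis

# toy usage: three vectors, the third a combination of the first two
vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([2.0, 1.0, 1.0])]
b = orthonormalize(vs)
print(len(b))                                                    # 2 (third vector was dependent)
print(np.allclose([[np.vdot(x, y) for y in b] for x in b], np.eye(len(b))))  # True: orthonormal
[/code]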

Does separability ensure this countability?
 
  • #99
II) For a continuous spectrum:

II.1) From any basis to an inner product:

For two vectors v and w written as [tex] v = \int v(q) | q> dq [/tex] and [tex] w = \int w(q) | q > dq [/tex] with any continuous vector-density basis [tex] | q > [/tex], we can define an inner product by [tex] ( v , w ) = \int v(q) \overline{w(q)} dq [/tex]. This inner product satisfies all the conditions required of an inner product, and { [tex] | q > [/tex] } automatically becomes a generalized orthonormal basis.
( I need to work out the details of ( v , |q> ) later. )

If [tex] | p > = L | q > = \int L(p,q) | q > dq [/tex] transforms the basis [tex] | q > [/tex] into [tex] | p > [/tex], and [tex] | p > [/tex] happens to be orthonormal in the inner product we defined, then L must be a unitary transformation. ( Again, pending a detailed proof. )

If L is unitary, then
[tex] | q > = \overline{L}^T | p > = \int \overline{L(p,q)} | p > dp [/tex]
So for [tex] v = \int v(q) | q> dq [/tex], v can be transformed to
[tex] v = \int v(q) \int \overline{L(p,q)} | p > dp \, dq = [/tex]
[tex] \int \left( \int v(q) \overline{L(p,q)} dq \right) | p > dp [/tex]

So
[tex] \int v(q) \overline{L(p,q)} dq [/tex]
becomes the coefficient function in the |p> representation. ( A small numerical check of this follows. )
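As a small discretized sanity check of this change of coefficients (my own toy example; L(p,q) is taken to be the unitary DFT kernel just to have a concrete unitary at hand):

[code]
import numpy as np

# Take L to be the unitary DFT matrix, form the new coefficients
# v_p = sum_q conj(L[p, q]) v_q[q], and verify the norm is unchanged,
# as it must be for a unitary change of "basis".
N = 64
n = np.arange(N)
L = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
print(np.allclose(L @ L.conj().T, np.eye(N)))              # True: L is unitary

v_q = np.random.default_rng(2).standard_normal(N) + 0j     # coefficients in the |q> "basis"
v_p = L.conj() @ v_q                                       # v_p[p] = sum_q conj(L[p, q]) v_q[q]
print(np.allclose(np.vdot(v_q, v_q), np.vdot(v_p, v_p)))   # True: the inner product is preserved
[/code]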

II.2) From an inner product to a basis:

The process in this part is almost exactly the same as in the discrete case.
I need to figure out how separability contributes to ensuring countability.
 
  • #100
Addendum to II.1).

When dealing with [tex] ( \psi , q) = < q | \psi > [/tex], extra care is needed because |q> could represent two different things here.

Inside the integral
[tex] \int \psi(q) | q > dq [/tex]
, it is a "vector density".

And we also use it to denote the eigenvector or eigenket of "position"; in that case it is a normal vector, not a "vector density".

Strictly speaking, for
[tex] \psi (q) = ( \psi, q ) = < q | \psi > [/tex]
, if |q> is a "vector density" here, then it's not an inner product but rather an "inner product density".

But with this in mind, I am able to write the eigenket as
[tex] | q > = \int \delta(q \prime - q ) |q \prime > dq \prime [/tex]
, or more precisely if considering phase factors,
[tex] | q > = \int \delta(q \prime - q ) e^{ik(q \prime)} | q \prime >d q \prime = [/tex]

[tex] \lim_{n \rightarrow \infty} \int \frac{ \sqrt{n} }{ \pi^{ \frac{1}{4} } } \, e^{- \frac{ n^2 ( q \prime - q)^2 }{2} } \, e^{ik(q \prime)} | q \prime > d q \prime [/tex]
.

Here I think using Gaussian wave functions as the approximating sequence could be best. And I have chosen the factor so that their norms are always one.

Anyway, with this we can say that the "inner product density" of an eigenket |q> and the vector density [tex] | q \prime > [/tex] of the position operator is
[tex] < q \prime | q > = \delta ( q \prime - q ) e^{ik(q)} e^{-ik(q \prime)} [/tex]

And the "inner product" of two eigenkets |q> and [tex] | q \prime > [/tex] shall be
[tex] \int \delta ( q \prime \prime - q \prime ) \delta ( q \prime \prime - q ) e^{ik ( q \prime \prime)} e^{-ik \prime (q \prime \prime)} dq \prime \prime [/tex]

I will see whether I can prove that these give the same value.

And the "inner product" of an eigenket |q> and any ket [tex] \psi [/tex] shall be
[tex] < \psi | q > = \int \delta ( q \prime - q ) \overline{\psi( q \prime )} e^{ik(q \prime )} dq \prime [/tex]
 
  • #101
Eye,

Actually, after reading through your response, I already understand why
[tex] < q | Q | \psi > = q \psi(q) [/tex]
.

Now I figure that you expect me to explore more about A as a self-adjoint linear operator here, and about this equation.

First, the discrete case.
We know [tex] A = \sum a_n P_{\psi_n} [/tex].

Since
[tex] A \psi_n = a_n \psi_n [/tex]
, we have
[tex] A_{ij} = a_i \delta_{ij} [/tex]

For any vector [tex] v = \sum_i v_i \psi_i [/tex] ,
[tex] < \psi_n | A | v > = < \psi_n | A | \sum_i v_i \psi_i > = [/tex]
[tex] \sum_i v_i < \psi_n | A | \psi_i > = [/tex]
[tex] v_n a_n [/tex]
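A quick numerical check of this discrete result (a toy example of my own, with made-up eigenvalues [tex] a_n [/tex] and components [tex] v_n [/tex]):

[code]
import numpy as np

# With A diagonal in the {psi_n} basis, <psi_n| A |v> equals a_n v_n.
a = np.array([1.0, 2.5, -3.0, 0.7])            # eigenvalues a_n
A = np.diag(a)                                  # A_ij = a_i delta_ij in the eigenbasis
v = np.array([0.2 + 1j, -0.5, 1.0, 3.0 - 2j])   # components v_n of an arbitrary vector

lhs = A @ v                                     # n-th entry is <psi_n| A |v>
print(np.allclose(lhs, a * v))                  # True: <psi_n| A |v> = a_n v_n
[/code]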
 
  • #102
Now, in the continuous case.
We know [tex] Q = \int q | q> < q | dq [/tex].

In analogy with [tex] A_{ij} = < b_j | A | b_i > = a_i \delta_{ij} [/tex],
the matrix element [tex] < q | Q | q \prime > [/tex] of Q is
[tex] q \prime \delta( q \prime - q) [/tex]

For any vector
[tex] v = \int v(q \prime ) | q \prime > dq \prime [/tex] ,
[tex] < q | Q | v > = < q | Q | \int v(q \prime ) | q \prime > dq \prime > = [/tex]
[tex] \int v(q \prime ) < q | Q | q \prime > dq \prime = [/tex]
[tex] \int v(q \prime ) q \prime \delta ( q \prime - q ) dq \prime = [/tex]
[tex] v( q ) q [/tex]
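And a discretized check of the continuous version (again my own sketch; on a grid the delta becomes [tex] \delta_{jk} / dq [/tex], so the dq from the integral cancels and Q acts as multiplication by q):

[code]
import numpy as np

# On a grid, the kernel q' * delta(q' - q) becomes q_k * (delta_jk / dq), so the
# dq from the integral cancels and Q reduces to multiplication of v(q) by q.
dq = 0.01
q = np.arange(-5.0, 5.0, dq)
v = np.exp(-q**2 / 2)                  # samples of some wavefunction v(q)

Q = np.diag(q)                         # discretized action of the integral against <q| Q |q'>
lhs = Q @ v                            # grid version of <q| Q |v>
print(np.allclose(lhs, q * v))         # True: <q| Q |v> = q v(q)
[/code]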
 
  • #103
Response to post #91

Sammywu said:
23). In trying to evaluate 22), I found I need something clearer about how to represent all vectors. Let me put all eigenvalues on one real line; for each q on this real line, we associate an eigenvector [tex] \vec{q} [/tex] with it. I want to avoid using | q > for now, because | q > is actually a ray. Also, remember there are many vectors [tex] c \, \vec{q} [/tex] with | c | = 1 that could be placed here; let's just pick any one of them.

So, now with a function c(q), we can do a vector integration over the q real line as:
[tex] \int_{ - \infty }^\infty c(q) \vec{q} \, dq [/tex]
Note that q in c(q) and dq is just a parameter, while [tex] \vec{q} [/tex] is a vector, which can also be viewed as a vector-valued function parametrized by q.
But |q> is not a ray. (From what you have written in your later posts, it appears to me that you now realize this.) I see no difference at all between "q" and "|q>".
____________
24). Referring back to 21), all vectors can now be represented by:
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty c_n(q) \vec{q} \, dq [/tex]
Yes, even the generalized ones like q. As you point out:
25). In particular, let
[tex] \delta_n( q - q_0 ) = n [/tex] for [tex] q_0 - 1/2n \le q \le q_0 + 1/2n [/tex] and 0 elsewhere;
then the eigenvector for q_0 can be represented as:
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(q - q_0) \vec{q} \, dq [/tex]
____________
26). And, for other vectors, c_n(q) can be set to the fixed function c(q) for every n;
we can verify its consistency with the normal representation:

[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty c_n(q) \vec{q} \, dq = [/tex]
[tex] \int_{ - \infty }^\infty c(q) \vec{q} \, dq = [/tex]
[tex] \int_{ - \infty }^\infty c(q) \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(q \prime -q) \vec{q \prime} \, dq \prime \, dq = [/tex]
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \int_{ - \infty }^\infty c(q) \delta_n(q \prime - q) \vec{q \prime} \, dq \, dq \prime = [/tex]
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \left( \int_{ - \infty }^\infty c(q) \delta_n(q \prime - q) \, dq \right) \vec{q \prime} \, dq \prime = [/tex]
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty c(q \prime) \vec{q \prime} \, dq \prime [/tex]

27). For inner products of c and d,
[tex] \left( \int_{ - \infty }^\infty c(q) \vec{q} \, dq \, , \int_{ - \infty }^\infty d(q \prime ) \vec{q \prime} \, dq \prime \right) = [/tex]
[tex] \int_{ - \infty }^\infty \overline{d(q \prime)} \left( \int_{ - \infty }^\infty c(q) \vec{q} \, dq \, , \vec{q \prime} \right) dq \prime [/tex]
Apart from a typo in a couple of indices, these relations look fine. (I must, however, point out that I have never seen the use of such "limits" in expressions which also involve objects like "|q>" (or, as you are writing, "q"). Usually, these limits are used only in the "function-space" representation of the Hilbert space in order to 'justify' (or 'explain') the use of "distributions". Once that has been accomplished, then there is no longer any need to bring those limits into the picture when dealing with the "formal" space of "bras" and "kets", because the meanings of these objects are defined by 'correspondence' with the (now "generalized") function-space representation.)
____________
28). Now, I have to discuss what
[tex] \left( \int_{ - \infty }^\infty c(q) \vec{q} \, dq \, , \vec{q \prime} \right) [/tex]
should be.

First, if we look into the inner product of two eigenvectors [tex] \vec{q \prime} [/tex] and [tex] \vec{q} [/tex], we can think about what the inner product should be between
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(u - q) \vec{u} \, du [/tex]
and
[tex] \lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(v - q \prime) \vec{v} \, dv [/tex]
.

Comparing it to the discrete case, I guess this could be
[tex] | ( q , q \prime) | = \lim_{ i \rightarrow \infty } \lim_{ j \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \, \delta_j( u - q ) \, du [/tex]

So, in general,
[tex] ( q , q \prime) = e^{ia} \lim_{ i \rightarrow \infty } \lim_{ j \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \, \delta_j( u - q ) \, du [/tex]

The phase factor is put in to show the possibility of two out-of-phase eigenvectors. For now, we can assume our standard basis vectors are in phase.
I do not see why you are bringing phase factors into the picture here. The objects |q> are just the generalized eigenkets of Q. This means

[1] Q|q> = q|q> , and <q|q'> = δ(q - q') .

The second relation tells us that: (i) <q|q> = ∞; and (ii) for q ≠ q', <q|q'> = 0. There is no 'room' here for phase factors.

On the other hand, once we have designated one such "family" |q>, we can then talk about another such family, say |u(q)> ≡ e^{iφ(q)}|q>. Clearly, relations [1] will also be satisfied for |u(q)>, i.e.

[1'] Q|u(q)> = q|u(q)> , and <u(q)|u(q')> = δ(q - q') .

But, which "family" is the 'real' |q> ... "|q>" or "|u(q)>"? From this perspective, the answer is: Whichever one we want! It is much like the situation with imaginary numbers: Which is the 'real' i ... "i" or "-i"?
____________
With this, we can further evaluate the inner part of 27):

[tex] \left( \int_{ - \infty }^\infty c(q) \vec{q} \, dq \, , \vec{q \prime} \right) = [/tex]
[tex] \int_{ - \infty }^\infty c(q) \left( \vec{q} , \vec{q \prime} \right) dq = [/tex]
[tex] \int_{ - \infty }^\infty c(q) \lim_{ i \rightarrow \infty } \lim_{ j \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \, \delta_j( u - q ) \, du \, dq = [/tex]
[tex] \lim_{ i \rightarrow \infty } \lim_{ j \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \int_{ - \infty }^\infty c(q) \, \delta_j( u - q ) \, dq \, du = [/tex]
[tex] \lim_{ i \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \, c(u) \, du = [/tex]
[tex] c(q \prime) [/tex]

Placing that into 27), I get
[tex] \left( \int_{ - \infty }^\infty c(q) \vec{q} \, dq \, , \int_{ - \infty }^\infty d(q \prime ) \vec{q \prime} \, dq \prime \right) = [/tex]
[tex] \int_{ - \infty }^\infty \overline{d(q \prime)} \, c ( q \prime) \, dq \prime [/tex]
Again, this looks fine (except for a typo in a couple of indices).
 
  • #104
Response to post #94

Just to draw some conclusions from my deduction:

I. After taking the phase factor into consideration, [tex] | q_0 > [/tex], as the normal eigenvector for the eigenvalue [tex] q_0 [/tex], can be denoted as:

[tex] lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(q - q_0) e^{ik(q)} | q> dq [/tex]
, or
[tex] \int_{ - \infty }^\infty \delta(q - q_0) e^{ik_0} | q> dq [/tex]
As I said in the previous post, there is no 'room' here for phase factors. Let's look at your last expression for |q_o>. It is

∫ δ(q - q_o) e^{ik_o} |q> dq = e^{ik_o} |q_o> .

Thus, |q_o> = e^{ik_o} |q_o>; so e^{ik_o} = 1.
______________
This can be checked against the fact that it should be representable as
[tex] \sum_n c_n \psi_n = [/tex]
[tex] \sum_n c_n \int_{ - \infty }^\infty \psi_n( q ) | q> dq [/tex]
where [tex] \psi_n [/tex] are the ordinary eigenvectors of the Hamiltonian, because this infinite summation can also be viewed as the limit of a sequence of functions. ( To use the correct math term, I think I should probably say sequence instead of series; series is reserved for an infinite summation. Correct? )
This is an infinite sum. Nevertheless, the successive partial sums of a "series" form a "sequence".
______________
II. If we do not simplify the observed 3-dimensional eigenvalues down to a discussion of just one of them, then we find that the three measurables form a 3-dimensional vector, and we will need to think about what a "vector operator" or a "vector observable" is.
The simplification involves the idea of a "tensor product" of Hilbert spaces.
____
We will have more to explore, such as: what does the rotation of the vector operator mean, and what should two vector operators' inner (scalar) product and their outer product be?
Yes. These are good questions.
 
  • #105
Eye,

You know that after you pointed out the objects "vector density" and "infinitesimal vector", I made some corrections to my later posts.

Actually, that was what initially confused me. In my mind, I have [tex] \psi_q [/tex] as the generalized eigenvector of eigenvalue q for the position operator Q. I had it confused with |q>, which is the "vector density" for it.

Now that I have organized them, I realize [tex] d \psi_q [/tex] is the "infinitesimal vector", i.e. "d|q)" in your notation.

The "|q>" as the "vector density" is [tex] d \psi_q/dq \prime [/tex] in my thought and "d|q)/dq" in your writing.

Now, with that, this falls into place:

[tex] \psi = \int \psi(q) |q> dq = \int < q | \psi > | q > dq = \int | q > < q | \psi > dq [/tex]
for any [tex] \psi [/tex] .
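A quick numerical check of this completeness relation (my own toy discretization, with |q_j> modeled by [tex] e_j / dq [/tex] on a uniform grid, as in my earlier sketches):

[code]
import numpy as np

# Discretized check of psi = integral of |q><q|psi> dq:
# |q_j> is represented by e_j / dq, so <q_j|psi> is just the sample psi(q_j),
# and summing <q_j|psi> |q_j> dq over the grid rebuilds psi.
dq = 0.01
q = np.arange(-5.0, 5.0, dq)
N = len(q)
psi = np.exp(-q**2 / 2) * (1.0 + 0.3j * q)      # samples of some wavefunction

kets = np.eye(N) / dq                            # column j is the "vector density" |q_j>
coeffs = np.array([np.vdot(kets[:, j], psi) * dq for j in range(N)])   # <q_j|psi>
print(np.allclose(coeffs, psi))                  # True: <q|psi> = psi(q)

reconstructed = sum(coeffs[j] * kets[:, j] for j in range(N)) * dq      # integral of |q><q|psi> dq
print(np.allclose(reconstructed, psi))           # True: the |q> resolve the identity
[/code]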

In particular,
[tex] \psi_q \prime = \int | q > < q | \psi_q \prime > dq [/tex]
for the generalized eigenvector of eigenvalue [tex] q \prime [/tex] .

In order for the above equation to be true, my first impression was
[tex] < q | \psi_q \prime > = \delta ( q \prime - q ) dq [/tex]
, but that actually turns out to be wrong; it gives an object more like [tex] |q \prime> [/tex] in the sense of
[tex] \int \delta ( q \prime - q ) O(q) dq = O ( q \prime ) [/tex]
.

Actually, if I use
[tex] d \psi_q \prime /dq \prime \prime = | q \prime > [/tex]
, I can get
[tex] \psi_q \prime = \int | q \prime > dq \prime \prime [/tex]
.

I can put this into the previous formula
[tex] \psi_q \prime = \int | q > < q | \int | q \prime > dq \prime \prime > dq = [/tex]
[tex] \int \int | q > < q | q \prime > dq dq \prime \prime [/tex]
If
[tex] < q | q \prime > = \delta ( q - q \prime ) [/tex]
then this does come back to
[tex] \psi_q \prime = \int | q \prime > dq \prime \prime [/tex]
.

This is basically my self-verification on the relationship between
[tex] <q|q \prime> [/tex]
and
[tex] < q \prime | \psi_q > [/tex].

Because in my mind,
[tex] < \psi_q | \psi_q > =1 [/tex]
so
[tex] < q | q > = < d \psi_q / dq \prime \, | \, d \psi_q / dq \prime > = 1 [/tex]
, but apparently it does not come out that way.

II) I brought in the phase factor because I thought [tex] e^{ia} \psi_q [/tex] is also an eigenvector.
Even if we take [tex] | q > = e^{ia} \, d \psi_q / dq [/tex], we still have a phase factor left there.
I did mention that we can set aside a standard set of |q> as a basis, so we do not need the phase factor in this |q>, but the phase factors will then appear as an "explicit" part of [tex] < q | \psi > [/tex].
As I said, in my mind there should be multiple [tex] \psi_q [/tex], differing by a factor [tex] e^{ia} [/tex]. For example, if we use the "momentum" representation to represent this "position" eigenvector, you can always multiply it by a phase factor such as [tex] e^{i \omega t} [/tex].
 
