#36
PeterDonis
Mentor
dyn said: I get the following matrix
$$\begin{pmatrix} 0&0&0&0&1\\ 0&0&0&1&0\\ 0&0&1&0&0\\ 0&1&0&0&0\\ 1&0&0&0&0 \end{pmatrix}$$
That's what I get too.
dyn said: If I ask what ##P## is in the case ##P e^{ikx} = e^{-ikx}##, is that a valid question? Because it's not related to the matrix above, and it's actually ##e^{-2ikx}##.
It's a valid question, but you seem to be confused about what the question is actually asking. I was trying to make that point earlier but I didn't do a good job; let me try again from a different starting point.
First, we have to be clear about what we're talking about. We are talking about a Hilbert space, which is a space of thingies usually called "vectors" that have certain properties, and another kind of thingies called "operators" that act on the vectors in certain ways and have certain other properties. But there are (at least) two very different ways of representing the two different kinds of thingies, which could be anything for all we know, in terms of mathematical objects we are familiar with.
Consider the equation ##P \psi(x) = \psi(-x)##. What does it mean? The first thing it means is that we are interpreting the thingies in the Hilbert space as functions, in this case functions from the real numbers to the complex numbers. More precisely, as square integrable functions from the real numbers to the complex numbers. And this works because it turns out that the space of such square integrable functions satisfies all of the properties that define a Hilbert space.
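(For definiteness: "square integrable" just means
$$\int_{-\infty}^{\infty} |\psi(x)|^2 \, dx < \infty,$$
which is the standard condition that makes the usual inner product ##\langle \phi, \psi \rangle = \int \phi^*(x) \psi(x) \, dx## well defined.)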
Under this interpretation, an operator like ##P## is a map that takes functions into functions. So the equation ##P \psi(x) = \psi(-x)## is saying that ##P## maps the function ##\psi(x)## into another function that has the same value at ##-x## as ##\psi## has at ##x##, for all ##x##. So if we know that the function ##\psi(x)## is ##e^{ikx}##, then obviously the function ##P \psi(x)## is ##e^{-ikx}##. But we didn't figure that out by multiplying ##\psi(x)## by anything, still less by multiplying it by some element of a matrix, or multiplying it by a function that made it into ##\psi(-x)## (which is how you got ##e^{-2ikx}##). Nothing we've done so far has anything to do with multiplying or matrices. We just asked ourselves, what function will have the same value at ##-x## that ##e^{ikx}## has at ##x##, and the answer popped out by inspection. And it's pretty much that easy for any function that you have a formula for: just plug in ##-x## wherever ##x## appears.
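As a quick sanity check that only substitution is involved, apply ##P## twice and you get back where you started:
$$(P^2 \psi)(x) = (P\psi)(-x) = \psi(-(-x)) = \psi(x).$$
So ##P^2## is the identity, and any eigenvalue of ##P## has to satisfy ##\lambda^2 = 1##, i.e. ##\lambda = \pm 1##. (That last observation isn't needed for what follows; it just falls out for free.)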
What is not so easy, using this representation, is to figure out whether an operator like ##P## is self-adjoint. It can be done, but it can't be done by inspection the way figuring out ##P \psi(x)## from the formula for ##\psi(x)## can. So if we're interested in the self-adjointness of an operator (or in many other properties of operators), we would like to find a different kind of representation. And there is one: we can represent vectors in the Hilbert space as column vectors or row vectors, and operators as matrices.
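For the record, here is what that computation looks like in the function representation, using the standard inner product ##\langle \phi, \psi \rangle = \int \phi^*(x) \psi(x) \, dx## (not spelled out above, but it's the usual one for this Hilbert space). Substituting ##u = -x##:
$$\langle \phi, P\psi \rangle = \int_{-\infty}^{\infty} \phi^*(x) \, \psi(-x) \, dx = \int_{-\infty}^{\infty} \phi^*(-u) \, \psi(u) \, du = \langle P\phi, \psi \rangle.$$
So ##P## is indeed self-adjoint in this representation too; it just takes a change of variables instead of a glance at a matrix.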
With this representation, the Dirac bra-ket notation is much more commonly used, so we would write our operator equation as ##P | x \rangle = | - x \rangle##. (Actually, I have just pulled a bit of a fast one here, as we will see below; but it will do for now.) And what this is saying is that the operator ##P## corresponds to the matrix that, if we multiply the column vector ##| x \rangle## by it, gives us the column vector ##| - x \rangle##. So, for example, if we take the column vector
$$
\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
$$
which describes ##| x = 2 \rangle## (in the labeling convention I have just implicitly adopted, where the five components correspond to ##x = 2, 1, 0, -1, -2## from top to bottom), and multiply it by the matrix you obtained for ##P##, we get the column vector
$$
\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}
$$
which describes ##|x = -2 \rangle##.
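Written out in full, the multiplication is
$$\begin{pmatrix} 0&0&0&0&1\\ 0&0&0&1&0\\ 0&0&1&0&0\\ 0&1&0&0&0\\ 1&0&0&0&0 \end{pmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix},$$
and you can see why it works: each row of the matrix picks out exactly one component of the input vector, in reverse order.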
And of course, as you saw, once we have a matrix representation for ##P##, it's simple to show that ##P## is self-adjoint; you just do it by inspection, by looking at the matrix and seeing that it's the same as its conjugate transpose. That's why the matrix representation is preferred for questions like that.
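In this particular case the inspection is especially easy: the matrix is real and symmetric, so
$$P^\dagger = (P^T)^* = P.$$
And if you multiply the matrix by itself, you get the identity, matching the ##P^2 = I## result from the function picture above.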
But what happened to ##e^{ikx}##, you say? [Actually, this doesn't quite apply here, because you're actually working in the momentum basis, not the position basis -- see post #40. But all this is still worth keeping in mind, since the position basis is another valid basis that is natural to use.] Well, remember that, if we go back to the wave function representation, the function ##\psi(x) = e^{ikx}## describes a state of momentum ##k##; but what we just did in the matrix representation was to apply the operator ##P## to a state with a definite position! So ##e^{ikx}## disappeared because we weren't looking at that state in the matrix representation; we were looking at a different state, whose wave function representation would be a delta function, ##\delta(x - 2)##--and we would then have ##P \delta(x - 2) = \delta(x + 2)## as the direct translation into the wave function representation of what we wrote above in the matrix representation. And the reason I picked the state ##| x \rangle##, i.e., a delta function, to start with in the matrix representation is that it's the easiest kind of state to write down in that representation, especially if we are going to try to write the result of applying ##P## to it. In fact, in the matrix representation, there isn't even a way to write down the exact state that is ##e^{ikx}## in the wave function representation. [Edit: not in the position basis, but there is in the momentum basis -- see post #40.]
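For completeness: in the momentum basis (which, per the bracketed notes above and post #40, is the basis the quoted equation is really about), the corresponding ket statement is simply
$$P \, | k \rangle = | -k \rangle,$$
which is just ##P e^{ikx} = e^{-ikx}## translated into bra-ket language, since ##e^{ikx}## is (up to normalization conventions) the position-space wave function of ##| k \rangle##. A "matrix" for ##P## in that basis would again just swap the ##+k## and ##-k## entries.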
So the answer to your question is that, if we are using the wave function representation, ##P## is just a map from functions to functions, and you figure out what functions it maps to what other functions by inspection--just look at the formula and plug in ##-x## wherever ##x## appears. It has nothing whatever to do with multiplying or matrices. We only drag in matrices and multiplication (by matrices) because it makes it much easier to see that ##P## is self-adjoint.