Understanding Bras and Kets as Vectors and Tensors

  • Thread starter: Phrak
  • Tags: Tensors
  • #51
comote said:
\psi represents a vector in Hilbert space, just like |\psi\rangle...
Caution should be exercised with that kind of notation. The notation \psi always represents either an operator or a scalar. It's never used to represent a vector. A quantity is denoted a vector when it appears in ket notation. Otherwise it'd be like writing A as a vector without placing an arrow over it or making it boldface to denote that it's a vector.

Pete
 
  • #52
comote said:
I admit I have not seen the bra-ket notation in its full glory.

When you say L^2 is not big enough are you referring to the fact that the eigenvectors of continuous observables don't live in L^2 or are you referring
to something else?
In that case the issue wouldn't be that the Hilbert space is not big enough; it would be too big.

Pete
 
  • #53
pmb_phy said:
Actually L2 is too big since it includes all functions which are not only square integrable but which are also non-continuous. Wave functions must be continuous.

Pete
Unless, of course, they aren't. :-p The usage in this thread -- 'element of a Hilbert space' -- is the usage I am familiar with. It's also the definition seen on Wikipedia.

Now, a (Schwartz) test function is required to be continuous (among other things). Maybe you are thinking of that?
 
  • #54
pmb_phy said:
I guess what I'm saying (rather, the teachers and texts from which I learned QM) is that, while it is true that every quantum state representable by a ket is an element of a Hilbert space, it is not true that every element of a Hilbert space corresponds to a quantum state representable by a ket.
Hold on a second. I question whether or not that is true. An element of the Hilbert space must be a ket and therefore must represent some quantum state, right?

Pete
 
Last edited:
  • #55
pmb_phy said:
But specifically I have the following statement in mind. From Quantum Mechanics by Cohen-Tannoudji, Diu and Laloe, page 94-95
It sounds like they've defined wavefunctions to be elements of the Hilbert space.

(and then proceeded to argue a certain class of test functions can serve as adequate approximations)
 
  • #56
Hurkyl said:
It sounds like they've defined wavefunctions to be elements of the Hilbert space.

(and then proceeded to argue a certain class of test functions can serve as adequate approximations)

Prior to that paragraph the authors wrote
Thus we are led to studying the set of square-integrable functions. These are the functions for which the integral (A-1) converges. This set is called L2 by mathematicians and it has the structure of a Hilbert space.

Pete
 
  • #57
To get back to Phrak's question.

I don't think your approach is very useful. Given a finite-dimensional vector space and a basis for it, say \{ v^1,v^2,...,v^n \}, there is a canonical basis for its dual defined by

w_j(v^i) = \delta_j^i

You write:

\left< \psi \right| c* = g_{ij} \left( c \left| \phi \right> \right)

but if you represent c as a 2 x 1 column vector, and g_ij is a 3 x 3 matrix, how is it defined, and how do they act on the ket? For it to make sense you need to represent the ket as an n x 1 vector and to have g_ij transform it into a 1 x n vector with all its entries complex conjugated.

My other problem is: why would this technique be smart even if it worked?
 
  • #58
mrandersdk said:
To get back to Phrak's question.

I don't mind the side discussions at all. In fact they're probably better fodder anyway! :smile:

I don't think your approach is very useful. Given a finite-dimensional vector space and a basis for it, say \{ v^1,v^2,...,v^n \}, there is a canonical basis for its dual defined by

w_j(v^i) = \delta_j^i

You write:

\left< \psi \right| c* = g_{ij} \left( c \left| \phi \right> \right)

But I should have written :-p
\left< \psi \right| c* = g_{ij} \left( c \left| \psi \right> \right)

but if you represent c as a 2 x 1 column vector, and g_ij is a 3 x 3 matrix, how is it defined, and how do they act on the ket? For it to make sense you need to represent the ket as an n x 1 vector and to have g_ij transform it into a 1 x n vector with all its entries complex conjugated.

c is an N x 1 column vector whose entries are 2 x 1 column vectors with real entries.
g_ij is an N x N matrix having 2 x 2 matrix entries. In an orthonormal basis the diagonal elements are rho and all the other elements are zero.

The 2 x 1 column vectors serve as complex numbers, with entry (1,1) = real part and entry (2,1) = imaginary part. It's simply a trick to associate each g_ij and g^ij with an implicit conjugation operation and each mixed metric with the Kronecker delta.

You can test that this is true in the orthonormal basis
g_{ij}g^{jk} = \delta_i^k

The equation you've quoted is actually a mess of mixed notation that was intended to serve as a transitional equation to get you to
\hat{V} = gV where \hat{V} is the adjoint of V

My other problem is: why would this technique be smart even if it worked?

Can't really tell what the view is like until you get there.
 
Last edited:
  • #60
It only becomes a column vector when some basis is chosen. Let me give a simple example, because this has nothing to do with the ket space in particular; it holds for all vector spaces.

Let's say we are working in R^3 and we have a vector, e.g. v = (1,1,1). WRONG!

This is completely meaningless on its own: what does this array mean? Nothing. Only when I give you some basis can you make sense of it. Many people don't see this because we are taught everything in terms of some canonical basis in the first place.

A vector in a vector space is an element satisfying certain axioms. Choosing a basis makes the space isomorphic to R^n, and then, choosing the standard basis of R^n implicitly, we can do calculations with arrays like (1,1,1). But this element could mean a lot of things: it could be one apple, one orange and one banana (if someone could give that space a proper vector space structure).

So if "column vector" refers to such an array, it only makes sense given a basis.

It is like when people say that a matrix is just a linear transformation. This isn't actually the complete truth: the linear transformations between two vector spaces are in one-to-one correspondence with the matrices (of appropriate size) only once bases are chosen. So one should be careful with statements like these. I know a lot of authors use the same words for matrices and linear transformations, and that is fine as long as it is made clear, or the author knows what he means.
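
To make the basis dependence concrete, here is a small numerical sketch (just my own illustration in Python/numpy, with an arbitrary rotated basis): the same vector gets different component arrays in different bases.

Code:
import numpy as np

# Components of a vector in the standard basis of R^3.
v_std = np.array([1.0, 1.0, 1.0])

# A second orthonormal basis: the columns of B are the new basis vectors
# (here the standard basis rotated by 45 degrees about the z-axis).
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
B = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])

# Components of the *same* vector with respect to the new basis.
v_new = B.T @ v_std
print(v_std)   # [1. 1. 1.]
print(v_new)   # approximately [1.414  0.  1.]

The arrays (1, 1, 1) and (sqrt(2), 0, 1) describe the same vector; only the basis, i.e. the meaning of each slot, has changed.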
 
  • #61
By the way, the reason people say that kets are column vectors and bras are row vectors is that they can write

|\Psi> = (v_1,v_2,v_3)^T

and

<\Psi| = (v_1^*,v_2^*,v_3^*)

and then

<\Psi|\Psi> = (v_1^*,v_2^*,v_3^*) \cdot (v_1,v_2,v_3)^T = |v_1|^2+|v_2|^2+|v_3|^2

but you could write them both as column vectors if you wished, and then just define the inner product as above. In the finite-dimensional case a vector space is isomorphic to its dual, but because of matrix multiplication it is easier to remember it this way: placing the vectors beside each other in the right order makes sense and gives the inner product.

But before a basis is given, this column or row vector doesn't make sense, because what do v_1, v_2 and v_3 describe? They say how much we have of something, but of what? That is what the basis tells you.

So to say that a ket is a column vector is, strictly speaking, false, but the identification is often used because not all physicists are into the math, and it is the easiest way to work with it.

So an operator that works on a ket, that is

A|\psi>

is not a matrix. In the finite-dimensional case, though, choosing a basis lets you describe it by a matrix and the state by a column vector (or row vector if you like). This "matrix" is what I denoted above by A_x, but that was in the infinite-dimensional case, so it may not be totally clear that it is like an infinite matrix.
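
As a purely numerical illustration of the above (my own sketch, assuming a 3-dimensional space with a basis already chosen and arbitrary example components):

Code:
import numpy as np

# |Psi> as a column of complex components in the chosen basis.
ket = np.array([1.0 + 1.0j, 2.0, 0.5j])

# <Psi| as the conjugated row of the same components.
bra = ket.conj()

# <Psi|Psi> = |v_1|^2 + |v_2|^2 + |v_3|^2, always real and non-negative.
print(bra @ ket)        # (6.25+0j)

# An operator in the same basis is a matrix; A|Psi> is again a column, i.e. a ket.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])
print(A @ ket)          # [2.+0.j 1.+1.j 0.+1.j]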
 
  • #62
It's not so much that we want to actually represent bras and kets as row and column vectors -- it's that we want to adapt the (highly convenient!) matrix algebra to our setting.

For example, I was once with a group of mathematicians and we decided for fun to work through the opening section of a book on some sort of representation theory. One of the main features of that section was to describe an algebraic structure on abstract vectors, covectors, and linear transformations. In fact, it was precisely the structure we'd see if we replaced "abstract vector" with "column vector", and so forth. The text did this not because it wanted us to think in terms of coordinates, but because it wanted us to use this very useful arithmetic setting.

Incidentally, during the study, I pointed out the analogy with matrix algebra -- one of the others, after digesting my comment, remarked "Oh! It's like a whole new world has opened up to me!"


(Well, maybe the OP really did want to think in terms of row and column vectors -- but I'm trying to point out this algebraic setting is a generally useful one)

Penrose did the same thing with tensors -- formally defining his "abstract index notation" where we think of tensors abstractly, but we can still use indices like dummy variables to indicate how we are combining them.
 
  • #63
Any Hilbert space is self-dual, even infinite dimensional ones.

We assume a canonical basis and then when we are interested in the values of some observable quantity we can represent our vectors in a basis that is more convenient. By doing a change of basis you are not fundamentally changing the vector in any way, you are just changing the way it is represented.

I agree that saying a ket is a column vector is not technically correct, but read what I write carefully: a ket is a representation of a vector as a column vector in a given basis. A bra is a representation of a vector as a row vector in a given basis.

Even if you are not given a basis, one can still think of a ket as a column vector. Granted, my construction is artificial, but it still helps in understanding the concept.

Given a vector v\in\mathbb{R}^n, let u_1 = \frac{v}{\|v\|}. Let \{u_k\}_{k=2}^{n} be any set of vectors that are mutually orthogonal and all orthogonal to u_1. We have now constructed a basis in which we can represent |v\rangle as a column vector and \langle v| as a row vector. If we need to consider a different basis we can move to it by means of a unitary transform.
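
A rough numpy sketch of this construction (my own illustration, with an arbitrary example vector): extend v/||v|| to an orthonormal basis by Gram-Schmidt, and |v> then has the column representation (||v||, 0, ..., 0)^T.

Code:
import numpy as np

v = np.array([2.0, 1.0, 2.0])          # example vector in R^3
u1 = v / np.linalg.norm(v)             # u_1 = v / ||v||

# Complete u1 to an orthonormal basis by Gram-Schmidt against the standard basis.
basis = [u1]
for e in np.eye(3):
    w = e - sum((u @ e) * u for u in basis)   # remove components along the vectors already chosen
    if np.linalg.norm(w) > 1e-10:             # skip directions that are already spanned
        basis.append(w / np.linalg.norm(w))
B = np.column_stack(basis)             # columns are the orthonormal basis vectors

# In this basis |v> is the column (||v||, 0, 0)^T and <v| is the matching row.
ket_v = B.T @ v
print(ket_v)                           # [3. 0. 0.] up to rounding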
 
  • #64
The reason I want to stick with the idea of thinking of kets as column vectors is that it simply helped me keep better track of the manipulations. When doing mathematical manipulations in a new area I think it is best to keep in mind a concrete example of something already understood.
 
  • #65
You are right; my point is that it is important to know what is going on, and then any aid like that is great. The problem when teaching, I think, is that saying it is just a column vector can be confusing, especially when one gets to more advanced topics.

And people seem to forget that there is a difference between a column vector and a vector, even in the finite-dimensional case.
 
  • #66
Agreed, it is tough and we have to remember that people step into learning QM with backgrounds that are not always equal.
 
  • #67
mrandersdk said:
You are right; my point is that it is important to know what is going on, and then any aid like that is great. The problem when teaching, I think, is that saying it is just a column vector can be confusing, especially when one gets to more advanced topics.

And people seem to forget that there is a difference between a column vector and a vector, even in the finite-dimensional case.

I'm trying to understand bras and kets and operators in a finite-dimensional Hilbert space using notation that I'm familiar with, rather than trying to sell this idea. It may not even work, but perhaps you can help me see if it does. The complex coefficients, kets, bras and the inner product seem to work consistently, but I don't know how to deal with operators. Whenever the transpose of an operator is taken, it is also conjugated, right?

If I understand correctly, an operator acting on a ket (from the left) yields a ket, and acting on a bra (from the right) yields a bra. But when does one take the adjoint of an operator?
 
  • #68
The Hilbert space adjoint of an operator A is the operator A^* satisfying (\langle x|A^*)\cdot|y\rangle = \langle x|\cdot(A|y\rangle) for all x,y. To appease rigor, when we go to infinite dimensions we should say something about the respective domains of the operator and its adjoint.
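
For what it's worth, in the finite-dimensional matrix picture the adjoint is just the conjugate transpose, and the defining relation can be checked numerically; a small sketch of my own with random data:

Code:
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

A_adj = A.conj().T                     # matrix adjoint: conjugate transpose

# <A^* x, y> == <x, A y>; note np.vdot conjugates its first argument.
print(np.allclose(np.vdot(A_adj @ x, y), np.vdot(x, A @ y)))   # True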
 
  • #69
You should transpose and complex conjugate all the numbers in the matrix (assuming now that you are working with the operator as a matrix).

You are right that an operator acting on a ket (with the ket on its right) gives a ket, and acting on a bra (with the bra on its left) gives a bra. The adjoint of an operator is also an operator, so the same applies to it.

Something important about the adjoint: given a ket |\psi> we can form the ket A|\psi>; the corresponding dual is <\psi|A^\dagger.

Maybe it is that 'corresponding' you are worried about. This is because (as comote pointed out) in a Hilbert space there is a unique one-to-one correspondence between the space and its dual, so given a ket |\psi> there must be an element we can denote by <\psi|. We have a function J: H \rightarrow H^* such that <\psi| = J(|\psi>), and it can be shown that <\psi|A^\dagger = J(A|\psi>), so here you use it.

Maybe it is actually this function J you have been asking about the whole time?

You shouldn't try to understand this function as lowering and raising indices as in general relativity (i.e. in tensor language); at least I don't think so, though maybe one could.

The existence of this great correspondence is due to Frigyes Riesz. Maybe look at

http://en.wikipedia.org/wiki/Riesz_isomorphism


comote: Not sure you are right

(\langle x|A)\cdot|y\rangle = \langle x|\cdot(A|y\rangle)

this holds by definition of the action of A on the bra, without the star.
 
  • #70
If you want to write it your way, it should be defined as the operator satisfying

\langle x| (A^\dagger|y\rangle) = ((\langle x|A) |y\rangle)^*
 
  • #71
Ok, I see now. I am more comfortable with the notation
\langle A^*x,y\rangle=\langle x,Ay\rangle.

This is precisely the place where I always thought the Dirac notation was clumsy. Thanks.
 
  • #72
Yes, you are right about that; one wants to write it the way you do. I had to look in my book to remember the right way to write it too.

But it is actually convenient, in the sense that if you define the J above as the dagger,

(|\psi>)^\dagger = <\psi|

and, even better, you can show (or define, I guess) the adjoint as the operator satisfying

(A|\psi>)^\dagger = <\psi|A^*

then by defining the dagger to do that on kets, and defining A^\dagger = A^* on operators, you get an easy correspondence between the space and its dual.
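
In the matrix picture this is easy to check numerically; a quick sketch of my own, identifying the dagger of a column with the conjugated row:

Code:
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
psi = rng.standard_normal(3) + 1j * rng.standard_normal(3)

ket = psi.reshape(3, 1)        # |psi> as a 3x1 column
bra = ket.conj().T             # <psi| as the 1x3 conjugated row

lhs = (A @ ket).conj().T       # (A|psi>)^dagger, a 1x3 row
rhs = bra @ A.conj().T         # <psi| A^dagger
print(np.allclose(lhs, rhs))   # True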
 
  • #73
mrandersdk said:
comote: Not sure you are right

(\langle x|A)\cdot|y\rangle = \langle x|\cdot(A|y\rangle)

this is defined to do this, without the star.

comote said:
Ok, I see now. I am more comfortable with the notation
\langle A^*x,y\rangle=\langle x,Ay\rangle.

This is precisely the place where I always thought the Dirac notation was clumsy. Thanks.
This confused me too in a recent discussion, but I realized that (\langle x|A)\cdot|y\rangle = \langle x|\cdot(A|y\rangle) should be interpreted as the definition of the right action of an operator on a bra.

\langle A^\dagger x,y\rangle=\langle x,Ay\rangle is of course the definition of the adjoint operator. I agree that the bra-ket notation is clumsy here.
 
  • #74
comote said:
Ok, I see now. I am more comfortable with the notation
\langle A^*x,y\rangle=\langle x,Ay\rangle.

This is precisely the place where I always thought the Dirac notation was clumsy. Thanks.
Bleh; when working with bras and kets, I've always hated that notation, since it breaks the cohesiveness of the syntax. And similarly if I'm actually working with a covector for some reason -- I would have great physical difficulty writing \langle \omega^*, v \rangle instead of \omega(v). No problem with the bra-ket notation, though, since it maintains the form of a product: \langle \omega | v \rangle.
 
  • #75
Thank you all. You've been immensely helpful. Even the easy confusion that results from learning this stuff is notable. By the way, thanks for the link to the other thread, Fredrik.

How does one derive this: \langle A^*x,y\rangle=\langle x,Ay\rangle ?
 
  • #77
I'm having trouble posting. I get a database error for long posts with a lot of LaTeX. Am I the only one?

I'm going to give it an hour.
 
  • #78
I just saw a mistake in my earlier post:

\langle x| (A^\dagger|y\rangle) = ((\langle x|A) |y\rangle)^*

should be

\langle x| (A^\dagger|y\rangle) = ((\langle y|A) |x\rangle)^*
 
  • #79
mrandersdk said:
Something important about the adjoint: given a ket |\psi> we can form the ket A|\psi>; the corresponding dual is <\psi|A^\dagger.

Maybe it is that 'corresponding' you are worried about. This is because (as comote pointed out) in a Hilbert space there is a unique one-to-one correspondence between the space and its dual, so given a ket |\psi> there must be an element we can denote by <\psi|. We have a function J: H \rightarrow H^* such that <\psi| = J(|\psi>), and it can be shown that <\psi|A^\dagger = J(A|\psi>), so here you use it.

Maybe it is actually this function J you have been asking about the whole time?

Go easy on me with the abstract algebra, but yes!

As you say, we have a function J: H \rightarrow H^*.

It's a bijective map, so J^{-1}: H^* \rightarrow H.

I've been calling J=g_{ij} and J^{-1}=g^{ij}

Now, I'd like to think we can include the quantum mechanical operators as various products of H and H^\ast:

H \otimes H,\ H \otimes H^\ast,\ H^\ast \otimes H, and H^\ast \otimes H^\ast.

For example A \left| x \right> = \left| y \right>, where

x , y \in H
A \in H \otimes H^\ast

This part is guess-work: for an operator \Theta = \psi \otimes \phi, where

\psi \in H
\phi \in H^*
\Theta \in H \otimes H^*,

Then \Theta^\dagger = J(\psi) \, J(\phi), where

\Theta^\dagger \in H^* \otimes H

Again, it may not all hang together as desired.
 
  • #80
mrandersdk said:
still think there is a big difference. Are you thinking of the functions as something e.g. in L^2(R^3), ...
I'm a bit confused about your notation. What does the R^3 in L^2(R^3) represent? I just recalled that the notation used in quantum mechanics for the set of all square-integrable functions is not always written as L^2, as a mathematician might write, but as L_2 or H_2. An example of the former is found in Notes on Hilbert Space, by Prof. C-I Tan, Brown University.

http://jcbmac.chem.brown.edu/baird/quantumpdf/Tan_on_Hilbert_Space.html

An example of the latter is found in Introductory Quantum Mechanics - Third Edition, Richard L. Liboff, page 102.

Note on LaTeX: I see people using normal LaTeX to write inline equations/notation. To do this properly, don't write "tex" in square brackets as you normally would when the expression is to appear inline. To write inline equations, use "itex" inside the square brackets. It's for this reason that letters are being printed inline but with the bottoms of the letters not aligned with the other letters.

Pete
 
Last edited by a moderator:
  • #81
comote said:
Getting back to the first thing I said. Even in basis independent notation what I said about column/row vectors has meaning. If we are given a unit vector \psi ...
I'd like to point out an incorrect usage of notation here. Since this is a thread on bras and kets I think it's important to point this out here. I also think it relates to what some posters are interested in, i.e. the usefulness of ket notation.

comote - Recall your comment in a previous post, i.e.
If we are given a unit vector \psi ...
\psi is not the notation for a unit vector, unless you are using it as shorthand for \psi = \psi(x). It is the kernel which denotes the quantum state. By kernel I mean a designator. For instance, in tensor notation the components of the stress-energy-momentum tensor are T^{\alpha\beta}. The geometric notation for this tensor looks like T(_,_), where the "_" denote placeholders for two 1-forms. The letter "T" as it is used here is called a "kernel". In quantum mechanics \psi almost always denotes a kernel. The actual quantum state is represented using ket notation as |\psi>.

On to your next statement
...then we can understand it as being an element of some orthonormal basis and then saying
|\psi\rangle is the representation as a column vector makes sense.
If one wishes to represent the state in position space, then one projects it into position space using the position eigenbra <x|, which is dual to the position eigenket |x>, i.e. \psi(x) = <x|\psi>. This represents an element of a column vector: it is the component of the state on the position basis. There is a continuum of rows here, labeled by the continuous index x.

The ket notation thus allows a very general representation of a quantum state. It is best to keep in mind the difference between the kernel which denotes the state, the mathematical object which represents the state and a component of that state on a given basis.
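
One way to picture the continuously indexed "column vector" mentioned above is to discretize the position basis; here is a rough numpy sketch (my own illustration, using a Gaussian wave packet as the example state):

Code:
import numpy as np

# A grid of positions x_j standing in for the continuous label x.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]

# psi(x_j) = <x_j|psi>: one "row" of the column vector per grid point.
psi = np.exp(-x**2 / 2.0) * np.exp(1j * 2.0 * x)     # Gaussian packet with some momentum
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalize so <psi|psi> is about 1

# <psi|psi> approximated by a sum over the discrete "rows".
print(np.sum(psi.conj() * psi) * dx)                 # approximately (1+0j)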

Pete
 
Last edited:
  • #82
comote said:
The momentum operator does not have eigenstates per se,...
They most certainly do. The eigenstates of momentum are well defined. There are eigenstates of the position operator too.

Pete
 
  • #83
Hi comote - I'm going through each post one by one, so please ignore the comments in my previous posts which were already addressed by mrandersdk. You are fortunate to have him here. He seems to have Dirac notation down solid!

mrandersdk and comote - Welcome aboard! Nice to have people here who know their stuff.

Best wishes

Pete
 
  • #84
Oh, the R^3 was just because I assumed square integrability over the vector space R^3, but it could be a lot of other things I guess, depending on what particular problem you are working on.
 
  • #85
pmb_phy said:
They most certainly do. The eigenstates of momentum are well defined. There are also eigenstates of the position operator too.

Pete
The point comote was making is that there do not exist elements of the Hilbert space (i.e. square-integrable functions) that are eigenstates of position and momentum. So those operators do not have eigenstates in the strictest sense.

But that's where the rigged Hilbert space comote mentioned comes into play: it consists of the extra data

  • A subspace of test functions (e.g. the 'Schwartz functions')
  • A superspace of linear functionals applicable to test functions (called 'generalized states')

and then if you take the extension of the position and momentum operators to act on generalized states when possible, you can find eigen-[generalized states] of these extended operators.


Of course, we usually only bother making these distinctions when we have a specific reason to do so -- so in fact both of you are right, you're just talking in different contexts. :) (comote actually caring about the types of objects, while you are using the words in the usual practical sense)
 
Last edited:
  • #86
Hurkyl said:
The point comote was making is that there do not exist elements of the Hilbert space (i.e. square-integrable functions) that are eigenstates of position and momentum. So those operators do not have eigenstates in the strictest sense.
Yes. Upon further inspection I see that is what he was referring to. Thanks. However, I disagree: those operators do have eigenstates in the strictest sense. Just because they don't belong to a Hilbert space, and they don't represent physical states, it doesn't mean that they aren't eigenstates. They are important as intermediaries in the math.

Pete
 
Last edited:
  • #87
pmb_phy said:
Just because they don't belong to a Hilbert space, and they don't represent physical states, it doesn't mean that they aren't eigenstates.
Sure it does. The domain of P is (a dense subset of) the Hilbert space. If |v\rangle isn't in the Hilbert space, then it's certainly not in the domain of P, and so the expression P|v\rangle is nonsense!
 
  • #88
Hurkyl said:
Sure it does. The domain of P is (a dense subset of) the Hilbert space.
As I recall, that depends on the precise definition of the operator itself. Mind you, I'm going by what my QM text says. The authors could have been sloppy, but nothing else in that text is sloppy. It's pretty thorough as a matter of fact. Let me get back to you on this.

Phrak - I've thought about your questions some more and have more to add. In tensor analysis the tensors themselves are often defined in terms of how their components transform. It is commonly thought that the transformation is due to a coordinate transformation. However, this is not quite correct. To be precise, tensors defined this way are defined according to how the basis vectors transform. Transforming basis vectors (kets) is easy compared to full tensor analysis, so perhaps we should focus on basis transformations rather than coordinate transformations.

More later.

Pete
 
  • #89
Regarding my comment above, i.e.
It is commonly thought that the transformation is due to a coordinate transformation. However, this is not quite correct. To be precise, tensors defined this way are defined according to how the basis vectors transform.
I was reminded of this fact when I was reviewing GR. I had the chance a few weeks ago to take some time and read Sean Carroll's GR lecture notes which are online at
http://xxx.lanl.gov/abs/gr-qc/9712019. On page 44 the author writes
As usual, we are trying to emphasize a somewhat subtle ontological distinction - tensor components do not change when we change coordinates, they change
when we change the basis in the tangent space, but we have decided to use the coordinates to define our basis. Therefore a change of coordinates induces a change of basis:
This is an important fact that is often overlooked.

I looked over your previous posts regarding the lowering of indices (e.g. https://www.physicsforums.com/showpost.php?p=1782754&postcount=19) and wanted to point out that you should have tried the identity matrix to represent the metric. If you had done that, and first taken the complex conjugate of the row vector before you took the product, then you would have gotten the result you were looking for, i.e. you'd end up with the dual vector represented as a row vector.
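
In other words (a small numpy sketch of my own, taking the metric to be the identity and putting the conjugation in by hand): lowering the index on a complex column vector then just produces the conjugated row vector, and contracting it with the original column gives a real, non-negative norm.

Code:
import numpy as np

g = np.eye(3)                             # identity matrix standing in for the metric
v = np.array([1.0 + 2.0j, 0.5, -1.0j])    # |v> as a complex column (in some basis)

# "Lower the index" and conjugate: this row vector represents <v|.
v_dual = (g @ v).conj()

print(v_dual @ v)                         # (6.25+0j): real and non-negative, as <v|v> should be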

I hope this helps.

Pete
 
  • #90
What in the hell happened to Pete? Why is his name lined-out?
 
  • #91
I'm very curious about that too. This is really weird. I did a search for his most recent posts, and #89 is the last one. None of his recent posts offer any clue about what happened.
 
  • #92
Pete-

If you're still lurking--this thread, at least: Everything I know about tensors I learned from Sean Carroll; a wonderful and accessible text.

I'd mused over the points you brought up in your post #89, partly because of a previous comment you made here about my reference to coordinate bases.

As you suggest, it's just as well to use a metric g_ij(real vector) --> g_ij(complex vector)* = h_ij rather than to introduce a column vector to represent a complex number. I simply thought the column-vector representation of a complex number would be nicer, as it combines the two operations of complexification and of lowering/raising into one. Either acts equally well on Hilbert space vectors.

So the next task is to demonstrate that type (1,1) tensors with complex entries are a valid representation (in finite dimensions) of the quantum mechanical operators that act on bras and kets; that is, that they behave as required when the adjoint is taken. The adjoint would be the application of the metrics h_ab and h^cd to a QM operator A_b^d to obtain A^a_c.

I'm just slow, or I would have done it by now--or failed to do so because it's simply wrong.
 
Last edited:
  • #93
Within the structure of Newtonian physics, we can write

dP/dt = F, where P and F are the usual momentum and force vectors, in 3D.

Also then according to Dirac's notation
d |P>/dt =|F>. Or does it?

Is it, in the sense of an equivalence relation, really legit to equate P and |P> -- in 3D space ? Why, or why not?
Regards,
Reilly Atkinson
 
  • #94
You can't do that. There are ways to relate, for example, rotations in 3D space (rotations of the lab frame) to kets describing a state, but these are, at least as I learned it, things you need to postulate.

But this is kind of an advanced topic, and to get something on this you need an advanced quantum mechanics book (again, Sakurai is a classic).

The short answer to why you can't do this is that we are dealing with quantum mechanics, and this is a whole lot different from Newtonian physics.

Also, the momentum can characterize a state, but the force on it can't, so that isn't a state. Quantum mechanics builds on Hamiltonian mechanics, and this formalism (roughly speaking) doesn't use forces, but potentials.

It seems like you haven't taken any courses in QM?
 
  • #95
mrandersdk-

I've been thinking a great deal about your objections to casting vectors in Hilbert space as tensors, or even matrices. Is it that a great deal is lost in taking an abstraction to a representation?

I've given up on representing Hilbert space vectors as tensors; my assumptions were wrong. However, you might find some satisfaction in representing both tensors and vectors in Hilbert space under a single abstraction, if it can be done.
 
  • #96
reilly said:
Within the structure of Newtonian physics, we can write

dP/dt = F, where P and F are the usual momentum and force vectors, in 3D.

Also then according to Dirac's notation
d |P>/dt =|F>. Or does it?

Is it, in the sense of an equivalence relation, really legit to equate P and |P> -- in 3D space ? Why, or why not?
I'm guessing that might have been a rhetorical question (such as lecturers sometimes
ask their students)?

If so, I'll have a go and say that the observables P, F, etc, in classical
physics are best thought of as ordinary C^\infty functions on 6D phase space.
In quantization, one maps classical observables such as P to self-adjoint
operators on a Hilbert space, and classical symmetry transformations are expressed
there as U P U^{-1} where U denotes a unitary operator implementing the
transformation in the Hilbert space. If we can find a complete set of eigenstates of P
in the Hilbert space, then we can find a |p\rangle corresponding to any
orientation of 3-momentum.

But the above says nothing about 3D position space. We haven't yet got a "Q" operator
corresponding to the Q position variable in classical phase space. When we try
to incorporate Q as an operator in our Hilbert space, with canonical commutation relations
corresponding to Poisson brackets in the classical theory, we find that it's quite hard to
construct a Hilbert space (rigorously) in which both the P and Q play nice together,
and one usually retreats to the weaker (regularized) Weyl form of the commutation
relations. So it's really a bit misleading to think of the Hilbert space as somehow being
"in" 3D position space.

Regarding the F classical observable, we'd write (classically!) the following:

F ~=~ \frac{dP}{dt} ~=~ \{H, P\}_{PB}

where the rhs is a Poisson bracket and H is the Hamiltonian. In the quantum theory,
this would become an operator equation with commutators (and with \hbar=1) ,

F ~=~ i \, \frac{dP}{dt} ~=~ [H, P]

(possibly modulo a sign).

But I'm not sure whether any of this really answers the intended question. (?)
 
  • #97
Hurkyl said:
It's not so much that we want to actually represent bras and kets as row and column vectors -- it's that we want to adapt the (highly convenient!) matrix algebra to our setting.

For example, I was once with a group of mathematicians and we decided for fun to work through the opening section of a book on some sort of representation theory. One of the main features of that section was to describe an algebraic structure on abstract vectors, covectors, and linear transformations. In fact, it was precisely the structure we'd see if we replaced "abstract vector" with "column vector", and so forth. The text did this not because it wanted us to think in terms of coordinates, but because it wanted us to use this very useful arithmetic setting.

Incidentally, during the study, I pointed out the analogy with matrix algebra -- one of the others, after digesting my comment, remarked "Oh! It's like a whole new world has opened up to me!"


(Well, maybe the OP really did want to think in terms of row and column vectors -- but I'm trying to point out this algebraic setting is a generally useful one)

If I had any sense, I would have, but I am actually more comfortable with tensors than matrices. In any case, I've retreated to understanding the algebra in terms of matrices.

Please correct me in the following if I am wrong. It seems there are really only a small number of rules involved in a matrix representation:

AB = (A^{\dagger} B^{\dagger})^{\dagger}

or even

A^{\dagger}B = (A B^{\dagger})^{\dagger}

With bras and kets represented as 1 x N and N x 1 matrices, and with the adjoint of a complex number defined as its complex conjugate, c^{\dagger} = c^*:

\left< u \right| X \left| v \right> = \left< v \right| X^{\dagger} \left| u \right>^*

can be represented as

\left( u^{\dagger} X v \right) = \left( v^{\dagger} X^{\dagger} u \right) ^{\dagger}

The next is a little more interesting. The operator

\left| u \right> \left< v \right|

is represented as

u \times v^{\dagger}

the outer product of u and v^{\dagger}.

If I am not mistaken, Y = u \times v^{\dagger} is a quantum mechanical operator that acts on kets from the left to return kets, and acts on bras from the right to return bras?
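
That reading seems to check out numerically; here is a quick sketch of my own, representing |u><v| as the outer product of u with the conjugate of v:

Code:
import numpy as np

u = np.array([1.0, 1j, 0.0])
v = np.array([0.5, 2.0, 1.0 - 1j])
w = np.array([1.0, 0.0, 1j])

Y = np.outer(u, v.conj())      # the operator |u><v| as a matrix

# Acting on a ket from the left gives a ket proportional to u: Y|w> = <v|w> |u>.
print(np.allclose(Y @ w, np.vdot(v, w) * u))                  # True

# Acting on a bra from the right gives a bra proportional to <v|: <w|Y = <w|u> <v|.
print(np.allclose(w.conj() @ Y, np.vdot(w, u) * v.conj()))    # True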
 
Last edited:
  • #98
Shouldn't you have

AB = ( B^{\dagger}A^{\dagger})^{\dagger}

A^{\dagger}B = ( B^{\dagger}A)^{\dagger}
 
  • #99
mrandersdk said:
Shouldn't you have

AB = ( B^{\dagger}A^{\dagger})^{\dagger}

A^{\dagger}B = ( B^{\dagger}A)^{\dagger}

Yes, of course you are right. Thank you, mrandersdk!

I forgot to include complex numbers, with

\left( c \left| u \right> \right)^{\dagger} = \left< u \right| c^\dagger

represented as

\left( c u \right)^\dagger = u^\dagger c^\dagger

along with double daggers, like

\left( X^\dagger \right)^\dagger = X .

I think this nearly completes an axiomatic set for manipulating these equations.
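
These rules are easy to sanity-check in the matrix picture; a quick numerical sketch of my own with random matrices:

Code:
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
u = (rng.standard_normal(3) + 1j * rng.standard_normal(3)).reshape(3, 1)
c = 2.0 - 3.0j

def dag(M):
    """Dagger = conjugate transpose (works for matrices and column vectors alike)."""
    return M.conj().T

print(np.allclose(dag(A @ B), dag(B) @ dag(A)))       # (AB)^dagger = B^dagger A^dagger
print(np.allclose(dag(c * u), dag(u) * np.conj(c)))   # (c|u>)^dagger = <u| c^*
print(np.allclose(dag(dag(A)), A))                    # (X^dagger)^dagger = X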
 
Last edited:
  • #100
mrandersdk said:
You can't do that. There are ways to relate, for example, rotations in 3D space (rotations of the lab frame) to kets describing a state, but these are, at least as I learned it, things you need to postulate.

But this is kind of an advanced topic, and to get something on this you need an advanced quantum mechanics book (again, Sakurai is a classic).

The short answer to why you can't do this is that we are dealing with quantum mechanics, and this is a whole lot different from Newtonian physics.

Also, the momentum can characterize a state, but the force on it can't, so that isn't a state. Quantum mechanics builds on Hamiltonian mechanics, and this formalism (roughly speaking) doesn't use forces, but potentials.

It seems like you haven't taken any courses in QM?

In truth, I've taught the subject several times, both to undergraduates and to graduate students. You are, as are many in this thread, confusing content and notation. That is, sure, in Dirac notation |S> stands for a state vector. However, the operative word here is vector, any vector in fact. There's nothing in the definition of bras and kets that restricts them to QM.

Why not in mechanics, or E&M, or control engineering? A vector is a mathematical object. In physics, or in any quantitative discipline, we assign vectors to objects that we naturally describe by ordered pairs, or triples, or n-tuples; each number in the n-tuple corresponds to a vector component in the appropriate space. All the stuff about transformation properties is contained in the characteristics of the appropriate space.


Dirac notation is nothing more, and nothing less, than one of many equivalent methods for working with linear vector spaces, finite or infinite, real or complex -- in fact, probably over most mathematical fields. All the confusion about transposes and adjoints, operators, direct products and so forth would arise in any notational scheme. An adjoint is an adjoint; the adjoint of a product of operators flips the order of the individual adjoints, and so forth.

(My suspicion is that Dirac invented his notation to make his writing and publication simpler. Rather than bolding or underlining variable names to indicate a vector, he chose his famous bra-ket notation because it was easier for him to write.)

Note that a multiparticle state |p1, p2> is not usually taken as a column vector, but rather as a direct product of two vectors -- there are definitional tricks that allow multiparticle states to be considered as a single column vector. So, as a direct product is a tensor, we've now got both tensors and, naturally, vectors in Dirac-land. A better way to do realistic tensors is to create them by tensor-like combinations of creation operators acting on the Fock-space vacuum. Recall also the extensive apparatus of spherical tensors in angular momentum theory. We can often consider both states and operators as tensors.
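
For instance (a small numpy sketch of my own, not anything canonical), the direct-product state |p1, p2> can be packed into a single column with the Kronecker product, which is exactly the sort of definitional trick mentioned above:

Code:
import numpy as np

# Single-particle kets in some finite bases (arbitrary example components).
p1 = np.array([1.0, 0.0, 1j]) / np.sqrt(2)
p2 = np.array([0.0, 1.0])

# |p1, p2> = |p1> (x) |p2>, packed into a single length-6 column via the Kronecker product.
p12 = np.kron(p1, p2)
print(p12.shape)                  # (6,)

# Its norm is the product of the single-particle norms.
print(np.vdot(p12, p12).real)     # 1.0 (up to rounding)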

The Dirac notation is extensively and clearly discussed in Dirac's Quantum Mechanics -- he goes through virtually every issue raised in this thread at the end of his first chapter and in Chapters II and III. In my opinion, to understand QM as it is practiced today, one must study Dirac's book. The whole apparatus of QM notation and concepts as we know them today is largely defined and developed there. There's no substitute for the original.

Regards,
Reilly Atkinson
 