Understanding Bras and Kets as Vectors and Tensors

Thread starter: Phrak
In summary: kets (states) are just vectors in the Hilbert space. And after all, a Hilbert space is just a vector space equipped with a Hermitian inner product (and some extra arcane stuff about "completion" in the infinite-dimensional case).
  • #71
Ok, I see now. I am more comfortable with the notation
[tex]\langle A^*x,y\rangle=\langle x,Ay\rangle[/tex].

This is precisely the place where I always thought the Dirac notation was clumsy. Thanks.
 
  • #72
Yes, you are right about that; one wants to write it as you do. I had to look in my book to remember the right way to write it too.

But it is actually clever, in that if you define the J above as the dagger,

[tex] (|\psi\rangle)^\dagger = \langle\psi| [/tex]

and, even better, you can show (or define, I guess) the adjoint as the operator satisfying

[tex] (A|\psi\rangle)^\dagger = \langle\psi|A^* [/tex]

then by defining the dagger to do that on kets, and defining [tex] A^\dagger = A^* [/tex] on operators, you get an easy correspondence between the space and its dual.
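In finite dimensions this correspondence is easy to sanity-check numerically. A minimal numpy sketch (my assumptions: kets as N x 1 column arrays, and the dagger/J realized as the conjugate transpose):

[code]
import numpy as np

rng = np.random.default_rng(0)

# Assumption: a ket is a column vector; J (the dagger) sends it to a bra,
# its conjugate transpose, and sends an operator to its adjoint.
psi = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

def dagger(M):
    """Conjugate transpose: J on kets, the adjoint on operators."""
    return M.conj().T

# (A|psi>)^dagger should equal <psi| A^dagger
lhs = dagger(A @ psi)          # the bra corresponding to the ket A|psi>
rhs = dagger(psi) @ dagger(A)  # <psi| multiplied from the right by A^dagger
print(np.allclose(lhs, rhs))   # True
[/code]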
 
  • #73
mrandersdk said:
comote: I'm not sure you are right.

[tex](\langle x|A)\cdot|y\rangle = \langle x|\cdot(A|y\rangle)[/tex]

This is defined to behave this way, without the star.

comote said:
Ok, I see now. I am more comfortable with the notation
[tex]\langle A^*x,y\rangle=\langle x,Ay\rangle[/tex].

This is precisely the place where I always thought the Dirac notation was clumsy. Thanks.
This confused me too in a recent discussion, but I realized that [tex](\langle x|A)\cdot|y\rangle = \langle x|\cdot(A|y\rangle)[/tex] should be interpreted as the definition of the right action of an operator on a bra.

[tex]\langle A^\dagger x,y\rangle=\langle x,Ay\rangle[/tex] is of course the definition of the adjoint operator. I agree that the bra-ket notation is clumsy here.
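If it helps, the reason no star appears in that right-action identity is bare associativity. A minimal matrix sketch (my assumptions: bras as 1xN rows, kets as Nx1 columns):

[code]
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))  # ket |x>
y = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))  # ket |y>
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

bra_x = x.conj().T  # the bra <x| as a row vector

# (<x|A)|y> versus <x|(A|y>): the same number by associativity alone,
# which is exactly why the right action of A on bras needs no conjugate.
print(np.allclose((bra_x @ A) @ y, bra_x @ (A @ y)))  # True
[/code]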
 
  • #74
comote said:
Ok, I see now. I am more comfortable with the notation
[tex]\langle A^*x,y\rangle=\langle x,Ay\rangle[/tex].

This is precisely the place where I always thought the Dirac notation was clumsy. Thanks.
Bleh; when working with bras and kets, I've always hated that notation, since it breaks the cohesiveness of the syntax. And similarly if I'm actually working with a covector for some reason -- I would have great physical difficulty writing [itex]\langle \omega^*, v \rangle[/itex] instead of [itex]\omega(v)[/itex]. No problem with the bra-ket notation, though, since it maintains the form of a product: [itex]\langle \omega | v \rangle[/itex].
 
  • #75
Thank you all. You've been immensely helpful. Even the easy confusion that results from learning this stuff is notable. By the way, thanks for the link to the other thread, Fredrik.

How does one derive this [tex] \langle A^*x,y\rangle=\langle x,Ay\rangle [/tex] ??
 
  • #77
I'm having trouble posting. I get a database error for long posts with a lot of LaTeX. Am I the only one?

I'm going to give it an hour.
 
  • #78
Just saw a mistake in my earlier post:

[tex] \langle x| (A^\dagger|y\rangle) = ((\langle x|A) |y\rangle)^*[/tex]

should be

[tex] \langle x| (A^\dagger|y\rangle) = ((\langle y|A) |x\rangle)^*[/tex]
 
  • #79
mrandersdk said:
Something important about the adjoint: given a ket [tex]|\psi\rangle[/tex], we can form the ket [tex]A|\psi\rangle[/tex], whose corresponding dual is [tex]\langle\psi|A^\dagger[/tex].

Maybe it is that 'corresponding' you are worried about. This is just because (as comote pointed out) in a Hilbert space there is a unique one-to-one correspondence between the space and its dual, so given a ket [tex]|\psi\rangle[/tex] there must be an element we can denote by [tex]\langle\psi|[/tex]. And we have a function [tex]J: H \rightarrow H^*[/tex] such that [tex]\langle\psi| = J(|\psi\rangle)[/tex], and I guess it can be shown that you get [tex]\langle\psi|A^\dagger = J(A|\psi\rangle)[/tex], so here you use it.

Maybe it is actually this function J you have been asking about the whole time?

Go easy on me with the abstract algebra, but yes!

As you say, we have a function [tex]J: H \rightarrow H^*[/tex].

It's a bijective map, so [tex]J^{-1}: H^* \rightarrow H[/tex].

I've been calling [tex]J=g_{ij}[/tex] and [tex]J^{-1}=g^{ij}[/tex].

Now, I'd like to think we can include the quantum mechanical operators as various products of [tex]H[/tex] and [tex]H^\ast[/tex]:

[tex]H \otimes H,\ H \otimes H^\ast,\ H^\ast \otimes H[/tex], and [tex]H^\ast \otimes H^\ast[/tex].

For example [tex]A \left| x \right> = \left| y \right>[/tex], where

[tex] x , y \in H [/tex]
[tex] A \in H \otimes H^\ast [/tex]

This part is guess-work: for an operator [tex] \Theta = \psi \otimes \phi [/tex], where

[tex] \psi \in H [/tex]
[tex] \phi \in H^* [/tex]
[tex] \Theta \in H \otimes H^* [/tex],

Then [tex] \Theta^\dagger = J^{-1}( \phi ) \otimes J ( \psi ) [/tex], where

[tex] \Theta^\dagger \in H \otimes H^* [/tex]

Again, it may not all hang together as desired.
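A numerical check of that guess (a sketch only; my assumptions: H is C^3, kets are columns, elements of H* are rows, and the dagger is the conjugate transpose):

[code]
import numpy as np

rng = np.random.default_rng(2)
n = 3

psi = rng.normal(size=(n, 1)) + 1j * rng.normal(size=(n, 1))  # psi in H
phi = rng.normal(size=(1, n)) + 1j * rng.normal(size=(1, n))  # phi in H*

def J(ket):      # J : H -> H*
    return ket.conj().T

def J_inv(bra):  # J^{-1} : H* -> H
    return bra.conj().T

# Theta = psi (x) phi is the rank-one operator |psi><J^{-1}(phi)|.
Theta = psi @ phi

# Its adjoint (the conjugate transpose) is J^{-1}(phi) (x) J(psi).
print(np.allclose(Theta.conj().T, J_inv(phi) @ J(psi)))  # True
[/code]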
 
  • #80
mrandersdk said:
still think there is a big difference. Are you thinking of the functions as something e.g. in [tex]L^2(R^3)[/tex], ...
I'm a bit confused about your notation. What does the [itex]R^3[/itex] in [itex]L^2(R^3)[/itex] represent? I just recalled that the notation used in quantum mechanics for the set of all square-integrable functions is not always written as [itex]L^2[/itex], as a mathematician might write it, but as [itex]L_2[/itex] or [itex]H_2[/itex]. An example of the former is found in Notes on Hilbert Space, by Prof. C-I Tan, Brown University.

http://jcbmac.chem.brown.edu/baird/quantumpdf/Tan_on_Hilbert_Space.html

An example of the latter is found in Introductory Quantum Mechanics - Third Edition, Richard L. Liboff, page 102.
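For reference, all of these symbols name the same space of square-integrable functions:

[tex] L^2(\mathbb{R}^3) = \{ \psi : \mathbb{R}^3 \to \mathbb{C} \mid \int_{\mathbb{R}^3} |\psi(\mathbf{x})|^2 \, d^3x < \infty \} [/tex]

equipped with the inner product [itex]\langle \phi, \psi \rangle = \int \phi^*(\mathbf{x}) \psi(\mathbf{x}) \, d^3x[/itex].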

Note on LaTeX: I see people using normal LaTeX to write inline equations/notation. To do this properly, don't write "tex" in square brackets as you normally would; when the expression is to appear inline, use "itex" inside the square brackets instead. It's for this reason that letters are being printed inline but with the bottom of the letter not aligned with the other letters.

Pete
 
Last edited by a moderator:
  • #81
comote said:
Getting back to the first thing I said. Even in basis independent notation what I said about column/row vectors has meaning. If we are given a unit vector [tex]\psi[/tex] ...
I'd like to point out an incorrect usage of notation. Since this is a thread on bras and kets, I think it's important to point this out here. I also think it relates to what some posters are interested in, i.e. the usefulness of ket notation.

comote - Recall your comment in a previous post, i.e.
If we are given a unit vector [tex]\psi[/tex] ...
[itex]\psi[/itex] is not the notation for a unit vector, unless you are using it as shorthand for [itex]\psi[/itex] = [itex]\psi[/itex](x). It is the kernel which denotes the quantum state. By kernel I mean a designator. For instance, in tensor notation the components of the stress-energy-momentum tensor are [itex]T^{\alpha\beta}[/itex]. The geometric notation for this tensor looks like T(_,_), where the "_" denote place holders for two 1-forms. The letter "T" as it is used here is called a "kernel". In quantum mechanics [itex]\psi[/itex] almost always denotes a kernel. The actual quantum state is represented using ket notation as |[itex]\psi[/itex]>.

On to your next statement
...then we can understand it as being an element of some orthonormal basis and then saying
[tex]|\psi\rangle[/tex] is the representation as a column vector makes sense.
If one wishes to represent the state in position space, then one projects it into position space using the position eigenbra <x|, which is dual to the position eigenket |x>, i.e. [itex]\psi[/itex](x) = <x|[itex]\psi[/itex]>. This represents an element of a column vector; it is the component of the state on the position basis. There is a continuum of rows here, labeled by the continuous index x.
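A discretized illustration of that 'column vector with a continuous row index' picture (a sketch only; my assumption is a finite grid standing in for the continuous index x):

[code]
import numpy as np

# Sample the position "index" x on a finite grid; |psi> then literally becomes
# a column vector whose x-th entry is the component <x|psi> = psi(x).
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 2.0).astype(complex)     # a Gaussian psi(x) = <x|psi>
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize so <psi|psi> = 1

column = psi.reshape(-1, 1)                   # one row per grid point x
braket = (column.conj().T @ column * dx).item()
print(np.isclose(braket, 1.0))                # True: <psi|psi> = 1 on the grid
[/code]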

The ket notation thus allows a very general representation of a quantum state. It is best to keep in mind the difference between the kernel which denotes the state, the mathematical object which represents the state, and a component of that state on a given basis.

Pete
 
Last edited:
  • #82
comote said:
The momentum operator does not have eigenstates per se,...
They most certainly do. The eigenstates of momentum are well defined. There are eigenstates of the position operator too.

Pete
 
  • #83
Hi comote - I'm going through each post one by one, so please ignore comments in my previous posts that were already addressed by mrandersdk. You are fortunate to have him here. He seems to have Dirac notation down solid!

mrandersdk and comote - Welcome aboard! Nice to have people here who know their stuff.

Best wishes

Pete
 
  • #84
Oh, the [itex]R^3[/itex] was just because I assumed square-integrable over the vector space R^3, but this could be a lot of other things, I guess, depending on what particular thing you are working on.
 
  • #85
pmb_phy said:
They most certainly do. The eigenstates of momentum are well defined. There are also eigenstates of the position operator too.

Pete
The point comote was making is that there do not exist elements of the Hilbert space (i.e. square-integrable functions) that are eigenstates of position and momentum. So those operators do not have eigenstates in the strictest sense.

But that's where the rigged Hilbert space comote mentioned comes into play: it consists of the extra data

  • A subspace of test functions (e.g. the 'Schwartz functions').
  • A superspace of linear functionals applicable to test functions (called 'generalized states').

and then if you take the extension of the position and momentum operators to act on generalized states when possible, you can find eigen-[generalized states] of these extended operators.


Of course, we usually only bother making these distinctions when we have a specific reason to do so -- so in fact both of you are right, you're just talking in different contexts. :) (comote actually caring about the types of objects, while you are using the words in the usual practical sense)
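A standard concrete example of that triple, stated here for reference: with the Schwartz functions as test functions one has

[tex] S(\mathbb{R}) \subset L^2(\mathbb{R}) \subset S'(\mathbb{R}) [/tex]

and the momentum 'eigenstate' [itex]e^{ipx}[/itex] lives only in the outer layer [itex]S'(\mathbb{R})[/itex]: it is not square-integrable, since [itex]\int |e^{ipx}|^2 \, dx = \int 1 \, dx = \infty[/itex], but it is a perfectly good linear functional on test functions via [itex]\phi \mapsto \int e^{-ipx}\phi(x)\,dx[/itex].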
 
Last edited:
  • #86
Hurkyl said:
The point comote was making is that there do not exist elements of the Hilbert space (i.e. square-integrable functions) that are eigenstates of position and momentum. So those operators do not have eigenstates in the strictest sense.
Yes. Upon further inspection I see that is what he was referring to. Thanks. However, I disagree: those operators do have eigenstates in the strictest sense. Just because they don't belong to a Hilbert space, and don't represent physical states, doesn't mean they aren't eigenstates. They are important as intermediates in the math.

Pete
 
Last edited:
  • #87
pmb_phy said:
Just because they don't belong to a Hilbert space, and they don't represent physical states, it doesn't mean that they aren't eigenstates.
Sure it does. The domain of P is (a dense subset of) the Hilbert space. If [itex]|v\rangle[/itex] isn't in the Hilbert space, then it's certainly not in the domain of P, and so the expression [itex]P |v\rangle[/itex] is nonsense!
 
  • #88
Hurkyl said:
Sure it does. The domain of P is (a dense subset of) the Hilbert space.
As I recall, that depends on the precise definition of the operator itself. Mind you, I'm going by what my QM text says. The authors could have been sloppy, but nothing else in that text is sloppy; it's pretty thorough, as a matter of fact. Let me get back to you on this.

Phrak - I've thought about your questions some more and have more to add. In tensor analysis the tensors themselves are often defined in terms of how their components transform. It is commonly thought that the transformation is due to a coordinate transformation. However, this is not quite correct. To be precise, tensors defined this way are defined according to how the basis vectors transform. Transforming basis vectors (kets) is easy compared to tensor analysis, so perhaps we should focus on basis transformations rather than coordinate transformations.
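To spell out the standard identity behind this, for reference: if the basis kets transform as

[tex] |e'_j\rangle = \sum_i \Lambda^i{}_j |e_i\rangle [/tex]

then holding the state [itex]|\psi\rangle = \sum_i \psi^i |e_i\rangle[/itex] itself fixed forces the components to transform with the inverse matrix,

[tex] \psi'^j = \sum_i (\Lambda^{-1})^j{}_i \, \psi^i [/tex]

so it really is the change of basis, not the change of coordinates as such, that drives the component transformation.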

More later.

Pete
 
  • #89
Regarding my comment above, i.e.
It is commonly thought that the transformation is due to a coordinate transformation. However this is not quite correct. To be precise the tensors defined as such are defined according to how the basis vectors transform.
I was reminded of this fact when I was reviewing GR. I had the chance a few weeks ago to take some time and read Sean Carroll's GR lecture notes which are online at
http://xxx.lanl.gov/abs/gr-qc/9712019. On page 44 the author writes
As usual, we are trying to emphasize a somewhat subtle ontological distinction - tensor components do not change when we change coordinates, they change
when we change the basis in the tangent space, but we have decided to use the coordinates to define our basis. Therefore a change of coordinates induces a change of basis:
This is an important fact that is often overlooked.

I looked over your previous posts regarding lowering of indices (e.g. https://www.physicsforums.com/showpost.php?p=1782754&postcount=19) and wanted to point out that you should have tried the identity matrix to represent the metric. If you had done that, and had first taken the complex conjugate of the row vector before taking the product, then you would have gotten the result you were looking for, i.e. you'd end up with the dual vector represented as a row vector.
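A minimal numpy version of that suggestion (a sketch only; the identity matrix as metric is exactly the assumption being illustrated):

[code]
import numpy as np

v = np.array([[1.0 + 2.0j], [3.0 - 1.0j]])  # a ket as a column vector
g = np.eye(2)                                # identity matrix as the metric

# Lower the index, then conjugate and transpose: the dual comes out as a row.
v_dual = (g @ v).conj().T
print(v_dual)                    # [[1.-2.j  3.+1.j]], the bra <v|
print((v_dual @ v).item().real)  # 15.0, the squared norm <v|v>
[/code]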

I hope this helps.

Pete
 
  • #90
What in the hell happened to Pete? Why is his name lined-out?
 
  • #91
I'm very curious about that too. This is really weird. I did a search for his most recent posts, and #89 is the last one. None of his recent posts offer any clue about what happened.
 
  • #92
Pete-

If you're still lurking in this thread, at least: everything I know about tensors I learned from Sean Carroll; a wonderful and accessible text.

I'd mused over the points you brought up in your post #89, partly due to a previous comment you made here about my reference to coordinate bases.

As you suggest, it's just as well to use a metric g_ij(real vector) --> g_ij(complex vector)* = h_ij rather than to introduce a column vector to represent a complex number. I simply thought the column vector of a complex number would be nicer, as it combines the two operations of complexification and the lowering/raising operation into one. Either acts equally well on Hilbert space vectors.

So the next task is to demonstrate that type (1,1) tensors with complex entries are a valid representation (in finite dimensions) of the quantum mechanical operators that act on bras and kets; that is, that they behave as required when the adjoint is taken. The adjoint would be application of the metrics h_ab and h^cd to a QM operator A_b^d to obtain A^a_c.

I'm just slow, or I would have done it by now--or failed to do so because it's simply wrong.
 
Last edited:
  • #93
Within the structure of Newtonian physics, we can write

dP/dt = F, where P and F are the usual momentum and force vectors, in 3D.

Also then according to Dirac's notation
d |P>/dt =|F>. Or does it?

Is it, in the sense of an equivalence relation, really legit to equate P and |P> -- in 3D space ? Why, or why not?
Regards,
Reilly Atkinson
 
  • #94
You can't do that. There are ways to, e.g., relate rotations in 3D space (rotations of the lab frame) to kets describing a state, but these are, at least as I learned it, things you need to postulate.

But this is a fairly advanced topic, and to get something on this you need an advanced quantum mechanics book (again, Sakurai is a classic).

The short answer to why you can't do this is that we are dealing with quantum mechanics, and this is a whole lot different from Newtonian physics.

Also, the momentum can characterize a state, but the force on it can't, so that isn't a state. Quantum mechanics builds on Hamiltonian mechanics, and this formalism (roughly speaking) doesn't use forces; it uses potentials.

It seems like you haven't taken any courses in QM?
 
  • #95
mrandersdk-

I've been thinking a great deal about your objections to casting vectors in Hilbert space as tensors, or even matrices. Is it that a great deal is lost in taking an abstraction to a representation?

I've given up on representing Hilbert space vectors as tensors. My assumptions were wrong. However, in your case, you might find some satisfaction in representing both tensors and vectors in Hilbert space under a single abstraction, if it can be done.
 
  • #96
reilly said:
Within the structure of Newtonian physics, we can write

dP/dt = F, where P and F are the usual momentum and force vectors, in 3D.

Also then according to Dirac's notation
d |P>/dt =|F>. Or does it?

Is it, in the sense of an equivalence relation, really legit to equate P and |P> -- in 3D space ? Why, or why not?
I'm guessing that might have been a rhetorical question (such as lecturers sometimes
ask their students)?

If so, I'll have a go and say that the observables P, F, etc, in classical
physics are best thought of as ordinary [itex]C^\infty[/itex] functions on 6D phase space.
In quantization, one maps classical observables such as P to self-adjoint
operators on a Hilbert space, and classical symmetry transformations are expressed
there as [itex]U P U^{-1}[/itex] where U denotes a unitary operator implementing the
transformation in the Hilbert space. If we can find a complete set of eigenstates of P
in the Hilbert space, then we can find a [itex]|p\rangle[/itex] corresponding to any
orientation of 3-momentum.

But the above says nothing about 3D position space. We haven't yet got a "Q" operator
corresponding to the Q position variable in classical phase space. When we try
to incorporate Q as an operator in our Hilbert space, with canonical commutation relations
corresponding to Poisson brackets in the classical theory, we find that it's quite hard to
construct a Hilbert space (rigorously) in which both the P and Q play nice together,
and one usually retreats to the weaker (regularized) Weyl form of the commutation
relations. So it's really a bit misleading to think of the Hilbert space as somehow being
"in" 3D position space.

Regarding the F classical observable, we'd write (classically!) the following:

[tex]
F ~=~ \frac{dP}{dt} ~=~ \{H, P\}_{PB}
[/tex]

where the rhs is a Poisson bracket and H is the Hamiltonian. In the quantum theory,
this would become an operator equation with commutators (and with [itex]\hbar=1[/itex]) ,

[tex]
F ~=~ i \, \frac{dP}{dt} ~=~ [H, P]
[/tex]

(possibly modulo a sign).

But I'm not sure whether any of this really answers the intended question. (?)
 
  • #97
Hurkyl said:
It's not so much that we want to actually represent bras and kets as row and column vectors -- it's that we want to adapt the (highly convenient!) matrix algebra to our setting.

For example, I was once with a group of mathematicians and we decided for fun to work through the opening section of a book on some sort of representation theory. One of the main features of that section was to describe an algebraic structure on abstract vectors, covectors, and linear transformations. In fact, it was precisely the structure we'd see if we replaced "abstract vector" with "column vector", and so forth. The text did this not because it wanted us to think in terms of coordinates, but because it wanted us to use this very useful arithmetic setting.

Incidentally, during the study, I pointed out the analogy with matrix algebra -- one of the others, after digesting my comment, remarked "Oh! It's like a whole new world has opened up to me!"


(Well, maybe the OP really did want to think in terms of row and column vectors -- but I'm trying to point out this algebraic setting is a generally useful one)

If I had any sense, I would have, but I am actually more comfortable with tensors than matrices. In any case, I've retreated to understanding the algebra in terms of matrices.

Please correct me in the following if I am wrong. It seems there are really only a small number of rules involved in a matrix representation:

[tex] AB = (A^{\dagger} B^{\dagger})^{\dagger}[/tex]

or even

[tex] A^{\dagger}B = (A B^{\dagger})^{\dagger}[/tex]

With bras and kets represented as 1xN and Nx1 matrices, and with the adjoint of a complex number defined as its complex conjugate, [itex] c^{\dagger} = c^* [/itex]:

[tex]\left< u \right| X \left| v \right> = \left< v \right| X^{\dagger} \left| u \right> ^* [/tex]

can be represented as

[tex]\left( u^{\dagger} X v \right) = \left( v^{\dagger} X^{\dagger} u \right) ^{\dagger} [/tex]

The next is a little more interesting. The operator

[tex] \left| u \right> \left< v \right| [/tex]

is represented as

[tex] u \times v^{\dagger} [/tex]

the outer product of [itex] u \ [/itex] and [itex] v^{\dagger} [/itex] .

If I am not mistaken, [itex] Y = u \times v^{\dagger} [/itex] is a quantum mechanical operator that acts from the left on kets to return kets, and acts from the right on bras to return bras?
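These rules can all be checked in a few lines of numpy (a sketch under the stated conventions: kets as Nx1 arrays, bras as 1xN arrays, dagger as conjugate transpose):

[code]
import numpy as np

rng = np.random.default_rng(3)
n = 4
u = rng.normal(size=(n, 1)) + 1j * rng.normal(size=(n, 1))
v = rng.normal(size=(n, 1)) + 1j * rng.normal(size=(n, 1))
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def dag(M):
    return M.conj().T

# <u|X|v> = (<v|X^dagger|u>)^*
print(np.allclose(dag(u) @ X @ v, (dag(v) @ dag(X) @ u).conj()))  # True

# Y = |u><v| acts on kets from the left and on bras from the right.
Y = u @ dag(v)           # the outer product, an n x n matrix
ket_out = Y @ v          # a ket: |u> times the number <v|v>
bra_out = dag(u) @ Y     # a bra: the number <u|u> times <v|
print(ket_out.shape, bra_out.shape)  # (4, 1) (1, 4)
[/code]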
 
Last edited:
  • #98
Shouldn't you have

[tex] AB = ( B^{\dagger}A^{\dagger})^{\dagger}[/tex]

[tex] A^{\dagger}B = ( B^{\dagger}A)^{\dagger}[/tex]
 
  • #99
mrandersdk said:
Shouldn't you have

[tex] AB = ( B^{\dagger}A^{\dagger})^{\dagger}[/tex]

[tex] A^{\dagger}B = ( B^{\dagger}A)^{\dagger}[/tex]

Yes, of course you are right. Thank you, mrandersdk!

I forgot to include complex numbers, with

[tex] \left( \ c \left| u \right> \ \right)^{\dagger} = \left< u \right| c^\dagger [/tex]

represented as

[tex] ( c u )^\dagger = u^\dagger c^\dagger [/tex]

along with double daggers, like

[tex] \left( X^\dagger \right) ^\dagger = X [/tex]

I think this nearly completes an axiomatic set for manipulating equations.
 
Last edited:
  • #100
mrandersdk said:
You can't do that. There are ways to, e.g., relate rotations in 3D space (rotations of the lab frame) to kets describing a state, but these are, at least as I learned it, things you need to postulate.

But this is a fairly advanced topic, and to get something on this you need an advanced quantum mechanics book (again, Sakurai is a classic).

The short answer to why you can't do this is that we are dealing with quantum mechanics, and this is a whole lot different from Newtonian physics.

Also, the momentum can characterize a state, but the force on it can't, so that isn't a state. Quantum mechanics builds on Hamiltonian mechanics, and this formalism (roughly speaking) doesn't use forces; it uses potentials.

It seems like you haven't taken any courses in QM?

In truth, I've taught the subject several times, both to undergraduates and graduate students. You are, as are many in this thread, confusing content and notation. That is, sure, in Dirac notation |S> stands for a state vector. However, the operative word here is vector, any vector in fact. There's nothing in the definition of bras and kets that restricts them to QM.

Why not in mechanics, or E&M, or control engineering? A vector is a mathematical object. In physics, or in any quantitative discipline, we assign vectors to objects that we describe, naturally, by ordered pairs, or triplets, or n-tuples; each number in the n-tuple corresponds to a vector component in the appropriate space. All the stuff about transformation properties is contained in the characteristics of the appropriate space.


Dirac notation is nothing more, and nothing less, than one of many equivalent methods for working with linear vector spaces, finite or infinite, real or complex -- in fact, probably over most mathematical fields. All the confusion about transposes and adjoints, operators, direct products and so forth would arise in any notational scheme. An adjoint is an adjoint; the adjoint of a product of operators flips the order of the individual adjoints, and so forth.

(My suspicion is that Dirac invented his notation to make his writing and publication more simple. Rather than bolding or underlining variable names to indicate a vector, he chose his famous bra-ket notation because it was easier for him to write.)

Note that a multiparticle state, |p1, p2>, is not usually taken as a column vector, but rather a direct product of two vectors -- there are definitional tricks that allow the multiparticle states to be considered as a single column vector. So, as a direct product is a tensor, we've now got both tensors and, naturally, vectors in Dirac-land. A better way to do realistic tensors is to create them by tensor-like combinations of creation operators acting on the Fock-space vacuum. Recall also the extensive apparatus of spherical tensors in angular momentum theory. We can often consider both states and operators as tensors.

The Dirac notation is extensively and clearly discussed in Dirac's Quantum Mechanics -- he goes through virtually every issue raised in this thread at the end of his first chapter and in Chapters II and III. In my opinion, to understand QM as it is practiced today, one must study Dirac's book. The whole apparatus of QM notation and concepts as we know them today is largely defined and developed there. There's no substitute for the original.

Regards,
Reilly Atkinson
 
  • #101
Ok, I know we can use the notation for every vector space if we want. Of course we can do that. But I'm not sure why you say that multiparticle states are direct products?

If the particles are independent, you can write them as a tensor product of two vectors; if they are correlated, then you can't necessarily.

The reason I said your equation was wrong was because we were talking about QM, so it didn't make sense there.

Again, you are right that a vector is often described by an n-tuple, but as I have said many times in this thread, the tuple doesn't make sense without a basis telling us what it means. A bit like your equation didn't make sense because you didn't say what you meant by |P> and |F>.

The problem with the adjoint is writing the definition used in math,

[tex] \langle x, A y \rangle = \langle A^* x, y \rangle [/tex]

in Dirac's notation. You have to be very careful writing this.

I'm not sure what your point is about Fock space? Is it because, if we have a space describing one particle and we take the tensor product of two such states, then we are not in that space anymore, but the Fock space formalism incorporates this problem?

I haven't read Dirac's book, but it sounds interesting; I will look at it on my vacation, thanks for the reference. I agree that he made the notation because it made things simpler to write (maybe to remember some rules of manipulation), but I just think people often get a bit confused by it, because one learns QM with wavefunctions first and then learns bra-ket; then people often think that the wavefunction is used just like a ket, and it often isn't (even though you probably could, after all L^2 is a vector space).
 
  • #102
reilly said:
Note that a multiparticle state, |p1, p2> is not usually taken as a column vector, but rather a direct product of two vectors
...
So, as a direct product is a tensor
That last statement is (very) incorrect! The direct product of two vector spaces is quite different from their tensor product -- in fact, most quantum 'weirdness' stems from the fact that you use direct products classically but tensor products quantum mechanically.
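The dimension count makes the distinction vivid. A quick numpy sketch (kron is numpy's Kronecker/tensor product; the particular vectors here are just examples of mine):

[code]
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

direct = np.concatenate([v, w])  # classical pairing: dimensions add, 3 + 3 = 6
tensor = np.kron(v, w)           # quantum composite: dimensions multiply, 3 * 3 = 9

print(direct.size, tensor.size)  # 6 9
# Most vectors in the 9-dimensional tensor space are NOT of the form kron(v, w);
# those non-product states are the entangled ones, the 'weirdness' noted above.
[/code]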
 
  • #103
Phew, there was another one who found that a bit disturbing.
 
  • #104
What does this mean? I can't see how this can be correct:

"Note that a multiparticle state, |p1, p2> is not usually taken as a column vector, but rather a direct product of two vectors "

Maybe I just can't read it, but what does

"So, as a direct product is a tensor"

mean?
 
Last edited:
  • #105
By the way,

[tex]\mathbb{R}\otimes\mathbb{R} ~=~ \mathbb{R}^2[/tex]

and

[tex]\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3n}[/tex]

is not correct. It is

[tex]\mathbb{R}\otimes\mathbb{R} ~=~ \mathbb{R}[/tex]

and

[tex]\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3^n}[/tex]
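Both corrections follow from the rule [itex]\dim(V \otimes W) = \dim V \cdot \dim W[/itex]: for [itex]\mathbb{R}\otimes\mathbb{R}[/itex] that gives 1, and for n factors of [itex]\mathbb{C}^3[/itex] it gives [itex]3^n[/itex]. A quick numpy check of the latter (a sketch, using kron as the tensor product):

[code]
import numpy as np
from functools import reduce

n = 4
factors = [np.ones(3)] * n            # n copies of a vector in C^3
print(reduce(np.kron, factors).size)  # 81 = 3**4, i.e. 3^n rather than 3n
[/code]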
 
Last edited:
