How are wavefunctions and eigenkets used differently in quantum mechanics?

In summary, the wave function in the Hilbert-space formulation is determined by a set of compatible self-adjoint operators A_1,\ldots,A_n whose common eigenkets form a complete orthonormalized set of (generalized) eigenvectors |a_1,\ldots,a_n \rangle. The physical meaning of this wave function is given by Born's rule.
  • #1
carllacan
Hi.

I've been learning quantum mechanics from different sources and I'm starting to notice that they have really different ways of treating certain things.

For example: in some places Griffiths works with wavefunctions, while Sakurai works with eigenkets. This confuses me. As I understand it, wavefunctions are simply the coefficients of a state's expansion in the position basis. How then can an operator "act" on a wavefunction?

Thank you for your time.
 
  • #2
##A \psi(x) = \langle x |A|\psi \rangle##.
 
  • #3
Just to be a little bit more explicit, and cover a little more of the notation that you probably see:

$$A\psi(x)\equiv A\left<x|\psi\right>\equiv\left<x|A|\psi\right>$$
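To make the bookkeeping concrete, here is a minimal finite-dimensional sketch (my own illustration, not from the posts above, assuming numpy): the state is a coefficient vector, an observable is a Hermitian matrix, and "the operator acting on the wavefunction" is just a matrix-vector product whose components are the numbers ##\langle x|A|\psi\rangle##.

```python
import numpy as np

# Hypothetical 4-state system: |psi> as a coefficient vector in a
# discrete "position" basis, an observable A as a Hermitian matrix.
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)          # normalize <psi|psi> = 1

M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2            # make A self-adjoint

# "A acting on the wavefunction" is the matrix-vector product:
Apsi = A @ psi                      # the new function x -> <x|A|psi>

# <x|A|psi> for one basis label x is a single component of it,
# and the action is linear, as an operator must be:
x = 2
assert np.isclose(Apsi[x], (A @ psi)[x])
assert np.allclose(A @ (2 * psi), 2 * Apsi)
```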
 
  • #4
I think you need to become acquainted with Gleason's theorem
http://kof.physto.se/cond_mat_page/theses/helena-master.pdf

The fundamental thing is the observables. Gleason's theorem (plus a couple of other things, the most important of which is non-contextuality) implies that a positive operator P of unit trace exists such that the expected value of an observable O is trace(PO).

By definition P is called the state of the system. States of the form |x><x| are called pure; they are the states Griffiths is talking about, and for them trace(PO) = <x|O|x>.
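As a toy illustration of the trace formula (my own sketch in numpy, with a randomly chosen observable; it checks the arithmetic, not Gleason's theorem itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
v = rng.normal(size=n) + 1j * rng.normal(size=n)
v /= np.linalg.norm(v)              # unit vector |x>

P = np.outer(v, v.conj())           # pure state P = |x><x|
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
O = (M + M.conj().T) / 2            # Hermitian observable

# expectation value trace(P O) equals <x|O|x> for a pure state
lhs = np.trace(P @ O)
rhs = v.conj() @ O @ v
assert np.isclose(lhs, rhs)

# P has unit trace, as the theorem requires
assert np.isclose(np.trace(P), 1.0)
```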

Mathematically this is probably a bit advanced for you right now, but that's the reason Griffiths and books at that level can be a bit confusing if you think a bit beyond their cookbook approach. You really need a bit more mathematical depth for the subtleties to be clear.

I suggest reading the first three chapters of Ballentine
https://www.amazon.com/dp/9810241054/?tag=pfamazon01-20

It's mathematically advanced, and you probably won't understand it all, but you will likely get the gist of the correct approach.

Thanks
Bill
 
  • #5
The notation in theoretical physics is sometimes a bit messy, so here are some remarks on the relation between the abstract Hilbert-space formulation and the wave-mechanics formulation (of non-relativistic quantum theory; in relativistic QT it doesn't make much sense to talk about "wave functions", but that's another story).

In the Hilbert-space formulation a (pure) state is represented by a ray, which in turn is represented by any unit-vector [itex]|\psi \rangle[/itex] contained in the ray. This unit vector is thus only determined up to a phase factor, which is irrelevant for any physically relevant quantities.

An observable of the system is represented by a self-adjoint operator on Hilbert space. To define the wave function one needs a complete set of compatible observables, i.e., a corresponding set of mutually commuting self-adjoint operators [itex]\{A_1,\ldots,A_n \}[/itex] whose common eigenkets form a complete orthonormalized set of (generalized) eigenvectors [itex]|a_1,\ldots,a_n \rangle[/itex], where the [itex]a_j \in \mathbb{R}[/itex] run over the spectrum of the operators [itex]A_j[/itex]. By "generalized" I mean that the spectra of some (or all) of the operators can contain continuous parts (or be entirely continuous).

Then the wave function, expressed with respect to this basis, is defined as
[tex]\psi(a_1,\ldots,a_n)=\langle a_1,\ldots,a_n|\psi \rangle.[/tex]
The physical meaning of this wave function is given by Born's rule, i.e., [itex]|\psi(a_1,\ldots,a_n)|^2[/itex] is the probability (distribution) for measuring common values [itex](a_1,\ldots,a_n)[/itex] when measuring the observables represented by the self-adjoint operators [itex]A_1,\ldots,A_n[/itex].

The action of an arbitrary operator [itex]B[/itex] on the wave function is given by
[tex]\hat{B} \psi(a_1,\ldots,a_n)=\langle a_1,\ldots,a_n|B \psi \rangle.[/tex]
Note that [itex]\hat{B}[/itex] is something different from the abstract operator [itex]B[/itex]. The latter is defined only indirectly by some algebra of observables, while [itex]\hat{B}[/itex] is the representation of that operator in the concrete realization of the Hilbert space as a space of wave functions.

As an example take the usual case of a single spinless particle, whose observable algebra is spanned by the position and momentum operators [itex]x_j[/itex] and [itex]p_j[/itex] ([itex]j \in \{1,2,3\}[/itex] labeling the components of the corresponding vectors with respect to a Cartesian basis), obeying the Heisenberg-algebra commutation relations
[tex][x_j,x_k]=[p_j,p_k]=0, \quad [x_j,p_k]=\mathrm{i} \hbar \delta_{jk}.[/tex]
As a complete common set of observables we can choose the three position-vector components of the particle, each of which has the entire real axis as its spectrum. The corresponding generalized eigenvectors are by definition normalized as
[tex]\langle \vec{x}|\vec{y} \rangle=\delta^{(3)}(\vec{x}-\vec{y}).[/tex]
Then the wave function is
[tex]\psi(\vec{x})=\langle \vec{x}|\psi \rangle.[/tex]
Since [itex]|\vec{x} \rangle[/itex] is a common eigenvector of the three position-vector components, we have
[tex]\hat{x}_j \psi(\vec{x}):=\langle \vec{x}|\mathbf{x}_j \psi \rangle=\langle \mathbf{x}_j \vec{x}|\psi \rangle=x_j \langle \vec{x}|\psi \rangle=x_j \psi(\vec{x}).[/tex]
Here, I've written the abstract position vector in bold letters to distinguish it from the (generalized) eigenvalues [itex]x_j[/itex].

For the momentum operator one has to use the commutation relations, which say nothing but that the momentum operators are the generators of spatial translations, to show that
[tex]\hat{p}_j \psi(\vec{x})=-\mathrm{i} \hbar \frac{\partial}{\partial x_j} \psi(\vec{x}).[/tex]
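A quick numerical sanity check of these two representation formulas (my own sketch, with ##\hbar = 1## and a Gaussian chosen purely for convenience):

```python
import numpy as np

# Check p-hat psi = -i dpsi/dx (hbar = 1) on a grid, for a Gaussian
# whose derivative is known analytically.
x = np.linspace(-8, 8, 2001)
psi = np.exp(-x**2 / 2)

p_psi_numeric = -1j * np.gradient(psi, x)   # -i psi', finite differences
p_psi_exact = -1j * (-x) * psi              # d/dx exp(-x^2/2) = -x exp(-x^2/2)

# agreement away from the grid edges
interior = slice(10, -10)
assert np.allclose(p_psi_numeric[interior], p_psi_exact[interior], atol=1e-3)

# x-hat acts by plain multiplication: (x-hat psi)(x) = x * psi(x)
x_psi = x * psi
assert np.allclose(x_psi, x * psi)
```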
 
  • #6
carllacan said:
Hi.

I've been learning quantum mechanics from different sources and I'm starting to notice that they have really different ways of treating certain things.

For example: in some places Griffiths works with wavefunctions, while Sakurai works with eigenkets. This confuses me. As I understand it, wavefunctions are simply the coefficients of a state's expansion in the position basis. How then can an operator "act" on a wavefunction?

Thank you for your time.

Yes, there are very different approaches and each has something to offer. They are so different that it can be confusing to try to use them both at the same time. One can stick with wave functions ##\psi## most of the time in non-relativistic wave mechanics, when dealing with atoms and molecules. This is how Landau and John Slater do it in their books. Wave functions are closer to the standard mathematics one knows from the field of differential equations, and they are more concrete objects: their modulus squared directly gives the probability density (according to Born's rule for ##\psi(\mathbf r)##), and they can be used to calculate matrix elements of the electric moment or other quantities. Typical calculations of the spectrum for a given Hamiltonian are easier with wave functions than with abstract operators or matrices.

How can an operator act on the wave function ##\psi##? Simply: it produces another function. For example, take the operator for momentum along the ##x## axis. The result of the operation is another function
$$
(\hat{p}_x \psi)(x) = \frac{\hbar}{i} \frac{\partial \psi(x)}{\partial x}.
$$
Kets look nice and cool, but as far as I know they do not give any practical advantage for solving problems in non-relativistic theory. Things are different in the quantum theory of radiation, which is based on the ket formalism. But my advice is to understand what is going on with wave functions first, and only then delve into the intricacies of the field and ket formalisms - there are quite a few subtleties (generalized eigenkets, the so-called rigged Hilbert space, etc.).
 
  • #7
Thank you for your answers.

I think I'll follow bhobba's suggestion and get as much as I can from Ballentine and Sakurai, and then go back to the simplified Griffiths version, as that is mostly the approach my university courses follow.
 
  • #8
@Jano L. You are right to some extent: when it comes to practical calculations in non-relativistic QT the wave-function approach is the most useful, but it is not the simplest conceptual basis for learning QT, and often it is much easier to set up a problem in the abstract formalism, even if you solve it in the end using the well-established methods of partial differential equations for the Schrödinger or Pauli equations.

I find that the best book for advanced beginners is Sakurai, Modern Quantum Mechanics. It doesn't start with wave functions but from the real foundations, and with the simplest systems (like spin 1/2, which needs only a two-dimensional complex Hilbert space).

For the understanding of the interpretational problems and for some mathematical subtleties (such as the fact that physicists nowadays do not use the original Hilbert-space approach but the so-called rigged Hilbert space) I'd recommend Ballentine. Also very good as an advanced text is Weinberg, Lectures on Quantum Mechanics.

For a lot of wave-mechanical applications you can use the classical texts like Landau and Lifshitz (vol. 3 of the theoretical physics course) or Messiah. Pauli's classical text and the quantum-mechanics volume of his lectures on theoretical physics are also marvelous.

I don't know Griffiths's book well enough to say anything about it. Given his excellent electrodynamics book, I guess it's pretty good too.
 
  • #9
often it is much easier to set up a problem in the abstract formalism, even if you solve it in the end using the well-established methods of partial differential equations for the Schrödinger or Pauli equations.
Could you give some example ?
 
  • #10
E.g., deriving the general equations of time-independent or time-dependent perturbation theory is much clearer in the abstract formalism than in the wave-mechanics approach, but that's of course subjective. I forgot to mention that sometimes the third approach, Feynman's path-integral formulation, also has its merits. For time-dependent perturbation theory I'd even prefer it to the operator method.
 
  • #11
Is it correct then to operate as follows?
[itex]\int dx [\hat{P}ψ(x)]^*[\hat{P}ψ(x)] = \int dx \hat{P} ψ(x)^* \hat{P} ψ(x) = \int dx\:ψ(x)^*\hat{P}^2ψ(x) [/itex]
(assuming P is hermitian)
 
  • #12
The middle and the final expressions are incorrect. The correct result is
$$
\int dx\, \psi^* \hat{p}^2 \psi,
$$
since the operator is hermitian.
 
  • #13
Jano L. said:
The middle and the final expressions are incorrect. The correct result is
$$
\int dx\, \psi^* \hat{p}^2 \psi,
$$
since the operator is hermitian.

Thanks, I forgot the ² in the final expression. Why is the middle one wrong, though?
 
  • #14
carllacan said:
Is it correct then to operate as follows?
[itex]\int dx [\hat{P}ψ(x)]^*[\hat{P}ψ(x)] = \int dx \hat{P} ψ(x)^* \hat{P} ψ(x) = \int dx\:ψ(x)^*\hat{P}^2ψ(x) [/itex]
(assuming P is hermitian)

Are you trying to find the expectation value of ##\hat P## or of ##\hat P^2##?

Assuming you're trying to find the expectation value of something, the first and second expressions are both invalid. The third one gives the expectation value of ##\hat P^2##. The expectation value of ##\hat P## would be ##\int dx\:ψ(x)^*\hat{P}ψ(x)##.
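The Hermiticity identity under discussion, ##\int |\hat P\psi|^2\,dx = \int \psi^* \hat P^2 \psi\, dx##, can be checked numerically. Here is a sketch (my own, with ##\hbar = 1## and ##\hat P## taken as the momentum operator on a normalized Gaussian, for which ##\langle p^2\rangle = 1/2##):

```python
import numpy as np

# For the normalized Gaussian psi(x) = pi^(-1/4) exp(-x^2/2), check that
#   integral |p-hat psi|^2 dx  ==  integral psi* p-hat^2 psi dx
# as expected for a Hermitian operator, and that both equal <p^2> = 1/2.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = np.pi**-0.25 * np.exp(-x**2 / 2)

p_psi = -1j * np.gradient(psi, x)              # p-hat psi = -i psi'
p2_psi = -np.gradient(np.gradient(psi, x), x)  # p-hat^2 psi = -psi''

lhs = np.sum(np.abs(p_psi)**2) * dx            # int |p psi|^2 dx
rhs = np.sum(psi.conj() * p2_psi).real * dx    # int psi* p^2 psi dx

assert np.isclose(lhs, rhs, atol=1e-3)
assert np.isclose(lhs, 0.5, atol=1e-3)

# <p> itself vanishes for this real, even wavefunction
p_expect = np.sum(psi.conj() * p_psi).real * dx
assert abs(p_expect) < 1e-6
```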
 
  • #15
jtbell said:
Are you trying to find the expectation value of ##\hat P## or of ##\hat P^2##?

Assuming you're trying to find the expectation value of something, the first and second expressions are both invalid. The third one gives the expectation value of ##\hat P^2##. The expectation value of ##\hat P## would be ##\int dx\:ψ(x)^*\hat{P}ψ(x)##.

No, I just have to prove that the first expression is equal to the third one.
 
  • #16
carllacan said:
Thank you for your answers.

I think I'll follow bhobba's suggestion and get as much as I can from Ballentine and Sakurai, and then go back to the simplified Griffiths version, as that is mostly the approach my university courses follow.
It's not a bad idea to study the more general type of "state", but this won't help you understand the specific thing you asked about when you started the thread.
 
  • #17
Is ##\hat P## supposed to be the momentum operator (which is usually lower-case ##\hat p##), or is it the parity operator (which is usually upper-case ##\hat P##)? Or is it supposed to be a generic unspecified Hermitian operator?
 
  • #18
If we ignore all the issues associated with the fact that position and momentum are unbounded observables, then the main difference between normalizable kets and wavefunctions is simply this:
A normalizable ket ##|\alpha\rangle## is an element of a Hilbert space. A wavefunction ##\psi## is an element of the Hilbert space of square-integrable complex-valued functions on ##\mathbb R^3##.​
So the only difference is that in the latter case, we say that the Hilbert space is specifically ##L^2(\mathbb R^3)##, and in the former case, we don't.

If I remember correctly, Sakurai uses the formula ##\psi_\alpha(x)=\langle x|\alpha\rangle## to define wavefunctions from kets. I remember doing some calculations involving this formula when I took a class based on that book, and at the time I felt like they explained a lot. (I don't feel that way anymore, because now I understand how hard it is to justify these steps). For example, if we already know that ##e^{-ipl}## translates states a distance ##l## (i.e. ##e^{-ipl}|x\rangle=|x+l\rangle##), then we can do stuff like this:
$$\psi_\alpha(x-l) =\langle x-l|\alpha\rangle=\langle x|e^{-ipl}|\alpha\rangle$$
If we subtract ##\psi_\alpha(x)## from both sides, divide by ##-l##, and take the limit ##l\to 0##, we get
$$\frac{d}{dx}\psi_\alpha(x)=\langle x|ip|\alpha\rangle =i\langle x|p|\alpha\rangle.$$
In texts at the level of Sakurai, this is justified by using Taylor's formula to rewrite the left-hand side, and the power series definition of the exponential function to rewrite the right-hand side. The problem with that is that the power series definition only works when the exponent is a bounded operator. Anyway, if we still trust the result, we have "derived" that the p operator on the space of kets corresponds to ##-i\frac{d}{dx}## on the space of wavefunctions defined through the formula ##\psi_\alpha(x)=\langle x|\alpha\rangle##.
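The translation property itself is easy to check numerically. A sketch (my own, with ##\hbar = 1##, on a periodic grid where ##e^{-ipl}## is diagonal in the Fourier basis because ##p## acts there as multiplication by ##k##):

```python
import numpy as np

# Apply exp(-i p l) in the momentum (Fourier) representation and
# check that it shifts a Gaussian wavepacket by l.
n = 1024
L = 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)      # momentum grid

psi = np.exp(-x**2 / 2)                      # wavepacket centered at 0
l = 3.0

# exp(-i p l) is diagonal in momentum space: multiply by exp(-i k l)
shifted = np.fft.ifft(np.exp(-1j * k * l) * np.fft.fft(psi))

assert np.allclose(shifted.real, np.exp(-(x - l)**2 / 2), atol=1e-8)
assert np.allclose(shifted.imag, 0, atol=1e-8)
```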

I'm not sure what to think of calculations like this, where we just pretend that the mathematical difficulties don't exist.

So what if we don't ignore the fact that position and momentum are unbounded? For starters, we can no longer say that the kets are elements of a Hilbert space. There's still a Hilbert space associated with the theory, but kets aren't elements of it. The proper way to define them involves choosing an appropriate subspace ##\Omega## of the Hilbert space, and then defining bras as continuous linear functionals on ##\Omega## and kets as continuous antilinear functionals on ##\Omega##. The notation ##\langle x|\alpha\rangle## shouldn't be interpreted as an inner product. It should be interpreted as ##\langle x|\big(|\alpha\rangle\big)##, i.e. the value of ##\langle x|## at ##|\alpha\rangle##, where ##|\alpha\rangle## is an element of ##\Omega##. Now I think the formula ##\psi_\alpha(x)=\langle x|\alpha\rangle## actually makes sense, and I think the steps of the calculation above can be justified too. It's pretty far from trivial though.
 
  • #19
Fredrik said:
I'm not sure what to think of calculations like this, where we just pretend that the mathematical difficulties don't exist.

Of course they are, as far as satisfying any kind of reasonable mathematical rigour is concerned, a load of rubbish.

That's why we move to the rigged-Hilbert-space formalism, but mathematically it's quite difficult. Some books do tackle these difficult issues, eg:
https://www.amazon.com/dp/146147115X/?tag=pfamazon01-20

And of course Ballentine gives a brief overview.

But I have to say, and I fell into this trap so I know how seductive it is, it's not really germane to the physics. It's like analysis vs calculus. Calculus is just fine for doing physics; the deeper mathematical issues such as convergence, the least upper bound axiom, completeness of the reals etc. are the domain of analysis and aren't really that important. If you want to call yourself a mathematician you need to do your epsilonics, but physics isn't quite that worried. You do run into issues with such a naive approach, but in practice it's usually (not always, but usually) OK.

Handwavey math works pretty well, especially to start with. You can delve into the technical details of mathematical rigour as mood and interest grabs you.

Personally, when dealing with fundamental issues in QM, I consider everything a finite-dimensional vector space (those are the physically realizable states) that is extended by the rigged-Hilbert-space formalism to include linear functionals, purely for mathematical convenience, so we can take derivatives etc. Our calculations throw up things like waves that extend to infinity; obviously that is just an idealisation, but in certain situations it is a nice way to model things. Dirac delta functions don't exist either, but they are nice for modelling.

Just don't expect rigour unless you are willing to delve into some rather hairy math, as a quick browse through, say, Gelfand's tome on distribution theory and rigged Hilbert spaces readily attests.

Added Later:
BTW, rigged Hilbert spaces are not the only way to rigorously develop this stuff - non-standard analysis also has a take, eg:
http://projecteuclid.org/download/pdf_1/euclid.pja/1195523279

But if you think rigged Hilbert spaces are hairy, take my word for it, non-standard analysis is a whole new ball game - groan. I know - I was into it at one time and worked through the details. Without those details it's not too bad though, and we even have some elementary textbooks that take that approach:
https://www.amazon.com/dp/0871509113/?tag=pfamazon01-20

But if you want rigour without the dreaded "it can be shown" - groan - you are in for a whole world of hurt.

Thanks
Bill
 
  • #20
bhobba said:
Personally, when dealing with fundamental issues in QM, I consider everything a finite-dimensional vector space (those are the physically realizable states) that is extended by the rigged-Hilbert-space formalism to include linear functionals, purely for mathematical convenience, so we can take derivatives etc. Our calculations throw up things like waves that extend to infinity; obviously that is just an idealisation, but in certain situations it is a nice way to model things. Dirac delta functions don't exist either, but they are nice for modelling.

Thanks
Bill

Do you really mean finite dimensional, or do you include countably infinite dimensional spaces? If you stay with finite dimensional vector spaces, I don't know how you possibly could express any sort of position representation. But in countably infinite spaces, you could work with e.g. the Hermite polynomials, which would give you a sort of (non-intuitive) position representation.

I cannot imagine how you would accomplish such a task with a finite number of elements.
 
  • #21
Matterwave said:
Do you really mean finite dimensional, or do you include countably infinite dimensional spaces? If you stay with finite dimensional vector spaces, I don't know how you possibly could express any sort of position representation. But in countably infinite spaces, you could work with e.g. the Hermite polynomials, which would give you a sort of (non-intuitive) position representation.

I mean finite dimensional.

You can't do the usual formalism in finite dimensions. That's why you extend it to include functionals defined on the space of all finite sequences, which includes the infinite-dimensional Hilbert space, rigged Hilbert spaces, all sorts of stuff. In fact the space of such functionals is the maximal space you can have in a Gelfand triple. But that's just for mathematical convenience. In practice you never have, say, a state of exact momentum; its expansion as a wavefunction is of infinite extent and obviously physically unrealisable. The physically realizable states are always finite in number. Take position: a particle obviously has zero chance of being some huge distance away, say at the other side of the universe, and you can't measure position exactly, so you will get a finite, but large, number of actual outcomes. For convenience we consider the number so large it goes over to a continuum and we can apply the methods of calculus.

Thanks
Bill
 
  • #22
bhobba said:
I mean finite dimensional.

You can't do the usual formalism in finite dimensions. That's why you extend it to include functionals defined on the space of all finite sequences, which includes the infinite-dimensional Hilbert space, rigged Hilbert spaces, all sorts of stuff. In fact the space of such functionals is the maximal space you can have in a Gelfand triple. But that's just for mathematical convenience. In practice you never have, say, a state of exact momentum; its expansion as a wavefunction is of infinite extent and obviously physically unrealisable. The physically realizable states are always finite in number. Take position: a particle obviously has zero chance of being some huge distance away, say at the other side of the universe, and you can't measure position exactly, so you will get a finite, but large, number of actual outcomes. For convenience we consider the number so large it goes over to a continuum and we can apply the methods of calculus.

Thanks
Bill

How could you, even in principle, find out the dimensionality of your vector space then? The phrase "there's 0 chance of the particle being on the other side of the universe" is well and good, but it doesn't give you any quantitative way of saying exactly what that maximum range of possibilities might be. It seems absurd to me to say "the particle could be this far from here, but not a single Planck length farther to the left, I know with 100% certainty this fact". Do you just make the cut off where you think "well beyond here seems absurd, so let's just cut it off here"?
 
  • #23
Matterwave said:
How could you, even in principle, find out the dimensionality of your vector space then?

The space of all sequences with only finitely many non-zero terms is infinite-dimensional, but every element of it lies in a finite-dimensional subspace.

There is no way to tell whether the states we use in QM are really elements of such a space, and assuming they are makes foundational issues a lot easier.

Basically it's the rigged-Hilbert-space view. Any functional defined on the space of finite sequences is the limit of a sequence in that space via the weak topology (i.e. via the limit of <ai|u>). So in a sense such functionals are simply approximations used for convenience. The space of bounded functionals is a Hilbert space (in the usual norm; the other spaces are Hilbert spaces as well, but in funny norms - in fact you have sequences of norms defined on them - groan - that's part of the mathematical non-triviality of this stuff - translation: it's hard). Taking subspaces of that Hilbert space along with their duals, you get various Gelfand triples. These are the spaces used in QM. But for convenience one can always assume the physically realizable states are elements of that space of finite sequences. The math really is a lot easier. In practice, to make use of the techniques of calculus etc., one approximates them via various spaces that have infinite dimension, but in this view they are simply approximations.

Thanks
Bill
 
  • #24
So if I ignore the fundamental mathematical details, as much as I ignore analysis when doing calculus, am I safe treating the wavefunctions ψ as scalars on which linear operators act?
 
  • #25
carllacan said:
So if I ignore the fundamental mathematical details, as much as I ignore analysis when doing calculus, am I safe treating the wavefunctions ψ as scalars on which linear operators act?

Basically - yes.

But do give the first 3 chapters of Ballentine a read - it will be a lot clearer. He examines the subtleties a lot better than most.

But if you want rigour be prepared for a whole world of hurt.

Thanks
Bill
 
  • #26
carllacan said:
So if I ignore the fundamental mathematical details, as much as I ignore analysis when doing calculus, am I safe treating the wavefunctions ψ as scalars on which linear operators act?
As scalars, no. As vectors in a finite-dimensional vector space like ##\mathbb C^n##, yes.
 
  • #27
Fredrik said:
As scalars, no. As vectors in a finite-dimensional vector space like ##\mathbb C^n##, yes.

Why is that? If ψ(x) = <x|ψ>, then by the definition of the inner product it is a scalar.
 
  • #28
carllacan said:
Why is that? If ψ(x) = <x|ψ>, then by the definition of the inner product it is a scalar.

We have that ##\psi(x)## is a scalar, sure. But that doesn't mean that ##\psi## or ##x## are scalars.
 
  • #29
micromass said:
We have that ##\psi(x)## is a scalar, sure. But that doesn't mean that ##\psi## or ##x## are scalars.
I was referring to the wavefunctions. Wherever I see ψ(x), can I treat it like a scalar?

PS: Excuse me for asking so much, I just want to get it right once and for all.
 
  • #30
carllacan said:
I was referring to the wavefunctions. Wherever I see ψ(x), can I treat it like a scalar?

PS: Excuse me for asking so much, I just want to get it right once and for all.

Don't excuse yourself for asking. We are here to answer questions like yours!

But yes, ##\psi(x)## is a scalar, so you can treat it like one.

The wavefunction ##\psi## itself must be treated as a vector.
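A toy illustration of that distinction (my own sketch in numpy; the three-component vector is arbitrary):

```python
import numpy as np

# The wavefunction psi is a vector (here in C^3);
# a single value psi(x) = <x|psi> is a scalar component of it.
psi = np.array([0.6, 0.0, 0.8j])        # the vector |psi>
assert np.isclose(np.linalg.norm(psi), 1.0)

x = 2                                    # a basis label
psi_at_x = psi[x]                        # the scalar <x|psi>
assert np.isscalar(psi_at_x)

# An operator acts on the vector psi, not on the scalar psi(x):
A = np.diag([1.0, 2.0, 3.0])
Apsi = A @ psi                           # another vector
assert Apsi.shape == psi.shape
```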
 
  • #31
micromass said:
Don't excuse yourself for asking. We are here to answer questions like yours!

But yes, ##\psi(x)## is a scalar, so you can treat it like one.

The wavefunction ##\psi## itself must be treated as a vector.

Thank you for your help!
 

FAQ: How are wavefunctions and eigenkets used differently in quantum mechanics?

1. What is the difference between a wavefunction and an eigenket?

A wavefunction is a mathematical function that describes the quantum state of a system. It contains information about the probability of finding a particle at a certain position or with a certain momentum. An eigenket, on the other hand, is a vector in a Hilbert space representing a state in which a given observable has a definite value (its eigenvalue); the energy eigenkets, for example, are solutions of the time-independent Schrödinger equation.

2. How are wavefunctions and eigenkets related?

Wavefunctions and eigenkets are related through a choice of basis. The wavefunction of a system can be expressed as a linear combination of eigenkets, with the coefficients giving the probability amplitude for each eigenket. In other words, the eigenkets form a basis in which the wavefunction is expanded.
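A minimal sketch of this expansion (illustrative only; the 2x2 "Hamiltonian" is arbitrary):

```python
import numpy as np

# Expand a state in the eigenkets of a Hermitian "Hamiltonian";
# the coefficients are the probability amplitudes.
H = np.array([[1.0, 0.5], [0.5, 2.0]])    # toy Hermitian operator
energies, eigenkets = np.linalg.eigh(H)   # columns are eigenvectors

psi = np.array([1.0, 0.0])                # some normalized state
c = eigenkets.conj().T @ psi              # amplitudes c_n = <n|psi>

# The state is recovered as a linear combination of eigenkets ...
assert np.allclose(eigenkets @ c, psi)
# ... and the |c_n|^2 are probabilities summing to one (Born's rule)
assert np.isclose(np.sum(np.abs(c)**2), 1.0)
```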

3. What is the significance of eigenkets in quantum mechanics?

Eigenkets are essential in quantum mechanics because they represent the possible states of a system. The energy eigenkets, in particular, are used to calculate the energy spectrum of a system and determine the probabilities of different energy levels. They also play a crucial role in the measurement process, as the measured value of an observable corresponds to the eigenvalue of the corresponding eigenket.

4. How are wavefunctions and eigenkets used differently in quantum mechanics?

Wavefunctions are used to describe the quantum state of a system and calculate probabilities of different measurements. They are also used to evolve the state of a system over time using the Schrödinger equation. Eigenkets, on the other hand, are used to represent the possible states of a system and calculate the energy spectrum. They are also used in the measurement process to determine the outcome of a measurement.

5. Can a system have multiple eigenkets?

Yes, a system can have multiple eigenkets; in fact, most systems have infinitely many, each corresponding to a different eigenvalue (for example, a different energy level). The superposition principle allows multiple eigenkets to be combined into a wavefunction representing a combination of different states of the system, and this is a fundamental concept in quantum mechanics.
