Does Bra-Ket Notation Clarify the Gradient Operator in Quantum Mechanics?

  • #1
christianmetzgeer
I'm trying to understand the gradient as an operator in bra-ket notation. Does the following make sense?

$$\langle \psi | \nabla_R | \psi \rangle = 1/R$$

where ##\nabla_R## is the gradient operator. I mean, do the ##\psi## simply drop out in this case?

Equally, would it make any sense to use ##R## as the wave function?

$$\langle R | \nabla_R | R \rangle = 1/R$$
 
  • #2
christianmetzgeer said:
I'm trying to understand the gradient as an operator in bra-ket notation. Does the following make sense?

$$\langle \psi | \nabla_R | \psi \rangle = 1/R$$

where ##\nabla_R## is the gradient operator. I mean, do the ##\psi## simply drop out in this case?
It doesn't make sense. ##| \psi \rangle## is a state vector, not a function of position (or any other variable). ##\nabla_R## can only appear when the state vector is projected onto the basis of position states, otherwise it is a more generic operator, such as momentum ##\hat{P}##.
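A quick numerical illustration of that point (a sketch of my own, not from any book, with ##\hbar = 1## and the derivative approximated by finite differences): the abstract operator ##\hat{P}## acts as a derivative only after projecting onto position eigenstates, ##\langle x|\hat{P}|\psi\rangle = -i\,\partial_x \psi(x)##.

```python
import numpy as np

# Sketch: P-hat becomes a derivative only in the position representation.
# Gaussian packet with mean momentum k0; all constants are arbitrary
# illustration choices (hbar = 1).
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
k0 = 1.5
psi = np.exp(-x**2 / 4.0) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize <psi|psi> = 1

p_psi = -1j * np.gradient(psi, x)                # <x|P|psi> = -i d/dx psi(x)
expect_p = np.sum(np.conj(psi) * p_psi) * dx     # <psi|P|psi>

print(expect_p.real)   # close to k0 = 1.5; imaginary part ~ 0
```

The real part comes out as the mean momentum ##k_0## of the packet, and the imaginary part vanishes, as it must for a Hermitian operator.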
 
  • #3
DrClaude said:
It doesn't make sense. ##| \psi \rangle## is a state vector, not a function of position (or any other variable). ##\nabla_R## can only appear when the state vector is projected onto the basis of position states, otherwise it is a more generic operator, such as momentum ##\hat{P}##.

Yes, sorry. I should have written ##n(R(t))## as a basis of eigenstates:

$$\langle n(R) | \nabla_R | n(R) \rangle = 1/R$$

So does this make sense, in that the gradient ##\nabla_R## is the operator? I'm trying to understand Berry's derivation of the geometric phase.

And can I take this even further, as a second-order gradient?

$$\langle n(R) | \nabla_R^2 | n(R) \rangle = -1/R^2$$
 
  • #4
christianmetzgeer said:
Yes, sorry. I should have written ##n(R(t))## as a basis of eigenstates:

$$\langle n(R) | \nabla_R | n(R) \rangle = 1/R$$

So does this make sense, in that the gradient ##\nabla_R## is the operator? I'm trying to understand Berry's derivation of the geometric phase.

And can I take this even further, as a second-order gradient?

$$\langle n(R) | \nabla_R^2 | n(R) \rangle = -1/R^2$$
That's a misuse of the Dirac notation. A ket should be independent of any representation. Have you seen such notation in a book?
 
  • #5
DrClaude said:
That's a misuse of the Dirac notation. A ket should be independent of any representation. Have you seen such notation in a book?

I'm working from Durstberger's Thesis on GEOMETRIC PHASES IN QUANTUM THEORY

http://physics.gu.se/~tfkhj/Durstberger.pdf

equation (2.2.9) in the section on the derivation of the Geometric Phase.
 
  • #6
Looking at that thesis, I see that ##R## doesn't represent position, but some parameter that is varied.

Honestly, I'm not completely comfortable with that notation. Maybe those more mathematically versed in QT can help (@vanhees71 or @A. Neumaier, maybe?).
 
  • #7
I think the notation is fine. The point is that ##R## in this context is not an operator in a Hilbert space; it is a parameterization of a "family" of Hilbert spaces. That is, you have a base manifold, the ##R##-space, and at each particular value of ##R## sits a Hilbert space ##H_R##. The Hilbert spaces are all isomorphic to each other, of course, but not in an "obvious" way. The appropriate mathematical structure is a fibre bundle, and the linked thesis starts explaining it on p. 61, probably better than I ever could.

The key point is: state vectors ##|n\rangle## in a single Hilbert space ##H## generalize to (smooth) "sections" ##|n(R)\rangle## of the fibre bundle, i.e. you choose a state vector in each Hilbert space "in a smooth way". Also, ##\nabla_R## is not an operator in Hilbert space; it is just the naive way to take derivatives of sections (or state vector fields) along the base manifold. ##A = \langle n(R)|\nabla_R|n(R)\rangle## is not an expectation value; it is the coefficient of the (Berry) connection 1-form, which tells you how to actually take derivatives: ##\nabla_R \rightarrow \nabla_R - iA## (or ##+##, I forget which), which is then called the covariant derivative. ##A## is then also called the "gauge potential".

I'm sorry if this wasn't really intelligible, but as I said, I think the linked thesis already explains it quite well.
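As a concrete toy model of the section ##|n(R)\rangle## and its connection (a hypothetical sketch, not taken from the thesis): drag the ground state of a spin-1/2 in a field ##\hat{n}(\theta,\phi)\cdot\vec{\sigma}## around a circle of constant polar angle ##\theta##. A discrete, gauge-invariant version of the loop integral of ##A## gives the Berry phase ##\pi(1-\cos\theta)## (mod ##2\pi##):

```python
import numpy as np

# Sketch: discrete Berry phase of a spin-1/2 ground state transported
# around a circle of polar angle theta.  Expected: pi*(1 - cos(theta)).
theta = np.pi / 3
phis = np.linspace(0.0, 2.0 * np.pi, 1001)   # closed loop in parameter space

def ground_state(theta, phi):
    """Lower eigenstate of n(theta, phi).sigma, in one fixed gauge."""
    return np.array([-np.sin(theta / 2.0),
                     np.exp(1j * phi) * np.cos(theta / 2.0)])

states = [ground_state(theta, p) for p in phis]

# Gauge-invariant discrete Berry phase: gamma = -Im ln prod_k <n_k|n_{k+1}>
overlaps = [np.vdot(states[k], states[k + 1]) for k in range(len(states) - 1)]
gamma = -np.angle(np.prod(overlaps)) % (2.0 * np.pi)

print(gamma, np.pi * (1.0 - np.cos(theta)))  # both close to pi/2
```

Because the product of overlaps is taken around a closed loop, any phase redefinition of the individual ##|n(R)\rangle## cancels, which is exactly the gauge invariance of the Berry phase.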
 
  • #8
I found a solution in David Griffiths's Introduction to Quantum Mechanics (1995), p. 97, where he asks, "Is the derivative operator Hermitian?"

Define the derivative operator as
$$ \hat{D}=\frac{\partial }{\partial R}. $$

Using integration by parts,
$$ \langle \psi | \hat{D} \psi \rangle = \psi^* \psi \big|_a^b - \langle \hat{D} \psi | \psi \rangle. $$

The boundary term vanishes when
$$ \psi(a) = \psi(b), $$
and in particular when integrating over an infinite interval, where square integrability guarantees
$$ \psi(a) = \psi(b) = 0, $$

in which case
$$ \langle \psi | \hat{D} \psi \rangle = -\langle \hat{D} \psi | \psi \rangle. $$

And finally I can pull ##\hat{D}## outside the inner product,
$$ \hat{D} \left| \psi \right|^2, $$
and express the whole thing as a function of ##1/R##.

The key idea is that the boundary terms vanish at infinity. Is this right?
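The boundary-term argument can be checked numerically. A sketch (my own, assuming a wave function that has decayed to zero at the edges of the grid):

```python
import numpy as np

# Sketch: for square-integrable psi the boundary term psi* psi |_a^b
# vanishes, so <psi | D psi> = -<D psi | psi> up to discretization error.
x = np.linspace(-15.0, 15.0, 3001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2.0) * np.exp(0.7j * x)   # decays at both ends

d_psi = np.gradient(psi, x)                    # D psi, finite differences
lhs = np.sum(np.conj(psi) * d_psi) * dx        # <psi | D psi>
rhs = -np.sum(np.conj(d_psi) * psi) * dx       # -<D psi | psi>

print(abs(lhs - rhs) < 1e-10)                  # True: D is skew Hermitian here
```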
 
  • #9
Your final part is wrong. How did you come to the conclusion that you can pull the derivative operator out of the scalar product? It doesn't make any sense! What you got out correctly is just
$$\langle \psi |\hat D \psi \rangle=-\langle \hat{D} \psi |\psi \rangle.$$
Now you can already answer the question whether ##\hat{D}## is Hermitian or not!
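The same conclusion in matrix form (a sketch, assuming a uniform grid with ##\psi = 0## at both endpoints): the central-difference matrix for ##\hat{D}## is antisymmetric, which is the discrete statement of ##\langle\psi|\hat{D}\varphi\rangle = -\langle\hat{D}\psi|\varphi\rangle##, so ##i\hat{D}## is Hermitian with a real spectrum.

```python
import numpy as np

# Sketch: central-difference matrix for d/dx with psi = 0 at the ends.
n, dx = 200, 0.1
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2.0 * dx)

print(np.allclose(D.T, -D))                    # True: D is antisymmetric
print(np.allclose((1j * D).conj().T, 1j * D))  # True: i*D is Hermitian

evals = np.linalg.eigvalsh(1j * D)             # Hermitian => real eigenvalues
```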
 
  • #10
In some linear algebra course I was told by the lecturer that a linear mapping ##L: A\rightarrow B## is called a linear operator only if the domain ##A## and the codomain ##B## are the same vector space. In the case of a gradient acting on a function this obviously isn't true, since a scalar function becomes a vector function under that operation. These kinds of technicalities usually don't matter, though.
 
  • #11
The operators in QT are essentially self-adjoint operators and thus defined on a dense subspace of the Hilbert space, where domain and codomain are the same. Of course, Griffiths doesn't bother his undergraduate reader with this subtlety, and it's almost always fine. It's no longer fine for a problem as simple-looking as the infinite-box potential (see the recent discussion in this forum). A very good book for the physicist to understand the subtleties in a modern way is Ballentine, Quantum Mechanics, where the so-called rigged-Hilbert-space formalism is explained in some but not too much detail. If you want a rather mathematically rigorous treatment, check the two-volume book by Galindo and Pascual.
 
  • #12
vanhees71 said:
The operators in QT are essentially self-adjoint operators and thus defined on a dense subspace of the Hilbert space, where domain and codomain are the same. Of course, Griffiths doesn't bother his undergraduate reader with this subtlety, and it's almost always fine. It's no longer fine for a problem as simple-looking as the infinite-box potential (see the recent discussion in this forum). A very good book for the physicist to understand the subtleties in a modern way is Ballentine, Quantum Mechanics, where the so-called rigged-Hilbert-space formalism is explained in some but not too much detail. If you want a rather mathematically rigorous treatment, check the two-volume book by Galindo and Pascual.
In Griffiths's defense, I thought it best to include the text that I left out of my previous post.

The following is from Griffiths's Introduction to Quantum Mechanics:

"It's close, but the sign is wrong, and there's an unwanted boundary term. The sign is easily disposed of: ##\hat{D}## itself is (except for the boundary term) skew Hermitian, so ##i\hat{D}## would be Hermitian—complex conjugation of the ##i## compensates for the minus sign coming from integration by parts. As for the boundary term, it will go away if we restrict ourselves to functions which have the same value at the two ends:

$$f(a) = f(b)$$

In practice, we shall almost always be working on the infinite interval (##a = -\infty##, ##b = +\infty##), where square integrability guarantees that ##f(a) = f(b) = 0##, and hence that ##i\hat{D}## is Hermitian. But ##\hat{D}## is not Hermitian in the polynomial space P(N).

By now you will realize that when dealing with operators you must always keep in mind the function space you're working in—an innocent-looking operator may not be a legitimate linear transformation, because it carries functions out of the space; the eigenfunctions of an operator may not reside in the space; and an operator that's Hermitian in one space may not be Hermitian in another. However, these are relatively harmless problems—they can startle you, if you're not expecting them, but they don't bite. A much more dangerous snake is lurking here, but it only inhabits vector spaces of infinite dimension. I noted a moment ago that ##\hat{x}## is not a linear transformation in the space P(N) (multiplication by ##x## increases the order of the polynomial and hence takes functions outside the space). However, it is a linear transformation on P(##\infty##), the space of all polynomials on the interval ##-1 \le x \le 1##. In fact, it's a Hermitian transformation, since (obviously)

$$\int_{-1}^{1} [f(x)]^* x [g(x)] \, dx = \int_{-1}^{1} [x f(x)]^* [g(x)] \, dx$$

But what are its eigenfunctions?

$$x(a_0 + a_1 x + a_2 x^2 + ...) = \lambda(a_0 + a_1 x + a_2 x^2 + ...)$$

For all ##x##, this means
$$0 = \lambda a_0,$$
$$a_0 = \lambda a_1,$$
$$a_1 = \lambda a_2,$$
and so on. If ##\lambda = 0##, then all the components are zero, and that's not a legal eigenvector; but if ##\lambda \neq 0##, the first equation says ##a_0 = 0##, so the second gives ##a_1 = 0##, the third ##a_2 = 0##, and so on, and we're back in the same bind. This Hermitian operator doesn't have a complete set of eigenfunctions—in fact it doesn't have any at all! Not, at any rate, in P(##\infty##).
What would an eigenfunction of ##\hat{x}## look like? If

$$x g(x) = \lambda g(x)$$

where ##\lambda##, remember, is a constant, then everywhere except at the one point ##x = \lambda## we must have ##g(x) = 0##. Evidently the eigenfunctions of ##\hat{x}## are Dirac delta functions:
$$g_\lambda(x) = B \delta(x-\lambda)$$
and since delta functions are not polynomials, it is no wonder that the operator ##\hat{x}## has no eigenfunctions in P(##\infty##).

The moral of the story is that whereas the first two theorems in section 3.1.5 are completely general (the eigenvalues of a Hermitian operator are real, and the eigenvectors belonging to different eigenvalues are orthogonal), the third one (completeness of the eigenvectors) is valid (in general) only for finite-dimensional spaces. In infinite-dimensional spaces some Hermitian operators have complete sets of eigenvectors, some have incomplete sets, and some (as we just saw) have no eigenvectors (in the space) at all. Unfortunately, the completeness property is absolutely essential in quantum mechanical applications."

So Griffiths clearly states the importance of domains; I apologize for not making this point clearer originally.

I think I now understand: ##i\hat{D}## is Hermitian provided the boundary terms vanish, ##f(a) = f(b) = 0## (as on the infinite interval with square-integrable functions), but not in polynomial spaces such as P(N); and in infinite-dimensional spaces I must additionally worry about completeness of the eigenfunctions. Then and only then can the derivative operator be used. Is this it?
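Griffiths's ##\hat{x}## example can also be explored numerically (a sketch of my own, not from the book): truncating the multiplication operator to the first ##n## normalized Legendre polynomials on ##[-1,1]## gives a real symmetric (hence Hermitian) tridiagonal matrix, yet its eigenvalues turn out to be exactly the ##n## Gauss-Legendre quadrature nodes, the finite-dimensional shadow of delta-function "eigenfunctions" concentrating at points of ##[-1,1]##:

```python
import numpy as np

# Sketch: matrix of x-hat in the basis of the first n normalized Legendre
# polynomials p_k on [-1, 1].  The three-term recurrence for P_k makes it
# tridiagonal with <p_k | x | p_{k-1}> = k / sqrt((2k-1)(2k+1)).
n = 8
k = np.arange(1, n)
b = k / np.sqrt((2.0 * k - 1.0) * (2.0 * k + 1.0))
X = np.diag(b, 1) + np.diag(b, -1)             # real symmetric => Hermitian

evals = np.sort(np.linalg.eigvalsh(X))
nodes, _ = np.polynomial.legendre.leggauss(n)  # Gauss-Legendre nodes

print(np.allclose(evals, np.sort(nodes)))      # True
```

As ##n## grows, these eigenvalues fill up ##[-1,1]##, but no fixed polynomial is ever an eigenvector, matching Griffiths's conclusion that ##\hat{x}## has no eigenfunctions in P(##\infty##).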
 

