Terminology - Hermitian, adjoint ....

In summary, an operator is self-adjoint if it equals its own adjoint, $A = A^\dagger$; for an ODE in general form this corresponds to the condition $p_0' = p_1$, and self-adjoint problems (with suitable boundary conditions) have mutually orthogonal eigenfunctions.
  • #1
ognik
I thought I had these, but then I get to Sturm-Liouville and my confidence wavers ...please confirm / correct / supplement:

1) The adjoint of an operator $A$ is written $A^\dagger = (A^T)^* \equiv (A^*)^T$
We can identify an operator $A$ as adjoint ($A^\dagger$) iff $\langle \psi_1 | A\psi_2 \rangle = \langle A\psi_1 | \psi_2 \rangle$
An adjoint operator can also be written (for ODEs) as $\overline{\mathcal{L}}$

2) An operator is self-adjoint if $A = A^\dagger$
We can identify an ODE (general form) as self-adjoint if $p_0' = p_1$ (see the worked check after this list)
Also (like adjoint?) we can identify an ODE as self-adjoint iff $\langle \psi_1 | A\psi_2 \rangle = \langle A\psi_1 | \psi_2 \rangle$
Self-adjoint ODEs have mutually orthogonal solutions

3) An operator is Hermitian if $A = A^\dagger$ (i.e. self-adjoint). However, a Hermitian operator is not necessarily self-adjoint
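
For a concrete case to check point 2 against (a worked example added here for reference): Legendre's equation
$$(1 - x^2)y'' - 2xy' + \ell(\ell + 1)y = 0$$
has $p_0 = 1 - x^2$ and $p_1 = -2x$, so $p_0' = -2x = p_1$ and the equation is self-adjoint in this sense; its polynomial solutions, the Legendre polynomials, are indeed mutually orthogonal on $[-1, 1]$.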
 
  • #2
I would go with Nate's answer here on Math.SE. In particular, you can look at the implications: Hermitian implies self-adjoint implies symmetric, but the converse implications are false.
 
  • #3
Lol, I looked at that link, too.

The thing is, when you're talking about linear operators over an infinite-dimensional space, boundedness matters.

Also, often a linear operator isn't defined "everywhere" (for example, the differentiation operator $D$ isn't defined on non-differentiable functions), but it might be defined "densely" (on a dense subset of our original space). The concern here is that the adjoint may have a different domain of definition, which then complicates matters.
 
  • #4
Deveno said:
The thing is, when you're talking about linear operators over an infinite-dimensional space, boundedness matters.
That is one of the things I was clear on :-)

Also, often a linear operator isn't defined "everywhere" (for example, the differentiation operator $D$ isn't defined on non-differentiable functions), but it might be defined "densely" (on a dense subset of our original space). The concern here is that the adjoint may have a different domain of definition, which then complicates matters.
...Which is part of why I am also asking about Hilbert space. The book says, "When adjoint or self-adjoint operators are discussed in the context of a Hilbert space, all functions of that space will satisfy the boundary conditions." From this and other subtle hints (the author never descends into being direct), I gathered that for this course I should assume a Hilbert space unless otherwise informed...

Going back to the adjoint: is what I wrote correct, especially the 2nd point? I have also seen a variation like this: $\langle \psi_1 | A^\dagger \psi_2 \rangle = \langle A\psi_1 | \psi_2 \rangle$, so which is right? And why are both so similar to the self-adjoint analogue?

Thanks
 
  • #5
Ackbach said:
I would go with Nate's answer here on Math.SE. In particular, you can look at the implications: Hermitian implies self-adjoint implies symmetric, but the converse implications are false.

Confusing; my book says self-adjoint (and satisfying the boundary conditions) is also Hermitian, but not vice versa? So Hermitian is also self-adjoint, and self-adjoint is also symmetric?

What about Hermitian matrices whose off-diagonal entries are complex? Then they can't be symmetric, because of the conjugation?

Another point of confusion: in my book a self-adjoint operator is often written as $A^\dagger$. But I have also come across Hermitian matrices shown as $A^\dagger$ - whatever the answer above, this can't be true both ways? So what is the proper symbol for Hermitian? I am tempted to use $A^H$ ...
 
  • #6
In finite-dimensional vector spaces, these distinctions need not be made.

In an inner-product space, the adjoint of a linear transformation $T: V \to V$ is the transformation $T^{\ast}$ such that:

$\langle Tu,v\rangle = \langle u,T^{\ast}v\rangle$, for all $u,v \in V$.

For a finite-dimensional real vector space with the inner product $\langle u,v\rangle = u^Tv$ (as matrices in the basis $B$), if $T$ has the matrix $A$ in the basis $B$, then $T^{\ast}$ has the matrix $A^T$:

$\langle Au, v\rangle = (Au)^Tv = (u^TA^T)v = u^T(A^Tv) = \langle u,A^Tv\rangle$

For a finite-dimensional complex vector space (using the physicists' notion of sesquilinear), we take:

$\langle u,v\rangle = u^{\dagger}v$ (again using matrix representations of $u,v$ in some basis $B$).

It is then easy to see that the adjoint of $T$ with matrix $A$ has the matrix $A^{\dagger} = \overline{A^T}$:

$\langle Au,v\rangle = (Au)^{\dagger}v = (u^{\dagger}A^{\dagger})v = u^{\dagger}(A^{\dagger}v) = \langle u,A^{\dagger}v\rangle$
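
If a numerical sanity check helps, here is a minimal NumPy sketch of this identity (an illustration added here, not part of the algebra above; note that `np.vdot` conjugates its first argument, matching the physicists' convention):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

A_dag = A.conj().T                # A† = conjugate transpose

lhs = np.vdot(A @ u, v)           # <Au, v> = (Au)† v
rhs = np.vdot(u, A_dag @ v)       # <u, A† v>
print(np.isclose(lhs, rhs))       # True
```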

In the finite real case, a self-adjoint matrix (or the linear transformation it represents) is called symmetric. For matrices, this is equivalent to $A = A^{T}$, and such a matrix has entries that are symmetric about the main diagonal.

In the finite complex case, a self-adjoint matrix (or the linear transformation it represents) is called Hermitian. This is equivalent to $A = A^{\dagger}$.
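
For instance (a concrete example, which also answers the earlier question about complex off-diagonal entries): the matrix
$$A = \begin{pmatrix} 2 & 1+i \\ 1-i & 3 \end{pmatrix}$$
satisfies $A = A^{\dagger}$, so it is Hermitian, but $A \neq A^T$, so it is not symmetric. The conjugation in the adjoint is precisely what permits complex off-diagonal entries.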

Things *change* in an infinite-dimensional space: matrix representation is no longer possible. Now vectors (that is, *column* vectors) can be represented by kets. The corresponding row vector can be represented by a bra. Linear operators go "in the middle": they can hit kets, to give another ket, or be hit by bras, to give another bra. In the finite-dimensional case, there is perfect symmetry: the space of kets is isomorphic to the space of bras.

In the infinite-dimensional case, there are "more" bras. That is, the set formed by the corresponding bras of basis kets (if we have a basis) of our infinite-dimensional vector space is linearly independent, but it does not span. The typical way this "imbalance" is rectified is to limit our study of bras (linear functionals) to bounded ones, which turns out to be the same as if we had limited it to continuous ones.

We can put it this way: if $T: V \to W$, then $T^{\ast}:W^{\ast} \to V^{\ast}$. Often we are interested in the case: $V = W$.

In the finite-dimensional case, there is a UNIQUE $v^{\ast} \in V^{\ast}$ corresponding to $v$, namely:

$v^{\ast}(w) = \langle v, w\rangle$.

You can think of this as: "turning $v$ into its adjoint" (taking its complex-conjugate transpose).

This is no longer the case when $V$ is not finite-dimensional. To recover some of the lost symmetry, we have to restrict "which" bras we allow. This restores the balance between the domain of a linear operator and the domain of its dual (adjoint).
 
  • #7
Hi guys, I'm slowly getting there with all these spaces and properties ...some interim questions if you wouldn't mind:

1) What is the difference/similarity between the notations $\langle x, y \rangle$ and $\langle x|y \rangle$?

2) I read that a dense subset $B$ of a space $X$ means that every point of $X$ either belongs to $B$ or has a point of $B$ arbitrarily close to it? Not sure of the importance of this, but could a (shallow) take-away be that $B$ and $X$ are very close to being equal? Could they have different properties and still be dense?

3) Another possible shallow take-away: a Hilbert space allows for both linear algebra and calculus?
 
  • #8
The canonical example of a dense subset is the rationals in the reals. In other words, we can approximate a real number to within any desired degree of precision by a rational number ("just add enough decimal places").

This extends to the density of the rational vectors in the real vectors for any $\Bbb R^n$, in an obvious way (we approximate any real vector by approximating each coordinate).
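
As a small illustration of that approximation process, here is a sketch using Python's standard-library `fractions` module:

```python
from fractions import Fraction
import math

# Rational approximations of the irrational number sqrt(2),
# allowing ever larger denominators:
for bound in (10, 1000, 10**6):
    q = Fraction(math.sqrt(2)).limit_denominator(bound)
    print(q, "error:", abs(float(q) - math.sqrt(2)))
```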

The "usual" metric in $\Bbb R^n$ (the square root of the sum of the squares of the differences of the coordinates) is essentially derived from the dot product:

$d(x,y) = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + \cdots + (x_n - y_n)^2} = \sqrt{\langle (x-y),(x-y)\rangle}$

This is the measure of "nearness" we use in real Euclidean space. The complex-conjugation variant we use for complex Euclidean spaces is to ensure the positive-definite quality, which requires (among other things) that the norm of a complex vector lie in an ORDERED field.

Given any metric (a symmetric positive-definite real-valued function defined on $B \times B$ for our "space" $B$ that obeys the triangle inequality), we can use *that* for our concept of "nearness". Inner products induce metrics in a natural way (because the Cauchy-Schwarz inequality can be used to show the induced metric obeys the triangle inequality).
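
To make the induced metric concrete, here is a short Python sketch (an added illustration) that checks it agrees with the usual norm and obeys the triangle inequality for random complex vectors:

```python
import numpy as np

def d(x, y):
    # Metric induced by the inner product <u, v> = u† v:
    # d(x, y) = sqrt(<x - y, x - y>), which is real and non-negative.
    diff = x - y
    return np.sqrt(np.vdot(diff, diff).real)

rng = np.random.default_rng(1)
x, y, z = (rng.standard_normal(3) + 1j * rng.standard_normal(3) for _ in range(3))

print(np.isclose(d(x, y), np.linalg.norm(x - y)))   # True: same as the usual norm
print(d(x, z) <= d(x, y) + d(y, z))                 # True: triangle inequality
```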

So inner-product spaces come with an "intrinsic" way of expressing the "nearness" of two vectors: the norm of their vector difference. For certain kinds of functions (in particular *continuous* ones), definition on a dense subset extends uniquely to a definition on the entire space. These are TOPOLOGICAL notions, some of which underlie the calculus (the notions of limit, continuity, closure, interior, and boundary are topological notions, possible in more general settings than just real or complex vector spaces, which are quite "special" sets).

As to your other question, what is the difference between:

$\langle x,y\rangle$ and $\langle x|y\rangle$,

in one sense, almost nothing. Computationally, one often performs the same steps either way. Conceptually, it's a bit different.

The former is a function $V \times V \to F$: combine two vectors, spit out a scalar.

The second is a function $V^{\ast} \times V \to F$, where $V^{\ast} = \{f: V \to F \mid f \text{ is linear}\}$.

The elements of $V^{\ast}$ are called "linear functionals" or "dual vectors" or "covectors". In the finite-dimensional case, this boils down to the difference between "rows" and "columns".

In other words, $|y\rangle$ is a vector, but $\langle x|$ is a function that is "looking for a vector". When it finds one, it spits out a scalar.
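
In code, that picture is literally a function; here is a small sketch of the idea (using NumPy's `vdot`, which conjugates its first argument):

```python
import numpy as np

x = np.array([1 + 2j, 3 - 1j])

# <x| is a function "looking for a vector": feed it a ket, get a scalar.
def bra_x(ket):
    return np.vdot(x, ket)      # conjugates x, then takes the dot product

y = np.array([2j, 1 + 1j])
print(bra_x(y))                 # the scalar <x|y>
```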
 
  • #9
Deveno said:
The canonical example of a dense subset is the rationals in the reals. In other words, we can approximate a real number to within any desired degree of precision by a rational number ("just add enough decimal places").
Nicely clear, but a small query - I had come across this example and wondered: a rational number is a ratio of 2 integers, so adding decimal places seems more like irrational numbers?

Anyway, this all helped a lot to make sense out of it, ta.

Deveno said:
The former is a function $V \times V \to F$: combine two vectors, spit out a scalar.
So it's more like a dot product? And not the row/column structure?

I hadn't thought about this before, but are $x, y$ in $\langle x|y \rangle$ always vectors? (or vectorisable matrices)
 

FAQ: Terminology - Hermitian, adjoint ....

What is the definition of a Hermitian operator?

A Hermitian operator is a linear operator that is equal to its own adjoint, $A = A^\dagger$. For this reason it is also called a self-adjoint operator (for unbounded operators on infinite-dimensional spaces, the domains of $A$ and $A^\dagger$ must also agree, as discussed in the thread above).

How is a Hermitian operator related to complex conjugation?

A Hermitian operator is related to complex conjugation because the adjoint is formed by taking the complex conjugate of each entry of the matrix (flipping the sign of its imaginary part) and then transposing; a Hermitian matrix is unchanged by this combined operation.

What is the significance of Hermitian operators in quantum mechanics?

Hermitian operators play a crucial role in quantum mechanics, as they represent observable quantities in the theory. Their eigenvalues are real, and these eigenvalues correspond to the possible outcomes of a measurement.
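
As a quick numerical illustration of the real eigenvalues (a sketch; NumPy's `eigvalsh` assumes its argument is Hermitian):

```python
import numpy as np

H = np.array([[2, 1 + 1j],
              [1 - 1j, 3]])            # H equals its own conjugate transpose
print(np.allclose(H, H.conj().T))      # True: H is Hermitian

print(np.linalg.eigvalsh(H))           # eigenvalues are real: [1. 4.]
```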

Can a non-Hermitian operator be self-adjoint?

No, a non-Hermitian operator cannot be self-adjoint. For an operator to be self-adjoint, it must be equal to its own adjoint, which means it must satisfy the condition A=A†. If an operator is not Hermitian, it will not satisfy this condition and therefore cannot be self-adjoint.

How are Hermitian and unitary operators related?

Hermitian and unitary operators are closely related, as both are defined through the adjoint and both are central to quantum mechanics. A unitary operator is a linear operator that preserves the inner product of vectors, which is equivalent to $U^\dagger U = I$, while a Hermitian operator is one that equals its own adjoint, $A = A^\dagger$. (An operator that is both Hermitian and unitary must satisfy $A^2 = I$.) In quantum mechanics, Hermitian operators represent observables, while unitary operators represent time evolution and symmetries.
