Understanding Dirac Notation: A Simplified Explanation for Scientists

In summary, Dirac notation is a way to represent wave functions and operators using kets and bras. Operators act on kets in the same way as they would on normal wave functions. The complex conjugate of a ket is represented by a bra vector. The inner product of a bra and a ket is defined as the complex conjugate of one wave function multiplied by the other and integrated over all space. The Riesz representation theorem states that for each ket there is a unique corresponding bra, and vice versa. This antilinear bijection preserves distances between points. The norm on the bras defined by the inner product is consistent with the operator norm. Expressions in Dirac notation are defined to make them look like multiplication with an associative operation.
  • #1
Storm Butler
Hello, I'm fuzzy on how Dirac notation works especially when operators are added in. Does anyone have a clear explanation (the simpler the better) that they can give to me, and or a website or book that does a good job of explaining it?
 
  • #2
KETS
Any wave function psi can be represented in Dirac notation as a ket, written |psi>. Operators act on these kets in the same way they would act on a normal wave function.
E.g. let H be the Hamiltonian operator; then the eigenvalue equation is
H psi = E psi, where psi is an eigenfunction of the Hamiltonian.
In Dirac notation, this is written as
H|psi> = E|psi>
Alternatively, sometimes psi is not explicitly written inside the ket. Sometimes the i-th eigenfunction is simply written as |i>. For example, the first eigenfunction of the SHO is sometimes written as |0>, the next eigenfunction as |1>, etc.

BRAS
Every wave function psi has a complex conjugate psi*, where * indicates complex conjugation. In Dirac notation, the complex conjugate of a wave function is written as a bra vector, which looks like <psi|.

<i|j>
When a bra vector and a ket vector are written together, as <i|j> for example, it is read as the complex conjugate of the wave function i times the wave function j, integrated over all space. That is, when a bra and ket are written down as one, the author intends not just a multiplication but also an integral over all space.

<i|Q|j>
The above expression means that the operator Q acts on the ket |j>; the result is then multiplied by the complex conjugate of the wave function i and integrated over all space.
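A small numerical sketch of these rules (assuming Python with NumPy; the functions below are the standard first two SHO eigenfunctions in natural units, and Q is taken to be the position operator x purely for illustration):

```python
import numpy as np

# Grid wide enough that the Gaussians have decayed to ~0 at the edges
x = np.linspace(-10, 10, 10001)
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)                    # SHO ground state |0>
psi1 = np.pi**-0.25 * np.sqrt(2) * x * np.exp(-x**2 / 2)   # first excited state |1>

def braket(bra, ket):
    # <bra|ket> = integral of conj(bra) * ket over all space
    return np.trapz(np.conj(bra) * ket, x)

print(braket(psi0, psi0))      # <0|0> ≈ 1   (normalized)
print(braket(psi0, psi1))      # <0|1> ≈ 0   (orthogonal)
print(braket(psi0, x * psi1))  # <0|x|1> : apply the operator to |1>, then take the bra with <0|
```

Here `<0|x|1>` comes out to about 0.707 (i.e. 1/√2 in these units), which matches the known SHO matrix element.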
 
  • #3
So it is basically another way to write matrix operations?
 
  • #4
A ket is a member of a complex Hilbert space H, i.e. it's just a vector written in a funny way. A bra is a member of H*, the dual space of H. H* is defined as the set of all bounded linear functionals on H, with addition of vectors and multiplication by a scalar defined in the obvious way:

[tex](f+g)(x)=f(x)+g(x)[/tex]​
[tex](af)(x)=a(f(x))[/tex]​

These definitions give H* the structure of a vector space.

A functional [itex]f:H\rightarrow\mathbb C[/itex] is said to be bounded if there exists a real number M such that [itex]|fx|\leq M\|x\|[/itex] for all x in H. Note that I'm using the notational convention that says that we write fx instead of f(x) when f is linear. It's pretty easy to show that a linear functional (any linear operator actually) is bounded if and only if it's continuous. (Link). So H* can be defined equivalently as the set of all continuous linear functionals on H.

Let's write the inner product of x and y as (x,y). The physicist's convention is to let the inner product be linear in the second variable and antilinear in the first. The Riesz representation theorem (which is easy to prove (link) if you know the projection theorem already) says that for each f in H*, there's a unique x0 in H such that [itex]f(x)=(x_0,x)[/itex] for all x, and that this x0 satisfies [itex]\|x_0\|=\|f\|[/itex]. The norm on the right is the operator norm, defined by [itex]\|f\|=\sup_{\|x\|=1}|fx|[/itex]. The map [itex]f\mapsto x_0[/itex] is a bijection from H* onto H, so there's exactly one bra for each ket, and vice versa. It's not a vector space isomorphism though, because it's antilinear rather than linear, as you can easily verify for yourself. (A function [itex]T:U\rightarrow V[/itex], where U and V are complex vector spaces, is said to be antilinear if [itex]T(ax+by)=a^*Tx+b^*Ty[/itex] for all vectors x,y and all complex numbers a,b.)
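In finite dimensions the theorem is easy to check by hand. A sketch (assuming NumPy; the functional f, its coefficient vector c, and the representer x0 are illustrative names, not anything standard):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
c = rng.normal(size=n) + 1j * rng.normal(size=n)  # f(x) = sum_i c_i x_i, a linear functional on C^n
f = lambda v: np.dot(c, v)

# Physicist's inner product: antilinear in the first slot (np.vdot conjugates its first argument)
inner = lambda u, v: np.vdot(u, v)

x0 = np.conj(c)  # the Riesz representer: f(x) = (x0, x) for every x
v = rng.normal(size=n) + 1j * rng.normal(size=n)
print(np.allclose(f(v), inner(x0, v)))  # True

# Operator norm of f: sup of |f(x)| over unit vectors; random samples approach ||x0|| from below
samples = rng.normal(size=(2000, n)) + 1j * rng.normal(size=(2000, n))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
sup_est = np.abs(samples @ c).max()
print(np.linalg.norm(x0), sup_est)
```

The sup is attained at x = conj(c)/||c||, so the sampled estimate never exceeds ||x0|| and gets close to it.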

We can use this antilinear bijection to define an inner product on H*. Let x' and y' be the bras that correspond to the kets x and y respectively (via the bijection mentioned above). We define (x',y')=(x,y). This definition gives H* the structure of a Hilbert space, and ensures that the antilinear bijection we defined preserves distances between points. The norm on H* defined by the inner product is consistent with the operator norm that we used before, because

[tex]\|x'\|^2_{vector}=(x',x')=(x,x)=\|x\|^2=\|x'\|^2_{operator}[/tex]​

where the norm on the left is the one defined by the inner product, and the one on the right is the operator norm. The last equality follows from the Riesz theorem, as mentioned above.

So far I've been writing the kets as x,y, etc. From now on I'll write them as [itex]|\alpha\rangle,\ |\beta\rangle[/itex], etc. The bra in H* that corresponds to the ket [itex]|\alpha\rangle[/itex] (via the antilinear bijection mentioned above) is written as [itex]\langle\alpha|[/itex]. Note that we have

[tex](|\alpha\rangle,|\beta\rangle)=\langle\alpha|(|\beta\rangle)=\langle\alpha||\beta\rangle=\langle\alpha|\beta\rangle[/tex]​

The first equality is what we get from the Riesz theorem. The second is the notational convention for linear functions that I mentioned above. The third is another notational convention that I haven't explained yet. We just drop one of the vertical lines to make it look nicer.

Note that the right-hand side isn't the scalar product of [itex]\alpha[/itex] and [itex]\beta[/itex] (those symbols aren't even defined) or a "scalar product" of the bra [itex]\langle\alpha|[/itex] with the ket [itex]|\beta\rangle[/itex] (that concept hasn't been defined). It's the scalar product of the kets [itex]|\alpha\rangle[/itex] and [itex]|\beta\rangle[/itex], or equivalently, the bra [itex]\langle\alpha|[/itex] acting on the ket [itex]|\beta\rangle[/itex].

Everything else is defined to make it look like we're just multiplying things together with an associative multiplication operation. For example, the expression [itex]|\alpha\rangle\langle\alpha|[/itex] is defined as the operator that takes an arbitrary ket [itex]|\beta\rangle[/itex] to the ket [itex]\langle\alpha|\beta\rangle|\alpha\rangle[/itex]. This definition can be expressed as

[tex](|\alpha\rangle\langle\alpha|)|\beta\rangle=|\alpha\rangle(\langle\alpha|\beta\rangle)[/tex]​

if we allow ourselves to write the scalars on the right. The convention is of course to allow that, so we would write both the left-hand side and the right-hand side of this equation as [itex]|\alpha\rangle\langle\alpha|\beta\rangle[/itex].
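A quick finite-dimensional check of this definition (assuming NumPy; the two kets below are arbitrary illustrative vectors in C^2):

```python
import numpy as np

alpha = np.array([1 + 1j, 2 - 1j]) / np.sqrt(7)  # a ket |alpha>
beta = np.array([0.5j, 1.0])                     # another ket |beta>

# |alpha><alpha| as a matrix: entries alpha_i * conj(alpha_j)
outer = np.outer(alpha, np.conj(alpha))

lhs = outer @ beta                   # (|alpha><alpha|) |beta>
rhs = np.vdot(alpha, beta) * alpha   # <alpha|beta> |alpha>  (vdot conjugates the first argument)
print(np.allclose(lhs, rhs))         # True: the "multiplication" really is associative
```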

Here's an easy exercise: Define the expression [itex]\langle\alpha|A[/itex], where A is an operator, in a way that's consistent with what I just said.

Note that nothing I have said so far tells you how to make sense of expressions such as

[tex]\int da|a\rangle\langle a|=1[/tex]​

which includes "eigenvectors" of an operator that doesn't have any eigenvectors. I still don't fully understand how to make sense of those myself, but I'm working on it. A full understanding includes knowledge about how to prove at least one of the relevant spectral theorems. This is the sort of stuff that you might see near the end of a 300-page book on functional analysis.
 
  • #5
Sakurai is good on this
 
  • #6
Sakurai does a good job of teaching you how to use bra-ket notation, but it's pretty bad if you want definitions. As I recall, it doesn't even define the dual space. You can read Sakurai and never realize that a bra is a functional on the Hilbert space of kets.
 
  • #7
Yes, but the OP asked "I'm fuzzy on how Dirac notation works", thus he wants to use it :-)
 
  • #8
Ballentine does better than Sakurai on this, in my opinion.
 
  • #11
What I said about Sakurai in #6 applies even more to Dirac. If you only care about how to use the notation, then Dirac's explanation is great, but he makes unnecessary assumptions and it's not even clear that they can be justified. In addition to that, his definitions are sloppy.

I have added a few more details to #4. This is the stuff that I wish someone had explained to me when I was studying Sakurai.
 
  • #12
The lectures were actually very good, although I have not finished them all yet. I tried looking into the Sakurai book, but it seems very heavily math oriented and requires a very good mathematics background, which I don't believe I possess.
 
  • #13
Storm Butler said:
I tried looking into the Sakurai book, but it seems very heavily math oriented and requires a very good mathematics background, which I don't believe I possess.
You really don't need to understand anything more than the concepts "vector space", "complex number" and "linear operator" to read the part of Sakurai that explains the bra-ket notation. To understand the first few chapters, you also have to understand the concepts "basis", "eigenvalue" and "eigenvector". These are all concepts from an introductory course on linear algebra, and he still explains them in the book. Sakurai is very far from math oriented in my opinion. Certainly much less math oriented than my post #4. However, if you feel that way, maybe Dirac is better for you. Don't bother with Ballentine. It's a better book, but it requires a higher level of mathematical maturity than Sakurai.
 
  • #14
I agree Ballentine does require more mathematical maturity. However, regardless of what you do, I suggest everyone at least has a look at it at some time. I find it to be excellent.
 
  • #15
We have to keep in mind that functional analysis and the theory of distributions were motivated in part by physics. So it can be a little unnatural to first study the theory of Hilbert and Banach spaces and the theory of distributions and then learn quantum mechanics, because then you don't learn how a physicist really thinks.

In physics, you use whatever ad hoc and ill defined formalism that appears to work for your problem, and only later do you try to make the formalism more rigorous (but usually you leave that to the mathematicians).
 
  • #16
nealh149 said:
However, regardless of what you do, I suggest everyone at least has a look at it at some time. I find it to be excellent.
I completely agree with that. We like that book here at Physics Forums. See e.g. this thread in the science book forum.
 
  • #17
Oh, well, I'm actually only a sophomore in high school, but I have been trying as hard as I can to expand my knowledge in science as well as math. I've studied what I believe to be enough calc to easily get through an honors-level calc class (I don't know about AP though), and I have a very basic understanding of trigonometry; I haven't studied it a great deal. I also probably have a good enough understanding of kinematics to get me through a physics course, but very little knowledge of electrical theory such as Maxwell's equations. Additionally, I have tried with very little success to understand vectors beyond the simple facts of adding and subtracting them. The same goes for linear algebra and matrices (although matrices I think I get much better than linear algebra and vectors). This is where my problem arose when trying to read about Dirac notation in a quantum physics book. I had a great deal of interest and bought what seemed to be the simplest book to give me a decent understanding of some mathematics as well as concepts (Quantum Physics for Dummies), but I failed as soon as I got to Dirac's bra-ket notation. I simply didn't understand what it was doing, where it came from, how it worked, or how to apply operators to it.
 
  • #18
Actually, I am not sure if this is allowed (so stop me if it isn't), but is it OK if I ask some questions I had on things like linear algebra, matrices, etc. in this thread, or do I have to make a completely new one on the new subject?
 
  • #19
Storm Butler said:
Actually, I am not sure if this is allowed (so stop me if it isn't), but is it OK if I ask some questions I had on things like linear algebra, matrices, etc. in this thread, or do I have to make a completely new one on the new subject?

It may be better to do it here because then everyone knows your background.
 
  • #20
Something like "Linear Algebra Demystified" might be a good start, or any introductory text. I'd pay particular attention to discussions of vector spaces and inner product spaces.
 
  • #21
OK, well, my first question is on vectors. Where does the formula C^2 = A^2 + B^2 - 2AB*cos(<OPQ) (that's supposed to be an angle) come from? It came up in Schaum's Outline of Vector Analysis when I was trying to figure out how to add vectors. I keep looking at it and I can't figure out how it's derived or how it gives an answer. Also, what does the cos of an angle that isn't in a right triangle mean? This is the only question I can think of right off the bat sitting here, but I will go through my books again, familiarize myself with some of the material, and ask the other questions I had in detail later.
 
  • #22
I'll also try finding Linear Algebra Demystified. I think the Demystified series is a very good one, especially since they add solved problems at the end of each section.
 
  • #23
Well, the specific result you mentioned is actually just the cosine law. But you can simply get it by recognizing that A dot B = |A||B|cos(theta).
 
  • #24
Is the Dirac matrix algebra the one expounded in Schouten, though?
 
  • #25
Storm Butler said:
Additionally, I have tried with very little success to understand vectors beyond the simple facts of adding and subtracting them. The same goes for linear algebra and matrices (although matrices I think I get much better than linear algebra and vectors). This is where my problem arose when trying to read about Dirac notation in a quantum physics book,
You're definitely going to have to learn the basics of linear algebra (the mathematics of linear operators between finite-dimensional vector spaces) if you're going to understand quantum mechanics at all. Linear algebra is actually not the math that's needed for a rigorous treatment of QM, but if you understand linear algebra really well, you will at least have the right intuition about how to deal with vectors and operators.

The math that's needed for a rigorous treatment is called functional analysis. It's the math of linear operators between vector spaces that are equipped with an inner product (or at least a norm) such that all Cauchy sequences are convergent. So it's basically linear algebra generalized to a class of vector spaces that may be infinite-dimensional. Linear algebra is the easiest part of college-level math. Functional analysis is the hardest. Most physicists never study functional analysis. But they do study linear algebra, because it's the absolute minimum you have to do to at least get some intuition about what you're doing in QM.

Storm Butler said:
OK, well, my first question is on vectors. Where does the formula C^2 = A^2 + B^2 - 2AB*cos(<OPQ) (that's supposed to be an angle) come from?
I actually didn't even recognize that formula at first, because I haven't used it since my first year at the university, at least not in that form. I have however had to use the results

[tex]\|\vec A-\vec B\|^2=\|\vec A\|^2+\|\vec B\|^2-2\vec A\cdot\vec B[/tex]

and

[tex]\vec A\cdot\vec B=\|\vec A\|\|\vec B\|\cos\theta[/tex]

a lot. (Here [itex]\theta[/itex] is the angle between [itex]\vec A[/itex] and [itex]\vec B[/itex]). The Wikipedia article on the law of cosines explains it really well. I recommend you take a look at several different proofs. In particular, I suggest the proof using the distance formula, and then you make sure you understand the section titled vector formulation, because it explains the second of the equalities above, which is more important than the cosine law itself.
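A quick numerical check of both identities (assuming NumPy; the vectors A and B are arbitrary examples):

```python
import numpy as np

A = np.array([3.0, 1.0])
B = np.array([1.0, 2.0])

C2 = np.dot(A - B, A - B)  # ||A - B||^2, the squared length of the third side
nA, nB = np.linalg.norm(A), np.linalg.norm(B)

# ||A - B||^2 = ||A||^2 + ||B||^2 - 2 A.B  (just expand (A-B).(A-B))
print(np.isclose(C2, nA**2 + nB**2 - 2 * np.dot(A, B)))  # True

# A.B = ||A|| ||B|| cos(theta): substituting gives exactly the cosine law
theta = np.arccos(np.dot(A, B) / (nA * nB))
print(np.isclose(C2, nA**2 + nB**2 - 2 * nA * nB * np.cos(theta)))  # True
```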
Storm Butler said:
what does the cos of an angle that isn't in a right triangle mean?
See the Wikipedia article unit circle, in particular the image at the upper right. cos is defined by that image, for arbitrary angles. You can also check out the cosine article if you want more information.
 
  • #26
All right, cool. As far as linear algebra goes, do you (or anyone else) know of any good lectures on the subject (something that I could watch on, say, YouTube or some other video site)? I tried looking in the Stanford lectures and in the IIT ones, but I couldn't find any.
 
  • #27
Storm_Butler, as Fredrik said, you'll need to study some linear algebra at least. And once you get that, you might get a better grasp of the Dirac notation by thinking of the kets as column vectors, the bras as row vectors, and operators as matrices. Any proper mathematician is pulling their hair out at what I've written above, but still, it gives you some understanding of it.
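That column/row/matrix picture can be sketched directly (assuming NumPy; the "Hamiltonian" here is just the Pauli x matrix, picked as a convenient Hermitian example):

```python
import numpy as np

ket_a = np.array([[1 + 0j], [1j]]) / np.sqrt(2)  # |alpha> as a column vector
ket_b = np.array([[1 + 0j], [0]])                # |beta>

bra_a = ket_a.conj().T                           # <alpha| = conjugate transpose, a row vector

print(bra_a @ ket_b)                             # <alpha|beta>: row times column gives a 1x1 number

H = np.array([[0, 1], [1, 0]], dtype=complex)    # a Hermitian "Hamiltonian" (Pauli x)
E, vecs = np.linalg.eigh(H)
ket0 = vecs[:, [0]]                              # eigenket belonging to the eigenvalue E[0] = -1
print(np.allclose(H @ ket0, E[0] * ket0))        # H|psi> = E|psi>  -> True
```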
 
  • #28
To be honest I think you are really underestimating the time investment required to learn the stuff you want to learn. I very much doubt you could just watch lectures on linear algebra without prior knowledge or experience or an actual textbook and learn from it. However, if you really feel you could, MIT puts their intro to linear algebra course lectures online: http://www.youtube.com/view_play_list?p=E7DDD91010BC51F8&search_query=linear+algebra
 
  • #29
Storm Butler said:
oo well I'm actually only a sophomore in high school

You might find the book Quantum Mechanics in Simple Matrix Form accessible.

It's going to take some years of math study to be able to understand the whole picture, though.
 
  • #30
No, you're right, I probably can't fully understand a subject just by watching a lecture or two, but I think it would certainly help expose me to the material and clear up any confusion I might have with the introductory parts. I do have a few books on linear algebra, which I gave up on after I realized I didn't have a good enough understanding of matrices; that's when I bought some textbooks and Schaum's books on matrices, and I am still learning out of those. But if I can get a good understanding of the subject, I think it'll help with figuring out some other things. I'm not studying because I want to get out of taking classes or because I think I am going to teach myself all of physics. I am just doing it to expose myself to the material prior to any classes I take, and to satisfy any curiosity that I might have at the moment in some subjects. I am, however, fully prepared to accept the fact that no matter how much I study and look things up, I am only ever going to scratch the surface until I get to college and really take some classes, and even then I probably won't understand everything completely.
 
  • #31
Storm Butler said:
I do have a few books on linear algebra, which I gave up on after I realized I didn't have a good enough understanding of matrices
Hm... I don't think there are any books on linear algebra that don't explain matrices. I mean, linear algebra is the mathematics of linear operators between finite-dimensional vector spaces, and a matrix is just a specific way to represent a linear operator. So you definitely shouldn't feel that you need to understand matrices before you start studying linear algebra.

(I'm feeling a bit nostalgic here. The very first post I wrote at Physics Forums was about the relationship between linear operators and matrices. One of the reasons I wrote it was that I wanted to remind myself how to use LaTeX.)
 
  • #32
Lol, OK, thanks. And don't worry about sounding nostalgic; the more you guys can help me, the better. I understand that 99% of the people on this site probably have a better understanding of physics and math than I do. Also, yes, the books I have do go over matrices, but I found that they seemed a little abstract, so I decided to settle on something with a lot more examples and problems in it to go over matrices.
 
  • #33
Also, what counts as a vector space? Is it just a space in which vectors are present, or something entirely different?
 
  • #35
Storm Butler said:
Also, what counts as a vector space? Is it just a space in which vectors are present, or something entirely different?
A vector is by definition a member of a vector space, so you define the concept "vector space" first.

You worry about abstractions, but I think the best way to define a vector space is to get very abstract. A vector space (over the real numbers) is a triple [itex](X,\ S:\mathbb R\times X\rightarrow X,\ A:X\times X\rightarrow X)[/itex] that satisfies eight specific properties. (The × is a Cartesian product, and the notation f:U→V means "f is a function from U into V".) The real numbers are called "scalars" in this context. The function S is called "multiplication by a scalar" (or "scalar multiplication", but do not confuse this with "scalar product", which is something else entirely), and A is called "addition". The conventional notation is to write kx instead of S(k,x) and x+y instead of A(x,y). The eight specific conditions that must be satisfied for the triple (X,S,A) to be a vector space are listed on the Wikipedia page that Count Iblis linked to.

Once we're done with the definition, we can allow ourselves to be a bit sloppy with the terminology and refer to X as a vector space. This is a convention that you should be aware of. For example, the set [itex]\mathbb R^2[/itex] of ordered pairs of real numbers isn't really a vector space all by itself, but we still call it a vector space, because we know that if we define (a,b)+(c,d)=(a+c,b+d) and k(a,b)=(ka,kb), the triple [itex](\mathbb R^2,S,A)[/itex] is a vector space. We often say that S and A define a vector space structure on X. I think it would be a good exercise for you to verify that they do, i.e. that all of the eight conditions are satisfied when X,S and A are defined that way.
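As a sketch of that exercise (plain Python; the components are random integers so the equalities are exact, and the eight checks below are exactly the eight axioms):

```python
import random

# R^2 with the componentwise operations defined in the text:
# (a,b) + (c,d) = (a+c, b+d)  and  k(a,b) = (ka, kb)
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def smul(k, u):
    return (k * u[0], k * u[1])

zero = (0, 0)
rnd = lambda: (random.randint(-9, 9), random.randint(-9, 9))

for _ in range(200):
    u, v, w = rnd(), rnd(), rnd()
    a, b = random.randint(-9, 9), random.randint(-9, 9)
    assert add(u, v) == add(v, u)                             # commutativity of addition
    assert add(add(u, v), w) == add(u, add(v, w))             # associativity of addition
    assert add(u, zero) == u                                  # additive identity
    assert add(u, smul(-1, u)) == zero                        # additive inverses
    assert smul(1, u) == u                                    # 1 is the scalar identity
    assert smul(a, smul(b, u)) == smul(a * b, u)              # compatibility of scalar mult.
    assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))  # distributivity over vectors
    assert smul(a + b, u) == add(smul(a, u), smul(b, u))      # distributivity over scalars

print("all eight axioms hold on these samples")
```

Random sampling only checks the axioms on examples, of course; the actual exercise is to verify them algebraically, which the componentwise definitions make straightforward.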

A complex vector space is defined by replacing the real numbers in the definition above with complex numbers. They can be replaced with other types of "numbers" as well to get a different type of vector space, but don't worry about that. You only need to understand complex vector spaces.

Storm Butler said:
Also yes, the books i have to go over matrices but i found that they seemed a little abstract and decided to settle with something that had a lot more examples and problems in them in order to go over matrices.
That sounds like a good idea for now, but you will eventually have to get used to abstractions. If you keep studying mathematics, you will see that it gets much, much more abstract than you would expect after only studying linear algebra.
 
