What the Planck Length Is and Its Common Misconceptions
The Planck length is an extremely small distance constructed from physical constants. Many misconceptions overstate its physical significance, for example by claiming that it is the inherent pixel size of the universe. The Planck length does have physical significance, and I’ll discuss what it is and what it isn’t.
Key Points
- The Planck length is a distance constructed from physical constants and is about ##10^{-35}## meters.
- It is the length scale at which quantum gravity becomes relevant.
- There is a misconception that the universe comprises Planck–sized pixels, but this is incorrect.
- It is an important order of magnitude when discussing quantum gravity, but it is not the fundamental pixel size of the universe.
What is the Planck Length?
Planck units are defined based on physical constants rather than human-scale phenomena. So while the second was originally one-86,400th of a day, the Planck time is based on the speed of light, Newton’s gravitational constant, and Planck’s (reduced) constant, which is twice the spin angular momentum of the electron. Hypothetically, if we met a group of aliens and wanted to discuss weights and measures, we could use Planck units and they’d know what we were talking about. There is a push toward basing our human units on physical constants, like defining the meter in terms of the speed of light, but at the time of writing the kilogram is still the mass of a particular metal cylinder kept in France.
“Natural” units still involve a bit of choice in their definitions. Convention has chosen Planck’s reduced constant over Planck’s regular constant (they differ by a factor of ##2\pi##), and the Coulomb constant instead of the dielectric constant or the fundamental charge for electromagnetic units. The latter provides a great example showing that Planck units are not inherently fundamental quantities: the Planck charge is roughly 11.7 times the actual fundamental charge of the universe.
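To make that concrete, here is a quick numerical sanity check of my own (not part of the original definitions; the constant values are the standard CODATA figures) showing that the conventional Coulomb-constant choice makes the Planck charge about 11.7 elementary charges:

```python
import math

# Standard constants in SI units (assumed CODATA values)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
e = 1.602176634e-19      # elementary charge, C

# Planck charge under the conventional (Coulomb constant) choice
q_planck = math.sqrt(4 * math.pi * eps0 * hbar * c)

print(q_planck / e)  # ~11.7, i.e. 1/sqrt(alpha): not the fundamental charge
```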
So what is the Planck length? It is defined as:
$$\ell_{p}=\sqrt{\frac{\hbar\,G}{c^3}}$$
This is how far light travels in one unit of Planck time, because the speed of light is the “Planck speed.” In SI units, it is on the order of ##10^{-35}## meters. By comparison, one of the smallest lengths that has been “measured” is the upper bound on the electron’s radius (if an electron has a radius, what can we say with certainty it is smaller than?), which is ##10^{-22}## meters, about ten trillion Planck lengths. It is really small. And so far, it is just a unit. The meter is a useful unit for measuring length, but there’s nothing inherently special about it. The Planck length is not useful for measuring any length, but is there anything special about it?
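If you want to watch the number fall out of the constants yourself, here is a minimal sketch (mine, not the article’s; standard SI constant values assumed):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

l_p = math.sqrt(hbar * G / c**3)
print(l_p)          # ~1.6e-35 m

# Compare with the ~1e-22 m upper bound on the electron's radius
print(1e-22 / l_p)  # ~6e12 Planck lengths, i.e. of order ten trillion
```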
How is it relevant to physics?
The Planck length is the length scale at which quantum gravity becomes relevant. It is roughly the separation at which things have to interact before you start to wonder, “Hmm, is there a chance this whole system randomly forms a black hole?” I did not understand this until I convinced myself with the following derivation, which was the main inspiration for this article.
Consider the energy ##E## between two charges (let’s say they’re electrons) at some distance ##r##. It doesn’t matter for now whether they’re attracting or repelling.
$$E=\frac{e^2}{4\pi\epsilon\,r}$$
Just to clarify the symbols: ##e## is the fundamental charge and ##\epsilon## is the dielectric constant. Now let’s change the units around, using the definition of the fine structure constant ##\alpha##, which is roughly 1/137.
$$\alpha=\frac{e^2}{4\pi\epsilon}\frac{1}{\hbar\,c}$$
This lets us swap out the electromagnetic constants ##e## and ##\epsilon## for the more “general” constants ##\hbar## and ##c##. The Coulomb energy now looks like this:
$$E=\frac{\alpha\hbar\,c}{r}$$
This is where the hand-waving begins. If a given volume at rest has a certain amount of energy within it, it has a rest mass ##m=E/c^2##. From Newtonian gravity, we can calculate the gravitational energy associated with our charges.
$$E_{g}=G\frac{M^2}{r}=G\frac{\left(\frac{\alpha\hbar\,c}{rc^{2}}\right)^{2}}{r}=\frac{G\alpha^{2}\hbar^{2}}{c^{2}r^{3}}$$
We are neglecting the rest masses of the charges, but those are much smaller than the interaction energy.
The question now is: at what distance is the electrostatic energy equal to the gravitational energy it causes? So we solve for r…
$$r=\sqrt{\frac{\alpha\,G\hbar}{c^3}}=\sqrt{\alpha}\ell_{p}$$
and we find that the radius at which the gravitation of the interaction energy becomes as important as the interaction energy itself is roughly the Planck length (divided by 11.7, the square root of 137, but we’ll hand-wave that away for now). This is why it is important: if things are interacting at distances close to the Planck length, you will have to take quantum gravity into account.
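A numerical version of the same hand-wave (my own sketch; standard SI constant values assumed) shows the two energies crossing at ##\sqrt{\alpha}\,\ell_p##:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s
alpha = 1 / 137.035999  # fine structure constant

l_p = math.sqrt(hbar * G / c**3)
r = math.sqrt(alpha) * l_p   # the crossover radius derived above

E_coulomb = alpha * hbar * c / r          # interaction energy
E_grav = G * (E_coulomb / c**2)**2 / r    # gravity of that energy's mass

print(E_coulomb, E_grav)  # equal at this r
print(r / l_p)            # ~1/11.7, i.e. sqrt(alpha)
```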
One of the only physical systems where quantum gravity is relevant is the black hole. When calculating the entropy of a black hole, Hawking and Bekenstein found that it is equal to the number of Planck areas (Planck lengths squared) that fit into the cross-sectional area of a Schwarzschild black hole (a quarter of its total surface area), in units of the Boltzmann constant. The Hawking temperature of a black hole is one of the only equations in which ##\hbar##, ##c##, and ##G## all appear, making it a quantum relativistic gravitational equation. However, the mass of a black hole can vary continuously, so the number of Planck areas on its surface need not be an integer.
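As an illustration, here is a small sketch using the standard textbook formulas (not a calculation from the article; the solar mass is just an example value) for the Planck-area count and Hawking temperature of a solar-mass black hole:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s
k_B = 1.380649e-23      # Boltzmann constant, J/K
M = 1.989e30            # one solar mass, kg (example mass)

r_s = 2 * G * M / c**2           # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2         # horizon surface area
planck_area = hbar * G / c**3    # Planck length squared

S = k_B * A / (4 * planck_area)                # Bekenstein-Hawking entropy
T = hbar * c**3 / (8 * math.pi * G * M * k_B)  # Hawking temperature

print(S / k_B)  # ~1e77 Planck areas in a quarter of the horizon
print(T)        # ~6e-8 K
```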
How is it not relevant to physics?
There is a misconception that the universe is fundamentally divided into Planck-sized pixels, that nothing can be smaller than the Planck length, and that things move through space by progressing one Planck length every Planck time. Judging by the ultimate source, a cursory search of Reddit questions, the misconception is fairly common.
There is nothing in established physics that says this is the case, nothing in general relativity or quantum mechanics pointing to it. I have an idea as to where the misconception might arise, that I can’t back up but I will state anyway. I think that when people learn that the energy states of electrons in an atom are quantized and that Planck’s constant is involved, a leap is made toward the pixel fallacy. I remember in my early teens reading about the Planck time in National Geographic, and hearing about Planck’s constant in high school physics or chemistry, and thinking they were the same.
As I mentioned earlier, just because units are “natural” doesn’t mean they are “fundamental,” given the choices of constants used to define them. The simplest argument against Planck-pixels making up the universe comes from special relativity and the principle that all inertial reference frames are equally valid. If there were a rest frame in which the matrix of these Planck-pixels is isotropic, then in other frames the pixels would be length-contracted in one direction, and moving diagonally with respect to this matrix might impart an angle-dependence on how you experience the universe. And if an electromagnetic wave with a wavelength of one Planck length were propagating through space, you could always transform to a reference frame moving toward the source, in which the wavelength is blueshifted even shorter; the equivalence of inertial frames and the existence of a minimal length are inconsistent with one another.
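A toy calculation makes the point (a minimal sketch of my own; the speed 0.6c is an arbitrary example): under a relativistic Doppler boost toward the source, any wavelength, including a Planck-length one, gets shorter.

```python
import math

def blueshifted(wavelength, beta):
    """Relativistic Doppler shift for an observer moving toward
    the source at speed beta = v/c."""
    return wavelength * math.sqrt((1 - beta) / (1 + beta))

l_p = 1.616e-35                # Planck length, m
print(blueshifted(l_p, 0.6))   # already half a Planck length at v = 0.6c
```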
To add to people’s confusion, a lot of the Wikipedia articles on the Planck length were corrupted by one person trying to promote his papers by posting them on Wikipedia, making nonsensical claims with “proof” that a Planck-wavelength photon will collapse into a black hole (again, Lorentz symmetry explains why this doesn’t make sense). There is a surreal and amusing dialogue trying to get to the bottom of this, that you can still read in the discussion section of the Planck length Wikipedia page.
There was a recent analysis of gamma-ray arrival times from a burst in a distant galaxy. The author considered what effect a discretization of space might have on the travel speed of photons of differing energy (it would no longer necessarily be constant), and found that to explain the observations, the length scale of the discretization must be at least 525 times smaller than the Planck length. I’m not too sure how seriously people in the field take this paper.
How might it be relevant to physics?
Lorentz symmetry explains why Planck-pixels don’t make sense within current physics. However, current physics is incomplete, especially regarding quantum gravity. Going beyond established physics, is there more of a role for the Planck length? I’m a bit out of my element talking about this, so I’ll be brief.
The closest beyond-standard theory to the Planck-pixel idea is Loop Quantum Gravity and the concept of quantum foam. At least, that is what I thought before John Baez corrected me. One of the features of Loop Quantum Gravity is that for something to have a surface area or a volume, it must have at least a certain quantum of surface area or volume, but it will not necessarily carry an integer multiple of that quantum, and the quantum is not exactly the square or cube of the Planck length, although it is of that order.
Another potential model of quantum gravity is string theory, based on the dynamics of extremely small strings. For these dynamics to explain gravity, the strings must be of order the Planck length, but not specifically the Planck length. The first iteration of string theory was developed to explain nuclear physics rather than gravity, and the length scale of those strings was much, much larger.
So to summarize, the Planck length is an important order of magnitude when quantum gravity is being discussed, but it is not the fundamental pixel size of the universe.
Thanks to John Baez and Nima Lashkari for answering some questions about quantum gravity.
Ph.D. McGill University, 2015
My research is at the interface of biological physics and soft condensed matter. I am interested in using tools provided from biology to answer questions about the physics of soft materials. In the past I have investigated how DNA partitions itself into small spaces and how knots in DNA molecules move and untie. Moving forward, I will be investigating the physics of non-covalent chemical bonds using “DNA chainmail” and exploring non-equilibrium thermodynamics and fluid mechanics using protein gels.
“I do understand the argument that the Planck length is not fundamental, because there is quite some choice left when it comes to defining such a length. What I don’t understand is how you can take arguments from the continuous paradigm (that is, theories in terms of differential equations on real numbers) and argue about the invalidity of ideas from the discrete paradigm (the universe being pixelated, things moving at the speed of light one unit at a time, …). From my point of view this chain of argument is invalid, exactly because the continuous paradigm breaks down around the scale at which spacetime supposedly becomes discrete.
As for myself, I’m taking seriously the idea that all our established physical theories (including GR and QM) are effective theories, in the sense that they don’t express anything fundamental about the ultimate nature of reality, but instead are approximations to the inner workings of reality in the discrete paradigm. Any thoughts?”
Hi, I am a complete physics idiot, but I read your posting. Are you saying that the equations the author of this article uses break down or do not apply in this situation? I would be interested in hearing more about this. Thank you.
“As for myself, I’m taking seriously the idea that all our established physical theories (including GR and QM) are effective theories, in the sense that they don’t express anything fundamental about the ultimate nature of reality, but instead are approximations to the inner workings of reality in the discrete paradigm. Any thoughts?”
Could be… There’s no way of disproving the possibility. But absent a candidate theory based on this discrete paradigm, there’s also nothing to discuss under the Physics Forums rules.
This thread is closed. As always, PM me or another mentor if you have more to add and want it reopened.
If there is anything that the history of physics has shown us, it is that we don’t shoot with high percentage when we try to anticipate the behavior in fundamentally new regimes. So I think what we really need are experiments that are capable of looking for evidence of discreteness. Until we have that, any theory will be pretty much guessing, in my opinion. But I do agree that all theories should be regarded as effective theories until demonstrated otherwise, with attention to the fact that they are impossible to demonstrate otherwise!
I do understand the argument that the Planck length is not fundamental, because there is quite some choice left when it comes to defining such a length. What I don’t understand is how you can take arguments from the continuous paradigm (that is, theories in terms of differential equations on real numbers) and argue about the invalidity of ideas from the discrete paradigm (the universe being pixelated, things moving at the speed of light one unit at a time, …). From my point of view this chain of argument is invalid, exactly because the continuous paradigm breaks down around the scale at which spacetime supposedly becomes discrete.
As for myself, I’m taking seriously the idea that all our established physical theories (including GR and QM) are effective theories, in the sense that they don’t express anything fundamental about the ultimate nature of reality, but instead are approximations to the inner workings of reality in the discrete paradigm. Any thoughts?
“I would probably go the other way… Obviously if your theory implies that something is turning into a black hole according to one observer, but is not turning into a black hole according to another observer, then your theory has been essentially discounted by reductio ad absurdum.
I believe the problem is with the premise that an object’s mass increases as it approaches the speed of light. An object’s MOMENTUM increases as $$p = \frac{m\,v}{\sqrt{1 - \left(\frac{v}{c}\right)^{2}}}$$; I feel that has been pretty well reasoned out. But the claim that an object’s actual mass has increased (and hence its capacity to pull other objects toward it by gravity) is NOT well supported by any reasoning I’m familiar with. I’m pretty sure I’ve seen this point made explicitly in some texts, but at 43, I’m well into my fifth decade of memory failure.”
I (a complete physics idiot) actually posted a question that made the assumption that objects gained mass as they approached the speed of light. I was soon set right. Thank you for your explication, hand-wavey or not, of the Planck length, because I was a victim of the (erroneous) Planck-length = pixel size fiction as well.
A “classical” 4D Planck volume of one Planck length in the spatial directions and one Planck time in the time direction would be crossed by light diagonally, as light moves one Planck length per Planck time. A transformed Planck volume, with a shorter distance but a longer time, loses this property.
“I’m not a fan of this theory, but there is an idea that spacetime is divided into pre-existing irregular grains of 1 Planck volume. This is called spacetime “glass” quantization, as opposed to “crystal” quantization should the grains be regular. The glassy properties of the quantization help it escape the usual problems with Lorentz invariance.”
Thank you for that insight. I would indeed think that if one wishes to regard spacetime as in some sense “coarse-grained” at the Planck scale, one must use a version of coarse-graining that is Lorentz invariant, meaning that the grains are defined by their volume but not their shape. This is hardly unprecedented– the same thing is done to “coarse grain” phase space for statistical mechanical calculations, since there is no need to use a cubic tiling of “equal lengths” of distance and momentum when deciding how to count states. I don’t mean to be unresponsive to the comment
“To make it worse, if you transform pixels, the relation between (dilated) Planck time and (contracted in one dimension) distance does not hold any more.” (mfb)
I simply didn’t understand it. It was my impression that volumes in spacetime would be Lorentz invariant, but perhaps there is something I am missing.
“On the topic of the “Planck pixel,” perhaps this overall idea is being rejected too sweepingly. Presumably, the “pixels” would be in 4-D spacetime, not 3-D space, and volumes in 4-D spacetime are invariant, are they not? So I would imagine that if someone wanted to formulate a theory that said spacetime itself was parceled into “Planck pixels”, they would play the usual game that in different reference frames, meaning along different world lines, the “pixels” would distort, but they’d still tile the spacetime in the same way. Yes that means objects don’t “move one Planck length every Planck time”, but that’s obvious– any such object would be perceived as moving at the speed of light. Instead, a “Planck pixel” idea could say that spacetime is discretely tiled, in the sense that world lines cannot be defined with finer precision than that– similar to the way quantum mechanics “tiles” phase space in statistical mechanics.
Also, if we think of the “Planck pixels” as being in spacetime, their 1-D version also takes on some kind of meaning. If we choose c=1, it is often said that all objects seem to “move through” spacetime at a rate of 1 unit of spacetime displacement per unit of coordinate time. In that sense, an object could appear to move one Planck length each Planck time, and not seem to move at the speed of light, if the “Planck length” was interpreted broadly as also existing in the time dimension. It seems to me that could all be formulated in an invariant way, though its usefulness and/or ramifications I could not say. Most likely it would be some kind of “ultraviolet cutoff” to doing path integrals in spacetime, or some such thing.”
I’m not a fan of this theory, but there is an idea that spacetime is divided into pre-existing irregular grains of 1 Planck volume. This is called spacetime “glass” quantization, as opposed to “crystal” quantization should the grains be regular. The glassy properties of the quantization help it escape the usual problems with Lorentz invariance.
“To see how the calculation works, go here:
http://math.ucr.edu/home/baez/lengths.html#planck_length”
“http://math.ucr.edu/home/baez/lengths.html#planck_length
Fixed that for you… :oldsmile:”
Aww, gee… thanks for the help… :oldeyes:
“Hint: compare the user name with the url.
Sorry, could not resist.”
Hahahaha! Observation OP!
“That’s not how I interpreted that link. It seems to me what the author is saying […]”
Hint: compare the user name with the url.
Sorry, could not resist.
“…it takes approximately enough energy to create a black hole whose Schwarzschild radius is… the Planck length!…”
That’s not how I interpreted that link. It seems to me what the author is saying is that if you try to measure a black hole at the Planck scale to within the accuracy of a radius, then there is enough uncertainty in the momentum that there *could exist* another black hole due to the corresponding energy uncertainty of the system (differing by a factor of v/2, classically).
“Nice post! Another way to think about the Planck length is that if you try to measure the position of an object to within an accuracy of the Planck length, it takes approximately enough energy to create a black hole whose Schwarzschild radius is… the Planck length! So, one can argue that it’s impossible to measure distances shorter than this – though the argument is a bit hand-wavy.
To see how the calculation works, go here:
http://math.ucr.edu/home/baez/lengths.html#planck_length”
Hand-wavy is the name of the game here! Thanks for the link, and for the advice.
“To see how the calculation works, go here:”
http://math.ucr.edu/home/baez/lengths.html#planck_length
Fixed that for you… :oldsmile:
BTW, I’ve been there many, many times… :oldwink:
“Eisberg?”
Indeed it is.
“@JDoolin: That neutrino would need an incredible energy. Neglecting factors of 2, we have ##m_\nu m_P = 3\,\mathrm{eV} \cdot E_\nu##, where the lightest neutrino mass is probably of the order of 1 meV.”
I’m not sure if I’m doing this right, but I just googled “energy of a neutrino collision” and found mention of an apparent 5000-10,000 TeV neutrino.
at http://www.pbs.org/wgbh/nova/next/physics/fastest-neutrino-ever-detected-has-1000x-the-energy-of-the-lhc/
So with a bit of estimation, assuming (1) the rest-mass energy of a neutrino is about equal to 1 meV, and (2) the oncoming blueshift is approximately equal to the Lorentz contraction factor here, we get (3) ##\gamma \approx \frac{10 \times 10^{12}}{1 \times 10^{-3}} = 10^{16}##.
Yes, if we started with visible light, at around ##10^{-7}## meters, it would be blueshifted to a wavelength around ##10^{-23}## meters; a trillion times longer than the Planck length.
To do what I imagined and have a neutrino observer see my ordinary light-bulb photon with a wavelength at the Planck length, it would have to be a yotta-eV neutrino. So yes, as you say, “an incredible energy.”
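To put rough numbers on that (a sketch of my own arithmetic; the 1 meV rest mass and treating the blueshift factor as ##\gamma## are the same assumptions the commenter makes above):

```python
l_p = 1.6e-35   # Planck length, m
lam = 5e-7      # ordinary visible light, m
m_nu = 1e-3     # assumed neutrino rest-mass energy, eV

gamma = lam / l_p    # boost needed to contract the wavelength to l_p
E_nu = gamma * m_nu  # corresponding neutrino energy, eV

print(gamma)  # ~3e28
print(E_nu)   # ~3e25 eV, i.e. tens of yotta-eV
```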
Eisberg?
I’m not going to argue about the last 30 years. I think the book was ’56 or thereabouts: Fundamentals of Modern Physics, by a German author. I’ll try to dig it up here sometime soon.
“Mass increasing is definitely included in some texts, so you’re not losing that memory just yet! My first text that I read on SR had a thought experiment with 2 bouncing balls and 2 observers, and used it to demonstrate relativistic mass.”
Try to find any publication of the last 30 years using that concept.
“and volumes in 4-D spacetime are invariant, are they not?”
You would still get different pixels in each frame. To make it worse, if you transform pixels, the relation between (dilated) Planck time and (contracted in one dimension) distance does not hold any more.
On the topic of the “Planck pixel,” perhaps this overall idea is being rejected too sweepingly. Presumably, the “pixels” would be in 4-D spacetime, not 3-D space, and volumes in 4-D spacetime are invariant, are they not? So I would imagine that if someone wanted to formulate a theory that said spacetime itself was parceled into “Planck pixels”, they would play the usual game that in different reference frames, meaning along different world lines, the “pixels” would distort, but they’d still tile the spacetime in the same way. Yes that means objects don’t “move one Planck length every Planck time”, but that’s obvious– any such object would be perceived as moving at the speed of light. Instead, a “Planck pixel” idea could say that spacetime is discretely tiled, in the sense that world lines cannot be defined with finer precision than that– similar to the way quantum mechanics “tiles” phase space in statistical mechanics.
Also, if we think of the “Planck pixels” as being in spacetime, their 1-D version also takes on some kind of meaning. If we choose c=1, it is often said that all objects seem to “move through” spacetime at a rate of 1 unit of spacetime displacement per unit of coordinate time. In that sense, an object could appear to move one Planck length each Planck time, and not seem to move at the speed of light, if the “Planck length” was interpreted broadly as also existing in the time dimension. It seems to me that could all be formulated in an invariant way, though its usefulness and/or ramifications I could not say. Most likely it would be some kind of “ultraviolet cutoff” to doing path integrals in spacetime, or some such thing.
Mass increasing is definitely included in some texts, so you’re not losing that memory just yet! My first text that I read on SR had a thought experiment with 2 bouncing balls and 2 observers, and used it to demonstrate relativistic mass.
The use of relativistic mass is purely historic (and in bad popular science).
General relativity predicts that objects can collapse under certain conditions, usually described as sufficient energy density in their rest-frame. GR does not predict the collapse of something just because it moves at high speed, independent of the reference frame chosen to describe the system.
@JDoolin: That neutrino would need an incredible energy. Neglecting factors of 2, we have ##m_\nu m_P = 3\,\mathrm{eV} \cdot E_\nu##, where the lightest neutrino mass is probably of the order of 1 meV.
“I can’t remember what it’s called, even enough to search it via Google, but there is actually a solution to this problem. The example provided on the wiki page that I remember used larger masses, as opposed to photons. Basically it says that as you approach the speed of light and pass a large mass, it can’t turn into a black hole due to your reference frame. I really wish I could remember what it was called.
If I remember correctly (I very well could not), it has something to do with the geodesics of spacetime warping under the energy tensor from the relative speed of you and the mass you’re observing. Now, this doesn’t necessarily apply when we’re talking photons. Darn my memory, and I’m only 23! I guess it’s all downhill from here =/”
I would probably go the other way… Obviously if your theory implies that something is turning into a black hole according to one observer, but is not turning into a black hole according to another observer, then your theory has been essentially discounted by reductio ad absurdum.
I believe the problem is with the premise that an object’s mass increases as it approaches the speed of light. An object’s MOMENTUM increases as $$p = \frac{m\,v}{\sqrt{1 - \left(\frac{v}{c}\right)^{2}}}$$; I feel that has been pretty well reasoned out. But the claim that an object’s actual mass has increased (and hence its capacity to pull other objects toward it by gravity) is NOT well supported by any reasoning I’m familiar with. I’m pretty sure I’ve seen this point made explicitly in some texts, but at 43, I’m well into my fifth decade of memory failure.
Have you considered the idea of extremely high blueshift reference frames?
I have a common, ordinary lightbulb producing wavelengths of light between 400 and 700 nanometers. However, from the point of view of a passing neutrino, with its velocity negligibly below the speed of light, that same lightbulb could be producing light with wavelengths less than the Planck length.
So, perhaps the light from my lightbulb is producing a black hole in some frames of reference, but producing ordinary visible light in other frames of reference?
I’m highlighting the issue with a rather extreme case: the observer on the neutrino. Some people may argue that neutrino observers are not valid, because they have no ears, no eyes, and no souls, and that their reference frame doesn’t exist. But consider if we took light with a wavelength JUST OVER the Planck length, and had one observer fly away from it while another flew toward it. The observer flying toward it would find that the wavelength of the photon was smaller than the Schwarzschild radius of the photon’s energy, but the observer flying away would find that the wavelength of the same photon was larger than the Schwarzschild radius of the photon’s energy.
Well, I guess my point is that radiant energy, ##E = hf = hc/\lambda##, is simply not the same as mass energy, ##E = mc^2##.
The mass has its own reference frame independent of everything else in the universe–mass is an intrinsic property. Also, being a black hole, or NOT being a black hole is an intrinsic feature of matter. The light only has a reference frame in reference to its source and its observer, and frequency and wavelength of light are extrinsic features–observer dependent… Relatively moving observers are going to measure different wavelengths of the same light, so if this idea is accurate, they would also disagree on whether the light spontaneously collapsed into a black hole.
“There is a misconception that the universe is fundamentally divided into Planck-sized pixels, that nothing can be smaller than the Planck length, that things move through space by progressing one Planck length every Planck time. Judging by the ultimate source, a cursory search of reddit questions (http://i.imgur.com/92cqoCk.png), the misconception is fairly common.”
This misconception turns up a lot here on PF, too:
https://www.google.com/?gws_rd=ssl#q=%22planck+length%22+site:physicsforums.com
I’m glad to have a good article now to point people to, when it comes up again. Thanks! :biggrin:
Nice post! Another way to think about the Planck length is that if you try to measure the position of an object to within an accuracy of the Planck length, it takes approximately enough energy to create a black hole whose Schwarzschild radius is… the Planck length! So, one can argue that it’s impossible to measure distances shorter than this – though the argument is a bit hand-wavy.
To see how the calculation works, go here:
http://math.ucr.edu/home/baez/lengths.html#planck_length
I can’t remember what it’s called, even enough to search it via Google, but there is actually a solution to this problem. The example provided on the wiki page that I remember used larger masses, as opposed to photons. Basically it says that as you approach the speed of light and pass a large mass, it can’t turn into a black hole due to your reference frame. I really wish I could remember what it was called.
If I remember correctly (I very well could not), it has something to do with the geodesics of spacetime warping under the energy tensor from the relative speed of you and the mass you’re observing. Now, this doesn’t necessarily apply when we’re talking photons. Darn my memory, and I’m only 23! I guess it’s all downhill from here =/
Ah, upon rereading the article, I see that you really pretty much hit on my issues in my last post.
Nice work!