Role of in-house concept analysis done by the QG scientists themselves

In summary: We all know that "space" is more than a bunch of points and lines, and I think we need to be careful not to overgeneralize from our specific mathematical model of spacetime. Science is what scientists do; philosophy is what philosophers do. So it is probably a bad idea to call this regular in-house conceptual analysis "philosophy". It is part of the scientists' own job, not somebody else's, so it is confusing to call it philosophy. I may have inadvertently caused some confusion earlier--sorry about that.
  • #71
marcus said:
Anyway to build on your mention of discreteness, in case others might read this thread: I think everyone here realizes that Lqg does not depict space as "made of little grains".

Actually it DOES depict space as "made of little grains", but ONLY when there is a measurement, because "little grains" are the generic eigenstates. The wave function is a complex superposition of all possible configurations of "little grains".

So, "little grains" is the particle part of the wave/particle duality of quantum mechanics.
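To make the "grains as eigenstates" statement concrete: in LQG the area operator has a discrete spectrum, with each spin-network link puncturing a surface contributing 8*pi*gamma*l_P^2*sqrt(j(j+1)). A minimal sketch (the value of the Immirzi parameter gamma below is just a conventional choice, not a settled number):

```python
# Sketch of the LQG area spectrum: the sense in which the "little grains"
# are eigenstates of a geometric operator.
# A_j = 8 * pi * gamma * l_P^2 * sqrt(j(j+1)), gamma = Immirzi parameter.
import math

def area_eigenvalue(j, gamma=0.2375, planck_area=1.0):
    """Area contribution of a single spin-network link with spin j
    (j = 1/2, 1, 3/2, ...), in units of the Planck area.
    gamma = 0.2375 is one value quoted in the literature; treat it as
    an illustrative choice, not a prediction of this sketch."""
    return 8 * math.pi * gamma * planck_area * math.sqrt(j * (j + 1))

# The spectrum is discrete: a surface punctured by links with spins
# j_1..j_n carries the SUM of these eigenvalues, never a continuum.
spectrum = [area_eigenvalue(n / 2) for n in range(1, 5)]
```

A measurement of area can only return sums of these discrete values; between measurements the state is a superposition of such configurations, which is the point being made above.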
 
  • #72
marcus said:
One big obstacle to understanding I've noticed is that many people have not gotten used to the 1986 Ashtekar introduction of connection rather than metric representation of geometry. So they don't see spinnetworks as a natural construct. Connection means parallel transport.

This is precisely the case (at least for me). I can't envision geometry without a metric. For me the two words are synonymous. I can understand how one description might be preferred over the other. But I can't see how geometry can have meaning without a metric being implied somewhere. If you were able to explain that to me, the next step would be to show me how these spin networks have anything to do with these Ashtekar variables.
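A toy illustration of "connection means parallel transport" (this is a generic SU(2) holonomy sketch, not Ashtekar's actual phase-space variables): the primitive datum is the group element that transports a spinor along a curve, and no metric appears anywhere.

```python
# Illustrative sketch: in a connection formulation the basic object is the
# holonomy, the SU(2) matrix parallel-transporting a spinor along a curve.
# Geometry is encoded in these group elements, with no metric in sight.
import numpy as np

def su2_element(theta, axis):
    """exp(-i * theta/2 * axis.sigma): rotation by theta about a unit axis."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    n_dot_sigma = axis[0] * sx + axis[1] * sy + axis[2] * sz
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n_dot_sigma

def holonomy(connection_samples):
    """Path-ordered product of group elements along a discretized curve."""
    h = np.eye(2, dtype=complex)
    for theta, axis in connection_samples:
        h = su2_element(theta, axis) @ h
    return h

# A spin network assigns such holonomies to its edges; gauge-invariant
# data (e.g. traces of holonomies around loops) replace metric data.
path = [(0.3, (0, 0, 1)), (0.5, (1, 0, 0))]
h = holonomy(path)
```

The holonomy stays in SU(2) (unitary, determinant 1) no matter how the path is refined, which is why spin networks, whose edges carry exactly this kind of data, arise naturally once the connection is taken as primary.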
 
  • #73
inflector said:
Which correlates very nicely with Rovelli's logical transition from "Heisenberg uncertainty in measurement at the small scale" as the underlying idea to quantizing spacetime as the specific mechanism. It seems to me that LQG, through the formalism of quantizing spacetime is building into spacetime itself the idea that measurement involves uncertainty.

Yes! Nice clarification.
 
  • #74
friend said:
This is precisely the case (at least for me). I can't envision geometry without a metric.

This is because it depends on the definition of geometry. If you think of geometry as a topological space, sometimes you won't even have the possibility of having a metric space.

Here's an example:

http://en.wikipedia.org/wiki/E8_manifold
 
  • #75
MTd2 said:
This is because it depends on the definition of geometry. If you think of geometry as a topological space, sometimes you won't even have the possibility of having a metric space.

As I'm understanding it so far, if you redefine GR in terms of Ashtekar variables and then define spin networks on that, you don't lose the underlying math of GR. I thought the whole point was to continue Einstein's work.
 
  • #76
The classical solutions of GR involve metrics that span the whole space. The point is, the Einstein equation is a differential equation, so it is about local differentiability. So geometry ends up being the union of differential patches of geometry. In 4 dimensions this has crazy consequences, such as infinitely many non-diffeomorphic metrics sharing the same topology, or no metric at all despite the existence of a well-defined topology.
 
  • #77
friend said:
As I'm understanding it so far, if you redefine GR in terms of Ashtekar variables and then define spin networks on that, you don't lose the underlying math of GR. I thought the whole point was to continue Einstein's work.

I don't understand what you mean by "I thought the whole point.." Is there any doubt about this as a continuation?

Ashtekar variables are a classical formulation of GR. There are a half dozen different reformulations of GR. They are a continuation of Einstein's work because they are mathematically interesting different ways to look at GR. Alternative reformulation is part of science and can contribute to progress.

Some reformulations are Palatini, Holst, Arnowitt-Deser-Misner (ADM), and constrained BF theory; I won't try to be complete. Often the reformulations do not involve a metric: a metric does not appear in the mathematics.

So the point is to continue developing GR, and the Ashtekar variables DO that. If you thought it was a continuation, you were right.

But they are still classical. Not quantum yet. They just happen to afford a convenient opportunity to move to quantum theory.

There are other routes as well. (Holst, BF-theory, Regge-like?) What we are now seeing is a convergence of quantum theories of geometry that have gone up the mountain by different routes.

I'm not sure you can say Ashtekar variables are the ONLY way to go. But they played an important historical role. For one thing, the Immirzi parameter came in that way (as a modification of Ashtekar's original variables.)

If I'm off on any details I'd welcome correction. Some readers are surely more knowledgeable about some of the details here. Also I haven't checked the 1986 date here, it is just what comes to mind.
 
Last edited:
  • #78
I found a much more introductory paper, which both gives a conceptual overview and a simple sketch of the mathematical elements of Lqg as it was in 1999. Not a bad way to begin. You get the more philosophical reflective side in conjunction with the math as it was at an earlier stage of development.

http://arxiv.org/abs/hep-th/9910131
The century of the incomplete revolution: searching for general relativistic quantum field theory
Carlo Rovelli
(Submitted on 17 Oct 1999)
In fundamental physics, this has been the century of quantum mechanics and general relativity. It has also been the century of the long search for a conceptual framework capable of embracing the astonishing features of the world that have been revealed by these two ``first pieces of a conceptual revolution''. I discuss the general requirements on the mathematics and some specific developments towards the construction of such a framework. Examples of covariant constructions of (simple) generally relativistic quantum field theories have been obtained as topological quantum field theories, in nonperturbative zero-dimensional string theory and its higher dimensional generalizations, and as spin foam models. A canonical construction of a general relativistic quantum field theory is provided by loop quantum gravity. Remarkably, all these diverse approaches have turned out to be related, suggesting an intriguing general picture of general relativistic quantum physics.
Comments: To appear in the Journal of Mathematical Physics 2000 Special Issue
 
  • #79
The first two paragraphs of that 1999 paper just happen to make the main points being discussed in this thread.
==quote from the 1999 "search for general relativistic QFT" paper==

In fundamental physics, the first part of the twentieth century has been characterized by two important steps towards a major conceptual revolution: quantum mechanics and general relativity. Each of these two theories has profoundly modified some key part of our understanding of the physical world. Quantum mechanics has changed what we mean by matter and by causality and general relativity has changed what we mean by “where” and “when”. ... framework, capable of replacing ... Lacking a better expression, we can loosely denote a theoretical framework capable of doing so as a “background independent theory”, or, more accurately, “general relativistic quantum field theory”.

The mathematics needed to construct such a theory must depart from the one employed in general relativity – differentiable manifolds and Riemannian geometry– to describe classical spacetime, as well as from the one employed in conventional quantum field theory –algebras of local field operators, Fock spaces, Gaussian measures ...– to describe quantum fields. Indeed, the first is incapable of accounting for the quantum features of spacetime; the second is incapable of dealing with the absence of a fixed background spatiotemporal structure. The new mathematics should be capable to describe the quantum aspects of the geometry of spacetime. For instance, it should be able to describe physical phenomena such as the quantum superposition of two distinct spacetime geometries, and it should provide us with a physical understanding of quantum spacetime at the Planck scale and of the “foamy” structure we strongly suspect it to have.

Here, I wish to emphasize that what we have learned in this century on the physical world –with quantum mechanics and general relativity– represents a rich body of knowledge which strongly constraints the form of the general theory we are searching. If we disregard one or the other of these constraints for too long, we just delay the confrontation with the hard problems...
==enquote==
 
  • #80
This is for the relative beginners trying to follow this thread who get bogged down (like me) in the reference to "(active) diffeomorphism invariance" in the paper that marcus just referenced in post #78 because you didn't have a clear understanding of the meaning of active versus passive diffeomorphism invariance as used by Rovelli.

I found the paper by Gaul and Rovelli, http://arXiv.org/abs/gr-qc/9910079v2 , under Section 4.1 entitled: "Passive and Active Diffeomorphism Invariance," to be quite easy to comprehend and a very clear description of the difference between active and passive diffeomorphism invariance. It made reading the paper from post #78 much easier.

Since active diffeomorphism invariance is one of the explicit lessons of GR that Rovelli claims, it seemed useful to have a very precise definition of what that means. Section 4.1 provides just one such definition.
 
Last edited by a moderator:
  • #81
marcus said:
I don't understand what you mean by "I thought the whole point.." Is there any doubt about this as a continuation?

Yes, I know I have to catch up on some of the reading. I hope I'm presenting relevant questions to keep in mind as I read. I think I may start with John Baez's book, "Gauge Fields, Knots and Gravity", starting from chapter 3. It seems to include everything to get me to Ashtekar variables.

But perhaps you already know the answer to the following question: It seems that putting the Einstein-Hilbert action in the path integral was the first attempt to quantize gravity. It proved non-renormalizable but still made confirmed predictions in the low energy limit. Then there was a change to Ashtekar variables, which seemed to provide a better way to quantize gravity. My question is: can the latter version be reduced to the former version? If so, then the first version IS renormalizable. If not, then how can we be sure we are even quantizing in the correct way? Thanks.
 
  • #82
ConradDJ said:
The idea that leads to background independence is just that there is no given, absolute spacetime background. Another way of saying it is that spacetime measurements have no meaning in themselves, but only in relation to other spacetime measurements.

Yes! But before we rush to conclusions, let's go slower:

In special as well as general relativity, "spacetimes" are associated to reference frames and observers. The spacetime points simply index events relative to this observer.

So to rephrase this slightly, the idea that leads to BI is that there are no preferred observers.

Now, does this imply that observers or spacetime are devoid of physical basis and that the laws of physics must be observer invariant? And that somehow the laws of physics must be a statement of the transformations between spacetimes that allows for an invariant formulation?

IMO No. It is however a very plausible possibility. It's also the possibility that comes naturally with structural realism, but it's not the only possibility.

The alternative to EQUIVALENCE of observers is DEMOCRACY of observers.

Note that the latter is fully consistent with the "no preferred observer" constructing principle. The difference is that equivalence of observers is to a higher degree a realist construct. In the democracy-of-observers view, the equivalence of observers corresponds to a special case where ALL observers are in perfect consistency. A possible equilibrium point.

So I think the constructing principle of GR does NOT imply by necessity that the observers are in perfect consistency. It is merely a possibility. But it's admittedly the single most probable possibility! I think of it as analogous to an "on shell" possibility, where the off-shell possibilities are important.

Alternatively one can say that the constructing principle of relativity is that the laws of physics must be observer invariant. However, this is a structural REALIST version that may or may not be suitable for merging with QM.

So I think a more neutral version is not "background independence" but rather "background democracy". And the difference is what I tried to describe.

Rovelli, as I see it, tries to enforce background independence by hard constraints rather than letting it be the result of a democratic process. The end result at equilibrium may be very similar, but the understanding is quite different.

/Fredrik
 
  • #83
friend said:
Yes, I know I have to catch up on some of the reading. I hope I'm presenting relevant questions to keep in mind as I read. I think I may start with John Baez's book, "Gauge Fields, Knots and Gravity", starting from chapter 3. It seems to include everything to get me to Ashtekar variables.

But perhaps you already know the answer to the following question: It seems that putting the Einstein-Hilbert action in the path integral was the first attempt to quantize gravity. It proved non-renormalizable but still made confirmed predictions in the low energy limit. Then there was a change to Ashtekar variables, which seemed to provide a better way to quantize gravity. My question is: can the latter version be reduced to the former version? If so, then the first version IS renormalizable. If not, then how can we be sure we are even quantizing in the correct way? Thanks.

Although different classes of action may have the same classical equations of motion, they are not necessarily equivalent when treated as quantum theories. Within LQG itself, the Immirzi parameter is such an example. In AS, this means that even if, e.g., there is no UV fixed point in the generalizations of the Hilbert action, we do not know that there is also no UV fixed point in the generalizations of the Holst action (by generalization I mean including all terms compatible with the symmetry of the action).
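atyy's point (same classical equations, inequivalent quantum theories) can be made concrete with the standard LQG example. A hedged sketch, in one common convention (signs and index placements vary by author): the Holst action supplements the Palatini form with a term weighted by 1/gamma,

[tex]
S[e,\omega] \;=\; \frac{1}{2\kappa}\int d^4x \; e\, e^{\mu}_{I} e^{\nu}_{J} \left( F^{IJ}{}_{\mu\nu} \;-\; \frac{1}{2\gamma}\,\epsilon^{IJ}{}_{KL}\, F^{KL}{}_{\mu\nu} \right)
[/tex]

The extra term does not change the classical equations of motion (it drops out on the torsion-free solutions), yet the Immirzi parameter gamma survives quantization and shows up in physical spectra, e.g. the area eigenvalues scale as gamma*sqrt(j(j+1)). So two actions that are classically the same theory are quantum mechanically different.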
 
  • #84
atyy said:
Although different classes of action may have the same classical equations of motion, they are not necessarily equivalent when treated as quantum theories. Within LQG itself, the Immirzi parameter is such an example. In AS, this means that even if, e.g., there is no UV fixed point in the generalizations of the Hilbert action, we do not know that there is also no UV fixed point in the generalizations of the Holst action (by generalization I mean including all terms compatible with the symmetry of the action).

Yes, I suppose this is what happens with the bottom up approach, where you try to quantize classical equations of motion. But the question still remains: How do we know we have the right quantization procedure?
 
  • #85
friend said:
Yes, I suppose this is what happens with the bottom up approach, where you try to quantize classical equations of motion. But the question still remains: How do we know we have the right quantization procedure?

Right in the sense of UV complete can be determined purely mathematically.

Right in the sense of describing reality is determined by observation.
 
  • #86
atyy said:
Right in the sense of UV complete can be determined purely mathematically.

Right in the sense of describing reality is determined by observation.

So we're waiting for experiment to confirm that we have the right action in the path integral or the right conjugate variables in the commutator?
 
  • #87
How do we know we have the right quantization procedure?

atyy said:
Right in the sense of UV complete can be determined purely mathematically.

Right in the sense of describing reality is determined by observation.

Just a comment, Atyy. You have actually answered the question how do we decide we have the right quantum theory. (not "quantization procedure".)

AFAIK there is no god-given correct "quantization procedure" and a quantum theory does not have to be the result of "quantizing" a classical theory. It should be thought of as an optional heuristic guide. As a practical matter one can choose to follow procedures which have often worked in the past.

One could, I imagine, come up with a quantum theory not based on any prior classical theory and not resulting from any "procedure" - one that described some phenomenon not yet studied classically or otherwise. And then one would check the correctness of that quantum theory exactly as you said in your post - mathematically and by observation.
 
Last edited:
  • #88
friend said:
So we're waiting for experiment to confirm that we have the right action in the path integral...

Right. Always. Right is as right does. There is no other way according to the scientific method. Right? :biggrin:

But Friend, wouldn't you agree that theories (and mathematical models in particular) are never proven correct, only provisionally trusted as long as they pass empirical tests?

In the case of LQG some tests of the theory have been proposed recently by early universe phenomenologists, based on some possible observations of ancient light (CMB polarization). LQG cosmology rests ultimately on spinfoam dynamics or, if you want to think of it that way, on the Holst action. Spinfoam is a type of sum-over-histories analogous to path integral, but it is probably simpler and more accurate to consider that one would be testing the spinfoam model directly (rather than the Holst action).
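For readers wanting the schematic form of that sum-over-histories: a spinfoam model (conventions differ between specific models, so this is only the generic shape) assigns spins j_f to the faces and intertwiners i_e to the edges of a 2-complex, and the transition amplitude is a sum over these labelings:

[tex]
Z \;=\; \sum_{j_f,\; i_e} \;\prod_{f} A_f(j_f) \;\prod_{v} A_v(j_f, i_e)
[/tex]

Each labeled 2-complex plays the role of one "history" of quantum geometry, in analogy with one path in a Feynman path integral; the face and vertex amplitudes A_f and A_v are what distinguish one spinfoam model from another.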

So that is a case in point: any pattern seen in the CMB which indicates that the early universe did not result from the kind of bounce predicted by the theory would tend to cast serious doubt, and probably falsify the theory.
 
Last edited:
  • #89
marcus said:
Right. Always. Right is as as right does. There is no other way according to the scientific method. Right? :biggrin:

But Friend, wouldn't you agree that theories (and mathematical models in particular) are never proven correct, only provisionally trusted as long as they pass empirical tests.

The whole point of theorizing and model building is to predict or perhaps postdict nature. This means it must match up with experimental results and observations. If it fails to make correct predictions, then you don't have the right theory. I take issue, however, with the idea that we can only do this by producing mathematics to fit the data. I think there are means other than curve fitting the data. (I'm not saying that this is what you said or implied.) After all, what experimental proof did Einstein have that the speed of light is constant for all observers? There is mathematical consistency that serves as a guide. But when trying to understand the conceptual basis of someone's efforts, it helps to put them in terms of concepts that are already understood. That's why I ask about how spin networks are related to Ashtekar variables, and what quantization procedure they are using, etc.
 
  • #90
friend said:
The whole point of theorizing and model building is to predict or perhaps postdict nature. This means it must match up with experimental results and observations. If it fails to make correct predictions, then you don't have the right theory. I take issue, however, with the idea that we can only do this by producing mathematics to fit the data. I think there are means other than curve fitting the data. (I'm not saying that this is what you said or implied.) ...

That sounds pretty reasonable, especially if you get away from the idea of there being one rigid correct way to arrive at a quantum theory---one correct "quantization procedure".

I would agree entirely, with the proviso that theorists arrive at theories by various paths. Basic conceptual thinking---almost at the level of philosophical principles---can play a major role. So can working by analogy with other quantum theories!

And certainly classical theories. However you get there, the quantum theory has to have the right classical limit.

You are asking about LQG and in that case there are a handful of different convergent strands. Rovelli mentions them in the historical section of one of those three papers, I forget which. I think April 1780.

Quantizing the Ashtekar variables is only one of several heuristic paths that have led to the present theory. What he describes (in about a page or page and a half IIRC) is how various approaches have come together.

In another paper, I think October 1939, he brings out the analogies with QED and QCD. Clearly analogies with other quantum field theories have also helped guide the program to its present stage.

If you want to understand you do need to read some stuff. Not a lot, just find the right page or pages. Maybe I could give page references, instead of having to copy-paste stuff here.
Have to go for now. Back later.
 
  • #91
So what about the core principles of QM?

inflector said:
In this second article note how Rovelli presents the lesson of QM as "any dynamical entity is subject to Heisenberg's uncertainty at small scale", which is different from the "all dynamical fields are quantized" of his earlier Quantum Gravity book's introductory chapter.

I personally think this moves too fast to see the steps.

I'd like to propose that the core principle is the content of Bohr's mantra, which essentially says that the laws of physics don't encode what nature is or does; they encode what we can say about nature and how it behaves. This summarizes almost the essence of science, namely that we infer/abduct from experiment (OBSERVATION) what nature SEEMS to be and how it SEEMS to behave.

Thus we arrive at an effective understanding in a good scientific spirit, and all we have is our rational scientific expectations. There just IS no such thing as "real reality". It serves no purpose in the scientific process.

But as with GR, there seem even here to be multiple ways to understand and extrapolate this.

I read it in a more explicit way, so that the laws of physics encode the observer's expectations of nature as a function of their state of knowledge.

It seems like Rovelli's conclusion is that since he considers the equivalence class of observers as the physical core, he thinks that QM says that the laws of "quantized" physics encode expectations of equivalence classes of observations. In this view, he doesn't consider the quantum laws themselves subject to Bohr's mantra. It apparently enters as a realist element.

The alternative, quite similar to GR, is to think that combining this with "observer democracy" rather suggests that physical law itself - including "quantum laws" - is intrinsically observer dependent, and that instead the problem becomes how to understand how the effective objectivity that we de facto see results from a democratic process (which of course would be purely physical in its nature).

/Fredrik
 
  • #92
marcus said:
If we would start where you suggest (with e.g. the idea of "quanta of space")

(snip)

Area and volume are quantized as part of how nature responds to measurement. It is like what Niels Bohr said. "Physics is not about what Nature IS, but rather what we can SAY about Nature." So it is about information---initial and final information about an experiment, transition amplitudes. Or so I think.

Returning to the idea of quantization itself...

It seems clear to me that taking GR and quantizing it is a strategy that makes a decision.

We know that measurement is quantized through large quantities of actual experiment. But it seems to me that this quantization could come from two places:

1) That geometry itself is quantized

2) That there is an interaction in the process of measurement between the device doing the measuring and the object being measured that results in a quantization

All of the quantize-GR approaches seem to be deciding that 1) is more likely than 2). Is there some reason? Has this issue been specifically addressed?

For example, let's go back to the first concrete quantum weirdness experiment (at least that I know of), Stern-Gerlach. In that experiment, some of the silver ions were diverted up and some were diverted down and the classically expected smooth distribution did not occur. But it may be that the process of measurement is what does the quantization, right? Depending on your favorite interpretation of QM you might look at this in various ways but it comes down to the process of measuring resulting in two distinct values, up and down.

We also know through various experiments with sequential Stern-Gerlach setups (http://en.wikipedia.org/wiki/Stern–Gerlach_experiment#Sequential_experiments) and light polarization that the measuring apparatus also changes the state of the objects being tested, whether ions or photons. So clearly there is a significant interaction between the measurement device and the object being measured.

So what says that it is not the process of measuring that results in the quantization rather than that geometry itself is quantized? Strategies that quantize the geometry seem—to me anyway—to assume that it is not the measurement that causes the quantization. They seem to assume that the objects exist in a state that is probabilistically quantized because the geometry itself is probabilistically quantized.

On the other hand, experimental quantum theory itself seems agnostic on this issue. Some interpretations refer to collapses of the wave function during measurement, but quantum theory itself doesn't say why the collapsing happens, only that the measurements end up being quantized.

Am I missing something? Or is it fair to say that my points 1) and 2) above characterize two equally valid points of view, but that LQG and other quantize-GR theories assume 1) and NOT 2).
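A toy numerical sketch of option 2) above (purely illustrative, not an LQG computation): the preparation angle of a spin-1/2 state varies continuously, yet the record of measurements contains only two values. The discreteness lives entirely in the measurement outcomes.

```python
# Toy model: a continuously tunable spin-1/2 superposition measured along z.
# The preparation parameter theta is continuous; the outcomes are not.
import math
import random

def measure_z(theta, rng):
    """State cos(theta/2)|up> + sin(theta/2)|down>; Born rule gives
    P(up) = cos^2(theta/2). Returns +1 (up) or -1 (down)."""
    p_up = math.cos(theta / 2) ** 2
    return +1 if rng.random() < p_up else -1

rng = random.Random(0)          # fixed seed for reproducibility
theta = 1.1                     # any continuous preparation angle
outcomes = {measure_z(theta, rng) for _ in range(1000)}
# `outcomes` is a subset of {+1, -1}: the continuum of preparations
# collapses to a two-point spectrum only in the measurement record.
```

This does not settle whether geometry itself is quantized; it only shows that, as the post says, a discrete measurement record is compatible with a measurement-induced origin of the discreteness.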
 
Last edited by a moderator:
  • #93
inflector said:
So what says that it is not the process of measuring that results in the quantization rather than that geometry itself is quantized? Strategies that quantize the geometry seem—to me anyway—to assume that it is not the measurement that causes the quantization. They seem to assume that the objects exist in a state that is probabilistically quantized because the geometry itself is probabilistically quantized.

To add to this question, since it will take particles to measure the quantization of space, how would we know it is not just a further quantization of particles that we are measuring?

Also, aren't there quantum variables that can be measured in a continuous spectrum? For example, position and momentum of free particles can be measured anywhere, right? How would spacetime be in a bound state so that it has a discrete spectrum? What is the boundary of spacetime? Maybe it's the particles used to measure space that form the boundary of space, making it have a discrete spectrum.
 
Last edited:
  • #94
I for one think it's quite established that the issue of exactly what quantization means is NOT sufficiently addressed by Rovelli. He doesn't even try very hard.

Somehow that settles the issue. But he has also declared that this isn't his ambition.

To me the CORE essence of quantization, despite the name, has nothing to do with whether something ends up literally quantized (in chunks); it's more the constraint that is applied by requiring "observability" or "inferrability". I picture this achieved by requiring the predictions to be cast in terms of "expectations", computed from initial information that must originate from prior measurements.

So I think you are right to not ignore these things.
inflector said:
Strategies that quantize the geometry seem—to me anyway—to assume that it is not the measurement that causes the quantization. They seem to assume that the objects exist in a state that is probabilistically quantized because the geometry itself is probabilistically quantized.
I'm not sure I would agree with your two options. But I do agree that this is generally under-analysed.

Expectations of course exist in classical logic as well, and in classical probability. I think that what "causes" the quantum logic (or causes quantization, as you phrase it) is that if we take seriously how information is encoded by the observer, and consider the fitness of this code as an interacting one, then it seems a plausible conjecture that non-commutative structure in the code would have higher fitness, and that the evolutionary selection of these "non-commutative histories" is the origin of quantization.

I'd claim that an expectation (generally inductive, probabilistic) is the essence of QM.

Classically we don't have expectations; we have laws that, given initial conditions, DEDUCE what WILL happen, in an objective sense.

QM expectations are in the form of deductive probability: QM DEDUCES what the probabilities are that certain things will happen. Thus the expectations encode, in line with Bohr's mantra, not what WILL happen, but what we can SAY about what will happen; i.e. what we EXPECT to happen.

So I see construction of the expectations as a key construct. So, we need to construct geometrical notions in terms of expectations. Here Rovelli is possibly missing a point, because geometry is defined by relations between observations, and observers. So it may mean that geometry in the GR sense is not observable in the sense of QM, because it takes a collection of interacting observations to observe it. (The democratic view.)

Rovelli sometimes seems to assume that geometry exists, and doesn't even try to reconstruct it in terms of realistic measurements from the point of view of a single observer. So I basically question his choice of what's observable and what's not. Clearly, IMHO at least, observer invariants are not what should be subject to "quantization"; for me that is a likely abuse of QM.

/Fredrik
 
  • #95
friend said:
What is the boundary of spacetime? Maybe it's the particles used to measure space that form the boundary of space

Mathematically we can picture an empty space without boundaries, but physically, and in particular when constrained by the observability criterion that QM teaches us, the boundary of spacetime is obviously matter.

Anything else is just something that lacks physical basis IMO.

So I think you're on the right track. For me, I've always associated matter with the observer. Gravity without matter is like a quantum theory without observers. Also, in all experiments on "empty space", such as Casimir effects, the boundary is critical. You can't observe an empty space without inserting a boundary.

To take it a step further, I think there has to be a theory living on the boundary (or more exactly, encoded in the matter) that somehow interacts with and mirrors whatever is going on in the external bulk. It's a vague form of holography. But it's not necessarily exact; the holographic connection is more likely IMO to correspond to an equilibrium point. This is why maybe we need further understanding of this: because it may not be right to use an equilibrium condition as a constraint, we may be missing out on physics.

Edit: I insinuated this in another thread, but I think that this holographic connection (to be understood) can also be seen as dual to the problem of understanding the more general theory mapping. If you consider a generalisation of RG, where you consider the theory space to include a larger set of observers (not just the observational scales you arrive at by changing energy scale, but observers with completely different topology and complexity), then it seems that the holographic duality is like a connection between two different points in theory space which are communicating. It seems to me that the RG space itself must evolve, as this itself should also be subject to observability constraints.

So it seems like QM + GR must be something like an evolving theory space where at each "instant" the state of the space (I don't think it's a continuous manifold) defines "connections" between expectations... like the "quantum version" of a GR connection, which is not the same as a quantized connection. It seems to be something hairy where certainly the EH action itself is emergent, rather than put in by hand.

So the QG replacement of the GR state space must be something far more hairy, something like a theory space. And here, matter (or what corresponds to it) must be included from constructing principles. It wouldn't make sense otherwise.

/Fredrik
 
Last edited:
  • #96
IMO the first place to look would be the Dirac sea: electrons and positrons that disappear and then can magically reappear from absolutely nothing. I have issues with any theory that claims to be based on math but is really based on magic. What if the electrons and positrons did not annihilate, but are instead sitting at an immeasurable zero-spin state? In addition, when energy is added they would split apart and come back into our measurable existence. Bosons are then not required to satisfy the math requirement for a zero-spin state.

An electron's probability orbit around an atom could be nothing more than measurement error, similar to how moths and bugs appear as "flying rods" to cameras that are not able to capture or measure at a fast enough rate. It is a shame we do not have something smaller than an electron or positron with which to measure, but then it would be the same problem; the particle would just have a different name.

Time is nothing more than the rate at which processes run or complete. If matter uses electrons and positrons to measure time, how do electrons and positrons measure time? Could energy and matter not experience time at a different rate? How could we measure the time that energy experiences? We cannot.

Quantization is due to having to measure things with matter. All current sensors and measurement devices are composed of matter. We are limited in measuring to what is happening at the transmitter and what is happening at the receiver or sensor. We cannot take a CRT and label an electron as it leaves the gun, to prove that it was that electron that actually hit the screen and knocked an electron in the screen's matter to a higher energy state.
 
