#71 Peter Morgan (Gold Member)
zonde said: "So do the records of experimental data and setup details have mind independent existence?"

I'll mostly defer to vanhees71's account, comment #70. I think of triggers as a definite lossy data compression, but how the data is compressed is presumably decided by some committee, which hopefully has some minds on it. One could perhaps say that once an experiment has been constructed as an automated object, the data collection can be automated and be mostly independent of mind. Indeed, if human intervention is required to keep an experiment on track because of an error condition that lies outside the specified automation, one would expect that any data taken during the period when human intervention was required ought to be discarded (unless, perhaps, the human intervention can itself be formally modeled).
I'll paste in an account I wrote last night to a correspondent, which seems to be apropos:
One additional note, keying into vanhees71's account, is that triggers for large experiments are usually much more elaborate (and can slip into dangerously ad hoc territory) than just whether one electrical signal transitions from zero to non-zero.

Consider an Avalanche PhotoDiode, an APD: we set up an exotic state of matter so that the output signal is almost always near zero current, but occasionally it is some obviously non-zero value. Hardware is usually set up to record the time at which a transition from zero to non-zero current happens (we could instead record the current as a 14-bit output from an Analog-to-Digital Converter, an ADC, every nanosecond, say, but the record of current transition times is essentially a very compressed, very lossy record of the same information). Also of interest in experiments is the dead time, the time it takes the hardware to restore the current to near zero so that another transition can be noticed and the time recorded.
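To make the compression concrete, here is a minimal sketch (Python, not any real DAQ firmware) of how a threshold trigger with a dead time turns a densely sampled current trace into a short list of transition times. The threshold, the 1 ns sampling period, and the 50 ns dead time are illustrative assumptions, not values from any particular experiment:

```python
import numpy as np

def transition_times(current, dt=1e-9, threshold=0.5, dead_time=50e-9):
    """Compress a sampled current trace into the times at which the
    current goes from near zero to clearly non-zero.

    current   : sampled current values (think rescaled 14-bit ADC readings)
    dt        : sampling period (1 ns here, purely illustrative)
    threshold : level above which the signal counts as "non-zero"
    dead_time : interval after a recorded transition during which
                no further transition is registered
    """
    times = []
    last = -np.inf
    for i, value in enumerate(current):
        t = i * dt
        if value > threshold and (t - last) >= dead_time:
            times.append(t)
            last = t
    return np.asarray(times)

# Toy trace: mostly noise near zero, with a few avalanche-like excursions.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.02, size=100_000)
trace[[12_345, 12_360, 40_000, 77_777]] += 1.0   # the 12_360 spike falls inside the dead time

print(transition_times(trace))   # three recorded times survive out of 100,000 samples
```

The 100,000 samples collapse to three numbers, which is the sense in which the trigger is a very lossy compression.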
Suppose we have this device. When it's set up in a dark room, there is a low rate of current transitions, called the dark rate; when we enter the room and turn on a dim light, the rate of current transitions changes; when we move around the room, the rate of current transitions changes; when we change the intensity of the light or introduce new lights, the rate of current transitions changes. If we set up some barriers, again the rate of current transitions changes, and again when we move the barriers around. If we set up two or more APDs, we can calculate more elaborate statistics, such as cross-correlations at the same or at different times.
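For the two-APD case, a hedged sketch of what such a cross-correlation might look like: given two lists of transition times, count the pairs that fall within a short window of each other at a chosen delay. The 2 ns window and the delay sweep below are assumptions for illustration only; real analyses are considerably more careful.

```python
import numpy as np

def coincidences(times_a, times_b, window=2e-9, delay=0.0):
    """Count events in detector A that have a partner in detector B
    within `window`, after shifting B's times by `delay`.
    Sweeping `delay` gives a crude cross-correlation histogram."""
    shifted = np.sort(times_b + delay)
    idx = np.searchsorted(shifted, times_a)
    count = 0
    for t, i in zip(times_a, idx):
        # the nearest B event is either shifted[i - 1] or shifted[i]
        for j in (i - 1, i):
            if 0 <= j < len(shifted) and abs(shifted[j] - t) <= window:
                count += 1
                break
    return count

# Illustrative usage, assuming times_a and times_b are transition-time records
# from two APDs (hypothetical names, not output of any specific hardware):
# for d in np.linspace(-50e-9, 50e-9, 101):
#     print(f"{d:+.1e} s  {coincidences(times_a, times_b, delay=d)}")
```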
If we ask what could be causing these events, one answer is that we've set up a ridiculously exotic state of matter, so of course weird stuff will happen. More than that, however, we notice that as we continuously change the conditions of the experiment, the current transition statistics change more-or-less continuously, if we collect enough data. Even though the events are discrete, the statistics change continuously. Historically, elementary physics has said that each current transition is caused by a particle, but more sophisticated physics works with a quantum field, which can be understood to make no claims about what happens outside the APD, nor about details of the APD current, but does discuss the statistics one would observe for a given theoretical model of an APD, and how those statistics would change continuously as we move the lights or the barriers or the APDs around.
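As a toy illustration of "discrete events, continuously changing statistics", one can simulate Poisson-distributed transition counts whose rate depends smoothly on a setup parameter (a barrier or light position, say). The rate function below is entirely invented; the point is only that each individual record is a discrete count, while the estimated rate drifts continuously as the parameter is varied, provided enough data is collected at each setting.

```python
import numpy as np

rng = np.random.default_rng(1)

def transition_rate(x):
    # Hypothetical smooth dependence of the APD transition rate (per second)
    # on a setup parameter x; the 100/s dark rate and the Gaussian bump
    # are invented numbers, purely for illustration.
    return 100.0 + 5000.0 * np.exp(-x**2)

duration = 10.0                          # seconds of data per setting
for x in np.linspace(-3.0, 3.0, 13):
    counts = rng.poisson(transition_rate(x) * duration)   # one discrete record
    print(f"x = {x:+.2f}   counts = {counts:6d}   estimated rate = {counts / duration:7.1f} /s")
```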
For what it's worth, my YouTube video from last February, Quantum Mechanics: Event Thinking, deliberately short at 4'26", presents more-or-less this story.
I think it's best not to get too hung up on the Bishop Berkeley problem. Ultimately I can't see that it helps much to be solipsist about the world. Go to the world of extreme positivism for a visit if you like, which I've found occasionally useful as a way to get out of the box, but it's best to come back.

I've been peppering everything I've written on PF with links to my arXiv:1709.06711 (comment #30 has a more up-to-date version attached) because that's how I think about QM/QFT (for which sorry, I guess) and it's not yet well-known. For this specific question, I think its mathematical derivation of a random field as a subalgebra of a free quantum field algebra does more to reconcile a classical perspective with a quantum field perspective than any other math I've seen in the literature. There's a parallel with the de Broglie-Bohm approach, which derives trajectory probabilities from the wave function, but there are also fundamental differences: I keep to the mathematics of operators acting on Hilbert space as a model for signal analysis, manifest Poincaré invariance is maintained, and I keep to an operational interpretation of the math as far as possible.

One significant point, however, is that the philosophy of classical probability has become significantly less settled than it used to be. I'm happy with an instrumental, construct-an-ensemble-and-compute-statistics approach, which I think is what physicists do, but philosophers have worries about that approach that I find significant, and physicists who want to construct a model for the whole observable universe obviously can't construct an ensemble (also, if we take away the background Minkowski space, constructing an ensemble becomes quite fraught, AFAICT, amongst other worries, of course).