What is an Experiment? Exploring ZapperZ's Post

  • Thread starter: WiFO215
  • Tags: Experiment
In summary: when an experiment disagrees with expectation, you might repeat the experiment with a different quantity to be more certain, try to find another way to measure the quantity, or change your theory to account for the data.
  • #1
WiFO215
This is where I wish a lot of people who go through college got to do really good and relevant experimental work, not only to become familiar with basic science, but also to learn how one deals with data and to what level of confidence one can draw a conclusion. This is the one aspect of intro physics that I wish would be revamped.

I have quoted ZapperZ's post from the Skepticism sub-forum. I am not in college as yet and am quite curious to know why Zz said this. Could someone explain this post to me in further detail? What does one consider an experiment? In such an experiment, what amount of data would you need to agree with your theory to say your theory is true? How does one go about collecting data?
 
  • #2
Since this hasn't gotten any replies yet, I'll discuss the "data" and "level of confidence" a bit by describing an experiment I did recently with some high school teachers, which we used to discuss the same topics. In our case, the teachers were determining the density of an unknown oil by a buoyancy method: putting pennies (of known mass) into a plastic cup floating in a measuring cup of oil, and measuring the volume change in the oil as the cup began to sink. Note the crude supplies here, so the measurements were bound to carry a lot of uncertainty. Basically, there are a few different ways to take data in order to reach conclusions.

First, you could just take one data point. Some teachers put a fair number of pennies in the cup, then measured the volume change (once). Because the uncertainties in these numbers were so high, some teachers ended up with values that were really low (about 0.7 g/cm^3) and some ended up with values that were rather high (1.1 g/cm^3 or more). Taken as individual measurements, these couldn't be trusted much, because the markings on the cup were widely spaced (every 10 mL only), so the likelihood of error was high.

However, taking more measurements can often counter the effect of poor equipment (as long as the equipment is calibrated correctly and used correctly -- for instance, as long as you aren't looking at the English side of the cup instead of the metric side and mistakenly reading teaspoons as mL; it'd be OK to use that side if you did a unit conversion). For instance, when all the teachers' measurements were AVERAGED, the result was much more accurate, and the uncertainty ("precision") was also lower, because you could use the standard deviation as a measure of the error in the result.
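As a concrete illustration, here is a minimal Python sketch of that averaging step. The readings below are invented for illustration, not the teachers' actual numbers:

```python
import numpy as np

# Hypothetical density readings (g/cm^3) from several teachers.
# These values are invented for illustration, not real class data.
densities = np.array([0.7, 0.85, 0.95, 1.1, 0.9, 0.88, 1.0, 0.8])

mean = densities.mean()
std = densities.std(ddof=1)             # sample standard deviation
sem = std / np.sqrt(len(densities))     # standard error of the mean

print(f"density = {mean:.2f} +/- {sem:.2f} g/cm^3 (spread: {std:.2f})")
```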

An even better way to approach the problem is to take a NUMBER of measurements... say, counting how many pennies it takes to displace 10 mL of oil, then how many it takes to displace 20 mL, 30 mL, etc. This data can be plotted and fit to a straight line, and typically when you fit data you get a result for the slope and the error in that slope. That gives you a result that relates to the density of the oil. "Least squares fitting" used to be done by hand, but is now possible in common software such as Excel.
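For example, here is a hedged sketch of such a straight-line fit using NumPy. The buoyancy numbers are made up; the physics behind the fit is that a floating cup displaces oil whose weight equals the added weight, so added mass = density × displaced volume, and the slope of mass versus volume is the oil density:

```python
import numpy as np

# Invented buoyancy data: displaced oil volume (mL = cm^3) vs. total
# mass of pennies added (g).  For a floating cup, mass = density * volume,
# so the slope of this line is the oil density.
volume = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
mass = np.array([9.0, 17.5, 27.0, 35.5, 45.0])

coeffs, cov = np.polyfit(volume, mass, 1, cov=True)   # least-squares line fit
slope, intercept = coeffs
slope_err = np.sqrt(cov[0, 0])                        # error in the slope

print(f"density = {slope:.3f} +/- {slope_err:.3f} g/cm^3")
```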

Even better ways of fitting data involve weighting the individual data points according to their individual error as the fit for the whole data set is calculated... which can be done in better scientific software like Origin.
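A sketch of such a weighted fit, using SciPy's curve_fit with assumed per-point error bars (the error values are invented just to show the mechanics):

```python
import numpy as np
from scipy.optimize import curve_fit

def line(x, slope, intercept):
    return slope * x + intercept

# Same invented data as above, now with an assumed error bar on each
# mass value (larger for the later, harder-to-read points).
volume = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
mass = np.array([9.0, 17.5, 27.0, 35.5, 45.0])
mass_err = np.array([0.5, 0.7, 1.0, 1.4, 2.0])

# sigma weights each point by its own error; absolute_sigma=True makes
# the returned covariance reflect those stated errors directly.
popt, pcov = curve_fit(line, volume, mass, sigma=mass_err,
                       absolute_sigma=True)
print(f"weighted density = {popt[0]:.3f} +/- {np.sqrt(pcov[0, 0]):.3f} g/cm^3")
```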

In some of my research, I've programmed equipment to keep measuring certain things (such as counting photons per second reflected off a surface while a polarizing optic elsewhere in the system sat at a certain angle) until a certain low error is reached in the standard deviation... then to move on to the next programmed angle and do the same. I download the average and error of the photon count as a function of angle... then fit the data using the last method discussed above... so that I could get the lowest error in the parameter (variable) I was looking for at the time (which was a material property of the surface).
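A rough simulation of that "measure until the error is small enough" loop; the real photon counter is replaced by a made-up Poisson random source, and the rate and target are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_until(target_rel_err, rate=1000.0, min_readings=5):
    """Take simulated photon-count readings until the standard error
    of the mean falls below target_rel_err times the mean."""
    counts = []
    while True:
        counts.append(rng.poisson(rate))   # stand-in for one real reading
        if len(counts) >= min_readings:
            mean = np.mean(counts)
            sem = np.std(counts, ddof=1) / np.sqrt(len(counts))
            if sem < target_rel_err * mean:
                return mean, sem, len(counts)

mean, sem, n = measure_until(0.001)
print(f"count rate = {mean:.1f} +/- {sem:.1f} (after {n} readings)")
```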

I'm sure this gives you a lot to think about and look up... but note that you can certainly take data with simple equipment and explore data analysis and error analysis on your own. (There are some great books out there on error analysis; in our lab we kept a copy of https://www.amazon.com/dp/0072472278/?tag=pfamazon01-20)
 
  • #3
Thanks a lot. But I still have some questions. Say you are measuring a quantity X and your experiment gives X + 1 ± 0.3, where one unit is a large difference from what you expected. What would you conclude about your theory? Would you say, "It's ONLY 1 unit off; that doesn't matter in this experiment," or would you say, "1 UNIT! That's far too much error!"? I am guessing that if X >> 1 then the discrepancy is not much of a concern. Am I correct?

And of course, what about the other case, where the difference from your expected value is small? What would you conclude then? For instance, I have heard that when objects move (very) fast they tend to gain mass. Assume you did not know of the existence of relativity. If you were the experimenter, measuring mass at speeds where such deviations show up only marginally, would you call it an error in mass due to the equipment, or would you say there is something wrong with your theory that mass is constant with speed?

In short, how does experiment help justify your theory?
 
  • #4
Since no one has replied to this yet, I'll try my best.
I'll answer your second question first, since that is where physics kicks in: when experimental results don't agree with current theory. The relationship between experimentalists and theorists goes like this: experimentalists test theorists' predictions, and theorists try to explain experimentalists' results. So, assuming the experiment was done pretty decently, there is evidently something wrong with the "current" theory. That is when it gets really exciting, because physicists get to start from the blackboard again. A great example is the early 20th century, right before the emergence of QM. Several experiments, black-body radiation among them, strongly suggested that something was wrong with our classical view of the world. The physicists of that era, especially the theorists, set out to figure out why, and that eventually led to the birth of QM.
As for the first question, well, it really depends on how far off your result is. Instead of the raw difference of 1 unit, we normally look at the percentage difference, that is, (X + 1 - X)/X = 1/X.
So if it is, say, 5-10%, that isn't too bad; you probably have some error here and there that adds up to 5-10%. If it is, say, 10-20% off, then you'd better figure out where those errors come from, though some poorly constructed experiments might have that amount of error built in. If it is, say, 100%, then I'd go back and check the calculation, because anything more than 40-50% is probably a calculation error. Otherwise, you might be getting a Nobel Prize for it :)
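In code, that comparison is a one-liner; the numbers below pick X = 10 purely for illustration:

```python
def percent_difference(measured, expected):
    """Relative discrepancy between measurement and expectation, in %."""
    return abs(measured - expected) / abs(expected) * 100.0

# Picking X = 10 purely for illustration:
print(f"{percent_difference(11.0, 10.0):.1f}%")   # 10.0% -> worth investigating
print(f"{percent_difference(10.2, 10.0):.1f}%")   # 2.0%  -> plausibly within error
```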
 
  • #5
Couldn't resist answering your first post, too.
Physics girl has actually done a good job of explaining and illustrating it. Here I just want to show numerically how the number of data points relates to the error, and also the technical meaning of a confidence level.
There is a type of distribution called the Poisson distribution. Without going into details, the statistical error on a count of N events is √N, so the relative error is √N/N = 1/√N.
So for example, if I flip a coin and get heads 100 times, the relative error is 100^(-1/2) = 0.1 = 10%.
Now, if I get heads 10,000 times, the relative error is 10,000^(-1/2) = 0.01 = 1%.
As you can see, the more data I take, the smaller the relative error.
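A quick fair-coin simulation of this 1/√N behavior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fair-coin simulation: the relative counting error on N heads
# should shrink like 1/sqrt(N).
for flips in (200, 20_000, 2_000_000):
    heads = rng.binomial(flips, 0.5)
    print(f"{flips:>9} flips: {heads} heads, "
          f"1/sqrt(heads) = {1 / np.sqrt(heads):.4f}")
```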

And "confidence level" can be a very technical phrase. It is especially important in low-probability experiments. Normally it looks something like this:
"The confidence level for X = (some value) is 95%."
One of the best examples is testing the proton lifetime. The predicted proton lifetime is extremely long (I think it is 10^33 years).
So if you set up an experiment to look for proton decay, and after a year you see nothing, does this mean the theory is wrong? No, not really. Because of the Poisson distribution, it is quite possible to observe 0 events even when the predicted lifetime is correct.
So by fitting the Poisson distribution and doing some calculation, you can find the largest expected event count n for which the probability of observing 0 events is still, say, 5%. Then you can say that your limit on n holds at a 95% confidence level.
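The zero-event case actually has a simple closed form: if μ events are expected, the Poisson probability of seeing none is e^(-μ), so the 95% upper limit on μ is -ln(0.05) ≈ 3. A minimal check:

```python
import math

# Poisson probability of observing zero events when mu are expected:
#   P(0; mu) = exp(-mu)
# The 95% CL upper limit on mu, given zero observed events, is the mu
# at which that probability has dropped to 5%.
confidence = 0.95
mu_upper = -math.log(1.0 - confidence)
print(f"95% CL upper limit on expected events: {mu_upper:.2f}")   # ~3.00
```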
By the way, intro physics experiments are a different species from advanced labs. And since advanced labs are really hard, I can understand that many people try to avoid them and end up without enough training in statistical data analysis.
 
  • #6
anirudh215 said:
I have quoted ZapperZ's post from the Skepticism sub-forum. I am not in college as yet and am quite curious to know why Zz said this. Could someone explain this post to me in further detail? What does one consider an experiment? In such an experiment, what amount of data would you need to agree with your theory to say your theory is true? How does one go about collecting data?

I haven't asked ZapperZ why he wrote that, but I agree with it, and here's why:

The current 'canonical' undergraduate physics curriculum does not do a good job of explaining the experimental work upon which all the mathematical artifice of descriptive theory is erected. Being a good scientist (theoretical, experimental, pure, applied, etc.) includes understanding how experimental data are interpreted in the context of a theory, how a scientific apparatus is constructed and validated, how to construct a statistically significant dataset, and how to quantify the resulting body of numerical results. Although the curriculum provides adequate training in the mathematical and abstract concepts of physics, it provides (relatively) poor training in the experimental aspects.

For example, your phrase "what amount of data would you need to agree with your theory to say your theory is true?" is based on a false premise: a single unexplained result is sufficient to require theory be altered. Theories are experimentally falsified, not proven. Experiments do not exist to chase theories; they exist to provide new data which then pushes theory to improve. Experiments that merely validate existing theories are good for training future experimentalists, but little else (IMO).
 
  • #7
Andy Resnick said:
For example, your phrase "what amount of data would you need to agree with your theory to say your theory is true?" is based on a false premise: a single unexplained result is sufficient to require theory be altered. Theories are experimentally falsified, not proven. Experiments do not exist to chase theories; they exist to provide new data which then pushes theory to improve. Experiments that merely validate existing theories are good for training future experimentalists, but little else (IMO).

Well, personally I think that whether experiments falsify or prove theories is really a matter of point of view. For instance, there have been unexplained phenomena whose experiments falsified the existing theories; new theories then emerged, and further experiments were conducted to confirm them.

Also, in addition to those groundbreaking experiments, there are experiments that in some sense merely validate old theories, yet are still extremely important to the physics community.
One type is remeasuring quantities such as the fine-structure constant, but with higher sensitivity. That is very important, because then you see whether the existing theories still agree with experiment. If not, the exciting part begins!
Another type is measuring constants. For instance, the speed of light: if one could construct an experiment that measured the speed of light to two more significant digits, it would be very helpful to the whole community.
The third type I can think of is calibrating/testing equipment.
 

FAQ: What is an Experiment? Exploring ZapperZ's Post

What is an experiment?

An experiment is a procedure or test used to investigate a hypothesis or validate a scientific theory. It involves manipulating one or more variables and measuring the outcomes to determine a cause-and-effect relationship.

Why is it important to conduct experiments?

Experiments are important because they allow scientists to test their hypotheses and theories in a controlled setting. This helps to establish the validity of their ideas and contributes to the advancement of scientific knowledge.

What are the key components of an experiment?

The key components of an experiment include a hypothesis, independent and dependent variables, control group, experimental group, and a standardized procedure. These elements help to ensure that the results of the experiment are reliable and valid.

How do you design a successful experiment?

To design a successful experiment, it is important to clearly define the research question or hypothesis, identify the variables to be manipulated and measured, select appropriate controls, and follow a standardized procedure. It is also crucial to carefully analyze and interpret the results.

What are some common types of experiments?

Some common types of experiments include controlled experiments, where all variables are controlled except for the one being tested, natural experiments, where the researcher observes naturally occurring phenomena, and field experiments, which are conducted in real-life settings.
