Direct Echo-Based Measurement of the Speed of Sound

Figure 1: Distance vs. time for a firecracker report echoed off of a building at different distances, on both a cold and a warm day.

The speed of sound varies slightly with temperature, but at a constant temperature, the distance sound travels increases linearly with time according to the equation D = Vt, where D is the distance traveled (in meters), V is the speed of sound (in m/s), and t is the time in seconds. In this experiment, the time for the report of a firecracker to travel specific round-trip distances will be measured as a test of the above formula. Hypothesis: The distance sound travels is linear in time, according to D = Vt, and the velocity is well approximated by accounting for temperature.
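One common approximation for the speed of sound in dry air at Celsius temperature T (the quantity computed by online calculators such as the NWS page linked below) is

$$V \approx 331.3\,\sqrt{1 + \frac{T}{273.15}}\ \text{m/s} \approx (331.3 + 0.606\,T)\ \text{m/s},$$

giving, for example, roughly 337 m/s at 10 °C and 343 m/s at 20 °C.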

Data was acquired by setting a firecracker at a series of carefully measured distances from a large flat wall and igniting the firecracker a short distance from a microphone while digitally recording the microphone signal. Here, a Vernier LabPro was used with a microphone sampling at 20,000 samples per second, but other sound digitizers can also be used. In this case, it was convenient to use a nearby church building for the large flat wall, as shown in Figure 2.

Figure 2: Large, flat wall used to provide echoes. Note that it is best to use the side of the building so that the intended wall is the only surface reflecting sound waves to the microphone.

The initial report and the echo off the wall are easily visible in the data when graphed, as shown in Figure 3. The time between these two signals is the round-trip travel time of the sound to the wall and back (keep in mind that the travel distance is TWICE the distance of the firecracker from the wall).
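As a concrete worked example, here is a minimal Python sketch of that calculation. The two report times are read off a graph like Figure 3; the 20 m wall distance is a hypothetical value for illustration.

    # Speed of sound from a single echo trial.
    d = 20.0           # distance from firecracker to wall, in meters (hypothetical)
    t_direct = 2.945   # arrival time of the direct report, in seconds
    t_echo = 3.060     # arrival time of the echo, in seconds

    dt = t_echo - t_direct     # round-trip travel time of the sound
    v = 2 * d / dt             # the sound covers 2*d in that time
    print(f"v = {v:.0f} m/s")  # about 348 m/s for these numbers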

Figure 3: Sound signal showing the direct firecracker report close to 2.945 s and the reflected report near 3.06 s.

Each file was graphed to determine the direct report time and the echo time; the difference is the round-trip travel time. The distance data were then plotted vs. the time data in graph.exe (https://www.padowan.dk/download/). When fitting to a trendline in graph.exe, we were sure to check the box to set the vertical intercept to zero, since the hypothesis predicts not only a linear relationship but also a vertical intercept of zero (a direct proportionality).
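For those not using graph.exe, the same zero-intercept fit is easy to reproduce; here is a minimal sketch in Python with NumPy, using hypothetical numbers in place of the measured data:

    import numpy as np

    # Hypothetical data: round-trip distances (twice the wall distance), in meters,
    # and the corresponding round-trip travel times, in seconds.
    D = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
    t = np.array([0.058, 0.117, 0.175, 0.233, 0.292])

    # A least-squares line forced through the origin has the closed-form slope
    # sum(t*D) / sum(t^2); for D = Vt the slope is the speed of sound.
    V = np.sum(t * D) / np.sum(t ** 2)
    print(f"V = {V:.1f} m/s")  # about 343 m/s for these numbers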

Inspection of Figure 1 shows that the hypothesis was supported. The measured speeds agree with the speeds of sound predicted at the two temperatures to within about 1% (see, for example, https://www.weather.gov/epz/wxcalc_speedofsound ). It may be noted that a couple of other approaches to the analysis are possible. A speed can be computed for each trial as the distance divided by the time for that trial. The five resulting speeds can then be averaged, and a standard deviation and standard error of the mean (SEM) computed with standard spreadsheet calls. This approach provides a speed of sound measurement very close to the least-squares fit and also provides an uncertainty estimate (the SEM). Another approach is to use the LINEST spreadsheet call to do the least-squares fit. An advantage of this method is that LINEST can also return an estimated uncertainty for the slope in the fit, which is an estimate of the uncertainty in the velocity, since the slope IS the velocity.
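Both alternative analyses can also be scripted directly; a sketch, reusing the same hypothetical numbers as above:

    import numpy as np

    D = np.array([20.0, 40.0, 60.0, 80.0, 100.0])      # round-trip distances (m)
    t = np.array([0.058, 0.117, 0.175, 0.233, 0.292])  # round-trip times (s)

    # Approach 1: per-trial speeds, their mean, and the standard error of the mean.
    speeds = D / t
    sem = speeds.std(ddof=1) / np.sqrt(speeds.size)
    print(f"mean of trial speeds: {speeds.mean():.1f} +/- {sem:.1f} m/s (SEM)")

    # Approach 2: LINEST-style through-origin fit with a slope uncertainty,
    # using the standard zero-intercept formulas (n - 1 degrees of freedom).
    V = np.sum(t * D) / np.sum(t ** 2)
    resid = D - V * t
    dV = np.sqrt(np.sum(resid ** 2) / ((t.size - 1) * np.sum(t ** 2)))
    print(f"fit slope: {V:.1f} +/- {dV:.1f} m/s")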

This lab was completed in both an early high school Physical Science class and a late high school Physics class last year at a school where I served as a volunteer lab coordinator for several labs.  The students, the teachers, and the administrators at the school all got a bang out of it.  Since it was a favorite, I returned to reprise the lab at the end of the year on a warm day which then provided an opportunity to confirm the expected dependence of the speed of sound on temperature.  Fun times!

42 replies
  1. Swamp Thing says:

    Fun tip: You can use the sound recorder app in a smartphone to record the bang and the echo. There are some good apps that you can then use to display the waveform and measure the delay within the phone, or else you can send the files to a desktop.

    I used this approach to measure the muzzle velocity of things that I fired from my homemade blowgun. I recorded the "pop" from the exiting projectile, followed by the sound of said projectile smacking through a sheet of paper pinned to a backdrop.
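    A quick sketch of that calculation with made-up numbers (the correction term accounts for the impact sound traveling back to the phone):

    # Hypothetical numbers: muzzle velocity from two events in one recording.
    L = 5.0        # muzzle-to-target distance, in meters
    t_pop = 0.412  # time of the muzzle "pop" in the recording, in seconds
    t_hit = 0.478  # time of the projectile hitting the paper, in seconds

    v_sound = 343.0                     # approximate speed of sound, in m/s
    dt = (t_hit - t_pop) - L / v_sound  # subtract the impact sound's return trip
    print(f"~{L / dt:.0f} m/s")         # about 97 m/s for these made-up numbers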

    BTW, making a blowgun is a fun way to learn a lot of physics and, of course, to teach "safety first."

  2. Dale says:
    Dr. Courtney

    You seem to be maintaining a disagreement based on your own authority without a willingness to cite peer-reviewed support for your position that the favored (or valid) approach is to include a constant term.

    The best reference I have is:
    "it is generally a safe practice not to use regression-through-the origin model and instead use the intercept regression model. If the regression line does go through the origin, b0 with the intercept model will differ from 0 only by a small sampling error, and unless the sample size is very small use of the intercept regression model has no disadvantages of any consequence. If the regression line does not go through the origin, use of the intercept regression model will avoid potentially serious difficulties resulting from forcing the regression line through the origin when this is not appropriate." (Kutner, et al. Applied Linear Statistical Models. 2005. McGraw-Hill Irwin). This I think summarizes my view on the topic completely.

    Other cautionary notes include:

    "Even if the response variable is theoretically zero when the predictor variable is, this does not necessarily mean that the no-intercept model is appropriate" (Gunst. Regression Analysis and its Application: A Data-Oriented Approach. 2018. Routledge)
    "It is relatively easy to misuse the no intercept model" (Montgomery, et al. "Introduction to Linear Regression". 2015. Wiley)
    "Unlike the model with an intercept, in the no-intercept model the sum of the residuals is not necessarily zero" (Rawlings. Applied Regression Analysis: A Research Tool. 2001. Springer).
    "Caution in the use of the [no intercept] model is advised" (Hahn. Fitting Regression Models with No Intercept Term. 1977. J. Qual. Tech.)

    All directly echoing comments I made and issues I raised earlier.

  3. Dale says:
    Dr. Courtney

    And I've seen scientists publish papers with vertical shifts that make no sense. The probability of an effect when the cause is reduced to zero should be exactly zero.

    I disagree emphatically on this. Including a vertical intercept in your regression is always valid (categorically and without reservation). You may have a theory that says that in your experiment the effect is zero when the cause is zero, but if you artificially coerce that value to be zero then you are ignoring the data.

    If the data has a non-zero intercept then either your theory is wrong or your experiment is wrong. Coercing it to zero makes you ignore this red flag from the data.

    If your experiment is right and your theory is right then the confidence interval intercept will naturally and automatically include zero. Coercing it to zero prevents you from being able to use the data to confirm that aspect of your theory.

  4. Dr. Courtney says:
    Dale

    I apologize for my wrong assumption. Based on your questions it seemed like you did not understand the statistical issues involved as you did not mention any of the relevant statistical issues but only the pedagogical/scientific issues. For me, if I had decided (due to pedagogical or scientific considerations) to use the no-intercept method then I would have gone through a few of the relevant statistical issues, identified them as being immaterial for the data sets in consideration, and only then proceeded with the pedagogical/scientific justification. I mistakenly thought that the absence of any mention of the statistical issues indicated an unfamiliarity with them.

    That is not the only issue, nor even the most important. By far the most important one is the possibility of bias in the slope. It does not appear to be a substantial issue for your data, so that would be the justification I would use were I trying to justify this approach.

    Or in the Bayesian framework you can directly compare the probability of different models.

    This would be a good statistical justification. It is not a general justification, because the general rule remains that use of the intercept is preferred. It is a justification specific to this particular experiment that the violation of the usual process does not produce the primary effect of concern: a substantial bias in the other parameter estimates.

    Then you should know that your Ockham's razor argument is not strong in this case. It is at best neutral.

    In the Bayesian approach this can be decided formally, and in the frequentist framework this is a no-no which leads to p-value hacking and failure to replicate results.

    All considerations from the viewpoint of doing science intended for the mainstream literature. But from the viewpoint of the high school or intro college science classroom, largely irrelevant. The papers I cited make a strong case for leaving out the constant term when physical considerations indicate a reasonable physical model will go through the origin, and I think this is sufficient peer-reviewed statistics work to justify widespread use in the classroom in applicable cases. I also pointed out the classroom case of Mass vs. Volume where leaving out the constant term consistently provides more accurate estimates of the material density than including it. I've been at this a while and never seen a problem when the conditions pointed out in the statistics papers I cited are met. You seem to be maintaining a disagreement based on your own authority without a willingness to cite peer-reviewed support for your position that the favored (or valid) approach is to include a constant term.

    I don't regard the Bayesian approach as appropriate for the abilities of high school students I've tended to encounter. In contrast, computing residuals (and their variance) can be useful and instructive and is well within their capabilities once they've grown in their skills through 10 or so quantitative laboratories.

    But zooming out, the statistical details of the analysis approach are all less relevant if one has taught the students the effort, means, and care to acquire accurate data in the first place for the input and output variables. It may seem to some that I am cutting corners in teaching analysis due to time and pedagogical constraints. But start with 5-10 data points with all the x and y values measured to 1% and you will get better results with simplified analysis than you will with the same number of data points with 5% errors and the most rigorous statistical approach available. Analysis is often the turd polishing stage of introductory labs. I don't teach turd polishing.

  5. Dale says:
    Dr. Courtney

    You seem to have wrongly assumed that I do not

    I apologize for my wrong assumption. Based on your questions it seemed like you did not understand the statistical issues involved as you did not mention any of the relevant statistical issues but only the pedagogical/scientific issues. For me, if I had decided (due to pedagogical or scientific considerations) to use the no-intercept method then I would have gone through a few of the relevant statistical issues, identified them as being immaterial for the data sets in consideration, and only then proceeded with the pedagogical/scientific justification. I mistakenly thought that the absence of any mention of the statistical issues indicated an unfamiliarity with them.

    Dr. Courtney

    Yes, I understand that the R-squared values and other goodness of fit statistics are not comparable with other models.

    That is not the only issue, nor even the most important. By far the most important one is the possibility of bias in the slope. It does not appear to be a substantial issue for your data, so that would be the justification I would use were I trying to justify this approach.

    Dr. Courtney

    A better way to compare with other models is to compute the variance of the residuals.

    Or in the Bayesian framework you can directly compare the probability of different models.

    Dr. Courtney

    the resulting speed of sound has always been within 1% of the expectation based on the ambient temperature

    This would be a good statistical justification. It is not a general justification, because the general rule remains that use of the intercept is preferred. It is a justification specific to this particular experiment that the violation of the usual process does not produce the primary effect of concern: a substantial bias in the other parameter estimates.

    Dr. Courtney

    Occam's Razor here is more of a pedagogical motive for keeping the model simple. I know all along that the error model is more complicated

    Then you should know that your Ockham's razor argument is not strong in this case. It is at best neutral.

    Dr. Courtney

    But having done both, one then faces the challenge of deciding which fit is better.

    In the Bayesian approach this can be decided formally, and in the frequentist framework this is a no-no which leads to p-value hacking and failure to replicate results.

  6. Dr. Courtney says:
    Dale

    That is fine, but before doing so you should make sure that you have the necessary statistical background knowledge to wisely make that call. You should also realize that it is not clearly the right call and that valid informed objections and differences of opinion are to be expected on this point.

    I do. You seem to have wrongly assumed that I do not, and that if I had there would only be one right call to make since you previously wrote:

    Currently your opinion is not informed by the statistical literature. As a conscientious teacher surely you agree that it is important to make sure that your opinions are well informed.

    Once you have established an informed opinion then I am sure that you can use that opinion to guide your lesson development in a way that will not detract from the learning objectives.

    I have thoroughly reviewed the relevant statistics literature. I have authored a widely distributed least-squares fitting software package. I have taught several college level statistics courses. I am aware of the issues. A few quotes from the literature:

    In certain circumstances, it is clear, a priori, that the model describing the relationship between the independent variable and the dependent variable(s) should not contain a constant term and, in consequence, the least squares fit needs to be constrained to pass through the origin.
    (HA Gordon, The Statistician, Vol 30 No 1, 1981)

    There are many practical problems where it is reasonable to assume the relationship of a straight line passing through the origin … (ME Turner, Biometrics, Vol 16 No 3, 1960)

    This article describes situations in which regression through the origin is appropriate, derives the normal equation for such a regression and explains the controversy regarding its evaluative statistics. (JG Eisenhauer, Teaching statistics, Vol 25 No 3 2003)

    Dale

    Personally, to me this issue is about understanding the limitations of your tools. A tool can often be used for a task in a way that it is not intended to be used. Sometimes it is ok, but sometimes it is not. If you are going to use a tool in a way it is not intended then you need to understand the likely failure modes and be vigilant.

    Yes, I understand that the R-squared values and other goodness of fit statistics are not comparable with other models. A better way to compare with other models is to compute the variance of the residuals. There are columns in my analysis spreadsheet for my pilot experiments doing just that.

    Dale

    I have seen other scientists publish papers misusing linear regression this specific way and claiming an effect where none existed due to the biasing. The tool was breaking under misuse. They also had no clear physical interpretation for the intercept and chose, as you did, to remove it on those same grounds. It is not a thing to be done lightly and they suffered for it.

    And I've seen scientists publish papers with vertical shifts that make no sense. The probability of an effect when the cause is reduced to zero should be exactly zero. (The risk of death from a poison should be zero for zero mass of poison. The probability of a bullet penetrating armor should be exactly zero for a bullet with zero velocity. The weight of a fish with zero length should be exactly zero.) Further, you are creating a strawman to claim my scientific justification for removing the constant term was the lack of a physical meaning. I justify removing the constant term based on strong physical arguments that for zero input, the output can only be zero. The lack of physical meaning was a pedagogical motive, not a scientific justification.

    Dale

    At a minimum the intercept can be used to indicate a failure of your experimental setup. If you have no theoretical possibility for an intercept and yet your data shows an intercept then that is an indication that your experiment is not ideal. In your case, your distance measurements and time measurements are not perfect. Perhaps there is a systematic error and not just random errors. A systematic error could lead to a non-zero intercept, which you are artificially suppressing.

    As explained above, my practice is to try a number of analysis techniques on my pilot data, and then slim down the analysis for students to the one that makes the most sense for the overall context. I have done the echo-based speed of sound experiment lots of times now. There has never been a problem not adding the extra constant term, and the resulting speed of sound has always been within 1% of the expectation based on the ambient temperature. When the extra parameter is used (by me, not students, but I do re-analyze their data to check for such things) it is invariably close to zero (relative to its error estimate), so one can say it is not significantly different from zero. Some teachers may see the pedagogical benefit of walking students through these steps, but software that provides the error estimates in the slope and vertical intercept tends to be harder for students to use and confusing, so I avoid it for most student uses.

    Dale

    I don't think that Ockham's razor justifies your approach here. The problem is that by simplifying your effect model you have unknowingly made your error model more complicated. Your errors are no longer modeled as zero mean, and the mean of your residuals is directly related to what would have been your intercept. All you have done is to move the same complexity to a hidden spot where it is easy to ignore. It is still there. You still have the same two parameters, but you have moved one parameter to the residuals and suppressed its output.

    Occam's Razor here is more of a pedagogical motive for keeping the model simple. I know all along that the error model is more complicated, but the students are not usually cognizant of the error model. Much like ignoring air resistance in projectile motion problems, the motive is to keep the model the students see simpler. For published research, I do not doubt the value of the approach of trying linear models with a constant term to see if it is statistically different from zero, and if the slope is changed significantly. But having done both, one then faces the challenge of deciding which fit is better. This is way beyond the scope of a high school science class, but it is discussed here (Casella, G. (1983). Leverage and regression through the origin. The American Statistician, 37(2), 147-152.) Designing labs is about providing students new skills in manageable doses.

    Most papers I've read on through-the-origin regression are not primarily concerned with whether models that go through the origin SHOULD be used in the first place, but rather with how the descriptive statistics are used to assess the goodness of fit. Many possible criticisms do not just apply to linear least squares, but to most non-linear least squares models that are forced through the origin. There is now wide agreement that these models are appropriate in many areas of science, including weight-length in fish, a multitude of other power law models, probability curves, and a variety of economic models.

  7. Dale says:
    Dr. Courtney

    My pedagogical disagreement with this is it trains students to accept terms in physics formulas in cases where those terms do not have clear physical meanings.

    That is fine, but before doing so you should make sure that you have the necessary statistical background knowledge to wisely make that call. You should also realize that it is not clearly the right call and that valid informed objections and differences of opinion are to be expected on this point.

    Personally, to me this issue is about understanding the limitations of your tools. A tool can often be used for a task in a way that it is not intended to be used. Sometimes it is ok, but sometimes it is not. If you are going to use a tool in a way it is not intended then you need to understand the likely failure modes and be vigilant.

    I have seen other scientists publish papers misusing linear regression this specific way and claiming an effect where none existed due to the biasing. The tool was breaking under misuse. They also had no clear physical interpretation for the intercept and chose, as you did, to remove it on those same grounds. It is not a thing to be done lightly and they suffered for it. At a minimum the intercept can be used to indicate a failure of your experimental setup. If you have no theoretical possibility for an intercept and yet your data shows an intercept then that is an indication that your experiment is not ideal. In your case, your distance measurements and time measurements are not perfect. Perhaps there is a systematic error and not just random errors. A systematic error could lead to a non-zero intercept, which you are artificially suppressing.

    Dr. Courtney

    Back to Einstein and Occam – my clear preference is to train students in science classes to want (even demand) explanations for every term in physics equations.

    I don't think that Ockham's razor justifies your approach here. The problem is that by simplifying your effect model you have unknowingly made your error model more complicated. Your errors are no longer modeled as zero mean, and the mean of your residuals is directly related to what would have been your intercept. All you have done is to move the same complexity to a hidden spot where it is easy to ignore. It is still there. You still have the same two parameters, but you have moved one parameter to the residuals and suppressed its output.

  8. Dr. Courtney says:
    fizzy

    A graduated cylinder which is not cylindrical to within the indicated precision seems a little unlikely. It seems far more likely that your spurious attempt to force the fit through zero was leading to an incorrect regression slope which produced increasing residuals at higher volumes. It is hard to say without seeing the data but it sounds like it did have a finite intercept, but you were in denial about such things, regarding them as "silly".

    I expect folks who think it is unlikely for high school lab equipment to not be within its indicated precision have not spent sufficient time with high school lab equipment. I teach students how to check and double check equipment accuracy. What better simple check on the accuracy of a graduated cylinder (accuracy spec 0.2 cc) than an electronic balance (verified accuracy spec 0.01 g)?

    Once one accepts the constant density of water, one can use the balance itself as the best available check on the accuracy of the graduated cylinder. About half the measurements with the graduated cylinder were outside its spec. This is not a train wreck for how graduated cylinders are usually used in science labs, but I do encourage students to take note of the limitation.

    The resulting density of water without a vertical intercept was 0.9967 g/cc with an R-squared of 0.9999. Adding a vertical intercept moves the R-squared closer to one, but the resulting density is 1.0045 g/cc, with a vertical intercept suggesting that 0 cc of water has a mass of -0.5467 g. Silly. The known good value for the density of water at 20 deg C is 0.998 g/cc.

    fizzy

    Did you teach your students how to correctly read the meniscus of the fluid in the measuring cylinder?

    Yes, of course.

    fizzy

    That could lead to a finite intercept, if you would allow that possibility to be seen. There clearly was some experimental error which needs to be identified. Had you not expressly removed the constant term, it would have given you some information about the problem. You have neatly demonstrated one reason not to bias the regression by excluding parameters.

    I only removed the constant term for the student method after my careful pilot experiment. My careful pilot included analysis with several possible models: linear with and without a constant term, and quadratic with and without a constant term. I also carefully considered the residuals of the different models for three different liquids with known densities: water, isopropanol, and acetone. The high correlations of the residuals for the different liquids suggest the most likely source of error was the graduated cylinder itself.

    fizzy

    If you suspected the cylinder was not straight, did you at least measure it to attempt to falsify this hypothesis? Apparently not. Did you substitute another cylinder to test the hypothesis?

    And with what instrument commonly found in high school labs would you suggest accurately measuring the inner diameter at the bottom of a graduated cylinder? The other available cylinders were from the same manufacturer and demonstrated the same trend. (Adding apparently equal volumes near the top added more mass on the balance.) But the most convincing evidence was seeing the same trend in two additional liquids (isopropanol and acetone). I expect as a manufacturing convenience these plastic graduated cylinders are formed on molds that make them slightly narrower at the bottom than at the top so that they are easier to remove from the molds. Plastic is much more cost effective and resistant to breakage than glassware, and adequate for many laboratory purposes if the limitations are understood. If need be, a cylinder could be recalibrated with water, but it is easier just to double check on a balance with liquids of known density.

  9. Dr. Courtney says:
    Dale

    Sure, I was recommending reading the literature for you as a teacher, not for your students. You seemed reluctant to accept the validity of my explanation about why retaining the intercept is important, so you should inform yourself of the issue from sources you consider valid. Currently your opinion is not informed by the statistical literature. As a conscientious teacher surely you agree that it is important to make sure that your opinions are well informed.

    Once you have established an informed opinion then I am sure that you can use that opinion to guide your lesson development in a way that will not detract from the learning objectives. Personally, I would simply use the default option to include the intercept without making much discussion about it. I would leave the teaching about the statistics to a different class, but I would quietly use valid methods.

    My pedagogical disagreement with this is it trains students to accept terms in physics formulas in cases where those terms do not have clear physical meanings. Back to Einstein and Occam – my clear preference is to train students in science classes to want (even demand) explanations for every term in physics equations. In a distance vs. time relationship with constant velocity, the physical meaning of the constant term is the position (or distance traveled) at time t = 0. This is problematic from the viewpoint of learning the science, and since students are unlikely to grasp the underlying mathematical justification, in the absence of a clear physical meaning it will seem like a fudge factor whose need is asserted by authority. For pedagogical purposes, I expect to continue to teach my students that the meaning of the vertical intercept is the anticipated output for zero input. I value the science more than the math.

    Demanding a physical meaning for the vertical intercept has borne much fruit for my students. Several years back a group of 1st year cadets at the Air Force Academy used this approach to identify the vertical intercept of the bullet energy vs. powder charge line as the work done by friction while the bullet traverses the rifle barrel. This method remains the simplest and one of the most accurate methods for measuring bullet friction at ballistic velocities. See: https://apps.dtic.mil/dtic/tr/fulltext/u2/a568594.pdf

    When studying Hooke's law for some springs, a non-zero vertical intercept is needed to account for the fact that the coils prevent some springs from stretching until some minimum force is applied. The physical meaning is clear: the vertical intercept when plotting Force vs. Displacement is the applied force necessary for the spring to begin stretching.

    In contrast, the mass vs. volume lab doesn't lend itself to a physical meaning when plotting an experimental mass vs. volume. The mass of a quantity of substance occupying zero volume cannot be positive, and it cannot be negative. It can only be zero. Allowing it to vary presents a problem of giving a physical meaning to the resulting value, because "the expected mass for a volume of zero" does not make any sense. It may be mathematically rigorous, but in a high school science class, it's just silly. I'd rather not send my students the message that it's OK for terms in equations not to have physical meanings if someone mumbles some mathematical mumbo jumbo about how the software works. (Students go into Charlie Brown mode quickly.)

    I use Tracker often in the lab for kinematics types of experiments and we do a lot with the kinematic equations. When fitting position vs. time, it is essential that each term in the fit for x(t) have the same physical meaning as in the kinematic equations. The constant term is the initial position, the linear coefficient is the initial velocity, and the quadratic coefficient is half the acceleration. If the initial position is defined to be zero (as is often the case), then a constant term in the model does not make sense. (Tracker allows t = 0 to be set at any frame and the origin can usually be placed at any convenient point; often the position of the object at t = 0 is a convenient point.)
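    A quick synthetic check of those coefficient meanings (hypothetical data, not Tracker itself):

    import numpy as np

    # x(t) = x0 + v0*t + (1/2)*a*t^2 with x0 = 0 m, v0 = 2 m/s, a = -9.8 m/s^2.
    t = np.linspace(0, 1, 11)
    x = 0.0 + 2.0 * t + 0.5 * (-9.8) * t ** 2

    c2, c1, c0 = np.polyfit(t, x, 2)  # quadratic, linear, constant coefficients
    print(f"x0 = {c0:.2f} m, v0 = {c1:.2f} m/s, a = {2 * c2:.2f} m/s^2")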

  10. Dale says:
    Dr. Courtney

    But my experience is that from a pedagogical standpoint, it is best to keep the explanations in a zone where the intended audience (high school students) can understand them quickly.

    Sure, I was recommending reading the literature for you as a teacher, not for your students. You seemed reluctant to accept the validity of my explanation about why retaining the intercept is important, so you should inform yourself of the issue from sources you consider valid. Currently your opinion is not informed by the statistical literature. As a conscientious teacher surely you agree that it is important to make sure that your opinions are well informed.

    Once you have established an informed opinion then I am sure that you can use that opinion to guide your lesson development in a way that will not detract from the learning objectives. Personally, I would simply use the default option to include the intercept without making much discussion about it. I would leave the teaching about the statistics to a different class, but I would quietly use valid methods.

  11. Dr. Courtney says:
    Dale

    The mathematical assumption is that the independent variable has 0 error. So from a statistics perspective that is what defines independent vs dependent. It is not a physical relationship.

    I tend to prefer the science understanding of independent and dependent variables in intro science courses (rather than the mathematical definition). The independent variable is usually the thing that is controlled as the hypothetical cause, and the dependent variable is the outcome that is measured as the hypothetical effect. Of course, I cringed a bit plotting distance vs. time for the echo experiment, because the distance is carefully controlled and the time is measured. But my experience is that if the slope falls out directly from the analysis, a lot more students will get it.

    Plotting time vs. distance preserves the independent variable on the horizontal axis, but then the fit yields a slope that is the reciprocal of the speed of sound. Too many students get lost in the extra step to compute the speed of sound.

  12. Dr. Courtney says:

    As often occurs, there is a tension between the better mathematical descriptions and improved statistical approaches with the pedagogical simplifications needed given the time constraints and mathematical limitations of real high school classes.

    Sure, consulting the literature or an appropriate expert can always suggest a model or statistical approach that is in some sense "better" than a given set of simplifications chosen to deal with the time constraints and math limits of real high school students. But my experience is that from a pedagogical standpoint, it is best to keep the explanations in a zone where the intended audience (high school students) can understand them quickly. High school students can use least squares fitting to understand the most important scientific learning objectives of a laboratory without worrying too much about the advanced statistics behind it. My experience has also been that it works better than expected given that the assumptions are never really satisfied (measurements never truly have zero error, even for the independent variable).

    For me, it is enough if students learn to make measurements accurate to 1% and analyze them with sufficient rigor to say whether a hypothesis is supported in the sense of high school science rather than rigorous statistical hypothesis testing. My approach to lab science minimizes believing things based on appeal to authority in favor of believing things based on experimental data. It's hard to insist on a constant term based on the statistical arguments, and most high school science courses don't have time or room in the curricula for the more involved treatment of error analysis. My approach is to include a lot of error awareness along the way and point out where possible how to estimate errors (not rigorously) and identify the dominant sources of error in most experiments. But for the most part, getting students to be careful enough to have errors < 1% most of the time is already far superior to the 5-20% errors I see dominating most high school and even intro college science labs.

  13. Dale says:
    Dr. Courtney

    Which is preferred in the case where the independent variable is expected to have the larger errors?

    The mathematical assumption is that the independent variable has 0 error. So from a statistics perspective that is what defines independent vs dependent. It is not a physical relationship.

  14. Dale says:
    Dr. Courtney

    For now I'm not buying it, and I intend to keep teaching students to set the vertical intercept to zero

    I would strongly encourage you to do some research into the statistical literature in order to get a better understanding of the issue. It is fine if you do not see me as credible on the topic, but you should make sure that you do some solid research into the statistical issues before you dismiss the suggestion.

    Dr. Courtney

    Of all models of the form f(x) = ax^n, why is n=1 so special that it is better modeled as f(x) = c + ax^n?

    It is not; all linear regressions and even generalized linear regressions need the intercept term for the same reasons.

    Dr. Courtney

    I've always been taught and been convinced that models with fewer adjustable parameters are better.

    This is a valid point. However, the issue is the statistical method. If you want to use ordinary least squares to do your fitting then you need an intercept term. If you want to do a test with a model that drops the intercept term then there are other methods to do so, but they are far more involved.

    Dr. Courtney

    Other factors being equal, simpler models are usually better.

    This is basically a repeat of the previous point. You might find Bayesian statistics to your liking. Bayesian methods naturally include both Popper's falsifiability and Ockham's razor as a fundamental part of the method. It also allows for comparison of non-nested models in a rational way.

    Dr. Courtney

    Does the vertical intercept have a physical meaning or is it more of a fudge factor to get a better fit?

    Neither; it is part of the mathematical machinery of minimizing the least squares residuals. One of the assumptions is that the residuals are zero-mean and constant variance. The intercept is what ensures that. If you eliminate it then you need to carefully consider your error model. It will no longer be zero mean. What does that imply about your measurements? Is that a reasonable error model?

    Again, don't take my word for it, but also don't simply assume that all is well with your approach either. Do your own research into the statistics literature on the topic and actually learn for yourself about these issues. Gather information you find to be credible and make your opinion an informed opinion, specifically an opinion informed by the statistical literature.
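    A small synthetic demonstration of what dropping the intercept does when the data really do have one (illustrative made-up numbers, nothing to do with the echo data):

    import numpy as np

    rng = np.random.default_rng(0)

    # True relationship y = 5 + 2x: the intercept really is nonzero here.
    x = np.linspace(1, 10, 20)
    y = 5 + 2 * x + rng.normal(0, 0.5, x.size)

    # Ordinary least squares with an intercept (design matrix [x, 1]).
    slope_i, icept = np.linalg.lstsq(np.column_stack([x, np.ones_like(x)]), y, rcond=None)[0]

    # The same data forced through the origin (design matrix [x] alone).
    slope_0 = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]

    print(f"with intercept:  slope = {slope_i:.2f}, intercept = {icept:.2f}")
    print(f"through origin:  slope = {slope_0:.2f} (biased by the ignored intercept)")
    print(f"mean residual of the origin fit: {np.mean(y - slope_0 * x):+.2f} (no longer ~0)")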

  15. jbriggs444 says:

    Going back and re-reading the experimental setup, we find: "igniting the firecracker a short distance from a microphone".

    So the correct mathematical model would not be a linear fit in the first place. Instead, one has a trig problem — a triangle with two long sides and a short side between. We want to consider the difference between the sum of the lengths of the two "long" sides and the length of the "short" side in the limit as the height of the triangle approaches zero.

    Let us simplify the model by assuming that the firecracker and microphone are arranged perpendicular to the wall so that the triangle is isosceles. Ideally we are interested in the difference in path length as a function of the length of the perpendicular bisector of the "short" side (aka the distance to the wall).

    Let "s" denote the length of the "short" side — the short separation between firecracker and microphone.

    Let "h" denote the height of the triangle — the length of the perpendicular bisector/the distance to the wall.

    Let "l" denote the length of one "long" side — the diagonal distance from firecracker to midpoint on wall.

    Let "d" denote the delta between the path lengths.

    $$d = 2l - s$$
    $$l = \sqrt{h^2 + \frac{s^2}{4}}$$
    $$d(h) = 2\sqrt{h^2 + \frac{s^2}{4}} - s$$

    Let us see what Excel has to say…

    s h    2h correct      delta
    1 0    0  0            0
    1 0.5  1  0.414213562 -0.585786438
    1 1    2  1.236067977 -0.763932023
    1 2    4  3.123105626 -0.876894374
    1 3    6  5.08276253  -0.91723747
    1 4    8  7.062257748 -0.937742252
    1 5    10 9.049875621 -0.950124379
    1 6    12 11.04159458 -0.958405421
    1 7    14 13.03566885 -0.964331152
    1 8    16 15.03121954 -0.968780458
    1 9    18 17.02775638 -0.972243623
    1 10   20 19.02498439 -0.975015605
    1 11   22 21.02271555 -0.977284454
    1 12   24 23.0208243  -0.979175701
    1 13   26 25.01922366 -0.980776337
    1 14   28 27.01785145 -0.982148548
    

    It looks like a correct linear fit will have a non-zero intercept.

    Edit: Alternately, one could re-scale the independent variable h to reflect the computed path length delta.
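    For anyone who wants to check the table without Excel, a short Python equivalent (same s, h grid, and formula as above):

    import numpy as np

    s = 1.0
    h = np.array([0.0, 0.5] + [float(k) for k in range(1, 15)])
    d = 2 * np.sqrt(h ** 2 + s ** 2 / 4) - s  # exact path-length difference

    for hi, di in zip(h, d):
        print(f"h={hi:4.1f}  2h={2 * hi:5.1f}  correct={di:12.9f}  delta={di - 2 * hi:+.9f}")
    # As h grows, delta approaches -s, i.e. the best straight line picks up
    # a constant offset of about -s rather than passing through the origin.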

  16. A.T. says:
    Dr. Courtney

    Does the vertical intercept have a physical meaning or is it more of a fudge factor to get a better fit?

    The practical question is simply which slope gives you the better approximation of the speed of sound. I guess it depends on the type of error you have, and the distribution of the samples.

  17. Dr. Courtney says:
    fizzy

    Even if there is not time to go into the details of the maths it would seem important to at least mention that it only minimises y residuals and that the basic criterion for this to work properly is to have very small errors on the x axis variable. It is only under those conditions that it will produce the "best unbiased linear estimation" of the slope.

    This is a potential contradiction with your earlier assertion that the vertical axis should always have the dependent variable. Now you are saying the vertical axis should be the variable with the larger errors. Which is preferred in the case where the independent variable is expected to have the larger errors?

    In the echo experiment, errors in the timing are on the order of 0.1% due to the sharpness of the sound leading edges and the accuracy of the clock in the sound card. In contrast, errors in the distance measurement arise from students measuring a distance to a wall with a fabric tape measure. Due care can reduce distance measurement errors near (or slightly below) 1%, but 0.1-0.2% is unlikely with high school students. So you are now saying that plotting the distance on the vertical axis was the right choice because the errors are larger?

  18. Dr. Courtney says:
    Dale

    You should pretty much always include it. The only time you can leave it out is when it is actually 0, not just not significantly different from 0, but exactly 0. And in that case then leaving it in is the same as leaving it out, so you should always leave it in.

    First, and most importantly, if you remove it then all of your other parameter estimates become biased. The EmDrive fiasco is a great example of this. This bias occurs even if the intercept is not significantly different from zero.

    Second, your residuals will no longer be zero mean. This may be related to your observation.

    Third, many software implementations change the meaning of the R^2 value they report when the intercept is removed. So the resulting R^2 cannot be meaningfully compared to other R^2 values nor interpreted in the usual fashion.

    Fourth, even if your true intercept is zero if the function is not exactly linear then your fit can be substantially worse than a linear fit with an intercept.

    I’m sure there are other reasons, but basically don’t do it. It is never statistically beneficial (since the only time it is appropriate is when it makes no difference) and it can be quite detrimental. If it makes a difference then you need to leave it in for the reasons above, and if it doesn’t make a difference then it doesn’t hurt to leave it in.

    Honestly, with your data the above biases and problems should be minuscule. So this data seems to be on the “it doesn’t make a difference” side of the rule. But I would recommend leaving it in for the future. I wouldn’t proactively give any explanation to the students, but just use the default setting.

    For now I'm not buying it, and I intend to keep teaching students to set the vertical intercept to zero when the basic science of the experiment suggests the model will go through the origin. Here's why:

    1. Of all models of the form f(x) = ax^n, why is n=1 so special that it is better modeled as f(x) = c + ax^n? I've never heard an argument or a need to add a constant term when n = 1/2 (for example, fall time as a function of drop height) or when n = 3/2 (for example Kepler's third law) or when the power is unknown (or treated as unknown) or in any other case except for suspected instrument issues where the instrumental measurement may be adding a constant offset to the measurement.

    2. I've always been taught and been convinced that models with fewer adjustable parameters are better. In least squares fitting, a perfect fit can usually be achieved by having as many adjustable parameters as data points. Experiments with small numbers of data points require models with smaller numbers of adjustable parameters to better test a hypothesis. In the extreme, one could never support a direct proportionality using two data points fitting to a line with a constant term, but one can support it fitting to a line forced through the origin.

    3. "Things should be as simple as possible, but no simpler." – Einstein. Though not an absolute arbiter, I also think it wise to keep Occam's Razor in mind. Other factors being equal, simpler models are usually better. I have nothing against a bit of exploratory data analysis, but regarding direct proportions, I see no compelling case why adding a constant should be the preferred two parameter model rather than adding a quadratic term or trying a power law.

    4. Experience. I've been teaching these labs for a long time. My experience is that when the physics screams that the model must go through the origin, agreement will be closer between the best-fit slope and the known good value by fitting to a one parameter model. I have multiple data sets not just showing this in the case of speed of sound measurements, but also mass vs volume measurements and other high school type experiments. Even adding the second parameter in other ways (quadratic term, power law) yields less accurate results for the slope.

    5. Physical meaning. I'm a big fan of teaching the physical meaning of both the slope and intercept when fitting to a linear model. Does the vertical intercept have a physical meaning or is it more of a fudge factor to get a better fit? If it seems to me like more of a fudge factor, it is best to skip it. And to me, it seems like a fudge factor for models of the form f(x) = ax^n. Sure, a constant term in a direct proportionality may have the meaning of a systematic offset in the measurement and is something to keep in mind as a possibility. But due care (such as zeroing the electronic balance with the empty graduated cylinder on it) can pretty much eliminate it. I'm not keen on teaching students to add fudge factors "just in case."

  19. fizzy says:

    Analysis of the residuals of the fit to a line forced through the origin suggested the small residuals were systematically due to widening of the cylinder at the top. Fitting to a quadratic with zero constant term made a lot more sense (as the two parameter model) in that case. But this was pretty far into the weeds relative to the initial hypothesis that mass was proportional to volume. A constant term in this case is just silly.

    A constant term is not "silly". If the fit evaluates it near zero, it will not cost anything, and that is valuable information in itself, not "silly". Negative results can be as important as positive ones. Blinkering the analysis by trying to coerce the result is not only silly but unscientific.

    I find your supposed interpretation of the non-linearity rather odd too. A graduated cylinder which is not cylindrical to within the indicated precision seems a little unlikely. It seems far more likely that your spurious attempt to force the fit through zero was leading to an incorrect regression slope which produced increasing residuals at higher volumes. It is hard to say without seeing the data but it sounds like it did have a finite intercept, but you were in denial about such things, regarding them as "silly".

    Did you teach your students how to correctly read the meniscus of the fluid in the measuring cylinder? That could lead to a finite intercept, if you would allow that possibility to be seen. There clearly was some experimental error which needs to be identified. Had you not expressly removed the constant term, it would have given you some information about the problem. You have neatly demonstrated one reason not to bias the regression by excluding parameters.

    If you suspected the cylinder was not straight, did you at least measure it to attempt to falsify this hypothesis? Apparently not. Did you substitute another cylinder to test the hypothesis? Apparently not. Popper not happy.

  20. fizzy says:

    testing a hypothesis is best understood in the sense of Popper's falsifiability.

    You still seem to have trouble acknowledging that forcing the graph to go through zero removes any possibility of showing the data does not go through zero. You earlier claimed it "supported the hypothesis". What does Karl Popper have to say about that?

    In your water measuring experiment, you illustrate that something unexpected can come out of an experiment. As I said earlier: you need to analyse the data objectively, not try to coerce it to comply with expectations. If you choose to fit a 2nd order polynomial as a model, that too should include the constant term. If it produces a significant offset then that will be part of whether you choose to accept that as a better model of your data or whether you need to re-examine your experiment as you did in this case.

    That your sound data fitted to within 1% simply masks the fact that you did the regression the wrong way round; it does not excuse it or mean that it does not matter to teach it incorrectly.

  21. fizzy says:

    Here is some real meteorological data with significant experimental error in both variables. A linear regression was done, first on x then on y. The two OLS slopes are both invalid because each one ignores the errors in one or the other variable. OLS should never be applied to this kind of data in either direction.

    It would be possible to construct data where the true slope lies outside this range but usually the true slope will lie between these two extremes. ( The locus of the points was plotted for other reasons , that is not relevant to this discussion. )

    As can be seen, this is not some purist pedantic point; it can make an enormous difference to the supposed linear relationship between the two variables.

    I never cease to be amazed by the calibre of scientists and mathematicians who are totally unaware of proper use of this technique, which is one of the most fundamental tools of data analysis. One of the main reasons seems to be that it is not being taught properly in high school.

    Even if there is not time to go into the details of the maths it would seem important to at least mention that it only minimises y residuals and that the basic criterion for this to work properly is to have very small errors on the x axis variable. It is only under those conditions that it will produce the "best unbiased linear estimation" of the slope.
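    A synthetic illustration of the point (made-up data with comparable noise in both variables; the true slope is 1):

    import numpy as np

    rng = np.random.default_rng(1)

    u = np.linspace(0, 10, 200)       # the error-free underlying variable
    x = u + rng.normal(0, 1, u.size)  # measured x, with noise
    y = u + rng.normal(0, 1, u.size)  # measured y, with noise

    b_yx = np.polyfit(x, y, 1)[0]      # OLS of y on x: biased low
    b_xy = 1 / np.polyfit(y, x, 1)[0]  # OLS of x on y, inverted: biased high
    print(f"y-on-x slope: {b_yx:.2f}; x-on-y slope: {b_xy:.2f}; true slope: 1.00")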


  22. Dale says:
    Dr. Courtney

    The question of whether to include a vertical intercept is more interesting.

    You should pretty much always include it. The only time you can leave it out is when it is actually 0, not just not significantly different from 0, but exactly 0. And in that case then leaving it in is the same as leaving it out, so you should always leave it in.

    First, and most importantly, if you remove it then all of your other parameter estimates become biased. The EmDrive fiasco is a great example of this. This bias occurs even if the intercept is not significantly different from zero.

    Second, your residuals will no longer be zero mean. This may be related to your observation.

    Third, many software implementations change the meaning of the R^2 value they report when the intercept is removed. So the resulting R^2 cannot be meaningfully compared to other R^2 values nor interpreted in the usual fashion.

    Fourth, even if your true intercept is zero if the function is not exactly linear then your fit can be substantially worse than a linear fit with an intercept.

    I’m sure there are other reasons, but basically don’t do it. It is never statistically beneficial (since the only time it is appropriate is when it makes no difference) and it can be quite detrimental. If it makes a difference then you need to leave it in for the reasons above, and if it doesn’t make a difference then it doesn’t hurt to leave it in.

    Honestly, with your data the above biases and problems should be minuscule. So this data seems to be on the “it doesn’t make a difference” side of the rule. But I would recommend leaving it in for the future. I wouldn’t proactively give any explanation to the students, but just use the default setting.

  23. Dr. Courtney says:

    For most high school science labs, testing a hypothesis is best understood in the sense of Popper's falsifiability. If the experiment and subsequent analysis have a reasonable possibility of refuting the hypothesis and the experiment is done with adequate care, then one can say that the hypothesis is supported if the data agrees with the hypothesis. One need not usually delve into the formal hypothesis testing of statistics to teach most high school science labs. (In some project-based courses, I do explain and show students how to compute uncertainties and p-values, as appropriate for the project and student capabilities.) I also doubt the wisdom of eschewing least squares fitting in high school science labs simply because one does not have time or inclination to delve into formal statistical hypothesis testing.

    The question of whether to include a vertical intercept is more interesting. Certainly a strong case can be made that fitting to a single adjustable parameter (the slope), together with the resulting r-squared value, makes it very reasonable to conclude that the hypothesis is supported. But I suppose support can always be made stronger by showing the direct proportionality works better than other possible models. Several two parameter models are possible: the standard equation of a line, a parabola with zero constant term, and a power law come to mind. I'm not sure why the standard equation of a line would take priority over the other two. I actually taught a similar experiment recently where students measured mass vs. volume (weighing liquid in a graduated cylinder with the electronic balance zeroed with the graduated cylinder in place). Analysis of the residuals of the fit to a line forced through the origin suggested the small residuals were systematically due to widening of the cylinder at the top. Fitting to a quadratic with zero constant term made a lot more sense (as the two parameter model) in that case. But this was pretty far into the weeds relative to the initial hypothesis that mass was proportional to volume. A constant term in this case is just silly.

    But fitting several different models and analyzing residuals are topics that may be introduced to high school students with available time, but certainly are not necessary. By the time you have good experimental data, supporting the hypothesis and in agreement with the known proportionality constant within 1% in a high school science lab, I think you can rest easy and think you did OK. I certainly would have been content with most students arriving in my college physics labs had they been capable of routinely achieving 1% accuracy.

  24. Dale says:
    fizzy

    neither is that convention arbitrary.

    Nonsense. It is completely arbitrary. There is no non-arbitrary reason to put the dependent variable on the vertical axis. I challenge you to find a non-arbitrary reason for the vertical dependent axis.

    fizzy

    standard OLS tools … are "blindly following" that convention too

    I am not familiar with the specific tool used in the write-up, but I disagree completely that standard OLS tools use that convention. The standard OLS tools that I have used typically have the variables horizontal and the observations vertical. Often even that can be overridden by the user. I don’t even know how the OLS tools could follow that convention in principle.

    Perhaps you mean plotting tools instead of OLS tools, or maybe some specific OLS tools that are embedded into a plotting tool.

    fizzy

    If the aim is to examine the experimental relationship between elapsed time and distance travelled you should be fitting a two-parameter linear model. If your experiment is well designed and there are not any anomalous effects it should have an intercept very close to zero.

    I agree with this point. Fitting a model without an intercept term is rarely advisable.

  25. fizzy says:

    "You won't be of much use as a teacher until you stop seeing yourself as so superior to others."

    Who said I was a teacher?!

    That is, however, a very good piece of advice, one that teachers of all levels should constantly be reminded of. A pearl of wisdom.

    It is not "my way"; it is the way that OLS is derived mathematically. What you have done is mathematically invalid, and you are unwilling to recognise that. So I guess I could reply that you are nearly useless as a teacher, too. The difference is, I'm not a teacher.

  26. Dr. Courtney says:
    fizzy

    "You are blindly mixing theory and experiment…"

    "Again it is not a 'trendline'…"

    "There is nothing 'blind' about following the convention, and neither is that convention arbitrary. There is very good reason for following that convention if you are going to use standard OLS tools without knowing what you are doing, because they are 'blindly following' that convention too!!"

    "It is in no way 'better' to invert the axes and then do a totally invalid regression to estimate the principal result of the experiment."

    You won't be of much use as a teacher until you stop seeing yourself as so superior to others. You can take up your trendline debate with those who make spreadsheets and other graphical and data-analysis tools that refer to least-squares fitting results as trendlines. But overall, spend some more time actually teaching, and lose your "my way or the highway" approach. Until then you are nearly worthless as a teacher.

  27. fizzy says:

    "I explain it to students this way: the only possible distance any signal can travel in zero time is zero distance."

    You are blindly mixing theory and experiment. The scientific method demands that you conduct an experiment and then compare to theory/hypothesis. You do not start inserting assumptions from your hypothesis into your data and then conclude that this "supports the hypothesis". It is not encouraging that you cannot understand that.

    Again it is not a "trendline". That term belongs to time series analysis and principally comes from economics, as do spreadsheets. What you have here is a linear model you are trying to fit to the data.

    If the aim is to examine the experimental relationship between elapsed time and distance travelled, you should be fitting a two-parameter linear model. If your experiment is well designed and there are no anomalous effects, it should have an intercept very close to zero.

    "Plotting it this way makes calculating the speed of sound easier, which was the main point of the lab. So setting the dependent variable on the horizontal axis is in fact a better choice for this experiment than blindly following the arbitrary convention."

    There is nothing "blind" about following the convention, and neither is that convention arbitrary. There is very good reason for following that convention if you are going to use standard OLS tools without knowing what you are doing, because they are "blindly following" that convention too!!

    It is in no way "better" to invert the axes and then do a totally invalid regression to estimate the principal result of the experiment.

    "When physical considerations demand that a mathematical relationship goes through the origin, there is no need to add a variable vertical shift artificially."

    Again you are failing to understand the difference between theory and experiment. There is nothing "artificial" about the second parameter; there may be some experimental or physical conditions that produce something a little different from what you expect. You should analyse the data objectively without attempting to force the result you expect. That is the "need". It costs nothing, and if things go as expected you get a near-zero intercept and can say to your students: "this is what we would expect from theory because…".
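    A minimal sketch of that check in Python, with numpy and hypothetical data; since distance is the controlled variable here, time is regressed on distance and the speed recovered from the slope:

        import numpy as np

        D = np.array([100.0, 150.0, 200.0, 250.0, 300.0])        # set distances (m)
        t = np.array([0.2915, 0.4373, 0.5830, 0.7289, 0.8746])   # measured times (s)

        # Two-parameter fit t = a + b D, with the parameter covariance
        (b, a), cov = np.polyfit(D, t, 1, cov=True)
        sig_b, sig_a = np.sqrt(np.diag(cov))

        # If |a| is within a sigma or two of zero, the data themselves say the
        # line passes near the origin; nothing has been forced.
        print(f"intercept a = {a:.4f} +/- {sig_a:.4f} s")
        print(f"speed v = 1/b = {1.0 / b:.1f} m/s")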

    Since you clearly do not have the slightest inclination to recognise your errors and improve the experiment you are presenting to others, I guess this conversation is futile. I have rarely seen anyone involved in teaching who has the humility to admit an error, so this is not too much of a surprise to me. I thought it worth a try, though.

    "If you are confident you can do a much better job, please start showing us all how it is done."

    Well, clearly an article on the use and misuse of OLS may be worth submitting, since even PhDs from MIT don't seem to know how to use it. ;)

    I do have a text on that; it may be worth dusting off.

  28. Dr. Courtney says:

    The notion that the trendline goes through the origin is supported in lots of ways without assuming a direct proportionality between distance and time. I explain it to students this way: the only possible distance any signal can travel in zero time is zero distance. If time permits, when we do this experiment in class, I'll also have the students try a power-law fit to the data. This also enforces the physical constraint of passing through the origin, but the fitted power ends up very close to 1. When physical considerations demand that a mathematical relationship goes through the origin, there is no need to add a variable vertical shift artificially.
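    For anyone who wants to try that, a minimal sketch in Python, with numpy and hypothetical data, fitting D = A t^p as a straight line in log-log space; the fitted exponent should come out very close to 1:

        import numpy as np

        t = np.array([0.2915, 0.4373, 0.5830, 0.7289, 0.8746])  # times (s)
        D = np.array([100.0, 150.0, 200.0, 250.0, 300.0])       # distances (m)

        # D = A t^p  =>  log D = log A + p log t
        p, logA = np.polyfit(np.log(t), np.log(D), 1)
        print(f"p = {p:.3f}, A = {np.exp(logA):.1f}")  # expect p near 1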

    This lab is designed for students anywhere from 9th grade Physical Science to 1st year college Physics. It's up to the teacher to adapt the details to the available time given the needs and abilities of the students. One can do a lot more in a 3 hour college Physics lab. The version presented in the Insight article was completed in a single hour with a 9th grade Physical Science class with very weak math skills.

    PF is often looking for new Insights articles. If you are confident you can do a much better job, please start showing us all how it is done.

  29. Dale says:
    fizzy

    "Time is the dependent variable and should be plotted on the y axis."

    That is purely a convention; in relativity, time is conventionally plotted on the vertical axis. There is nothing that requires one axis to be dependent and the other independent.

    Plotting it this way makes calculating the speed of sound easier, which was the main point of the lab. So setting the dependent variable on the horizontal axis is in fact a better choice for this experiment than blindly following the arbitrary convention.

    fizzy

    "The text underlines that care was taken to ensure the software was forced to go through the origin. This is totally wrong."

    I agree with you on this, but teaching the students why belongs in a statistics class. Same with the fact that regression of x vs. y is different from regression of y vs. x.
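    That last point is easy to demonstrate; a minimal sketch in Python with numpy and made-up noisy data, showing that the slope of y regressed on x is not the reciprocal of the slope of x regressed on y:

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 10.0, 50)
        y = 3.0 * x + rng.normal(0.0, 2.0, x.size)   # true slope 3, noisy y

        b_yx = np.polyfit(x, y, 1)[0]   # regress y on x
        b_xy = np.polyfit(y, x, 1)[0]   # regress x on y

        # These agree only when the points fall exactly on a line
        print(b_yx, 1.0 / b_xy)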

  30. fizzy says:

    No, but you can do things properly, so that attentive students can pick things up correctly, rather than being shown bad ways of doing stuff. There are several things which need correcting here.

    This is not time series data. Time is the dependent variable and should be plotted on the y axis. If you don't have time to explain why, at least do it correctly.
    The text underlines that care was taken to ensure the software was forced to go through the origin. This is totally wrong. It then incorrectly claims that this "supports the hypothesis" that it should go through the origin. False logic and a spurious conclusion.

    It would also be good practice to publish a table with the experimental data. That would not take much space in this case.

    The idea of this experiment is great from an educational point of view. I hope Dr Courtney will be motivated to improve this write-up a bit.

  31. fizzy says:

    Fun experiment. Sure to get the attention of the kids.

    A few criticisms of the write up.
    "When fitting to a trendline in graph.exe, we were sure to check the box to set the vertical intercept to zero, as the hypothesis predicts not only a linear relationship, but also a vertical intercept of zero (a direct proportionality.)"

    Inductive thinking. It seems that you have 5 DATA points. The origin is not a data point; it is part of the hypothesis you are supposed to be testing.

    You are not fitting to a trendline; you are fitting a trendline to the data. The use of the term "trend" is not appropriate either: you are fitting a linear model to the data.

    "Inspection of Figure 1 shows that the hypothesis was supported."

    To a large degree you induced this result. It is not good teaching to suggest this "supported" the hypothesis.

    If there were a finite intercept from the experiment, this could then be a point of discussion about why it varied from what was expected. It may even be worth trying to induce this.

    I find it odd that there is not a single mention of measurement uncertainty: distance, time, the accuracy of determining the exact time of the two events from the noisy sound recording, or how the number of data points affects confidence in the slope.

    The statistics of 5 points are not the experimental uncertainty, and the false data point skews the stats.

    No mention of how graph.exe fits the "trendline" (OLS, it seems). No mention of the dependent and independent variables, nor of the requirement in using OLS that only the dependent variable has significant experimental error.

    Since distance is the controlled variable here, it should be plotted on the x axis, not y, and least squares is not being correctly applied as done.

    It is a little saddening that someone can get a PhD in physics from MIT without knowing how to correctly apply OLS. But please note this is not a personal dig, this problem is endemic in science and has been for decades. Using spreadsheets for science may be part of the problem.

    I did high school in the '70s, and our physics teacher carefully explained the limitations and criteria for a valid application of OLS, explained the derivation of the least-squares method, and showed where the assumption that Xerr << Yerr has to be made to get the result. We were probably the last generation to get that kind of education. :(

    The data here are quite tight, so it does not induce a large error. However, where data are more spread out (larger x and y errors) there is what is called regression dilution, and the slope is under-estimated by OLS. This is one reason why there could be a finite intercept when a zero intercept is expected. I have seen a whole room of Maths PhDs spend an afternoon faced with such an issue and not one of them knew where it came from. The slope was visibly wrong but they could not understand why.
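    Regression dilution is easy to show with simulated data; a minimal sketch in Python using numpy, with made-up numbers: as noise is added to the x values, the OLS slope shrinks below its true value.

        import numpy as np

        rng = np.random.default_rng(1)
        true_slope = 340.0
        t_true = np.linspace(0.1, 1.0, 200)                       # exact times
        D = true_slope * t_true + rng.normal(0.0, 2.0, t_true.size)

        for x_noise in (0.0, 0.02, 0.05):
            t_obs = t_true + rng.normal(0.0, x_noise, t_true.size)
            slope = np.polyfit(t_obs, D, 1)[0]
            print(f"x noise = {x_noise:.2f}  fitted slope = {slope:.1f}")
        # The fitted slope falls increasingly below 340 as the x noise grows.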

    I hope these comments can be used to improve the presentation and increase its educational value.

  32. Fewmet says:
    sophiecentaur

    "A very cheap way that has good accuracy/consistency is to stand a distance from a large wall and use a hammer to hit a metal object. That is obvious so far. The clever bit is to strike the metal exactly when you hear the echo, and repeat. You repeat until you are accurately in sync with the echo pulses. Then you measure the time for 10, 20 or more echoes."

    I encountered an equivalent phenomenon several years ago while walking on a local college campus. I passed between a blank wall of a building and a pulsating garden sprinkler. My left ear heard the sprinkler, which produced a psst sound as it spurted about four times a second. My right ear heard the echo off of the building. I was able to position myself so I heard both sounds simultaneously, and I realized that I was hearing the direct sound of the nth spurt and the echo of the (n-1)th spurt. Given the period of the sprinkler spurts and the distance from the sprinkler to the wall, I could get the speed of sound.
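    If I follow the geometry, the arithmetic would be roughly this: hearing the direct sound of the nth spurt at the same instant as the echo of the (n-1)th spurt means the echo path (sprinkler to wall to listener) exceeds the direct path by the distance sound travels in one spurt period, so v = (L_echo - L_direct)/T. With about four spurts per second, T is roughly 0.25 s, so the path difference would need to be on the order of 85 m; the numbers here are illustrative.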

    If I could get my students access to that setup, I'd ask them to predict where the sound and echo are heard simultaneously, and have them design an experiment to test the prediction.

  33. Dr. Courtney says:
    sophiecentaur

    "It really worries me that students seem to confuse simulation with reality all the time. It's the Star Trek effect. They ask why their simulation is not giving the answers they expect. It's GIGO without having any way of chasing the fault in the model. A simulation is so much cheaper than hardware, and you don't need lab space nor need to tidy up for the next class. You can see why 'the system' likes to encourage it."

    I consider downloading real data acquired from a third party to be a different (better) class of lab than computer simulations. For example, last year I had a physical science class download and analyze both Brahe's original data and modern data for testing Kepler's third law. Later (for a different lab), I had them download available orbital data for earth satellites to test Kepler's third law in that system. I had a physics class analyze Robert Boyle's original data (from his historical publication) to test Boyle's law.

    In my view, these labs are not as good as real, hands-on experiments where students acquire the data themselves. But they do more accurately represent the scientific method by comparing predictions from proposed models (usually the hypothesis) against _real_ experimental or observational data. There are many historical cases where science really works this way: a model is validated against data acquired by a different party.

    In contrast, testing a predictive model or hypothesis against a simulation is not a version of the scientific method that I think we should be teaching in introductory labs. That's not how the scientific method really works, and using simulations for labs runs a significant risk of confusing students about the scientific method itself.

  34. sophiecentaur says:
    Dr. Courtney

    "I've got mixed feelings about calling an analysis activity a real 'laboratory' if someone else did the experiment and collected the data."

    It really worries me that students seem to confuse simulation with reality all the time. It's the Star Trek effect. They ask why their simulation is not giving the answers they expect. It's GIGO without having any way of chasing the fault in the model. A simulation is so much cheaper than hardware, and you don't need lab space nor need to tidy up for the next class. You can see why 'the system' likes to encourage it.

  35. sophiecentaur says:

    A very cheap way that has good accuracy/consistency is to stand a distance from a large wall and use a hammer to hit a metal object. That is obvious so far. The clever bit is to strike the metal exactly when you hear the echo, and repeat. You repeat until you are accurately in sync with the echo pulses. Then you measure the time for 10, 20 or more echoes. The accuracy gets better and better with more pulses.
    This is the classic integration method of averaging out errors. Millisecond timing accuracy is possible with enough pulses.
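    In symbols: if you stand a distance d from the wall and time N strike-to-echo intervals in a total time T, then v = 2dN/T, and any timing error is divided by N. With made-up numbers for illustration, d = 43 m and N = 40 intervals timed at T = 10.0 s gives v = 2 x 43 x 40 / 10.0 = 344 m/s.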

  36. Dr. Courtney says:
    Dale

    "I think that is a succinct summary of the problem with pop-sci presentations. It is good that you are focusing on more than just the fun, but including both fun and learning objectives."

    In a paper coming out this fall in TPT, colleagues and I identified three challenges in typical introductory physics lab design:

    1) simple experiments connected with learning objectives
    2) experiments accurate enough that theory and measurement agree without gaps that students ascribe to confounding factors (imperfect simplifying assumptions, measurement uncertainties, and "human error"), and
    3) experiments capturing student attention to ensure due diligence in execution and analysis.

    So that can be summarized in three goals: 1) learning objectives, 2) accuracy (I like 1%), and 3) the Gee Whiz factor. I like the firecracker echo experiment because it has all three (which is rare), plus a 4th that is often a constraint: 4) cheap.

    I've been working a lot this past year with a number of resource-constrained schools: home schools, private schools, foreign schools, and public schools in underfunded districts. Sometimes it feels like it comes down to:
    A) What interesting things can you do with a microphone as an accurate timer?
    B) What interesting kinematics can you catch with an available video camera and analyze in Tracker? (Or otherwise use the camera as a timer to 1/30 sec)
    C) What "virtual" labs can you do by downloading historically important or other interesting data (Boyle, Kepler, etc.)?

    I've got mixed feelings about calling an analysis activity a real "laboratory" if someone else did the experiment and collected the data. But these can have a hypothesis, a quantitative test of the hypothesis, data analysis, and a traditional lab report. I wouldn't want a lab program to rely too heavily on these, but they are better than skipping labs completely due to resource constraints.

  37. Dale says:
    Dr. Courtney

    "It's easy to pretend one is doing science when all the students remember is the 'Gee Whiz' and no one remembers the learning objectives."

    I think that is a succinct summary of the problem with pop-sci presentations. It is good that you are focusing on more than just the fun, but including both fun and learning objectives.

  38. Dr. Courtney says:
    Dale

    "That is fun! Not often that you get to set off fireworks for science."

    Yep. I'm actually going to use "Chemistry of Pyrotechnics" to put together a few labs for next year (supposing the local school is pleased enough to let me coordinate a few labs for them again).

    The challenge with these is not making them fun. That's a given. The challenge is connecting the "Gee Whiz" part to some interesting science in a way that tests a hypothesis reasonably within the learning objectives and in the Goldilocks zone (not too hard, not too easy, just right).

    It's easy to pretend one is doing science when all the students remember is the "Gee Whiz" and no one remembers the learning objectives.
