Hello, PF!
[My question pertains to a non-rigorous, undergraduate introductory Probability and Statistics course. I'm no math major, so please correct me if I've mishandled any terms or concepts as I try to express myself. I'm always eager to learn!]
In a discussion of the standard deviation of a sample in relation to the 68-95-99.7 rule, the following "conceptual" example was given—or rather, made up on the spot—by our professor:
Assume [itex]\bar{x}=50\%[/itex] and [itex]s=20\%[/itex] for test scores (in units of percent correct), and assume that the sample represents a normal distribution (symmetrical and bell-shaped) of scores on a test where no score can fall below [itex]0\%[/itex] and none can exceed [itex]100\%[/itex] (sorry, fellas, no extra credit).
It occurred to me that any score more than [itex]2.5[/itex] standard deviations from the mean would be a score above [itex]100\%[/itex] or below [itex]0\%[/itex]. Yet by the same reasoning behind the 68-95-99.7 rule, the interval within [itex]2.5[/itex] standard deviations only encompasses approximately [itex]98.8\%[/itex] of a normal distribution, meaning that approximately [itex]1.2\%[/itex] of the scores would have to fall outside this possible range.
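To sanity-check that figure, here is the calculation I have in mind, assuming the scores truly follow a normal distribution with mean [itex]50\%[/itex] and standard deviation [itex]20\%[/itex]:

[tex]P(0\% \leq X \leq 100\%) = \Phi(2.5) - \Phi(-2.5) \approx 0.9938 - 0.0062 = 0.9876,[/tex]

where [itex]\Phi[/itex] is the standard normal cumulative distribution function. So roughly [itex]1.2\%[/itex] of the distribution's area would lie below [itex]0\%[/itex] or above [itex]100\%[/itex], in regions where scores cannot actually occur on this test.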
My question:
Is the above example even possible given the constraint (is that the right word?) [itex]0\% \leq x_i \leq 100\%[/itex] on the data?
And
Extrapolating to the overall concept: can the standard deviation [itex]s[/itex] of a normal distribution ever be large enough that the distribution extends beyond the possible range of its data points/values?
My guess is that this was a simple oversight on my professor's part.
Thank you!