Expunging Myths from the Classroom

In summary, the conversation discusses an article by Alex Pasternack about the Tacoma Narrows Bridge collapse and how the commonly taught explanation of resonance was proven to be incorrect. The article highlights the issue of false lessons being taught in classrooms and the negative consequences this can have. The conversation also raises questions about how the educational system can discover and eradicate these false lessons, and whether there are quality assurance procedures in place at prestigious universities to ensure the validity of the science being taught.
  • #36
anorlunda said:
Use whatever approach you want, but how can you measure success without data? How can you propagate best practices and work to eliminate the worst? There are many anecdotes about brilliant teachers who transform one classroom at a time, but whose methods fail to propagate to the whole system.

I take more of a local approach, not being confident that the centralized power needed to "propagate to the whole system" can necessarily be trusted to improve.

I have more confidence in the open marketplace of ideas (and my ability to speak in it), combined with improving things in my own classrooms and my institution's classrooms, to have a better outcome (on average) than centralized "top down" approaches. Teaching is inherently an individual art. Passion, zeal, and good old-fashioned hard work hold more promise than attempts to propagate methods to "the whole system."

I measure my success as a teacher with data: standardized test scores, student success in downstream courses, scores on my own assessments, etc. As an administrator, I've had a legitimate place to measure the success of other teachers at my institution with similar metrics. I think it's a myth of generalization that techniques which work soundly in one place will work in all places, when the relevant human factors are so vastly different.
 
  • #37
anorlunda said:
Engineering schools have accreditation boards, medical schools have board certification, law schools have bar associations. Why should science be different? Why should employers have the right to demand a science degree, but not a degree from an accredited institution? What about the HR departments of CDC or HHS for example. They are constrained by lots of rigid rules. How could they prevent their scientific staff from being packed by Liberty University grads?

You are raising several important inter-related issues. In no particular order:

Universities and colleges also have accreditation boards- that is, while some individual academic programs do not have individual accreditation, our university as a whole maintains accreditation that covers all of the academic programs. In theory, institutional accreditation acts to maintain quality standards. Now, there are many different university accreditation organizations, and more appear all the time. In that context 'accreditation' means very little- in practice, it essentially only verifies that course content matches the catalog descriptions.

http://ope.ed.gov/accreditation/FAQAccr.aspx

Now board certifications, etc. This is what I mean by 'peer evaluation'. Many professionals must maintain a credential, typically through continuing education credits. To be sure, states have recently increased the types of jobs requiring certification as a revenue source, but certification is bestowed by a peer group. K-12 public school teachers must maintain certification, but I and other university professors do not. I admit this is a strange situation: many people would claim that teaching is the most important aspect of my job; it's certainly the most time-consuming part. Yet I have had *no* formal training in teaching- and that holds for my STEM colleagues as well. I create courses: homework sets, tests, etc., and *assign grades* in spite of having no prior experience. How can this be?

Part of the answer, surely, is the lack of 'educator malpractice'. "While educators can be held liable for infringing on students’ rights and for negligence that causes students physical harm, educators do not have a legal responsibility to educate students. In other words, educators can be sued for providing inadequate supervision, but not for providing inadequate instruction."

http://www.aasa.org/SchoolAdministratorArticle.aspx?id=6516

Now for hiring: it's been a while since I directly participated in a hiring process in a private company, but as I recall we can write any list of requirements we like- once they are written down, we can't deviate from them. If we forgot to include something (say, 5+ years of experience using Pro/E, SolidWorks, or AutoCAD), we can't add that requirement when evaluating job seekers. AFAIK, we could indeed require a degree from an accredited institution, or certification from an appropriate agency. Now, being at a state institution, the rules are indeed somewhat different- while we can generally write our job requirements as we please, HR decides who is on the search committee and decides who is brought in for an interview. We can require candidates to hold certification if their profession requires that for practice (speech pathology, for example).

How can my department prevent the hiring of a creationist? In reality, we can't prohibit it- we can strongly recommend the candidate as 'unacceptable', but the Dean decides who gets the offer, which must be approved by the provost, VP of research, president, and board of trustees. If that person makes it through and is hired, and that person proceeds to teach their unscientific beliefs in class, we have the opportunity to kick them out during the pre-tenure review process. If that person stays quiet for 6 years, obtains tenure, and then proceeds to teach their unscientific beliefs in class, there's not much we can do. But academia is replete with stories of tenured faculty going bonkers.

anorlunda said:
<snip>
  • Flutter like a flag fluttering in the wind. Classroom demo or youtube video.
Like! FWIW, I don't discuss Galloping Gertie in class, but flag fluttering... that could be very cool.
 
  • #38
anorlunda said:
It could be described as a quality control issue for education. If we strive for six-sigma quality improvement in industry, why not in education?

I'm not trying to be insulting to educators. I'm just suggesting that here we have an ideal case upon which to do research to improve the educational system.

I would not read too much into this. The cause of such an unusual bridge failure is complex. There are several ways of looking at the collapse, and I am not sure that the resonance theory is fundamentally wrong. Just a bit oversimplified and incomplete, perhaps.

The fact is that the wind caused the bridge to begin to oscillate. The frequency of oscillation was determined by the physical characteristics of the bridge, but the energy to get it going came from the wind. And the steady wind just kept increasing the amplitude of the oscillations. I don't see a big difference between that and 'resonance'. The forcing does not have to be in the form of pulses of wind to cause that. It could have been caused by a steady wind catching more surface area on the downward part of the oscillation but less on the upward part, effectively providing pulses of lateral force on the span timed naturally to the period of oscillation.
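To sketch the distinction at stake, here is a toy one-degree-of-freedom model (all parameter values are invented for illustration, not bridge data) in which a steady wind acts as negative damping on the oscillator. No periodic forcing appears anywhere, yet a small disturbance grows; that is the usual minimal picture of self-excited oscillation (flutter), as opposed to classic forced resonance:

```python
# Self-excited oscillation sketch: a 1-DOF model where a steady wind acts as
# negative damping, so amplitude grows without any periodic forcing.
# All parameter values are illustrative only.

def simulate(c_struct, c_aero, k=4.0, m=1.0, dt=0.001, t_end=30.0):
    """Integrate m*x'' + (c_struct - c_aero)*x' + k*x = 0 with explicit
    Euler steps and return the peak |x| over the run."""
    x, v = 0.01, 0.0          # small initial disturbance
    peak = abs(x)
    for _ in range(int(t_end / dt)):
        a = -((c_struct - c_aero) * v + k * x) / m
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

calm = simulate(c_struct=0.2, c_aero=0.0)    # net positive damping: decays
windy = simulate(c_struct=0.2, c_aero=0.4)   # net negative damping: grows

print(f"peak amplitude, no wind:     {calm:.4f}")
print(f"peak amplitude, steady wind: {windy:.4f}")
```

The point of the sketch: with the wind term switched off, the disturbance dies out; with it on, the same equation pumps energy in at the structure's own natural frequency, much like the 'naturally timed pulses' described above.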

AM
 
Last edited:
  • #39
anorlunda said:
Some of these replies are getting absurd. A common flaw in public debate is falsely claiming that the opponents argue extreme all or none positions.

One extreme would be that the one and only ultimate truth can be mentioned in classrooms. That is not my position. The other extreme is that anything and everything may be taught in classrooms. I don't believe that those opposing me seriously take that position either.

My motivation for this thread is the simple fact that crackpot science can get you thrown off of PF, but it cannot get you thrown out of a classroom if you're the teacher.

I think you're missing the point of an undergraduate education. It isn't to explicitly learn everything correctly, but to build a broad - if sometimes shallow and error-riddled - knowledge base that will be expanded upon in a certain field by graduate school or industry work. Brand new engineers in the workforce, even from excellent schools like UCB or Purdue, more often than not don't have the specific knowledge required to do the job they were hired for. The degree ensures that they're capable of learning on the job.

To demand rigorous QA checks and surveys of what everyone is teaching would be infeasible and very expensive to carry out. I also don't see exactly what needs fixing: an engineer building a bridge will have many times more knowledge of bridge building than he had in school, and errors from school become far less important.

My position is that quality control and process improvement akin to six-sigma, which have become common in industry, should be applied to education. Dispute me on that, not on false all-or-none exaggerations.

Six sigma? Please... I don't think the education system needs some silly, useless, management fad. Our government already burns our taxpayer dollars on such ineffectual management and bureaucracy.
 
  • Like
Likes Merlin3189, Dr. Courtney and vela
  • #40
Student100 said:
Six sigma? Please... I don't think the education system needs some silly, useless, management fad. Our government already burns our taxpayer dollars on such ineffectual management and bureaucracy.

I don't see six-sigma as a fad. It was largely the basis of the Japanese boom (though not enough to prevent the general implosion of Japan later on). Many companies that have used it have eaten their competitors for lunch; some who have not used it have gone down. The same goes for many other techniques springing from ideas in project management. The idea itself, e.g., Ishikawa's fishbone diagrams, comes down to identifying the source of a problem and changing the production line to remove it.

EDIT: There is, of course, the task of defining quality here in a reasonable-enough way for it to be implemented effectively. But I don't see how quality management /six-sigma is intrinsically incompatible with an improvement in education.
 
  • #41
I see the mission of six-sigma as creating top-down pressure to improve the process. Those who say "We like it the way it is" are expressing resistance to improvement. Six-sigma attempts to create a culture which values quality and constant improvement.

I see the lesson of ISO 9000 to be the need for external definitions of best practices and quality standards. Internally generated goals are too easily converted to softballs.

But education is far from a free-market competitive environment. Motivation to improve will not come from the need to survive via competitive advantage.

Academic freedom is a difficult case. I understand and support the need to nurture mavericks and out-of-the-box thinking. But academic freedom can also mask and protect systemic incompetence. Commercial industry has just as much incentive as academia to nurture mavericks and out-of-the-box thinkers, yet industry has found ways to make that coexist with quality.
 
  • Like
Likes WWGD
  • #42
The main challenge in education is that two prominent metrics of quality are at odds.

Some stakeholders measure quality in terms of retention and graduation rates, which they translate to "success" rates in individual courses. Success rate is often defined as the number of A, B, or C grades divided by the number of students enrolled in the course shortly after the beginning of classes. (D, F, W grades all count against the success rate.) My view is that this metric makes it tempting to simulate success by lowering the bar.

Other stakeholders define quality in terms of what the student can actually do after graduating or passing a course with a certain grade. At the Air Force Academy, we made great strides when we started measuring course success in terms of how many students who earn a given grade in one course (A, B, or C) succeed in a downstream course for which the first course is a prerequisite. In other words, success in Calc 1 is not defined by passing Calc 1, but by being well prepared for Calc 2 and Physics. If too many students earning Cs in Calc 1 were underprepared and thus failing Calc 2 and Physics, it was a sure sign of a quality problem.

I was mostly focused on the Basic Math course. We carefully tracked downstream success rates in Calc, Physics, Chemistry, and Engineering Mechanics (which all cadets take). Once you define success correctly, it is much easier to make the necessary adjustments to achieve it.
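The two competing metrics can be made concrete with a short sketch (the grade records below are entirely made up for illustration):

```python
# Two competing quality metrics, computed on hypothetical grade records.

PASSING = {"A", "B", "C"}

def simple_success_rate(grades):
    """Stakeholder metric 1: share of enrolled students earning A/B/C.
    D, F, and W all count against the rate."""
    return sum(g in PASSING for g in grades) / len(grades)

def downstream_success_rate(records):
    """Stakeholder metric 2: of students who passed the first course,
    what fraction also passed the downstream course?"""
    passed_first = [r for r in records if r["calc1"] in PASSING]
    if not passed_first:
        return 0.0
    return sum(r["calc2"] in PASSING for r in passed_first) / len(passed_first)

records = [
    {"calc1": "A", "calc2": "B"},
    {"calc1": "B", "calc2": "C"},
    {"calc1": "C", "calc2": "F"},   # passed Calc 1 but was underprepared
    {"calc1": "C", "calc2": "D"},
    {"calc1": "F", "calc2": "W"},
]

print(simple_success_rate([r["calc1"] for r in records]))  # 0.8
print(downstream_success_rate(records))                    # 0.5
```

On this toy data the simple metric looks healthy (0.8, since 4 of 5 students passed Calc 1), while the downstream metric (0.5) flags exactly the problem described above: students earning Cs who are not ready for the next course.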
 
  • Like
Likes Mister T
  • #43
Dr. Courtney said:
<Snip>
Once you define success correctly, it is much easier to make the necessary adjustments to achieve it.

Absolutely agree. But it is often incredibly difficult to do.
 
  • #44
WWGD said:
Absolutely agree. But it is often incredibly difficult to do.

One of the reasons I loved working at the Air Force Academy is because the Air Force hired all their own graduates. When you hire all the graduates, you are able to more clearly communicate your expectations of success.

But I think the important principle is to view the downstream needs of the people who depend upon the abilities you impart to your students.

My wife and I wrote a paper some years ago called, "Science and Engineering Education: Who is the Customer?" Rule #1: The student is not the customer. Rule #2: Who suffers downstream if students who earn good grades do not know what they should? _They_ are your customers. The well educated student is the product, not the customer.

See: http://arxiv.org/ftp/physics/papers/0612/0612117.pdf
 
  • Like
Likes Astronuc
  • #45
anorlunda said:
Suppose we limit the discussion to the most prestigious universities. Do they have quality assurance procedures and/or six-sigma programs? Is the quality of their teaching audited by a third party? Is the Galloping Gertie myth still being taught in their classrooms today?
University engineering programs are reviewed by ABET. Usually a team of ABET members reviews course curricula, homework, tests, and notebooks of faculty and students. I happened to be a graduate student when our department went through such a review. Otherwise, the quality is based upon peer review within the department and university. Textbooks are reviewed by qualified people hired by the publishers. I was aware of textbook reviews by faculty members, and when I was a graduate student, my professor asked me to do a review of an introductory textbook. Experienced scientists and engineers would do peer reviews of programs and textbooks, much the same way they would peer-review journal articles. On the other hand, I've seen indications of a decline in the peer-review process over the past several decades. Apparently, it ain't what it used to be.

As for the example of the Tacoma Narrows Bridge failure, I've seen it described as an example of fluid-structure interaction (FSI) and buffeting. I've had experience looking at flow-induced vibration (FIV) and FSI in nuclear fuel. About 20 years ago, the use of CFD was very limited in nuclear applications, and probably 20 or 30 years behind aerospace. That changed with a surge in FIV/FSI events in which grid-to-rod fretting (GTRF) was a consequence. Now CFD is used by many institutions.

As a student, I noticed that some textbooks referred to obsolete technology. That became more apparent when I joined industry.
 
  • #46
Astronuc said:
University engineering programs are reviewed by ABET. Usually a team of ABET members reviews course curricula, homework, tests, and notebooks of faculty and students. I happened to be a graduate student when our department went through such a review. Otherwise, the quality is based upon peer review within the department and university. Textbooks are reviewed by qualified people hired by the publishers. I was aware of textbook reviews by faculty members, and when I was a graduate student, my professor asked me to do a review of an introductory textbook. Experienced scientists and engineers would do peer reviews of programs and textbooks, much the same way they would peer-review journal articles.
Are the qualified people who review textbooks hired by the publisher of the textbook, or by the school that is considering teaching from the textbook?
Astronuc said:
On the other hand, I've seen indications of a decline in the peer-review process over the past several decades. Apparently, it ain't what it used to be.
I see that most of the discussion is about how to force schools to improve their teaching. Ask from the other end: as a school administrator, what could be done to give schools the means to improve teaching?
Dr. Courtney said:
One of the reasons I loved working at the Air Force Academy is because the Air Force hired all their own graduates. When you hire all the graduates, you are able to more clearly communicate your expectations of success.

But I think the important principle is to view the downstream needs of the people who depend upon the abilities you impart to your students.
How about publishing the textbooks you teach from? Could review of textbooks work better if they were not expensive hardbacks printed by some commercial publisher, far away and unresponsive to end users, but instead published in small print runs by a publisher that is organizationally part of the university - with the faculty proactively commissioning textbooks on any subjects where a need is found or existing books are out of date, and reviewing their peers' textbooks before publication?

What works better: anonymous peer review or face-to-face peer review? Especially in the case of textbooks, it is more hassle to tell your students every year that the book is mistaken than to tell the author once before publication, so that's a strong motive to correct mistakes you spot rather than nod them through out of politeness.
 
  • Like
Likes Dr. Courtney
  • #47
While it is an ongoing challenge to correct errors in instruction, presumably undergraduates who have received only a passing introduction to the Tacoma Narrows disaster will not be called upon to design bridges without first being exposed to more expert and specialized instruction, such as this book, where the problem is discussed accurately and in detail:

A Modern Course in Aeroelasticity: Fifth Revised and Enlarged Edition
By Earl Dowell

Not to mention the website of the Washington State Department of Transportation, accessible to anyone today with an internet connection:

http://www.wsdot.wa.gov/TNBhistory/Machine/machine3.htm

One also learns upon a search that the bridge collapse was as much a result of bureaucratic cost cutting as anything, since the original design by Eldridge is today thought to have been adequate, but it cost almost twice as much as the less stable one adopted. Moreover, the field engineer in charge refused to sign off on the design actually used and recommended against it. When he revealed these facts after the disaster, he was fired. Interestingly (according to the entertaining account in Braun's ODE book), it is said the state thought the bridge so safe that they were planning to cancel the insurance policies on it a week later. Unfortunately, the local travel agent who sold them the policy also thought it so safe that he failed to forward the policy to his company and pocketed the premium. At his trial he pointed out his bad luck, observing that a week later he would have avoided detection and prison.

I mention that errors in textbooks are rampant due to the ignorance of those of us who write them. I take this as a reminder to consult the best books written by the masters. For example, Diophantus, writing a good while ago, solved the quadratic equation X^2 - bX + c = 0 by observing that if r, s are the solutions then r + s = b and rs = c, so if we could find their difference r - s = d, we could solve for the solutions as r = (b+d)/2 and s = (b-d)/2. Thus we have c = rs = (b^2 - d^2)/4, so d^2 = b^2 - 4c. Hence r, s = [b ± sqrt(b^2 - 4c)]/2.

This crystal clear explanation of the quadratic formula is nowhere to be found in modern books. But what to do? How many people today read Diophantus?
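For what it's worth, the derivation above checks out numerically; here is a quick sketch (the particular equation is my own example, chosen for illustration):

```python
# A numeric check of Diophantus's derivation for x^2 - b*x + c = 0
# with roots r, s (so r + s = b and r*s = c).
import math

def diophantus_roots(b, c):
    """Solve x^2 - b*x + c = 0 via the sum/difference argument:
    d^2 = b^2 - 4c, then r = (b + d)/2 and s = (b - d)/2."""
    d = math.sqrt(b * b - 4 * c)
    return (b + d) / 2, (b - d) / 2

r, s = diophantus_roots(b=5, c=6)   # x^2 - 5x + 6 = (x - 3)(x - 2)
print(r, s)                          # 3.0 2.0
assert math.isclose(r + s, 5) and math.isclose(r * s, 6)
```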
 
Last edited:
  • #48
anorlunda said:
Six-sigma attempts to create a culture which values quality and constant improvement.

I see the lesson of ISO 9000 to be the need for external definitions of best practices and quality standards. Internally generated goals are too easily converted to softballs.


Hard to argue with improvement. A big problem is establishing a metric. All the teachers I know think that "no child left behind" standardized testing is worse than nothing. I don't have an answer.

I was subjected to ISO 9000 and hated it. It seemed so irrelevant. Also, the company just wanted an empty credential. I can't imagine ISO 9000 working in education.

I would put my effort into studying education scientifically, then applying the results. I think little is known about how people learn best, which should dictate how they may best be taught. Maybe such scientific studies and programs are happening, but if so, I don't know about them.
 
  • #49
mathwonk said:
I mention that errors in textbooks are rampant due to the ignorance of those of us who write them. I take this as a reminder to consult the best books written by the masters. For example, Diophantus, writing a good while ago, solved the quadratic equation X^2 - bX + c = 0 by observing that if r, s are the solutions then r + s = b and rs = c, so if we could find their difference r - s = d, we could solve for the solutions as r = (b+d)/2 and s = (b-d)/2. Thus we have c = rs = (b^2 - d^2)/4, so d^2 = b^2 - 4c. Hence r, s = [b ± sqrt(b^2 - 4c)]/2.

This crystal clear explanation of the quadratic formula is nowhere to be found in modern books. But what to do? How many people today read Diophantus?

Write your own textbooks?
How many schools and universities control a publisher and are able to publish?
 
  • #50
Hornbein said:
I would put my effort into studying education scientifically, then applying the results. I think little is known about how people learn best, which should dictate how they may best be taught. Maybe such scientific studies and programs are happening, but if so, I don't know about them.

In fact a great deal is known about this for generic learning.
Way back we had behaviorism, which can be summarized as "people learn when they respond in a certain way when exposed to a certain stimulus" (Pavlov's dog is the most famous example, but there's more to it).
In physics we focus on a more modern approach to learning, constructivism (Piaget is a notable name here).

The field of physics education research (PER) is quite young (late 70s if I'm not mistaken).
Part of the recent literature that might be of interest here is that on misconceptions. (See e.g. the publications by Lillian McDermott of the University of Washington.)
During research on conceptual understanding they discovered that the difference between velocity and position is not that well understood by students.
Same for velocity vs acceleration.

These days other fields have started researching this kind of thing as well.
There are two big (but related) problems:
1. It's hard to ensure older teachers are up to date.
It's not only older teachers who suffer from this, though: when a teacher's education doesn't specifically draw on this research but only on general didactic reasoning, we can't expect even new teachers to use it.

2. Legislators have to be up to date as well; how can we make sure they are? I know they consult active teachers (i.e., teachers who teach on a day-to-day basis) and people in academia. But we cannot change the current standard approach in one fell swoop: (good) teachers don't have the time a lot of people assume they have, so they have to catch up, and the textbooks have to be rewritten to accommodate the change.

Interesting stuff all in all. But it gets difficult to implement real fast.
 
  • #51
Hornbein said:
Hard to argue with improvement. A big problem is establishing a metric. All the teachers I know think that "no child left behind" standardized testing is worse than nothing. I don't have an answer.

Good for you. I'm not advocating pushing anything down the throats of educators other than a culture dedicated to improvement of the whole industry, not just one teacher at a time.

I also believe it to be common sense that the low-hanging fruit along the path to improvement are the worst practices and practitioners. I expect that educators know what and who they are.
 
  • #52
anorlunda said:
I see the mission of six-sigma as creating top-down pressure to improve the process. Those who say "We like it the way it is" are expressing resistance to improvement. Six-sigma attempts to create a culture which values quality and constant improvement.

I see the lesson of ISO 9000 to be the need for external definitions of best practices and quality standards. Internally generated goals are too easily converted to softballs.

<snip>

Volkswagen AG is ISO 9000 certified and uses lean six-sigma. How'd that work out?

None of those Total Quality Management devices are able to prevent errors- they simply establish a managerial mechanism by which errors can be back-tracked and (hopefully) not repeated. It is wholly unclear how to apply TQM to education.
 
Last edited:
  • #53
Andy Resnick said:
Volkswagen AG is ISO 9000 certified and uses lean six-sigma. How'd that work out?

None of those Total Quality Management devices are able to prevent errors- they simply establish a managerial mechanism by which errors can be back-tracked and (hopefully) not repeated. It is wholly unclear how to apply TQM to education.

Seems unfair to expect six-sigma to do something it is not intended to do -- prevent corruption, lying, etc. And at some point you need to start thinking about how to implement six-sigma in different areas, or conclude that it should not be applied to them.
 
  • #54
WWGD said:
Seems unfair to expect six-sigma to do something it is not intended to do -- prevent corruption, lying, etc. And at some point you need to start thinking about how to implement six-sigma in different areas, or conclude that it should not be applied to them.

The claim was made that TQM measures could be applied to education with the goal of improving the end product. TQM is clearly not concerned with that.
 
  • #55
Andy Resnick said:
Volkswagen AG is ISO 9000 certified and uses lean six-sigma. How'd that work out?

None of those Total Quality Management devices are able to prevent errors- they simply establish a managerial mechanism by which errors can be back-tracked and (hopefully) not repeated. It is wholly unclear how to apply TQM to education.

I think we first need a way to conduct recalls in education.

Even in the widespread, long-term fraud at UNC, there was not even a mention of recalling or undoing college credit that was fraudulently awarded to student athletes. Until you are willing to recall and fix the defective product, there is no real benefit to tracking the errors.

Grad schools long ago stopped trusting grades and course credit as reliable indicators of the ability to succeed. It won't be long until companies that really need competence are implementing their own exams or outsourcing testing to an independent authority.
 