Patriot Missile system failures & the metric system

  • #1
Integral
Staff Emeritus
Science Advisor
Gold Member
Discovery Science did it to me again. I am now officially back on the anti-metric bandwagon.

I learned that during the first Gulf War our Patriot missile systems were failing due to an inability to keep time accurately. When the errors were finally officially recognized, the word came down that the systems could not be run for long periods of time. What was not known was just what a “long period” was. They spoke of an incident where a Patriot missed badly after 100 hours of continuous operation.

They finally identified the culprit as round off error. They were running a counter in steps of .1, which is an infinitely repeating pattern in binary (.000110011001100…) and cannot be represented exactly in a binary computer. The round off accumulated over time, rendering the system useless.

This is a very real-world example of why you should avoid using .1 as a basic step in any computation. An excellent alternative is a power-of-2 step like 1/16 or 1/32. While a step of that size may seem a bit strange to the human mind, the CPU loves it.
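Here is a minimal sketch of the effect (Python, purely for illustration; the loop counts and names are mine, not the Patriot's actual code):

[code]
# Compare 100 hours of ticking with a 0.1 s step versus a 1/16 s step.
# 0.1 has no finite binary representation; 1/16 = 2**-4 is exact.

HOURS = 100
expected = HOURS * 3600.0                    # 360,000 seconds

t = 0.0
for _ in range(HOURS * 36000):               # 36,000 ticks of 0.1 s per hour
    t += 0.1
print("0.1 s step drift:  ", t - expected)   # small but nonzero

t = 0.0
for _ in range(HOURS * 57600):               # 57,600 ticks of 1/16 s per hour
    t += 1.0 / 16.0
print("1/16 s step drift: ", t - expected)   # exactly 0.0
[/code]

In 64-bit floating point the drift from the 0.1 step is tiny; the Patriot reportedly used much narrower fixed-point arithmetic, so its drift grew large enough to matter, but the direction of the effect is the same.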

Unfortunately the metric system encourages the use of .1 as a fundamental step. While .1 is a nice multiple when you are doing arithmetic in your head, it is no benefit, indeed the source of errors, when the computer is doing the number crunching.

It is interesting that the subdivisions of the inch are typically powers of 2, so the American system is inherently friendly to binary computers. Why in the world should we switch to the metric system and its impossible steps?

DOWN WITH THE METRIC SYSTEM!
 
  • #2
What strikes me most about this? Even for industrial applications, I normally use 0.001 second increments at most. It's too bad that the US missile program can't keep up with me. :biggrin:
 
  • #3
That's an excellent example of how round off errors and loss of precision can have some very undesirable effects. I don't think the answer is to abandon the metric system, since it is simpler for humans to deal with, in my opinion. :smile:
No matter what unit system you use, there will always be tricky values.
 
  • #4
As long as you try to force a base 10 number system into a digital computer, you will be facing round off errors. As long as precision is not a concern, this is fine. However, if you need precision in your computations, you have to consider how a computer handles numbers.

It would be far and away better simply to abandon base 10 systems in favor of base 2 systems. This can all be completely invisible to the user of your software, so people do not have to give up decimal, just computers.
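As a toy sketch of what that could look like (Python; the class and tick size are hypothetical, just to illustrate the idea):

[code]
# Keep time internally in exact binary ticks (1/1024 s = 2**-10 s),
# and only convert to a decimal string when showing it to a human.

TICKS_PER_SECOND = 1024              # a power of two, exactly representable

class BinaryClock:
    def __init__(self):
        self.ticks = 0               # integer tick count, never rounded

    def advance(self, n_ticks=1):
        self.ticks += n_ticks

    def seconds(self):
        # Exact: an integer divided by a power of two is exact in binary
        # floating point as long as the count stays well below 2**53.
        return self.ticks / TICKS_PER_SECOND

    def display(self):
        # Decimal shows up only here, at the output boundary.
        return f"{self.seconds():.3f} s"

clock = BinaryClock()
clock.advance(3 * TICKS_PER_SECOND + 512)    # 3.5 seconds worth of ticks
print(clock.display())                       # "3.500 s"
[/code]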

The biggest problem with the metric system is that it gets everyone thinking base 10 is the best and only way to think. As long as you are doing computations it is a very bad way to think.

This is one of the reasons I am against the metric system: it sets a bad precedent.
 
  • #5
Ivan Seeking said:
What strikes me most about this? Even for industrial applications, I normally use 0.001 second increments at most. It's too bad that the US missile program can't keep up with me. :biggrin:

This was over 10 years ago, and computers have evolved a lot in that time. However, they still cannot represent .1 exactly.

You would do well to take steps of [itex] 2 ^ {-10} [/itex] :approve:
 
  • #6
Come on, I did one millisecond timing on my PC in 1990...in Quick Basic no less! Same with PIC chips. They have been processing that fast for at least ten years. In fact that circuit that I showed you was using 1ms timers, and that was 7400 series stuff.
 
  • #7
Anyway, don't mean to derail the thread, but that smacks of cheesy engineering to me.
 
  • #8
About an hour ago I used a 0.05 step in a program in the Lagrange interpolation thread and got round off error in under 20 iterations; you can see it there in the output posted. I'm guilty as well.
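For anyone who wants to see that effect without digging up the other thread, a quick sketch (Python, not the original program) reproduces it in a handful of additions:

[code]
# Repeatedly add 0.05 and print the running value: the round off becomes
# visible within the first few iterations, well before 20.
x = 0.0
for i in range(1, 21):
    x += 0.05
    print(i, repr(x))    # e.g. iteration 3 prints 0.15000000000000002
[/code]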
 
  • #9
Ivan Seeking said:
Come on, I did one millisecond timing on my PC in 1990...in Quick Basic no less! Same with PIC chips. They have been processing that fast for at least ten years. In fact that circuit that I showed you was using 1ms timers, and that was 7400 series stuff.

I tend to agree with your assessment. But then, if they had tried to use a millisecond step, the errors would have shown up even faster!
 
  • #10
I'm not familiar with the specifics of the programming related to those missile systems, but whenever I have to count with decimals I change my representation so I'm using integers. So instead of calling a millisecond 0.001 s, I call it just what it is: 1 millisecond. Then if I need to show something in seconds I just divide the milliseconds by a thousand for my output, but keep counting in milliseconds so that my data is not off.
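A rough sketch of that approach (Python, hypothetical names, just to make the idea concrete):

[code]
# Keep time as an integer count of milliseconds; integer addition never rounds.
elapsed_ms = 0
for _ in range(3600 * 1000):        # one hour of 1 ms ticks
    elapsed_ms += 1

# Convert to a decimal string only at the output boundary.
print(f"{elapsed_ms // 1000}.{elapsed_ms % 1000:03d} s")   # exactly "3600.000 s"
[/code]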

Computers have trouble with any number that is not an integer, and they would have just as much trouble with reciprocals of powers of two. Computers would not represent 1/32 as 1/32, but as 0.0009765, which is just another decimal number like 0.1. The downfall of the program wasn't the metric system, but a problem inherent with trying to accurately represent numbers that are not integers. That's something I learned in an introductory Pascal book in its coverage of loops.

As a Canadian I learned on metric, and I love it. How many feet are in a mile? I don't know, and I don't know anyone who does off the top of their head. How many meters in a kilometer? Now that I can do. Metric is just like a written representation of scientific notation, so you only need to know one fundamental unit for each measure and then it scales up and down effortlessly... 1 km = 1e3 m, one mm = 1e-3 m and so on.
 
  • #11
illwerral said:
I'm not familiar with the specifics of the programming related to those missile systems, but whenever I have to count with decimals I change my representation so I'm using integers. So instead of calling a millisecond 0.001 s, I call it just what it is: 1 millisecond. Then if I need to show something in seconds I just divide the milliseconds by a thousand for my output, but keep counting in milliseconds so that my data is not off.

Computers have trouble with any number that is not an integer, and they would have just as much trouble with reciprocals of powers of two. Computers would not represent 1/32 as 1/32, but as 0.0009765, which is just another decimal number like 0.1. The downfall of the program wasn't the metric system, but a problem inherent with trying to accurately represent numbers that are not integers. That's something I learned in an introductory Pascal book in its coverage of loops.

As a Canadian I learned on metric, and I love it. How many feet are in a mile? I don't know, and I don't know anyone who does off the top of their head. How many meters in a kilometer? Now that I can do. Metric is just like a written representation of scientific notation, so you only need to know one fundamental unit for each measure and then it scales up and down effortlessly... 1 km = 1e3 m, one mm = 1e-3 m and so on.
This is simply incorrect. Computers store binary representations, not decimal. So 1/32 = 2^(-5) = .00001 (binary), which can be represented precisely in a computer; .1 cannot be.

Once again: the metric system is great for humans but sux for computers, because .1 and most of its multiples get rounded off by the computer. There is no way around this, as I pointed out in the initial post. You can minimize the round off error by doing a multiplication instead of an addition, but the result is still an approximation.
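A quick sketch of that difference (Python, double precision, values chosen only for illustration):

[code]
# Two ways to compute "n ticks of 0.1 s": repeated addition rounds on every
# step and the errors pile up; a single multiplication rounds at most once.
n = 3_600_000                                  # 100 hours of 0.1 s ticks

by_addition = 0.0
for _ in range(n):
    by_addition += 0.1

by_multiplication = n * 0.1

print("repeated addition error:    ", by_addition - 360000.0)
print("single multiplication error:", by_multiplication - 360000.0)
[/code]

The single multiplication may even land back on the exact decimal answer, but it still starts from the rounded binary value of 0.1, so in general it, too, is only an approximation.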
 
  • #12
illwerral said:
Computers have trouble with any number that is not an integer, and they would have just as much trouble with reciprocals of powers of two. Computers would not represent 1/32 as 1/32, but as 0.0009765, which is just another decimal number like 0.1. The downfall of the program wasn't the metric system, but a problem inherent with trying to accurately represent numbers that are not integers.

1/32 = 0.03125 in decimal, which is 0.00001 in binary.
The problem is not that computers have trouble representing any non-integer number. Floating point numbers are represented in computer hardware in "binary scientific notation," i.e. in the form A*2^B, where A is the significand (or mantissa) and B is the exponent. Because processors handle only a finite number of bits, say 32 or 64, some of those bits are used for the exponent and the rest for the mantissa, so a computer's precision depends on how many bits are devoted to the mantissa. No matter how good the precision is, there will always be numbers like 0.1 (decimal) or 0.333... (decimal) whose binary expansions never terminate. For these numbers, the binary representation is truncated or rounded to fit the number of bits allowed for the significand, which results in round off error. The error is very small, but over time it can accumulate.
But there are plenty of non-integer numbers that can be represented in a computer without round off error.
The only problem with the metric system is that it encourages increments and decrements in powers of 10, 0.1 being one of them.
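A small way to see all of this from Python (just one convenient tool; any language that exposes the raw bits will show the same thing):

[code]
from fractions import Fraction

# float.hex() shows the significand and the power-of-two exponent directly;
# Fraction(x) shows the exact binary rational the hardware actually stored.
for x in (0.03125, 0.1):
    print(x, "->", x.hex(), "=", Fraction(x))

# Output (64-bit IEEE 754 doubles):
#   0.03125 -> 0x1.0000000000000p-5 = 1/32   (exact)
#   0.1     -> 0x1.999999999999ap-4 = 3602879701896397/36028797018963968
#              (the nearest representable value, not exactly 1/10)
[/code]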
 
  • #13
Integral said:
This is simply incorrect. Computers store binary representations, not decimal. So 1/32 = 2^(-5) = .00001 (binary), which can be represented precisely in a computer; .1 cannot be.

Once again: the metric system is great for humans but sux for computers, because .1 and most of its multiples get rounded off by the computer. There is no way around this, as I pointed out in the initial post. You can minimize the round off error by doing a multiplication instead of an addition, but the result is still an approximation.

Actually, there are quite a number of ways that computers can be made to deal readily with decimal numbers. It's quite feasible to build decimal computers, even if our modern technology is not well suited to it. Moreover, programming techniques such as fixed point and arbitrary precision arithmetic are quite well established for some applications.
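For example, a minimal sketch with Python's standard decimal and fractions modules (one of many ways to do exact decimal or rational arithmetic in software):

[code]
from decimal import Decimal
from fractions import Fraction

# Decimal does base-10 arithmetic in software, so 0.1 is stored exactly.
t = Decimal("0")
for _ in range(10):
    t += Decimal("0.1")
print(t)                                          # 1.0, exactly

# Fraction keeps exact rationals, another way around binary round off.
print(sum(Fraction(1, 10) for _ in range(10)))    # 1

# Contrast with hardware binary floats:
print(sum(0.1 for _ in range(10)))                # 0.9999999999999999
[/code]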

http://www.siam.org/siamnews/general/patriot.htm

Indicates that the problem was not actually rounding errors per se:
At least one of these software modifications was the introduction of a subroutine for converting clock-time more accurately into floating-point. This calculation was needed in about half a dozen places in the program, but the call to the subroutine was not inserted at every point where it was needed. Hence, with a less accurate truncated time of one radar pulse being subtracted from a more accurate time of another radar pulse, the error no longer cancelled.

Fundamentally, the problem is that computers don't deal with numbers in any particularly sensible fashion. This isn't an isolated incident.
http://ta.twi.tudelft.nl/users/vuik/wi211/disasters.html
 
  • #14
You can build computers based on any numerical base, but I imagine they would be harder to create and manage (circuit-wise). A better alternative is to build an emulator that performs base 10 arithmetic but runs on a binary computer, without making use of the hardware floating point operations (i.e. use the available logical operations exclusively). It would be slower, but cheaper.
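A toy sketch in that spirit (Python; it uses plain integer digit arithmetic rather than raw logical operations, but no floating point appears anywhere):

[code]
# Toy decimal "emulator": numbers are lists of base-10 digits (least
# significant digit first), and addition is done digit by digit with a
# carry, so binary floating point is never involved.

def decimal_add(a, b):
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return result

# Represent 0.1 as the digit 1 with an implied scale of one decimal place.
# Adding it ten times gives exactly 1.0 -- no round off, by construction.
x = [0]
for _ in range(10):
    x = decimal_add(x, [1])
print(x)    # [0, 1], i.e. the digits of 10, which is 1.0 at that scale
[/code]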
 
  • #15
Nate,
Thanks for the link; it provides a good description of the problem. It still came down to round off errors not being dealt with correctly.

Another point mentioned on the program was that Iraqi modifications to the Scud to get greater range may have rendered them unstable, sort of like tumbling bullets, so they would not have followed a predictable course, making them pretty much impossible to hit.
 
  • #16
Trust me, even if humans used base-2 units, people would find other ways to screw up software. What's really scary is that the people chosen to develop software to control missiles were so incompetent in the first place. Whatever happened to design verification?

- Warren
 
  • #17
Apparently I misunderstood the way numbers are stored in a computer, but I see how that works now, and why some decimal numbers have no finite binary representation. That article cleared the issue up nicely for me.
 

