How does a computer tell a 0 from a 1?

In summary: in digital circuitry, a transistor is essentially a switch that either generates no output (off) or is saturated (on). It is also interesting that something variably random at the most basic layer becomes fairly deterministic at the top; perhaps randomness and determinism aren't mutually exclusive after all.
  • #1
EnumaElish
Science Advisor
Homework Helper
Since a CPU is essentially a chip built from transistors (and capacitors?), its signals must vary on a continuous scale. How does it translate a continuous scale into a binary one?
 
  • #2
On or off.
A transistor is simply a switch.
 
  • #3
A transistor is an amplifier. The transistors in your analog equipment are designed to amplify their inputs faithfully, that is, linearly. All transistors have an upper limit to their linear amplification range: beyond this limit the transistor's output flattens out and eventually reaches some saturation level. This saturation effect is very undesirable in analog circuitry but is essential for building digital circuitry.

The transistors in digital circuitry have a very high gain, making the linear range negligible. Instead, the transistors in digital circuitry are either generating no output (off) or are saturated (on). The transistor becomes a switch.
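As a rough illustration of that last point, here is a minimal Python sketch of an idealized amplifier stage whose output is clipped at the supply rail. The gain and supply values are made up; the point is that with a very high gain, almost every input lands at one rail or the other.

```python
# Idealized amplifier stage with a supply-limited (clipped) output.
# Gain and supply values are made up for illustration only.

V_SUPPLY = 5.0  # output cannot exceed the supply rail (volts)

def amplifier_output(v_in, gain):
    """Linear amplification, clipped between 0 V and the supply rail."""
    return min(max(gain * v_in, 0.0), V_SUPPLY)

# Low gain (analog-style): a wide range of inputs maps to distinct outputs.
# High gain (digital-style): almost any input lands at 0 V or at the rail.
for gain in (10, 10_000):
    outputs = [amplifier_output(v, gain) for v in (0.0001, 0.001, 0.1, 1.0)]
    print(f"gain={gain:>6}: {outputs}")
```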
 
  • #4
What constitutes "0" and "1" is defined as part of the specification of each logic family (CMOS, TTL, etc.). It's then the chip designer's job to make sure the chip responds properly to input signals and generates valid outputs.

See http://www.interfacebus.com/voltage_threshold.html
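To make that concrete, here is a small Python sketch of how a logic-family spec maps a continuous voltage to a bit. The thresholds are the standard TTL input levels; other families define different values.

```python
# Mapping a continuous voltage to a bit, per a logic-family spec.
# Thresholds here are the standard TTL input levels; CMOS, LVTTL,
# and other families define different values.

V_IL_MAX = 0.8  # volts: at or below this is a valid logic 0
V_IH_MIN = 2.0  # volts: at or above this is a valid logic 1

def read_logic_level(voltage):
    if voltage <= V_IL_MAX:
        return 0
    if voltage >= V_IH_MIN:
        return 1
    return None  # between thresholds: not a valid logic level

for v in (0.2, 0.8, 1.4, 2.0, 4.5):
    print(f"{v:.1f} V -> {read_logic_level(v)}")
```

Voltages between the two thresholds fall in a "forbidden zone" where the spec makes no promise about how the input will be read.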
 
  • #5
Most commonly, logic 1 is defined as any voltage above some threshold; logic 0 is defined as any voltage below some threshold. (There are some kinds of exotic logic that use currents or other signalling mechanisms, but you can safely ignore pretty much all of them.)

In CMOS static digital logic, the output of each gate is connected to either the positive supply (VDD) or to ground, via a conducting (turned-on) transistor, at all times. Thus, the gate is producing a clear, unambiguous logic 1 or logic 0 signal.

- Warren
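To put numbers on why this works reliably, here is a quick noise-margin calculation using the standard TTL levels (treat the values as illustrative; each logic family has its own). A gate's worst-case output is deliberately stronger than what the next gate's input requires, and the difference is the noise the wire can absorb without flipping the bit.

```python
# Noise margins for standard TTL levels (values from the TTL spec).
# A gate's worst-case output is stronger than what the next gate's
# input requires, so moderate noise on the wire cannot flip the bit.

V_OL_MAX = 0.4  # worst-case output voltage for a logic 0
V_OH_MIN = 2.4  # worst-case output voltage for a logic 1
V_IL_MAX = 0.8  # highest input voltage still read as a 0
V_IH_MIN = 2.0  # lowest input voltage still read as a 1

noise_margin_low = V_IL_MAX - V_OL_MAX    # 0.4 V of headroom on a 0
noise_margin_high = V_OH_MIN - V_IH_MIN   # 0.4 V of headroom on a 1
print(f"NM_L = {noise_margin_low:.1f} V, NM_H = {noise_margin_high:.1f} V")
```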
 
  • #6
Has anyone seen the probability of a "false zero" (1 read as 0) or a "false one" (0 read as 1) being calculated for any type of circuitry?
 
  • #7
EnumaElish said:
Has anyone seen the probability of a "false zero" (1 read as 0) or a "false one" (0 read as 1) being calculated for any type of circuitry?

Yes. For example, when connecting two digital devices by a cable, engineers calculate the maximum cable length that still keeps the transmitted signal's error rate below a specified limit.
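As a toy version of that calculation: assume a 0 is sent as 0 V, a 1 as some swing voltage, the receiver thresholds at the midpoint, and the noise on the line is additive Gaussian. All numbers below are illustrative, not from any real cable spec; the point is how sharply the error rate grows as attenuation eats the signal swing.

```python
import math

# Toy bit-error-rate estimate: a 0 is sent as 0 V, a 1 as v_swing
# volts, the receiver thresholds at the midpoint, and the line noise
# is additive Gaussian with standard deviation sigma. All numbers
# are illustrative, not from any real cable spec.

def bit_error_probability(v_swing, sigma):
    distance = v_swing / 2                     # signal level to threshold
    z = distance / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))   # Gaussian tail Q(z)

# Longer cables attenuate the signal; the error rate explodes as the
# received swing shrinks toward the noise floor (sigma = 0.1 V here).
for v_swing in (3.0, 1.0, 0.5, 0.2):
    p = bit_error_probability(v_swing, sigma=0.1)
    print(f"swing = {v_swing:.1f} V -> P(bit error) ≈ {p:.2e}")
```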
 
  • #8
EnumaElish said:
Has anyone seen the probability of a "false zero" (1 read as 0) or a "false one" (0 read as 1) being calculated for any type of circuitry?

There are enormous branches of electrical engineering devoted to exactly this possibility. Every piece of digital logic ever designed includes many such considerations.

In the real world, supplies have noise and surges. Power wires on chips have resistance and thus lose voltage over their length. Transistors take time to turn on and turn off. Wires and transistors have unavoidable parasitic capacitances that must be charged and discharged. Cosmic rays can strike memory cells and change their contents. Clocks can reach flip-flops at the wrong time and put the flip-flop into indeterminate "metastable" states. There are dozens and dozens of failure modes that can cause poorly-designed digital circuits to malfunction because, at some point, a logic low is confused with a logic high, or vice versa.

- Warren
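For the metastability case specifically, there is a standard estimate for the mean time between failures of a flip-flop synchronizer: MTBF = exp(t_r/τ) / (T₀ · f_clk · f_data), where t_r is the time the flip-flop is given to settle and τ and T₀ are device constants. Here is a sketch with made-up, roughly FPGA-flavored constants; real values come from device datasheets or characterization.

```python
import math

# Standard metastability estimate for a flip-flop synchronizer:
#   MTBF = exp(t_resolve / tau) / (T0 * f_clk * f_data)
# tau and T0 below are made-up, roughly FPGA-flavored constants;
# real values come from the device datasheet or characterization.

TAU = 50e-12  # resolution time constant (seconds), assumed
T0 = 1e-9     # metastability capture window (seconds), assumed

def mtbf_seconds(t_resolve, f_clk, f_data):
    return math.exp(t_resolve / TAU) / (T0 * f_clk * f_data)

# Giving the flip-flop more time to settle helps *exponentially*.
for t_resolve in (1e-9, 2e-9, 3e-9):
    years = mtbf_seconds(t_resolve, f_clk=100e6, f_data=10e6) / 3.15e7
    print(f"settle time {t_resolve * 1e9:.0f} ns -> MTBF ≈ {years:.3g} years")
```

The exponential payoff from extra settling time is why adding a second synchronizer flip-flop is usually enough in practice.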
 
  • #9
It's interesting that something which is variably random at the most basic layer becomes something fairly deterministic at the top.
Maybe randomness & determinism aren't mutually exclusive after all.
 
  • #10
-Job- said:
It's interesting that something which is variably random at the most basic layer becomes something fairly deterministic at the top.
Maybe randomness & determinism aren't mutually exclusive after all.
Is this because of some kind of averaging algorithm (execute an operation many times, then take the average [or some other summary statistic]), or is there some other explanation?
 
  • #11
In a sense, yes, it's an averaging. If you try to send just one electron down a wire, you'll find that its motion is almost completely random -- moving at hundreds of thousands of meters per second in random directions due to its own thermal energy and collisions with the metal atoms. Its motion is almost entirely dominated by thermal energy, and it just barely drifts down the wire at all, at a leisurely couple of centimeters per hour.

On the other hand, if you observe not just one electron, but billions, you can make a very accurate calculation of the number of electrons passing a specific point in the wire every second, or of the average velocity of those electrons.

- Warren
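A back-of-envelope version of those numbers, using copper's free-electron density and v_drift = I / (n·q·A): with a modest 0.1 A through a 1 mm² wire, the drift speed comes out to a few centimeters per hour, matching the figure above.

```python
# Back-of-envelope drift velocity: v_drift = I / (n * q * A).
# 0.1 A through a 1 mm^2 copper wire, illustrating the numbers above.

I = 0.1        # current in amperes (illustrative)
n = 8.5e28     # free electrons per cubic meter in copper
q = 1.602e-19  # electron charge in coulombs
A = 1e-6       # cross-section: 1 mm^2 expressed in m^2

v_drift = I / (n * q * A)  # meters per second
print(f"{v_drift:.2e} m/s ≈ {v_drift * 3600 * 100:.1f} cm/hour")
```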
 
  • #12
One time we had a memory that would flip its state once every few weeks. After much money and investigation, it turned out that the packaging we were using was emitting alpha particles, of all things. You never know what you are going to find.
 

FAQ: How does a computer tell a 0 from a 1?

1. How does a computer store and process information in binary code?

A computer uses a binary system, made up of the two digits 0 and 1, to store and process information. Each digit is represented by an electrical signal, and these signals are interpreted by the computer's hardware to perform calculations and execute instructions.

2. What is the role of transistors in a computer's ability to tell a 0 from a 1?

Transistors are tiny electronic switches that can be turned on or off by an electrical signal. They are used in computer processors to represent the 0 and 1 digits of binary code. When a transistor is turned on, it represents a 1, and when it is turned off, it represents a 0.

3. How does a computer interpret the binary code to perform specific tasks?

A computer has a central processing unit (CPU) that is responsible for interpreting and executing instructions. The CPU contains circuits that can perform basic calculations and logic operations based on the binary code. These operations are then used to execute more complex instructions and run programs.
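As a toy illustration of those basic operations composing into arithmetic: a one-bit half adder computes a sum with an XOR gate and a carry with an AND gate. This Python sketch is illustrative, not how any particular CPU is wired; real ALUs chain many such circuits together.

```python
# A one-bit half adder: arithmetic built from two logic operations.
# Illustrative only; real ALUs chain many such circuits together.

def half_adder(a, b):
    total = a ^ b  # XOR gate produces the sum bit
    carry = a & b  # AND gate produces the carry bit
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        total, carry = half_adder(a, b)
        print(f"{a} + {b} -> sum {total}, carry {carry}")
```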

4. Can a computer only understand and process binary code?

Yes, a computer can only understand and process information in binary code. This is because the hardware of a computer is designed to only interpret electrical signals as either on (1) or off (0). However, we can write programs and code in higher-level languages that are then translated into binary code by the computer's software.

5. How does a computer tell if a 0 or 1 is being represented by an electrical signal?

A computer compares the signal's voltage to defined thresholds: a voltage above the high threshold is read as a 1, and a voltage below the low threshold is read as a 0. A clock signal synchronizes when the circuit samples the data line, so each bit is read at a moment when its voltage has settled to a valid level.
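Here is a small sketch of that division of labor, with an illustrative threshold and hand-made waveforms: the clock's rising edge says when to sample, and the data line's voltage says what the bit is, emulating a D flip-flop.

```python
# The clock says *when* to sample; the data voltage says *what* the
# bit is. Threshold and waveforms are illustrative only.

V_THRESHOLD = 1.5  # volts: above reads as 1, below as 0 (assumed)

def sample_on_rising_edges(clock, data_volts):
    """Emulate a D flip-flop: capture a bit on each 0 -> 1 clock edge."""
    bits, prev = [], 0
    for clk, v in zip(clock, data_volts):
        if prev == 0 and clk == 1:  # rising edge of the clock
            bits.append(1 if v > V_THRESHOLD else 0)
        prev = clk
    return bits

clock = [0, 1, 0, 1, 0, 1, 0, 1]
data  = [0.1, 0.2, 3.1, 3.2, 3.0, 2.9, 0.3, 0.2]  # settled at each edge
print(sample_on_rising_edges(clock, data))  # [0, 1, 1, 0]
```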
