Since a CPU is ultimately built from transistors and capacitors, it must be operating on continuous voltages. How does it translate a continuous scale into a binary one?
EnumaElish said: Has anyone seen the probability of a "false zero" (1 read as 0) or a "false one" (0 read as 1) being calculated for any type of circuitry?
Is this because of some kind of averaging algorithm (execute an operation many times, then take the average [or some other summary statistic]), or is there some other explanation?
-Job- said: It's interesting that something which is variably random at the most basic layer becomes something fairly deterministic at the top.
Maybe randomness & determinism aren't mutually exclusive after all.
A computer uses a binary system, which is made up of two digits (0 and 1) to store and process information. Each digit is represented by an electrical signal, and these signals are interpreted by the computer's hardware to perform calculations and execute instructions.
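As a rough sketch of that idea (in Python rather than hardware), here is how a number is stored as a pattern of on/off bits, and how the same pattern is read back:

```python
# Decompose an integer into its binary digits, the way a register
# stores it as a row of on/off switch states.
def to_bits(n, width=8):
    """Return the bits of n, most significant first."""
    return [(n >> i) & 1 for i in range(width - 1, -1, -1)]

def from_bits(bits):
    """Reassemble an integer from its bits (most significant first)."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

print(to_bits(42))                 # [0, 0, 1, 0, 1, 0, 1, 0]
print(from_bits(to_bits(42)))      # 42
```

Nothing in the pattern itself says "42"; the meaning comes entirely from the convention the hardware and software agree on.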
Transistors are tiny electronic switches that can be turned on or off by an electrical signal. They are used in computer processors to represent the 0 and 1 digits of binary code. When a transistor is turned on, it represents a 1, and when it is turned off, it represents a 0.
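A minimal toy model of this (not real electronics, just the switching logic): treat a transistor as a switch that conducts when its input is 1. Two such switches in series behave like a NAND gate, and every other logic gate can be composed from NAND alone:

```python
# Toy model: two series switches pulling the output low implement NAND
# (the output is 0 only when both inputs are 1).
def nand(a, b):
    return 0 if (a == 1 and b == 1) else 1

# All other gates can be built from NAND.
def inv(a):          # NOT
    return nand(a, a)

def and_gate(a, b):  # AND = NOT(NAND)
    return inv(nand(a, b))

def or_gate(a, b):   # OR via De Morgan: a OR b = NAND(NOT a, NOT b)
    return nand(inv(a), inv(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b), and_gate(a, b), or_gate(a, b))
```

This is why "just switches" is enough: once you have one universal gate, all of Boolean logic follows.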
A computer has a central processing unit (CPU) that is responsible for interpreting and executing instructions. The CPU contains circuits that can perform basic calculations and logic operations based on the binary code. These operations are then used to execute more complex instructions and run programs.
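To illustrate how those basic logic operations compose into arithmetic (a sketch using Python's bitwise operators, not an actual circuit layout): a full adder built from AND, OR, and XOR, chained bit by bit to add two numbers:

```python
def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)
    return s, carry_out

def add(x, y, width=8):
    """Ripple-carry addition: one full adder per bit position."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(42, 27))   # 69
print(add(255, 1))   # 0 -- the carry out of the top bit is lost (overflow)
```

The overflow in the last line is the same wrap-around behavior fixed-width CPU registers exhibit.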
Yes, a computer can only understand and process information in binary code. This is because the hardware of a computer is designed to interpret electrical signals only as either on (1) or off (0). However, we can write programs in higher-level languages that are then translated into binary code by compilers and interpreters.
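You can watch one step of that translation with Python's `dis` module. (A caveat: CPython compiles to bytecode for its virtual machine, an intermediate form rather than raw CPU machine code, but the principle of "high-level source becomes a list of simple instructions" is the same.)

```python
import dis

def add_one(x):
    return x + 1

# Show the lower-level instructions this function was compiled into.
dis.dis(add_one)

# The same information programmatically: just the instruction names.
names = [ins.opname for ins in dis.get_instructions(add_one)]
print(names)
```

The exact instruction names vary between Python versions, but you will always see a load of the argument, an addition, and a return, each a far simpler operation than the source line that produced them.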
A computer uses a clock signal to synchronize when electrical signals are read. The value itself comes from the voltage on the data line: above a threshold it is read as a 1, below it as a 0. The clock edge tells the circuit when to sample, so the line is only read at moments when it has settled and is not mid-transition. This is how the computer turns continuously varying voltages into a reliable binary code.
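This directly answers the original question about turning a continuous scale into a binary one. A sketch of the idea (the voltage numbers are hypothetical; real thresholds depend on the logic family): sample a continuously varying voltage only at clock ticks and compare it to a threshold to recover bits:

```python
# Assumed 3.3 V logic with the threshold at half the supply --
# real logic families define different high/low bands.
THRESHOLD = 1.65

def sample(voltages, clock_ticks):
    """Read the voltage at each clock tick; above threshold -> 1."""
    return [1 if voltages[t] >= THRESHOLD else 0 for t in clock_ticks]

# A noisy trace: the line never sits at exactly 0 V or 3.3 V,
# but thresholding at the right moments still yields clean bits.
trace = [0.1, 0.2, 3.1, 3.2, 0.3, 0.1, 2.9, 3.3]
ticks = [1, 3, 5, 7]          # sample mid-bit, when the line is stable
print(sample(trace, ticks))   # [0, 1, 0, 1]
```

This is also where the thread's "false zero / false one" question lives: a read error happens when noise pushes the voltage across the threshold at the instant of sampling, which is why logic families keep a wide margin between the valid low and high bands.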