How do computers add, subtract, etc.?

  • Thread starter aychamo
  • Tags
    Computers
In summary, computers do math by using binary numbers and logic gates to perform basic operations like addition. Higher-level operations are programmed by assigning numbers (opcodes) to sets of instructions, and the computer blindly executes those instructions. Using a different base at a computer's most basic level, such as hexadecimal, would be possible in principle but not practical, given the difficulty and the lack of any real advantage.
  • #1
aychamo
How do computers do math? I mean, it seems kinda like the math part is the most basic of the functions of the computer.

But I mean, how does it KNOW that 1+1=2?

To give some background, I understand binary: 1+1=10. I have a rudimentary understanding of assembly (I tried to program in it many years ago), so I could look at this:

mov ax, 1
add ax, 1

But how does that add 1 into ax, and then have the value of 2? (or 10 in binary)

I can almost see how instructions like shl and shr (those are shift left and shift right, right?) can work, by shifting the bits, but not adding, etc.

I'm assuming the microchip (x86?) has to be preprogrammed to know the order of numbers... But I have no clue. Can someone shed light on this?
 
  • #2
How do you add decimal numbers?
Code:
  1
 158
+ 23
-----
 181

So why can't you do that in binary as well?
Code:
 111
  101
+  11
------
 1000

Do you refer to 1, 2, 3, 4, ... as the "order of numbers"? If so, no, a computer doesn't have the infinite series of numbers programmed into it.
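Just as an illustration (a sketch, not how any real CPU stores digits), here is the same longhand procedure in C; the only thing that changes between base 10 and base 2 is the base you carry in:
Code:
/* longhand column addition: add a column, keep the digit, carry the rest.
 * Digits are stored least-significant first. */
#include <stdio.h>

void add_columns(const int *a, const int *b, int *sum, int ndigits, int base) {
    int carry = 0;
    for (int i = 0; i < ndigits; i++) {
        int column = a[i] + b[i] + carry;   /* add the column plus any carry */
        sum[i] = column % base;             /* digit that stays in this column */
        carry  = column / base;             /* digit carried to the next column */
    }
    sum[ndigits] = carry;                   /* final carry, if any */
}

int main(void) {
    /* 158 + 23 in base 10, least-significant digit first */
    int dec_a[4] = {8, 5, 1, 0}, dec_b[4] = {3, 2, 0, 0}, dec_s[5];
    add_columns(dec_a, dec_b, dec_s, 4, 10);
    printf("base 10: %d%d%d%d\n", dec_s[3], dec_s[2], dec_s[1], dec_s[0]); /* 0181 */

    /* 101 + 011 in base 2 (5 + 3) */
    int bin_a[4] = {1, 0, 1, 0}, bin_b[4] = {1, 1, 0, 0}, bin_s[5];
    add_columns(bin_a, bin_b, bin_s, 4, 2);
    printf("base 2:  %d%d%d%d\n", bin_s[3], bin_s[2], bin_s[1], bin_s[0]);  /* 1000 */
    return 0;
}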
 
  • #3
Well, I know that I can add numbers, because I can visualize it. I can see that if I have three objects and take in an additional four objects, I can now count seven objects. But how does a computer do that?

Even this equation: 4+1=5. If you just put the value "4" in a computer, and you add the value of 1 to it, how does it know how to add?
 
  • #4
How do you add 1 to 4? Probably "intuitively", because these are simple numbers. But what about 412793 + 23871?

If you tell a computer to add 1 to 4, it first takes both numbers and converts them to binary. 1 becomes 1, 4 becomes 100. Now it adds them 1 + 100 = 101 and converts the answer back to decimal: 5.
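As an illustration only (a sketch of the conversion step, not literally what the hardware does), here it is in C: repeatedly divide by 2 and collect the remainders:
Code:
#include <stdio.h>

void print_binary(unsigned n) {
    char bits[33];
    int len = 0;
    do {
        bits[len++] = '0' + (n % 2);  /* remainder is the next binary digit */
        n /= 2;
    } while (n > 0);
    while (len > 0)                   /* remainders come out least-significant first */
        putchar(bits[--len]);
}

int main(void) {
    print_binary(4); printf(" + "); print_binary(1);
    printf(" = "); print_binary(4 + 1); printf("\n");  /* prints: 100 + 1 = 101 */
    return 0;
}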
 
  • #5
I think I am doing a poor job at conveying what I am trying to ask.

How does the computer know how to add 1 + 100 to give 101 ? I mean, how do you program a computer to be able to add numbers?
 
  • #6
Let's start with 4 registers and call them A, B, C and R. We want to add A to B, using the C register for carries and R for storing the result. (Obviously this is done a lot more efficiently inside the computer, but I'm trying to simplify things.)

Initially:
C = 0000
A = 1011
B = 0011
R = 0000

The computer goes from right to left and does the following: it adds* the two bits in A and B, saves the result bit in the R register and the carry (if any) in the C register. After the first operation the registers look like this:

C = 0010
R = 0000

This is because 1 + 1 = 10, so we put 0 in the result and save 1 in the carry. Now the computer continues for the 2nd pair of bits, again 1 and 1. 1 + 1 = 10, but we also have a carry from earlier! So it adds the carry to the result 10 + 1 = 11 and registers it:

C = 0110
R = 0010

And it does the same again and again until it's done; for this example the final result is R = 1110, since 1011 + 0011 = 1110 (11 + 3 = 14).

* You may still not understand how a computer can add 1 and 1, or 1 and 0, or 0 and 0. This is simple and can be done with two gates, AND and XOR. Consider the following truth table:
Code:
 A | B | AND | XOR
---+---+-----+-----
 0 | 0 |  0  |  0
---+---+-----+-----
 0 | 1 |  0  |  1
---+---+-----+-----
 1 | 0 |  0  |  1
---+---+-----+-----
 1 | 1 |  1  |  0
---+---+-----+-----
If (A XOR B) is 1, the result bit is 1; otherwise, the result bit is 0. As for the carry, that is determined by the (A AND B) result: if it's 1, there is a carry; otherwise there isn't.

Again I should stress that the computer does this much more efficiently. For example, it doesn't use a whole register for the carrying - it uses one bit in a special flag register to know whether or not the last operation had a carry. Also, it doesn't store the result in a completely new register, it stores it in one of the registers A or B (as you probably guessed from the look of the ADD command in Assembly).
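To see the whole scheme end to end, here is a small C sketch of the idea above (an illustration, not how any particular chip is wired): two XOR/AND half adders plus an OR make a full adder, and chaining four of them reproduces the A = 1011, B = 0011 example:
Code:
#include <stdio.h>

/* one full adder: adds bits a, b and carry-in, produces sum bit and carry-out */
void full_adder(int a, int b, int cin, int *sum, int *cout) {
    int s1 = a ^ b;            /* XOR: sum of the two input bits       */
    int c1 = a & b;            /* AND: carry from the two input bits   */
    *sum  = s1 ^ cin;          /* XOR again to fold in the carry-in    */
    *cout = c1 | (s1 & cin);   /* carry out if either stage carried    */
}

int main(void) {
    /* A = 1011 (11), B = 0011 (3); index 0 is the rightmost bit */
    int A[4] = {1, 1, 0, 1};
    int B[4] = {1, 1, 0, 0};
    int R[5] = {0, 0, 0, 0, 0};
    int carry = 0;

    for (int i = 0; i < 4; i++)            /* ripple right to left */
        full_adder(A[i], B[i], carry, &R[i], &carry);
    R[4] = carry;

    printf("R = %d%d%d%d%d\n", R[4], R[3], R[2], R[1], R[0]);  /* 01110 = 14 */
    return 0;
}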
 
  • #7
Chen is showing you the mechanics. The reason the computer knows how to do those mechanics is that at some point in time a programmer wrote a set of instructions and assigned a number to that set of instructions. When you give the assembler (or compiler) an ADD command, it transforms that mnemonic into the number associated with the set of instructions for adding 2 numbers (or registers). When the CPU encounters the number associated with a set of instructions, it blindly executes that command. It is the work of low-level programmers to create the sets of instructions that the CPU actually executes. I believe one of the least favorite courses toward a CS degree is the course in compilers, where you will write the code for a simple compiler.
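As a toy illustration of the "number assigned to a set of instructions" idea (the opcodes and the single ax register below are invented for the example, not real x86 encodings):
Code:
#include <stdio.h>

enum { OP_MOV_AX = 0x01, OP_ADD_AX = 0x02, OP_HALT = 0x00 };  /* made-up opcodes */

int main(void) {
    /* "machine code" for:  mov ax, 1  /  add ax, 1  /  halt */
    int program[] = { OP_MOV_AX, 1, OP_ADD_AX, 1, OP_HALT };
    int ax = 0;   /* one toy register */
    int pc = 0;   /* program counter  */

    for (;;) {
        int opcode = program[pc++];              /* fetch */
        switch (opcode) {                        /* decode + execute, blindly */
            case OP_MOV_AX: ax = program[pc++];  break;
            case OP_ADD_AX: ax += program[pc++]; break;
            case OP_HALT:   printf("ax = %d\n", ax); return 0;  /* prints ax = 2 */
        }
    }
}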
 
  • #8
Ahhhhhhhhh, I honestly think I understand it now! It was the thing about the (logic, aren't they called??) gates.

Since this is how computers do the most basic stuff, with binary and the gates and all that, would it be kinda impossible to have a different base language for computers? I mean, say you used hex as the most basic thing, could you do logic gates with hexadecimal?

By the way, thank you both for writing that stuff out. That hit it home. I really appreciate it.
 
  • #9
Yes, they are called logic gates. What kind of meaning would (4 AND 7) have, though? I don't know if it would be impossible, but certainly very very difficult and not worthwhile (what would be the advantages?).
 
  • #10
All logic is done in binary; the reason we use hex is the ease of translation between hex and binary. One hex digit translates to 4 binary digits. Not only is hex easier to "read" than binary, it is far more compact. So the hex is for humans, but it makes the binary very accessible.

0001 = 1
0010 = 2
0011 = 3
0100 = 4
0101 = 5
0110 = 6
0111 = 7
1000 = 8
1001 = 9
1010 = A
1011 = B
1100 = C
1101 = D
1110 = E
1111 = F

so 4 and 7 = 0100 and 0111 = 0100
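Here is the same thing as a small C illustration: each value is printed as its 4-bit group, and the & operator performs the bitwise AND worked out by hand above:
Code:
#include <stdio.h>

void print_nibble(unsigned v) {               /* print the low 4 bits of v */
    for (int bit = 3; bit >= 0; bit--)
        putchar((v >> bit) & 1 ? '1' : '0');
}

int main(void) {
    print_nibble(0x4); printf(" AND ");        /* 0100 */
    print_nibble(0x7); printf(" = ");          /* 0111 */
    print_nibble(0x4 & 0x7); printf("\n");     /* 0100, i.e. 4 */
    return 0;
}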
 
  • #11
Integral said:
so 4 and 7 = 0100 and 0111 = 0100
Let's see you plug that into a single gate operation, though. :smile: (That is what I meant, obviously there is a meaning to 4 AND 7.)
 
  • #12
Serial?

You are correct, each individual gate handles a single bit at a time. Thus the old 8-bit computers (where I did a bit of machine language dabbling) had sets of 8 logic gates; newer ones... we're up to 64 now, I believe.
 
  • #13
And these gates, in actuality are they just some type of thing that controls electricity?
 
  • #14
Yep.

Just a bunch of really small transistors, for the most part.
 
  • #15
Here is a thread which discusses the operation of a basic solid-state device.
 
  • #16
Basically, all of this comes from building appropriate gates. The same way you can make an "and" gate that produces the proper "output" given two inputs, you can put together one that "adds" and also gives you a carry bit, which you can then add to the next two binary digits.

The design of these types of circuits is actually one level below compilers. IIRC, it is called "microprogramming", and it can be done using logic gates and "flip-flop" circuits, which allow you to synchronize the operation of a circuit with a "clock" signal, so that operations are performed in the proper sequence.
 
  • #17
ahrkron,

Not quite. Microprogramming refers to CISC (Complex Instruction Set Computing) processors, in which large instructions are internally broken down into a routine of more fundamental instructions. Microprogramming has gone the way of the dinosaur, thankfully.

The simple answer is that a computer processor is made of tiny transistors, which can switch electrical current. The transistors are combined into groups which function as logic gates, and can perform simple operations like AND, OR, NOT, XOR, and so on. These logic gates can then be grouped into larger-scale structures which can take in two 8-bit (or 16-, or 32-, or 64-bit) strings of binary digits and produce their sum or difference. Some intelligent person who designed the processor wired up the logic gates in the proper way to perform those operations.

- Warren
 
  • #18
Integral said:
so 4 and 7 = 0100 and 0111 = 0100

Hey, I know this is an old thread, but tell me: is the above just chance, or does it actually mean something? I'm not sure exactly what I mean, but check it out.

The operation (4 and 7) in binary gives 0100 and 0111 = 0100, so it gives 4. That's kinda like in set theory (or statistics or whatever), when you take the "and" of two sets you get whatever is in common between the two sets. And the one bit that is set in 4 is also set in 7, so it's exactly what the two numbers have in common. That's neat, right?
 

FAQ: How do computers add, subtract, etc.?

How do computers perform addition?

Computers perform addition using binary code, which is a system of representing numbers using only 0's and 1's. The computer's central processing unit (CPU) has circuits that can perform basic arithmetic operations, including addition. The CPU takes in two binary numbers, adds them together, and produces a result in binary form.

What is the process of subtraction in computers?

Subtraction in computers reuses the same circuitry as addition. The CPU uses binary code to represent numbers and performs subtraction by converting the number being subtracted into its two's complement form, which involves flipping all the bits and adding 1. The CPU then adds this two's complement value to the other number, discarding any carry out of the top bit, to get the correct result.
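As an illustration (a sketch on 8-bit values, not any particular CPU's circuitry):
Code:
#include <stdio.h>

int main(void) {
    unsigned char a = 181, b = 23;
    unsigned char neg_b = (unsigned char)(~b + 1);    /* flip the bits, add 1 */
    unsigned char diff  = (unsigned char)(a + neg_b); /* plain addition, carry discarded */
    printf("%d - %d = %d\n", a, b, diff);             /* prints: 181 - 23 = 158 */
    return 0;
}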

How do computers handle multiplication?

Computers handle multiplication using a combination of addition and shifting. Multiplication is essentially repeated addition, but instead of adding one number to itself over and over, the CPU adds shifted copies of one operand, one for each binary digit of the other operand that is 1. Shifting, or moving bits to the left or right, is what makes this process fast.
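A small C sketch of shift-and-add multiplication, as an illustration:
Code:
#include <stdio.h>

unsigned shift_add_multiply(unsigned a, unsigned b) {
    unsigned product = 0;
    while (b != 0) {
        if (b & 1)            /* lowest bit of b set? */
            product += a;     /* add the shifted copy of a */
        a <<= 1;              /* next bit of b is worth twice as much */
        b >>= 1;
    }
    return product;
}

int main(void) {
    printf("%u\n", shift_add_multiply(13, 11));   /* prints 143 */
    return 0;
}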

What is the algorithm for division in computers?

Division in computers is a more complex process than addition, subtraction, or multiplication. It involves multiple steps, including repeated subtraction and shifting. The CPU uses a division algorithm, such as binary long division, to divide two numbers and produce a quotient and a remainder.
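A small C sketch of binary long division ("bring down" a bit, subtract the divisor whenever it fits), as an illustration:
Code:
#include <stdio.h>

/* assumes divisor != 0 */
void long_divide(unsigned dividend, unsigned divisor, unsigned *quot, unsigned *rem) {
    unsigned q = 0, r = 0;
    for (int bit = 31; bit >= 0; bit--) {
        r = (r << 1) | ((dividend >> bit) & 1);  /* bring down the next bit */
        q <<= 1;
        if (r >= divisor) {                      /* does the divisor fit? */
            r -= divisor;
            q |= 1;                              /* record a 1 in the quotient */
        }
    }
    *quot = q;
    *rem = r;
}

int main(void) {
    unsigned q, r;
    long_divide(181, 23, &q, &r);
    printf("181 / 23 = %u remainder %u\n", q, r);   /* 7 remainder 20 */
    return 0;
}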

Can computers handle more complex mathematical operations?

Yes, computers can handle a wide range of mathematical operations, including exponentiation, logarithms, and trigonometric functions. These operations are usually built into the computer's hardware or software, and the CPU uses algorithms and lookup tables to perform them efficiently.
