How does the computer understand computer code?

In summary, the computer understands commands by breaking them down into binary code, which is then decoded and executed at various levels of abstraction. This process involves logic gates, machine code, and high-level programming languages, each layer building upon the previous one to create something as complex as a video game or an operating system.
  • #1
Niaboc67
I've been programming on and off for a few years and just recently started taking it seriously. Something I've always wondered about is how the computer understands the commands I type in. Is it essentially binary? And if so, how could 1's and 0's possibly create something as complex as a video game or an operating system?
 
  • #2
If you know a little bit about high-level programming languages, there are at least two additional levels on which this question can be answered. One level is machine code (assembly): that's the form in which your commands are analyzed, interpreted, and executed. The other level is that of logic gates, where the zeros and ones are a way of changing the logical states of other gates, which in turn makes it possible for a processor to execute the machine code.
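To make this concrete, here is a minimal C++ sketch (the gate functions are my own illustration, not a description of real hardware) showing how AND, OR, and NOT can all be composed from NAND alone:

Code:
#include <iostream>

// Every gate below is built from NAND alone, illustrating how simple
// two-state devices compose into richer logic.
bool nand_gate(bool a, bool b) { return !(a && b); }

bool not_gate(bool a)         { return nand_gate(a, a); }
bool and_gate(bool a, bool b) { return not_gate(nand_gate(a, b)); }
bool or_gate(bool a, bool b)  { return nand_gate(not_gate(a), not_gate(b)); }

int main() {
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            std::cout << a << " AND " << b << " = " << and_gate(a, b)
                      << ", " << a << " OR " << b << " = " << or_gate(a, b) << '\n';
}

Real hardware does the same composition with transistors rather than function calls, but the layering principle is identical.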
 
  • #3
Niaboc67 said:
I've been programming on and off for a few years and just recently started taking it seriously. Something I've always wondered about is how the computer understands the commands I type in. Is it essentially binary?
Yes

And if so, how could 1's and 0's possibly create something as complex as a video game or an operating system?

Since you obviously know that they DO, why do you pose the question as though it seems unlikely to you that they could?

Just study some basic computer architecture. As Borek said, the lowest level of understanding is logic gates; above that are groups of logic gates controlled by machine code (1's and 0's); and above that are high-level languages. Each of these layers builds from the bottom up.

As you are aware, by the time you get up to high-level languages, you are not actually programming the machine; you are programming a program (a compiler) that takes your statements back down to machine code.
 
  • #4
Niaboc67 said:
And if so, how could 1's and 0's possibly create something as complex as a video game or an operating system?

How could a bunch of simple protons, neutrons and electrons possibly create something as complex as the human body? See what I did there? :)
 
  • #5
Niaboc67 said:
I've been programming on and off for a few years and just recently started taking it seriously. Something I've always wondered about is how the computer understands the commands I type in. Is it essentially binary? And if so, how could 1's and 0's possibly create something as complex as a video game or an operating system?

I think others have pointed this out, but there are several different layers of abstraction between high level languages and what is going on in the hardware.

For example, take the following C++ statement:

Code:
a = b + c; // a, b, c are integers

Go down one level of abstraction, and you end up with assembly language:
Code:
mov eax, b      ; fetch variable b from memory
mov ebx, c      ; fetch variable c from memory
add eax, ebx    ; add b + c
mov a, eax      ; store the result in variable a

The next layer of abstraction is to turn the assembly code into opcodes: each of the instructions above is translated into a different binary number. In addition, opcodes can be variable sizes on some platforms, so the assembler consults a lookup table that maps each instruction mnemonic to the opcode format needed. Generally, the more common instructions have shorter encodings, but again it's platform dependent; ARM processors, for example, use fixed-size opcodes.

Output of a .lst file:
Code:
0000002A  A1 00000008 R        mov eax, b
0000002F  8B 1D 0000000C R     mov ebx, c
00000035  03 C3                add eax, ebx
00000037  A3 00000004 R        mov a, eax

The first column is the offset into the code. The second column is the opcode. The third column is the offset of the variable being referenced. Notice that some instructions generate more bytes than others.


The next step down is to look at the architecture itself. For example, the system will most likely use pipelines, which break the process of fetching, decoding, and executing instructions into multiple stages.
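As a rough illustration of fetch/decode/execute (ignoring pipelining), here is a toy CPU loop in C++; the opcodes, "registers", and "memory" are invented for this sketch and don't correspond to any real instruction set:

Code:
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Invented one-byte opcodes mirroring the assembly example above.
enum Op : uint8_t { LOAD_B, LOAD_C, ADD, STORE_A, HALT };

int main() {
    std::vector<uint8_t> program = { LOAD_B, LOAD_C, ADD, STORE_A, HALT };
    int a = 0, b = 2, c = 3;        // "memory"
    int eax = 0, ebx = 0;           // "registers"
    std::size_t pc = 0;             // program counter

    while (true) {
        uint8_t op = program[pc++]; // fetch
        switch (op) {               // decode ...
            case LOAD_B:  eax = b;         break;  // ... and execute
            case LOAD_C:  ebx = c;         break;
            case ADD:     eax = eax + ebx; break;
            case STORE_A: a = eax;         break;
            case HALT:    std::cout << "a = " << a << '\n'; return 0;
        }
    }
}

A pipelined processor overlaps these stages, so that one instruction is being executed while the next is being decoded and a third is being fetched.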

The next step down is to look at digital circuit design.

An adder is a good, simple example of circuit design; see the sketch below.
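For instance, here is a C++ sketch of a one-bit full adder built from the standard gate equations, chained into a 4-bit ripple-carry adder (the code layout is my own illustration):

Code:
#include <iostream>

// One-bit full adder: sum = a XOR b XOR carry_in;
// carry_out is the majority of the three inputs.
void full_adder(bool a, bool b, bool cin, bool &sum, bool &cout_) {
    sum   = a ^ b ^ cin;
    cout_ = (a & b) | (a & cin) | (b & cin);
}

int main() {
    unsigned x = 0b0101, y = 0b0011;   // 5 + 3
    unsigned result = 0;
    bool carry = false;
    // Chain four full adders, low bit first, exactly as a
    // ripple-carry adder circuit does in hardware.
    for (int i = 0; i < 4; ++i) {
        bool s, c;
        full_adder((x >> i) & 1, (y >> i) & 1, carry, s, c);
        result |= static_cast<unsigned>(s) << i;
        carry = c;
    }
    std::cout << x << " + " << y << " = " << result << '\n';  // 5 + 3 = 8
}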
 
  • #7
Once at machine language, there are various methods to decode and execute instructions. One method used on some old 16-bit minicomputers was to use some number of bits from the instruction data to index into a "bit per function" table. Say the processor has 79 possible operations and 49 nops, represented by 7 bits of an instruction. The processor reads an instruction and uses those 7 bits to index into a table that is 80 bits wide. Only a single bit in each table entry is set, and the bit that is set triggers a specific operation (load, add, subtract, nop, ...). A similar decoding process is used in most processors, using combinations of multiple table lookups, decoders, demultiplexers, and so on.
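A rough C++ model of that "bit per function" scheme (the field values and operation numbers are invented for illustration):

Code:
#include <bitset>
#include <cstdint>
#include <iostream>

constexpr int kOps = 80;                 // 79 operations + a nop line

int main() {
    // 2^7 possible 7-bit fields, each mapping to an 80-bit entry
    // with exactly one bit set.
    std::bitset<kOps> decode_table[128];
    decode_table[0b0000001].set(0);      // e.g. load
    decode_table[0b0000010].set(1);      // e.g. add
    decode_table[0b0000011].set(2);      // e.g. subtract

    uint8_t field = 0b0000010;           // 7 bits pulled from an instruction
    std::bitset<kOps> lines = decode_table[field];
    for (int op = 0; op < kOps; ++op)
        if (lines.test(op))              // the single set bit fires one operation
            std::cout << "trigger operation line " << op << '\n';
}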
 

FAQ: How does the computer understand computer code?

1. How does a computer read and understand computer code?

A computer reads and understands computer code through a process called "compiling". This involves converting the human-readable code into a format that the computer can understand and execute, known as "machine code". This is done by a compiler program, which translates the source code into a series of binary instructions that the computer can follow.
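For example, assuming a GCC toolchain, this minimal C++ program could be compiled to machine code with a command like g++ hello.cpp -o hello and then run directly by the processor:

Code:
// hello.cpp -- the compiler translates this source into binary
// machine instructions; no trace of the C++ text remains at run time.
#include <iostream>

int main() {
    std::cout << "Hello, machine code!\n";
    return 0;
}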

2. What is the role of the operating system in understanding computer code?

The operating system acts as a middleman between the computer hardware and the code that is being executed. It manages the resources of the computer, such as memory and processing power, and ensures that the code is executed efficiently and correctly. It also provides a set of libraries and functions that can be used by the code to interact with the hardware and perform tasks such as input/output operations.
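As a small illustration (assuming a POSIX system), even printing text goes through the operating system; write() here is a thin wrapper over a kernel system call:

Code:
#include <unistd.h>   // POSIX header -- an assumption about the platform

int main() {
    // Ask the kernel to send 14 bytes to file descriptor 1 (standard
    // output). The OS, not the program, owns the screen and the disk.
    const char msg[] = "Hello, kernel\n";
    write(1, msg, sizeof(msg) - 1);
    return 0;
}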

3. How does the computer know which code to execute first?

The computer follows a specific set of rules, known as the "execution order" or "control flow", to determine which code to execute first. This is usually based on the structure of the code, such as the use of conditional statements and loops. Statements are executed in sequence, and the computer jumps to different parts of the code depending on the conditions and instructions given.
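A short C++ sketch of that control flow:

Code:
#include <iostream>

int main() {
    // Execution starts at main() and proceeds statement by statement.
    int n = 3;
    if (n > 0) {                       // a conditional chooses a branch
        std::cout << "positive\n";
    }
    for (int i = 0; i < n; ++i) {      // a loop jumps back to its own
        std::cout << "pass " << i << '\n';  // start until the test fails
    }
    return 0;
}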

4. Does the computer understand all programming languages the same way?

No, different programming languages have their own syntax and rules that the computer must follow to understand and execute the code. While the basic principles of compiling and executing code are the same, the specific instructions and operations may vary between languages. Some languages may also be more efficient or better suited for certain tasks than others.

5. Can a computer learn to understand code on its own?

At this time, computers do not have the ability to learn and understand code on their own. They still require human programmers to write and debug the code. However, advancements in artificial intelligence and machine learning may lead to the development of computers that can learn and understand code in the future.
