C and Java questions that I asked my teacher

  • #1
pairofstrings
My teacher says: C language programs are portable across all platforms, including Unix, Linux, Windows, etc.

My question: What is the difference between C language programs and Java language programs if programs in either language can be run on any platform?

My teacher says: C language .exe files cannot be run on all platforms. They can only run on the platform where the program was originally developed and compiled. But .c files can be compiled and run on any platform (I know it's not a good idea to hand my .c files to everyone). In the case of Java programs, however, .class files are given to the JVM software, which converts the bytecode to machine code, and the processor knows how to read machine code. So these .class files can be run on any platform. Every computer is supposed to have JVM software installed if you want to run Java programs.

My question: So, if I want my .exe files to run on all platforms, do I need software like the JVM that makes .exe files run on any platform?

My teacher says: Yes.

My questions are:

1. Suppose I take only the .class file of a Java program from one computer with the intention of running it on another computer. Can I do that? If yes, does it matter what version of the JVM I am using?

2. Why can't C language programs be run on other platforms? Is it because of the different hardware architecture of the computer that I want to run my C language program on, or is it an operating system issue?

3. I know that we need JVM software to make Java programs run on all platforms, but why do we need JVM software at all? Is it because the internal hardware architecture of the other computer is not compatible with the code, or because the operating system of the other computer that I want to run my Java program on cannot handle a Java program? Is it an operating system issue?

I asked my teacher the above questions, but he just kept stressing the JVM software and how C doesn't have any such software.

Please answer.
 
Last edited:
  • #2
Different computers have different machine languages, which are the ONLY language they each actually run. A C program compiled into an EXE file on one machine contains the machine code for THAT machine, not for any other machine. You have to compile your code on another machine to get the program to run on that machine, and you MAY have problems due to compiler and/or operating system differences.

Another way to do all that is to have a slightly different KIND of compilation process, whereby the "compiler" generates an intermediate code, and then a final part of the compiler, installed on each machine, turns that intermediate code into the machine code for that machine.

The latter process is what JAVA / JVM do.
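To make that concrete, here is a minimal sketch (my illustration, not part of the original exchange): a perfectly portable C source file still has to be compiled separately for each platform, because each compilation produces machine code for that one machine.

Code:
/* hello.c - a portable C source file (illustration only).
   The same source can be compiled on Linux, Windows, etc., but each
   compilation produces an executable containing machine code that runs
   only on the platform it was built for. */
#include <stdio.h>

int main(void)
{
    printf("Hello from whichever machine compiled me!\n");
    return 0;
}

On Linux this might be built with gcc hello.c -o hello, and on Windows with the Visual C++ command cl hello.c; the two executables are not interchangeable even though the source file is identical.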
 
  • #3
Why do computers have different machine languages? Is it because of the internal hardware architecture of the processor? I think this is the reason because, in general, a compiler converts instructions into binary code and only the processor can read this binary code.
We have 32-bit processors and 64-bit processors. So my thinking says that a C program developed on a 32-bit processor cannot be run on a 64-bit processor. And I have to find out how these 0s and 1s are processed inside the processor. I know that before these 0s and 1s reach the processor, they are loaded in a "loader".

I know that when it comes to programs developed in assembly language for the Windows platform, the program will not run on a Unix platform because the instruction sets are not similar. For example, if a program takes 5 lines of instructions on the Windows platform, the same program may take 10 lines of instructions on another platform. And there are other reasons too, regarding statements like add ax, bx: on one platform the data from ax is added to bx, and on the other platform the data from bx is added to ax. It happens quite oppositely. I don't remember this perfectly. Maybe I am stating it all wrong.

Please set me right.
 
Last edited:
  • #4
pairofstrings said:
Why do computers have different machine languages? Is it because of the internal hardware architecture of the processor?

Yes, the "machine language" is just a way of interacting DIRECTLY with the architecture of the machine and they are all different in the details.

We have 32-bit processors and 64-bit processors. So my thinking says that a C program developed on a 32-bit processor cannot be run on a 64-bit processor.

Not necessarily. I use Windows 7 in 64-bit mode but it runs all 32-bit programs just fine. It will NOT work the other way around. A program that specifically makes use of 64-bit capabilities will not run on a 32-bit processor.

And I have to find out how these 0s and 1s are processed inside the processor. I know that before these 0s and 1s reach the processor, they are loaded in a "loader".

Loaded "IN" a loader is not correct. They are loaded into the machine BY a loader.

I know that when it comes to programs developed in assembly language for the Windows platform, the program will not run on a Unix platform because the instruction sets are not similar. For example, if a program takes 5 lines of instructions on the Windows platform, the same program may take 10 lines of instructions on another platform. And there are other reasons too, regarding statements like add ax, bx: on one platform the data from ax is added to bx, and on the other platform the data from bx is added to ax. It happens quite oppositely. I don't remember this perfectly. Maybe I am stating it all wrong.

You have this right although if you want to understand computers completely, you do need to study all this a bit more.
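As a rough illustration of why the 32-bit / 64-bit distinction matters (my example, not from the thread), the following program prints the sizes of a few fundamental types; the answers depend on the compiler and the target, which is one reason code that silently assumes a particular size can break when rebuilt for a different platform.

Code:
/* sizes.c - print the sizes of some fundamental C types (illustration only).
   A 32-bit build typically reports 4-byte pointers and a 64-bit build
   8-byte pointers; even two 64-bit platforms can disagree, e.g. long is
   usually 8 bytes on 64-bit Linux but 4 bytes on 64-bit Windows. */
#include <stdio.h>

int main(void)
{
    printf("sizeof(int)    = %zu\n", sizeof(int));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    return 0;
}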
 
  • #5
pairofstrings said:
We have 32-bit processors and 64-bit processors. So my thinking says that a C program developed on a 32-bit processor cannot be run on a 64-bit processor.
In the case of Visual Studio, you can create a 64-bit program on a 32-bit processor. The compiler itself is a 32-bit program, but it can optionally produce 64-bit code. The created 64-bit program will only run on 64-bit processors. This is an example of "cross" compiling, where the program is compiled and linked on one computer but will be run on another computer. Another example of this is the ARM processor used in many consumer devices, where the compiler runs on a PC but produces code for the ARM processor that runs in some consumer device.

pairofstrings said:
I know that when it comes to programs developed in assembly language for the Windows platform, the program will not run on a Unix platform because the instruction sets are not similar.
You can have Windows and Unix or Linux running on the same PC (dual boot or virtual PC). The instruction set is the same, but the calls to the operating system (Windows versus Unix) are not the same.
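A small sketch of this point (my illustration, using real but representative API calls): the arithmetic in a C program is the same on both systems, but the call used to pause for one second is a Windows API in one case and a Unix/POSIX call in the other, so the source has to select the right interface and be rebuilt for each OS.

Code:
/* pause_demo.c - same processor, different operating-system calls
   (illustration only). */
#ifdef _WIN32
#include <windows.h>   /* Sleep() takes milliseconds */
#else
#include <unistd.h>    /* sleep() takes seconds */
#endif
#include <stdio.h>

int main(void)
{
#ifdef _WIN32
    Sleep(1000);       /* Windows API call */
#else
    sleep(1);          /* Unix/POSIX call */
#endif
    printf("One second has passed.\n");
    return 0;
}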
 
Last edited:
  • #6
pairofstrings said:
Why do computers have different machine languages? Is it because of the internal hardware architecture of the processor? I think this is the reason because, in general, a compiler converts instructions into binary code and only the processor can read this binary code.

There are different types of platforms created for various reasons.

As an example, consider two types of processors: standard x86 CPUs and common modern graphics cards.

The CPU has a lot more flexibility in terms of the instruction set as well as the amount of code it can execute at a time. Nowadays you find around 4-8 cores in a CPU.

On the other hand, a GPU (graphics processing unit) has many more cores that operate completely in parallel, but they do not have the rich instruction set that CPUs have, and they do not have the memory capacity that normal CPUs have.

Both of these are designed for different purposes. The CPU is a more general purpose piece of hardware, while the GPU has a more specific role that is based on the standard rendering pipeline that consists largely of vertex and fragment processing, and unsurprisingly, has been designed as such.

This kind of idea applies to every specific platform: they are all designed for a particular purpose just like computer programs are designed and later optimized to do certain things really well.
 
  • #7
I know that there are different compilers for different platforms. But am I supposed to have a different compiler if the hardware architecture of the computer is the same but the operating system is different?
Suppose I have two computers. One is running Windows Vista on a 32-bit processor. The second PC is running Linux/Unix/Solaris, but it has the same 32-bit processor. Is a different compiler still required, or can I use the same compiler that I have on Windows Vista? I know that Unix comes with a built-in C language compiler.
 
  • #8
You need to use a compiler that targets your machine, and your operating system, and the standard libraries either provided by your operating system or the compiler. Some of the system functions you call in your program (e.g., printf if you are using C, the stream insertion operator operator<< if you are using C++) are not a part of the generated executable. Those functions/methods are instead part of some standard library that is dynamically loaded at execution time. The name of the library function loaded at execution time, and its behavior, had better be consonant with the assumptions built into the compiler.
 
  • #9
You can't run a C++ compiler written for Windows on a Linux system, even on the same processor, for the same reason that you can't take the Windows version of Microsoft Word and run it on a Linux system. The two compilers use different OS-specific functions to interact with the user and the operating system that they run on.

In addition, the compiler has to generate code that properly uses any OS-specific functions for the OS that the program being compiled is going to run on (the "target system"), and link to appropriate libraries. In principle, a compiler that runs under Windows could generate both programs that run under Windows and programs that run under Linux, but I don't know if any actually do this.
 
  • #10
[Attached figure: Compilers.jpg]
Compilers installed adjacent to the operating system. Wrong or correct?

I know that examples of system software are operating system software and compiler software. All application software needs an operating system so that it can run. What about compiler software: is it installed on the operating system, or is it some special software installed adjacent to the operating system? I get this doubt because, as Unix has a built-in C compiler, the C compiler must be installed adjacent to the operating system, not on the operating system the way application software is installed. Am I wrong? I don't know if I am using the correct terminology. Sorry about that.
 
Last edited:
  • #11
pairofstrings said:
[Attached figure: Compilers.jpg]
Compilers installed adjacent to the operating system. Wrong or correct?

I know that examples of system software are operating system software and compiler software. All application software needs an operating system so that it can run. What about compiler software: is it installed on the operating system, or is it some special software installed adjacent to the operating system? I get this doubt because, as Unix has a built-in C compiler, the C compiler must be installed adjacent to the operating system, not on the operating system the way application software is installed. Am I wrong? I don't know if I am using the correct terminology. Sorry about that.

A compiler IS an application. Its job is to compile programs. ANY application can be built in as part of an operating system if you so desire.
 
  • #12
D H said:
You need to use a compiler that targets your machine.

In the above statement, does the word "machine" refer only to the processor, or to the entire hardware of a PC? I get this doubt because I know that, after the data is processed in the processor, the RAM and the BIOS come into the picture (I don't know in which order), and the BIOS knows the peripherals very well. So does the word "machine" refer only to the processor?

D H said:
The name of the library function loaded at execution time, and its behavior, had better be consonant with the assumptions built into the compiler.

Too complex for a newbie. I don't understand the above statement. I am writing what I understand: When a program is executed, if we use any (pre-defined?) library functions (and header files?) in that program, then the names of those (pre-defined?) library functions (and header files?) are loaded at execution time, and the behavior of these functions (and header files?) must be consonant with ... (what?).

What about the header files? No header files, only (pre-defined(?)) library functions?
 
  • #13
Header files are not a part of our executable. You've been told this, multiple times. One of the very first steps in the compilation process is to deal with preprocessor commands such as #include <stdio.h>. Those header files are just a part of your code as far as the compiler proper is concerned.
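A small sketch of what that means in practice (my illustration; the file names mymath.h, mymath.c, and the add function are hypothetical): the header contributes only a declaration, pasted into the source as plain text by the preprocessor, while the actual code for the function is compiled separately and joined in by the linker.

Code:
/* --- mymath.h : only a declaration (prototype), no executable code --- */
int add(int a, int b);

/* --- mymath.c : the definition, compiled separately into an object file
       (or into a library) --- */
int add(int a, int b) { return a + b; }

/* --- main.c : after preprocessing, the prototype from mymath.h simply
       appears here as ordinary text --- */
#include <stdio.h>
#include "mymath.h"

int main(void)
{
    printf("%d\n", add(2, 3));  /* the call is resolved by the linker */
    return 0;
}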
 
  • #14
D H said:
Some of the system functions you call in your program ... C , C++, ... are not a part of the generated executable. Those functions/methods are instead part of some standard library that is dynamically loaded at execution time.
The dynamic loading can be optional for older C type programs. With Microsoft compilers, it is common for older C type programs to include all library functions as part of the generated executable during the link stage, called static (versus dynamic) linking. For C++ programs, some of the library calls are dynamically loaded from a "redistributable" or ".net" component installed on the system. It is possible to statically link some libraries (include those libraries in your program's executable file), such as MFC (Microsoft Foundation Class), but the preferred (Microsoft's recommendation) method is to use dynamic links for those libraries. Non-standard dynamically linked libraries may require explicit calls in your program to load them and to call their functions.
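As a rough sketch of that last point (my illustration; "mylib.dll" and its "add" function are hypothetical names), a Windows program can load a non-standard DLL by hand at run time and look up a function in it, instead of having the linker and loader do it automatically:

Code:
/* loaddll.c - loading a DLL explicitly at run time on Windows
   (illustration only; mylib.dll and add are made-up names). */
#include <windows.h>
#include <stdio.h>

typedef int (*add_fn)(int, int);

int main(void)
{
    HMODULE lib = LoadLibraryA("mylib.dll");          /* load the DLL */
    if (lib == NULL) {
        printf("could not load mylib.dll\n");
        return 1;
    }

    add_fn add = (add_fn)GetProcAddress(lib, "add");  /* find a function */
    if (add != NULL)
        printf("2 + 3 = %d\n", add(2, 3));

    FreeLibrary(lib);                                 /* unload the DLL */
    return 0;
}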

pairofstrings said:
In the above statement, does the word "machine" refer only to the processor, or to the entire hardware of a PC?
The entire PC, or at least most of it (there may be components of the PC unused by a particular program, such as game controllers).

pairofstrings said:
After the data is processed in the processor, the RAM and the BIOS come into the picture, and the BIOS knows the peripherals very well.
Normally, other than at boot time, the BIOS isn't used. The operating system will include its own drivers for all the components on a computer. The main exception to this would be MSDOS.

pairofstrings said:
When a program is executed ... library functions ... header files must be consonant with (what?).
The (what?) is your PC and the operating system the PC is currently running on.

pairofstrings said:
What about the header files? No header files, only (pre-defined(?)) library functions?
The header files mostly include declarations for variables, and prototypes (or sometimes macros) for functions. The linker and/or the loader and/or your program will load the libraries and/or make calls to the library functions.
 
Last edited:
  • #15
[Attached figure: CCompilerandInterfacing.png]


1. The preprocessor actually copies the header files into the source code, and then the combined file is given to the compiler. The preprocessor takes the header files from the HDD. The compiler generates the equivalent binary code of the file, and we call it a .obj file.

2. The linker is a part of the compiler toolchain. The linker adds the binary code of the .obj files of library functions to the program's binary .obj code, and a new file called a .exe file is generated. This .exe file is stored on the HDD.

3. When we run the .exe file, the loader, which is operating system software, loads the .exe file into RAM. RAM contains segments.

4. During execution, all the mathematical operations are performed by the ALU in the processor. The required data is fetched from RAM and stored in registers; the ALU takes the required 'numbers' from these registers, computes them as required, the result is put in some register in the CPU, and then this result is sent back to the segment in RAM. The CPU doesn't store any data.

I have noted the point that the BIOS is used only at boot time, that the operating system will include its own drivers for all the components on a computer, and that the main exception to this would be MSDOS, but ...


5. (Figure explanation, continuation from point 4) Data is sent back to the RAM. We can even make changes to the values assigned to the variables during execution. Now, suppose I want to send data to the devices. (My thinking:) Then there is "some component" to which the binary equivalent of this result can be sent. The result is attached to the binary code of the device. This predefined binary code is stored in "some component". This signature, which is [binary code + binary equivalent of result], is now sent to the devices. Now, let's say I want to display the result of the following program on the monitor.


Code:
#include <stdio.h>

int main(void)
{
    int a, b, sum;
    a = 10;
    b = 20;
    sum = a + b;
    printf("%d", sum);
    return 0;
}

If the binary code is matched at the terminal of the monitor, then the data/result is allowed by that device into the monitor's processing circuitry, where it is processed and displayed on the screen. Correct or wrong?

And

If "some component" really exists and if I want to program this "some component" and create the device drivers am I suppose to use Assembly language?

In the exception, which is MSDOS, where the BIOS is used, how could the BIOS be different from "some component"?

Please help.

EDIT: I don't want to know how the monitor or the other devices connected to a computer work. I am just asking: if the BIOS is not used at other times, then can "some component" (if it exists, please see the figure) be used? Should there be "some component" attached to the processor? And we know that the processor is sending data back to RAM, but is the processor really sending data to the other devices as well, or is the RAM sending the data to the other devices and not the processor?

Sorry if my post is confusing, but I tried to give a good picture of how I see the processor giving signals to other devices. I think that when processors execute drivers provided by Windows, the drivers must also have some predefined binary code, and I think that this predefined code is stored in "some component" which will help in identifying the correct device attached to the computer.
My main question is point 5. But if you see any problems in the other points, please correct them.
 
Last edited:
  • #16
If the binary code is matched at the terminal of the monitor, then the data/result is allowed by that device into the monitor's processing circuitry, where it is processed and displayed on the screen. Correct or wrong?

I think you would do well to read up more on how monitors work. Your terminology is very confusing and suggests that you do not yet have a good understanding of how things end up getting sent to the monitor and in what form, although it is somewhat close.
 
  • #17
pairofstrings said:
In the exception, which is MSDOS, where the BIOS is used, how could the BIOS be different from "some component"?
The BIOS includes code that interfaces with most components on the system (monitor, keyboard, disk drives, ...). Most operating systems, other than MSDOS, include their own code to interface with those components, and the code for each component is called a device driver.

The answer to your question is that device drivers or the BIOS are "software", machine language code that interfaces with some component, while an actual component is "hardware" made up of electrical circuits and mechanical components.
 
  • #18
For the OP's sake, I'll clarify a bit on BIOS and "device drivers". Device drivers are just programs that are written to interact directly with hardware. The BIOS (Basic Input Output System) is just device drivers that have been burned into ROM so that they are there as soon as the computer is turned on and do not need to load from a hard drive the way all other software does. All computers have some form of BIOS because it is the BIOS that does the first step in loading the operating system (which then takes over and may or may not use the BIOS code any further --- early versions of Windows used the BIOS but I don't know if current versions do or not.)
 
  • #19
Thank you.

My mistake: In post #15, I thought that the BIOS is a hardware device and that we are writing device drivers into the hardware, which is the BIOS here. That is wrong.

Correct: Device drivers or the BIOS are "software", which is written inside ROM. The ROM is an electrical device here.

And I think, in the case where the operating system takes care of newly attached devices by providing the required device drivers, if a device driver is not present, that is why we load the device drivers from the CD that the manufacturer of the device provides us: the device drivers are loaded, the operating system acquaints itself with the drivers, the computer is restarted, and the newly attached device starts working.
[Attached figure: CompilerandInterfacing1.jpg]
In the above figure, the device (below the processor) attached to the processor is what I now understand to be the ROM.

Now, if the BIOS is not being used and the operating system is taking care of the devices, then I thought there must be "some component" (below figure) other than the ROM which should take care of the peripheral devices. But I think this "some component" is not required, because when data is sent out from the processor which is supposed to go to, say, the monitor, then I think all we need is a multiplexer near the processor end and a de-multiplexer at the monitor end, because data cannot be sent in a parallel way. I think the parallel way can be done, but it's not feasible. So, therefore, we use multiplexers and de-multiplexers. And possibly the ROM could also send data using a multiplexer and de-multiplexer, something like the figure below. The figure below and the explanation written above are just my thinking. If you think it's wrong, please correct it.
[Attached figure: CompilerandInterfacing3.jpg]


Aside from all of the above, I have to see why different computer architectures are being implemented. Are all these connections between the HDD, RAM, processor, and other components (into and out of them) architecture dependent? Can you please outline what computer architecture theory must cover and how I should tackle that subject? I am targeting Artificial Intelligence. I have a background in electronics and communication.
 
Last edited:
  • #20
It's possible that you'll get some more replies to this, but my own take is that you are asking for WAY too much as a simple forum response. You need to just start reading about computer architecture and then ask questions rather than ask us here to write the book for you.
 
  • #21
phinds said:
It's possible that you'll get some more replies to this, but my own take is that you are asking for WAY too much as a simple forum response. You need to just start reading about computer architecture and then ask questions rather than ask us here to write the book for you.
I agree with phinds.

If you (pairofstrings) have specific, focused questions, ask them, but for very broad questions, you should be doing the research.
 
  • #22
pairofstrings said:
A multiplexer near the processor end and a de-multiplexer at the monitor end, because data cannot be sent in a parallel way.
Current PC processors have over 1000 connector pins (1155, 1156, 1366). A lot of those pins are used for ground, but there are several parallel busses on a typical processor: memory address bus, memory data bus, PCI Express busses, USB ports, SATA ports, ...

You can start at wiki and then go browsing through other articles and references. Example wiki link:

http://en.wikipedia.org/wiki/LGA_1155

Note that a monitor is going to be connected to the graphics card, not as a separate device.
 
  • #23
then I think all we need is a multiplexer near the processor end and a de-multiplexer at the monitor end, because data cannot be sent in a parallel way. I think the parallel way can be done, but it's not feasible

Total nonsense. Parallel data is faster than serial data and is COMPLETELY feasible. When PCs first came out, all printers were driven by a parallel cable and I think many monitors were as well. Serial is cheaper, which is why it is used now.
 
  • #24
phinds said:
Parallel data is faster than serial data and is completely feasible. When PCs first came out, all printers were driven by a parallel cable and I think many monitors were as well. Serial is cheaper, which is why it is used now.
For some high speed peripherals that use relatively long, cheap, somewhat noisy ribbon cables, like IDE hard drives, keeping parallel signals in sync at high speeds became an issue, which is part of the reason (or the "excuse") for the change from parallel IDE cables to serial SATA cables, and CD/DVD peripherals had to follow along as IDE got replaced by SATA, but the main reason was probably that reduced pin count connectors and cables are cheaper. SCSI devices use a better cable system and achieved transfer rates up to 320 megabytes per second, but cable length and the number of attached devices became an issue when trying to go faster, so there is a transition towards SAS (serial attached SCSI).

Memory data busses are very fast and still parallel, 64 bits wide for LGA 1155 and 1156, and 96 bits wide for LGA 1366. PCI Express busses are 16 bits wide and some motherboards (X58) have two of them, in addition to single bit wide PCI Express busses. These parallel interfaces are connected directly to the motherboard, not over cables.

Monitors using VGA connectors use analog signals sent in parallel for red, green, blue, and sync data. DVI cables can send VGA type data or digital pixel data sent as three serial data streams for red, green, and blue. HDMI drops VGA type data, but adds the ability to send RGB color data as YCbCr.


I forgot to mention that current high end Intel processors now have 2011 pins:

http://en.wikipedia.org/wiki/LGA_2011
 
Last edited:
  • #25
rcgldr said:
... For some high speed peripherals that use relatively long and cheap, somewhat noisy ribbon cables like IDE hard drives, keeping parallel signals in sync at high speeds became an issue ...

Good point. Another consideration is that as electronics became faster and cheaper, the extra circuitry associated with serial transmission became much more reasonable.
 
  • #26
In the C programming language, the loader, which is part of the OS software, helps load the executables into RAM, and RAM is connected to the processor and the HDD, as we have seen earlier. The ROM has no role in extracting data from the HDD and feeding the data to the processor. All this extraction and feeding of data is done by the OS, especially the loader, through RAM. Right?

My question is about using the keyword 'const'. Please see the following C language program.

Code:
#include <stdio.h>

int main(void)
{
    int a = 5;
    const int b = 10;
    printf("%d %d", a, b);
    /* b = b + a;       <-- would not compile: b is declared const
    printf("%d", b); */
    return 0;
}

When the program is successfully compiled, the 'const' keyword helps in assigning values to the variables, and memory is allocated for the values in ROM. Right? Now, as the program is compiled nicely, a .exe file is generated and the file is stored on the HDD. End of story.

Now, I want to run this .exe file that was generated. I double-click it, and as usual the loader loads the .exe file into RAM, and we have the processor, but there is no ROM in our discussion here again. The usage of the 'const' keyword is taken care of during compilation. That means a particular code is generated by the compiler when we use the 'const' keyword, which makes the ROM not get involved when the loader loads the .exe file into RAM.
Does the same concept go for #define? A preprocessor?

So, basically, compilers are software written to implement the language that is documented. I tried to deduce the answers from the posts in my thread called "How programming languages are created?", but now I see clearly how compilers intend to do their work. So, does the same thinking go for all translators? What if I use only machine code (not assembly language), i.e., 0s and 1s, to interact with the digital electronic devices? I think that will need a philosophical answer that I cannot think about. Help.
 
Last edited:
  • #27
pairofstrings said:
In the C programming language, the loader, which is part of the OS software, helps load the executables into RAM, and RAM is connected to the processor and the HDD, as we have seen earlier. The ROM has no role in extracting data from the HDD and feeding the data to the processor. All this extraction and feeding of data is done by the OS, especially the loader, through RAM. Right?

"Data" and "Code" are interchangeable at the hardware level (that is, the hardware that moves information to/from the RAM/CPU does not no or care whether it is code or date. BUT ... your terminology is somewhat misleading. It should say CODE, not data, where I have bolded it.

My question is about using the keyword 'const'. Please see the following C language program.

Code:
#include <stdio.h>

int main(void)
{
    int a = 5;
    const int b = 10;
    printf("%d %d", a, b);
    /* b = b + a;       <-- would not compile: b is declared const
    printf("%d", b); */
    return 0;
}

When the program is successfully compiled, the 'const' keyword helps in assigning values to the variables, and memory is allocated for the values in ROM. Right? Now, as the program is compiled nicely, a .exe file is generated and the file is stored on the HDD. End of story.

Absolutely not. NOTHING changes the values in ROM. Do you even understand what the acronym ROM stands for? Look it up.

The "const" just causes the compiler just puts a numeric value in the memory that is allocated for the variable "b"

Now, I want to run this .exe file that was generated. I double-click it, and as usual the loader loads the .exe file into RAM, and we have the processor, but there is no ROM in our discussion here again. The usage of the 'const' keyword is taken care of during compilation. That means a particular code is generated by the compiler when we use the 'const' keyword, which makes the ROM not get involved when the loader loads the .exe file into RAM.
Does the same concept go for #define? A preprocessor?

I'm not clear what you mean by all of this. As you correctly point out, ROM has nothing to do with the process you are describing, and the "const" keyword has NOTHING to do with ROM.

So, basically, compilers are software written to implement the language that is documented. I tried to deduce the answers from the posts in my thread called "How programming languages are created?", but now I see clearly how compilers intend to do their work. So, does the same thinking go for all translators?

This is a somewhat vague statement when applied across the board to both compilers and assemblers on the one hand and interpreters on the other hand.

What if I use only machine code (not assembly language), i.e., 0s and 1s, to interact with the digital electronic devices? I think that will need a philosophical answer that I cannot think about. Help.

I'm not sure what you mean here. Assembly language IS machine language. Assembly language adheres to the "principle of one to one correspondence", which says each assembly language instruction is exactly one machine language instruction. That is essentially never true for compilers and is absolutely never true for interpreters.

It IS possible to interact with computers without any assemblers / compilers / interpreters, etc, although that is NOT true of standard computers today.

In the 1960's I had a minicomputer [well, *I* didn't have the computer ... NASA did and I worked there]. Every morning when I came in, I had to load 10 instructions into the computer. These instructions made up a "bootstrap loader" which had just enough intelligence to load a large block of code from a magnetic tape into RAM and turn over execution to it. That block of code was a full loader, and then loaded the rest of the operating system [which was VERY primitive, and of my own design].

The WAY that I loaded the 10 instructions was by way of 16 data switches and some control switches on the front panel of the minicomputer [which, by the way, was about the size of a refrigerator]. I would put in each of the 10 instructions as 16 bits in the data switches and then hit the control switch that said "load this to memory and increment the memory pointer", and I would do that for all 10. So basically, *I* was the "assembler" and my fingers combined with the switches to be the "loader".
 
  • #28
pairofstrings said:
'const' keyword
'const' is used to tell the compiler that the variable is initialized one time and never changed. Some computers have the ability to disable writing to a range of memory, and the compiler, linker, and the OS could work together so that 'const' variables end up in a range of memory with writes disabled.
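A rough sketch of what that can look like in practice (my illustration; whether it actually faults depends on the compiler, linker, and OS): a const object is often placed in a read-only section of the executable, so attempting to write to it, even through a cast, is undefined behaviour and typically crashes.

Code:
/* const_demo.c - 'const' data in a write-protected region (illustration only). */
#include <stdio.h>

const char msg[] = "hello";    /* usually placed in a read-only data section */

int main(void)
{
    printf("%s\n", msg);

    char *p = (char *)msg;     /* casting away const does not make it writable */
    /* p[0] = 'H'; */          /* uncommenting this often causes an access
                                  violation / segmentation fault at run time */
    return 0;
}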

phinds said:
The WAY that I loaded the 10 instructions was by way of 16 data switches ...
One clever implementation of a loader was on an old Monrobot mini-computer. It used a spool with nodes to toggle switches, similar to a music box, to toggle in the initial load program.

Another "clever" trick used on an old mini-computers operating system I worked on was to add the ability to read commands from a tape drive, one of which could load a program from the tape drive, so we could create a "batch" file that consisted of a combination of commands and programs on a tape. Later on another system, we monitored the sequence of programs loaded during OS boot up, and added the ability to boot from tape from the computers BIOS. This was back in the 1970's and 1980's long before boot from cd-rom became a standard for PC's.
 
  • #29
phinds said:
"Data" and "Code" are interchangeable at the hardware level
That is true for a von Neumann architecture machine, but not a Harvard architecture machine. In a Harvard architecture machine, data and code are very different beasts. Data and code might even have different word sizes. Do some embedded processor programming and you will run into Harvard architecture machines.

The Intel chips that now dominate the market are von Neumann architecture at the assembly level. However, Intel chips have separate level 1 caches for data and code. This makes those Intel chips Harvard architecture machines at the micro level.
 
  • #30
phinds said:
The WAY that I loaded the 10 instructions was by way of 16 data switches and some control switches on the front panel of the minicomputer [which, by the way, was about the size of a refrigerator]. I would put in each of the 10 instructions as 16 bits in the data switches and then hit the control switch that said "load this to memory and increment the memory pointer", and I would do that for all 10. So basically, *I* was the "assembler" and my fingers combined with the switches to be the "loader".

When I was in college in the early 1970s, my physics department acquired a surplus DEC PDP-5 which I had to initialize using the same procedure. On our machine, this bootstrap loader operated the paper tape reader, not the magnetic tape drive. We didn't have an operating system for this machine; we simply kept the bootstrap loader available in the first few bytes of memory, taking care not to overwrite it with our own programs. Every time we wanted to run a program, we had to load it from paper tape.

My big project was writing a program that let us put the programs on magnetic tape and load them from there, by selecting from a menu on the teletype console. But I still had to use the paper-tape bootstrap loader to get the magtape-loader program into memory!
 
  • #31
phinds said:
"Data" and "Code" are interchangeable at the hardware level (that is, the hardware that moves information to/from the RAM/CPU does not no or care whether it is code or date.
Depending on the processor, this might not be true. Some processors will not allow the execution of code that appears in a data section.
phinds said:
BUT ... your terminology is somewhat misleading. It should say CODE, not data, where I have bolded it.

I'm not sure what you mean here. Assembly language IS machine language.
That is not true. Assembly language is human readable (somewhat), while machine code is straight binary code. An assembler translates human readable assembly code to object code, and a linker brings in any other needed object code to create an executable.
 
  • #32
Okay. Thanks for making it clear for me. I somehow managed to realize that the answers to my questions lie in studying computer architecture and doing embedded system programming.

Please observe the first figure in post # 32

Please see the code.
Code:
#include <stdio.h>

int main(void)
{
    int a = 5;
    const int b = 10;
    printf("%d %d", a, b);
    /* b = b + a;       <-- would not compile: b is declared const
    printf("%d", b); */
    return 0;
}

[Attached figure: QuestiononROM.jpg]


I asked my teacher one question about this.
I said: So, when the compiler encounters the 'const' keyword, a memory location is allocated in the ROM and the corresponding data value is assigned to the variable. Then a .exe file is generated and stored on the HDD, and now if I want to run this generated .exe file, I need to double-click the .exe file, and the loader loads the .exe file into RAM, and the ROM is not used here at all. So the ROM is not used when the loader loads the .exe file into RAM; the ROM is only used during compilation to assign data values to the variable.

Teacher said: Yes.

I thought maybe engineers have compromised on the name, meaning, and function of the ROM, and instead of having only non-erasable memory inside the ROM, the engineers might have constructed an erasable memory adjacent to the non-erasable memory. No big deal. Then the erasable memory would have a laser mechanism that helps delete the data already present inside the erasable memory once the laser is switched on, and compilers are programmed to switch this laser on and off. This is totally achievable. But I thought this is good only as long as the process is fast enough. I mean, sending data to a different memory (the ROM), especially an erasable memory with a laser mechanism to delete the data already present in it, is quite good but very time consuming, and we need a faster mechanism. But I didn't bother to ask him how we are going to deal with this time-consumption problem when we use the concept of a different memory (the ROM) to temporarily store/assign data values to the 'const'-declared variables inside an erasable memory, with a laser mechanism to delete already present data, built adjacent to the non-erasable memory inside the ROM. Like in the figure below. Please see.

[Attached figure: TailoredROM.jpg]


My thoughts on what can be right: I think that if I don't use the ROM at all in assigning data values to the variables that are declared as 'const', then I must program the compiler to do all the assigning work while the data is in RAM. That way, I won't have to use any other special memory for the assignment of data values to the variables declared as 'const', or to any other variables. I think I will come across all of these concepts when I try to understand the development/programming of operating systems and compilers.

There is another thing that I want to know about the system that was discussed in the last paragraph of post # 27.

I have great respect for people who worked/work at the highest level of technology. One of my priorities is knowing the initial plans of the things these people worked on. For instance, if Johann Carl Friedrich Gauss and the person who invented the exponential function had told us the initial plans and how they got the initial ideas that helped them come up with their beautiful creations of the Gaussian function and the exponential function, it would have been a great thing today.

So, please take a look at this scenario:

It's the 1960s and you came into your office and started working on your system; you were flipping switches and observing the outcome on your console, and then you realized that there is a brand new Intel computer from 2010 A.D. sitting on your desk. You opened the case of the PC and figured out exactly what each piece of hardware does in about an hour, except one thing: the hard drive! Now, you thought, 'It's no big deal, let me use my switches to feed data into this new Intel computer from 2010 A.D.,' and you replaced the hard drive with the switch mechanism. The switches are now connected to the RAM, the RAM is connected to the processor, and the processor is connected to the console.

[Attached figure: Switchesmechanism.jpg]


I did not bother to draw the ROM and the graphics card. Did they exist back in the 1960s?

My question is: If you set up a system like the one described in the above scenario, I think the system is only designed to measure the performance of the processor. If I speak a little technically, you knew exactly what voltage and electrical energy were required to make the system work.
I have no idea what a "bootstrap loader" is.
Your answer will help me find out why exactly switches were used in your creation. I find it very elegant to use switches to tell the processor to do what you want the system to do. That way you can have ultimate control over the processes of the machine. As you mentioned in post# 27, machine language is the same as assembly language. I want to make myself clear about what I meant when I wrote "(not Assembly language)" in my post# 26, so that you can tell me where I was wrong.

This is what I thought about the difference between Machine language and Assembly language:

I thought that when I am dealing with machine language, I am only dealing with 0s and 1s. So, if I want to use machine language to do complicated programming, then while I am programming I am supposed to have an interface in which I can write the 0s and 1s that represent my actual program and then feed them directly to the processor of the system. I can make this portable by using a memory device and burning onto it all these 0s and 1s that I wrote in machine language on an interface. I will attach this memory to my system, so that when I switch on my system the program is runnable.

In assembly language I will be using mnemonics like ADD, MOV, etc., instead of 0s and 1s. As you said that each machine code instruction is the same as each assembly code instruction, I am feeling a little relieved, because I intend to use a language that will help me interact with the components at the lowest level possible. So assembly language is good for that, I think. Your thoughts, please.

So, please comment on my reasoning and help me find the wrong things in this post.
I have another question, related to the HDD and assembly language and the mechanism that reads the bits from the HDD, in my coming post.
 
  • #33
pairofstrings said:
I somehow managed to realize that the answers to my questions lie in studying computer architecture and doing embedded system programming. ... ROM
You're still confusing when things happen. In most cases of embedded system programming, the compiling is done on another computer, perhaps a standard PC. The ROM on the PC is not used during this process. Most programmable devices use EEPROM (electrically erasable programmable ROM) to store some or all of the code. In the case of a hard drive, some of the code could be stored on hidden sectors of the hard drive. For most devices, there is usually a small ROM (you can call this a "bootstrap" loader) with just enough functionality to download a program and run it. The downloaded program contains code to program the EEPROM and the image of the program to be programmed into the EEPROM.

For embedded programming there's an extra step after the linker. This is the normal sequence:

compile source files to produce object files
link those object files and object files from libraries to produce an exe file
use a tool to convert the exe into an image file appropriate for the embedded system

Getting back to how "const" is implemented, it could be ignored by a compiler in terms of output and only used to check for source code errors. The classic standard for a program image is:

code
initialized data and where it should be moved to
address and length of data to be zeroed out

At boot time, the code moves the initialized data into RAM, then zeroes out some more RAM, then sets up the stack (or stacks, for a multi-tasking system), and starts up. "const" data may end up in RAM depending on the architecture of the CPU.
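To illustrate that classic image layout (my sketch; the section names and exact placement vary with the toolchain and CPU), a compiler typically sorts file-scope variables like this:

Code:
/* sections.c - where different kinds of variables typically end up
   (illustration only; details vary by compiler, linker, and target). */
const int limit = 100;   /* read-only data: stored in the program image itself  */
int counter = 5;         /* initialized data: stored in the image and copied
                            into RAM at start-up                                 */
int scratch[256];        /* uninitialized data: only its address and length are
                            recorded; the start-up code zeroes it out            */

int main(void)
{
    return counter + limit + scratch[0];
}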

pairofstrings said:
loading in bootstrap programs via switches
Old computers used core memory, which retains data even when power is off, so some of the older mini-computer systems required that the operator manually toggle in the binary data that represented the code and data for a small "bootstrap" program if the core memory got corrupted. When computers switched to using RAM, most of them used ROM or PROM for the "bootstrap" program. The "bootstrap" program would then load in a program from some other device on the computer (punched cards, paper tape, magnetic tape, disk).
 
Last edited:
  • #34
rcgldr said:
Getting back to how "const" is implemented, it could be ignored by a compiler in terms of output and only used to check for source code errors.

Not to mention declaring a variable "const volatile". Yup, you can do that in C, and it really does have a well-defined meaning :smile:
 

FAQ: C and Java questions that I asked my teacher

1. What is the difference between C and Java?

C and Java are both programming languages, but they have some key differences. C is a low-level language, meaning it is closer to the machine code that computers understand. Java is a high-level language, which means it is more abstract and easier for humans to understand. C is typically compiled directly into machine code for a specific platform, while Java is compiled into an intermediate bytecode form that is then executed (interpreted or just-in-time compiled) by a virtual machine.

2. Which language is more suitable for system programming, C or Java?

C is generally considered to be more suitable for system programming, as it is a low-level language and can interact more closely with hardware and operating systems. Java, on the other hand, is better suited for application development and web programming.

3. What are the main features of C and Java?

C is a procedural language, meaning it follows a step-by-step approach to solving problems. It also has a small and simple syntax, making it relatively easy to learn. Java, on the other hand, is an object-oriented language, meaning it focuses on creating objects that interact with each other to solve problems. It also has a larger and more complex syntax compared to C.

4. Can C and Java be used together?

Yes, C and Java can be used together. Java's syntax was heavily influenced by C and C++, while providing a higher level of abstraction and platform independence. C code can also be called from Java programs using the Java Native Interface (JNI).

5. Which language is more efficient, C or Java?

It is difficult to determine which language is more efficient, as it depends on the specific task and implementation. C is generally considered to be faster and more memory-efficient, as it is closer to the machine code. However, Java has a garbage collector that manages memory, which can lead to more efficient memory usage in some cases. Ultimately, the efficiency of a program depends on the skill of the programmer and the optimization techniques used.
