Programming determines how well memory hierarchy is utilized

  • #1
PainterGuy
TL;DR Summary
a question about memory hierarchy performance
Hi,

Please have a look at the attachment. It says, "In a computer system, the overall processing speed is usually limited by the memory, not the processor. Programming determines how well a particular memory hierarchy is utilized. The goal is to process data at the fastest rate possible."

I understand that programming is actually a set of directives for computer hardware to take 'logical' steps to get a certain job done. It does make sense that if there is useless redundancy in those logical steps, it wastes time as well as storage of a computer system. For example, if there are three positive numbers A, B, and C, and if A=B and B<C, then obviously A<C and you don't even need to make a comparison between A and C. But if the logical step of comparing A and C is taken, it wastes storage as well as processing time.

I'm not sure if what I said above is completely correct; I was only trying to convey how I think of it. I have worked in C++, and much of the time you just write code and a compiler translates it into machine language. I'd say that how well a compiler handles its translation job also greatly determines how well a particular memory hierarchy is utilized. Do I have it right? Or perhaps a compiler's translation also plays a role, but not as much as the programming itself? Here, I'm thinking of programming as 'high level language' programming.

Could you please help me to get a better understanding of it? Thanks a lot.
 

Attachments

  • memory_hierarchy1.jpg
  • #2
Where does this attachment come from? Can you give a reference? A link?
 
  • #3
PainterGuy said:
In a computer system, the overall processing speed is usually limited by the memory, not the processor. Programming determines how well a particular memory hierarchy is utilized. The goal is to process data at the fastest rate possible.

What this quote is talking about is the fact that, in modern computer systems, the CPU is faster than the memory is, so any program will spend most of its time waiting on memory to be read from or written to, not waiting for the CPU to compute the next instruction. So there is much more speed gain to be had by optimizing how programs read from and write to memory, as opposed to optimizing how programs execute CPU instructions.

PainterGuy said:
It does make sense that if there is useless redundancy in those logical steps, it wastes time as well as storage of a computer system.

This is true, but it's not the kind of efficiency the attachment you give is talking about. Eliminating redundant logical steps will make the program not need to execute as many CPU instructions to accomplish the same goal; but, as above, programs don't spend the majority of their time waiting for the CPU to execute instructions, so the speed gains to be had by eliminating redundant logical steps are limited.

PainterGuy said:
I'd say that how well a compiler handles its translation job also greatly determines how well a particular memory hierarchy is utilized.

Yes, that's true; how a compiler translates source code into machine code can have a huge effect on how efficiently the program reads from and writes to memory.
 
  • #4
A useful perspective on memory utilization is to consider memory access as I/O (input/output). Accessing a data structure loaded into main memory is faster than access to intermediate memory and much faster than accessing storage. A similar paradigm applies to operating system performance: program instructions already loaded and resident execute faster than program sets waiting on a queue, and much faster than those that must be read from storage.

If the above is reasonable and accurate, then code/memory optimization depends strongly on the choice and design of data structures -- including variables, objects, and allocations -- within the available computer architecture.

There are several applicable Insight articles including this tutorial by @QuantumQuest https://www.physicsforums.com/insights/intro-to-data-structures-for-programming/

This article conforms with an I/O model of memory hierarchical optimization with attention to iterative structure placement in code threads. https://suif.stanford.edu/papers/mowry92/subsection3_1_2.html
 
  • #5
PeterDonis said:
Where does this attachment come from? Can you give a reference? A link?

Thank you.

It comes from the book Digital Fundamentals by Thomas Floyd.
 
  • #6
I have included some quotes from a related thread on "cache controller" below.

PeterDonis said:
So there is much more speed gain to be had by optimizing how programs read from and write to memory

PainterGuy said:
So, it might be possible that when a certain program is written, including the OS, it is written in such a way as to help the microprocessor coordinate with the cache controller to speed up the action.

Rive said:
In modern CPUs there are usually instructions which can modify cache behavior and trigger certain functionality, but it is hard to use them efficiently. For most programmers they are just a kind of 'eye candy'. Compilers use them regularly as far as I know, but most cache management is still done by the hardware.
 
  • #7
PainterGuy said:
I'd say that how well a compiler handles its translation job also greatly determines how well a particular memory hierarchy is utilized. Do I have it right?

Some programs are I/O intensive, others are memory intensive, and some depend on raw computing capacity. (Many others are just a pile of rubbish.) Optimization -- fitting the software to get the best performance on given hardware -- is a difficult topic and always depends on the actual software.
Compilers (and their different optimization levels) do have an effect, but it is not exclusive to memory.
Don't limit the topic to memory alone.
 

FAQ: Programming determines how well memory hierarchy is utilized

What is memory hierarchy?

Memory hierarchy refers to the organization of different types of memory in a computer system. It is designed to optimize the use of memory resources and improve system performance.

Why is memory hierarchy important in programming?

Memory hierarchy plays a crucial role in programming as it determines how efficiently memory resources are utilized. By understanding the hierarchy, programmers can make informed decisions about which type of memory to use for different tasks, leading to better performance.

How does programming affect memory hierarchy?

Programming directly impacts memory hierarchy as the design and implementation of code can determine how well the system utilizes the available memory resources. Efficient programming can lead to better utilization of the memory hierarchy and improved performance.

What factors influence the utilization of memory hierarchy in programming?

The utilization of memory hierarchy in programming is influenced by various factors such as the type of programming language used, the design of the application, the data structures and algorithms implemented, and the overall efficiency of the code.

How can programmers improve the utilization of memory hierarchy?

There are several ways programmers can improve the utilization of memory hierarchy, such as optimizing code for better memory usage, using appropriate data structures and algorithms, and minimizing unnecessary memory operations. Additionally, understanding the memory hierarchy and its impact on performance can help programmers make informed decisions about memory usage in their code.
