Exploring Data in the CPU Cache

  • Thread starter: fluidistic
  • Tags: cpu, data
In summary, the CPU cache is a small, fast storage area on the processor where the CPU keeps data and instructions it needs to access quickly. This helps programs with loops as well as frequently reused data. The CPU cache is much smaller than RAM, but far faster to access.
  • #36
FactChecker said:
And they would probably do it with frequent consultation with the manufacturer.
Common folk have to rely on the various 'performance optimization guide' documents, like this. Almost all CPU (and GPU) manufacturers provide similar documents.
You have to have some serious background to get (real) personal attention.

Apart from professionals, there are many amateurs who try their hand at this field. It usually comes down to performance bottlenecks, and even without access to professional programming materials, such attempts are surprisingly frequent.
 
  • #37
Rive said:
Apart from professionals, there are many amateurs who try their hand at this field. It usually comes down to performance bottlenecks, and even without access to professional programming materials, such attempts are surprisingly frequent.
Ok. I'll buy that. My experience was in an unusual environment.
 
  • #38
phinds said:
But it is NOT just data. Not just things like loop counters, it is the program code as well as data.

True, but this takes care of itself. When an instruction is executed, the odds are very high that the next instruction executed is the next instruction in memory, and that was read into the cache at the same time the instruction in question was loaded. The data cache is what the programmer needs to think about.

newjerseyrunner said:
You can not access the cache in any language other than assembly

Not true. In the Intel Knights Landing, there is high speed memory (called MCDRAM) that is used as a cache between the main memory and the chip. The programmer can let it cache automatically, or she can use a variation on malloc to allocate memory directly on the cache, thus pinning those objects into the fast memory.
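For the curious, here is a minimal sketch of that "variation on malloc", using the memkind library's hbwmalloc interface (hbw_check_available, hbw_malloc, hbw_free); the array and its size are just illustrative:

Code:
#include <stdio.h>
#include <stdlib.h>
#include <hbwmalloc.h>        /* memkind's high-bandwidth-memory API */

int main(void)
{
    size_t n = 1u << 20;      /* 1M doubles -- an arbitrary example size */
    int use_hbw = (hbw_check_available() == 0);

    /* hbw_malloc() places the allocation in MCDRAM, pinning it into
       the fast memory; fall back to ordinary RAM if there is none. */
    double *a = use_hbw ? hbw_malloc(n * sizeof *a)
                        : malloc(n * sizeof *a);
    if (!a)
        return 1;

    for (size_t i = 0; i < n; i++)   /* bandwidth-bound work benefits */
        a[i] = 2.0 * i;

    if (use_hbw)
        hbw_free(a);
    else
        free(a);
    return 0;
}

Link with -lmemkind. If the machine exposes no allocatable high-bandwidth memory (e.g. MCDRAM configured purely as cache), the sketch simply falls back to ordinary malloc.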

In general, one can do cache management indirectly by careful placement of objects - one can calculate c = a + b in such a way that when one of a or b is read into the cache, the other is as well.
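A hypothetical C sketch of that placement trick: interleaving a and b in one array of structs means the cache-line fill that brings in a[i] brings b[i] along with it, whereas two separate arrays are two distinct memory streams:

Code:
#include <stddef.h>

/* Interleaved layout: a[i] and b[i] share a cache line, so reading
   one pulls the other into cache "for free". */
struct pair { double a, b; };

double sum_interleaved(const struct pair *p, size_t n)
{
    double c = 0.0;
    for (size_t i = 0; i < n; i++)
        c += p[i].a + p[i].b;   /* one stream of cache-line fills */
    return c;
}

/* Separate arrays: the same arithmetic touches two memory streams,
   and each element may cost two cache-line fills. */
double sum_split(const double *a, const double *b, size_t n)
{
    double c = 0.0;
    for (size_t i = 0; i < n; i++)
        c += a[i] + b[i];
    return c;
}

Whether the interleaved form actually wins depends on the access pattern; with two clean sequential streams, the hardware prefetcher often hides the difference.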
 
  • #39
Vanadium 50 said:
True, but this takes care of itself. When an instruction is executed, the odds are very high that the next instruction executed is the next instruction in memory, and that was read into the cache at the same time the instruction in question was loaded.
But there are important, clever exceptions. For instance, branch predictors usually assume that the last instruction of a loop is followed by the first instruction of the loop, because a loop usually runs several iterations before it is done. It is very hard to do better than this automatic optimization; it's usually best to respect it and work with it. Unfortunately, in a lot of safety-critical work, optimization must be kept at very low levels or turned off completely.
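One of the few portable-ish ways to cooperate with that machinery rather than fight it is a compiler hint such as GCC/Clang's __builtin_expect, which tells the compiler which way a branch usually goes so it can lay out the hot path as straight-line, cache-friendly code. A small illustrative sketch (the function and names are made up):

Code:
/* GCC/Clang branch-prediction hints */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

int count_errors(const int *buf, int n)
{
    int errors = 0;
    for (int i = 0; i < n; i++) {
        /* Telling the compiler the error path is rare lets it keep
           the hot path as fall-through code in the instruction cache. */
        if (unlikely(buf[i] < 0))
            errors++;
    }
    return errors;
}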
 
  • #40
phinds said:
I don't think cache management is part of the O.S.; it is part of the on-board processing, the "computer within the computer" as it were. I'm not 100% sure that that's always the case.
Yes, for most processors, the cache is primarily a processor feature that operates with little or no direct involvement from the software.

Here are some situations where knowledge of cache performance is important:
1) Compiler writing. This is perhaps the most important.
2) Debugging with hardware debugging tools. The host is the processor where the debugging software is running; the target is the processor being debugged. When the host and target are the same processor, the caching can be invisible. But when they are not, the target may have asynchronous debugging features, and without awareness of the caching, the debugging environment can produce perplexing situations.
3) Multicore environments. When you have several processors on a single chip that share memory, you will be provided machine-level instructions such as "instruction sync" and "data sync" that force the cache to become synced with memory. You may also have mechanisms (such as supplemental address spaces) for accessing memory without caching. (See the sketch after this list.)
4) If instruction timing becomes critical, you will need to consider caching, and that can be impractical. What you really need to do is make the instruction timing non-critical.
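Regarding item 3, here is a minimal C11 sketch of the same idea one level up from machine code: atomic stores, loads, and fences compile down to whatever sync/barrier instructions the target architecture requires (the variable names are invented for illustration):

Code:
#include <stdatomic.h>

int payload;              /* ordinary shared data            */
atomic_int ready = 0;     /* flag that another core polls    */

void producer(void)
{
    payload = 42;
    /* Release store: on hardware that needs it, the compiler emits a
       data-sync/barrier instruction so the payload write is visible
       to any core that later sees ready == 1. */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int consumer(void)
{
    /* Acquire load pairs with the release store above. */
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;                 /* spin until the flag is set */
    return payload;       /* guaranteed to read 42 */
}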

So getting back to the part of the original question:
fluidistic said:
Is it possible to store a small .txt file into the cache?
Kind of, but not really.
If you read a text file into RAM and begin searching through it, it will be pulled into cache memory. If it's less than half the size of the cache, it is likely to be pulled in in its entirety.

But it gets pulled in implicitly, not because of explicit instructions. And if you continuously interrupt the process with other memory-intensive procedures, it may never be wholly included in cache.
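A small sketch of what that looks like in C (the filename is hypothetical): the first scan pulls the buffer into cache from RAM; if nothing evicts it, the second scan is served from cache, all without a single explicit cache instruction:

Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("notes.txt", "rb");   /* hypothetical small text file */
    if (!f)
        return 1;

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    char *buf = malloc(size);
    if (!buf || fread(buf, 1, size, f) != (size_t)size)
        return 1;
    fclose(f);

    /* First scan: loads the buffer into cache line by line (from RAM).
       If the file is much smaller than the cache, all of it stays. */
    long newlines = 0;
    for (long i = 0; i < size; i++)
        if (buf[i] == '\n') newlines++;

    /* Second scan: same data, now (likely) served from cache --
       implicitly, not because of any explicit instruction. */
    long spaces = 0;
    for (long i = 0; i < size; i++)
        if (buf[i] == ' ') spaces++;

    printf("%ld lines, %ld spaces\n", newlines, spaces);
    free(buf);
    return 0;
}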
 
  • #42
The primitive video game 'Space Invaders' could be done in one kilobyte.
At the time that was very impressive.
 
  • #43
rootone said:
The primitive video game 'Space Invaders' could be done in one kilobyte.

1/8 kilobyte on the Atari 2600. Not as nice as the arcade version but still impressive what you can do with 128 bytes.

BoB
 
  • #44
rbelli1 said:
https://software.intel.com/en-us/bl...dram-high-bandwidth-memory-on-knights-landing
In what world is 16GB a small amount of RAM? It's not an impressively large amount but still quite a lot.

Swing that at me in a few years and it will probably be a whole different story.

BoB
Interesting. It looks like the MCDRAM can be configured entirely as cache ("cache mode"), entirely as addressable memory ("flat mode"), or split between the two ("hybrid mode"), depending on the BIOS settings. https://colfaxresearch.com/knl-mcdram/ has examples of how to use it in each case. So it can be directly controlled by the programmer as addressable memory and be faster than Level-3 cache.
 
  • #45
rbelli1 said:
1/8 kilobyte on the Atari 2600. Not as nice as the arcade version but still impressive what you can do with 128 bytes.

BoB
Atari, yeah: things like the CPU directly addressing video RAM.
What, er... video RAM?
 
  • #46
rootone said:
CPU directly addressing video RAM

No video RAM on the 2600 and contemporary machines. They directly addressed the beam.

Direct VRAM access was standard on all CGA, EGA, VGA, and XGA systems, and on most systems of that era from all brands. It was mapped into the normal address space. Some systems used that ability to access more colors than were possible with datasheet operation.

FactChecker said:
programmer as addressable memory and be faster than Level-3 cache
16GB of close DRAM is certainly a performance opportunity. Bump that to SRAM and you can fly.

BoB
 
  • #47
I think this link would be really appropriate as an answer to the first question, as the thread seems to have 'wandered'. Look for a discussion of data locality.

https://www.akkadia.org/drepper/cpumemory.pdf

This is a bit old, but still very relevant.
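To make the data-locality point concrete, here is the classic example in the spirit of that paper (a sketch, not code from it): summing a matrix along its row-major layout versus striding across it. Both functions do identical arithmetic; only the memory access pattern differs.

Code:
#define N 1024

double m[N][N];   /* C stores this row-major: m[i][0..N-1] are contiguous */

/* Cache-friendly: walks memory sequentially, so each 64-byte line
   loaded serves eight consecutive doubles. */
double sum_rows(void)
{
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += m[i][j];
    return s;
}

/* Cache-hostile: jumps N*8 bytes per access, touching a new cache
   line every iteration and evicting lines before their neighbors
   are used. */
double sum_cols(void)
{
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += m[i][j];
    return s;
}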
 
  • Like
Likes FactChecker
  • #48
rbelli1 said:
In what world is 16GB a small amount of RAM?

In a world where it is shared by 256 processes.
 
  • #49
Vanadium 50 said:
In a world where it is shared by 256 processes.

I just looked at the Intel Xeon Phi series. I had no idea anything like that existed.

BoB
 