Understanding Virtually Indexed, Virtually Tagged Caches

  • Thread starter computerorg
  • #1
computerorg
Hi all,

I'm trying to understand the difference between a virtually indexed, virtually tagged cache and a virtually indexed, physically tagged cache. I've read some material but can't get an intuitive grasp of the concept, especially the virtually indexed, virtually tagged cache.

The tag in a cache is basically the location of the data in physical memory, right? In that case, how can a virtual address be used as a tag?

Any help is appreciated.
 
  • #2
computerorg said:
difference between virtually indexed virtually tagged cache and virtually indexed physically tagged cache.

Note I found the wiki cache entry structure section to be confusing so I proposed a change in the discussion page:

http://en.wikipedia.org/wiki/Talk:CPU_cache#Cache_entry_structure

wiki articles:

http://en.wikipedia.org/wiki/CPU_cache#Address_translation

http://en.wikipedia.org/wiki/Translation_lookaside_buffer

Each cache row entry includes data, a tag, and a valid bit (there may be other information for replacement algorithms). Each cache row entry is addressed by the index (unless it's a fully associative cache), and the displacement is used to address the data within a cache row.
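To make that concrete, here's a rough sketch in Python of how an address is split up for a direct-mapped cache. The line size and row count are made-up example values, not from any particular CPU:

```python
# Hypothetical sizes for illustration: 64-byte cache rows, 256 rows.
LINE_SIZE = 64    # bytes per row  -> 6 displacement bits
NUM_ROWS = 256    # rows           -> 8 index bits

def split_address(addr):
    """Split an address into (tag, index, displacement)."""
    displacement = addr % LINE_SIZE          # byte within the row
    index = (addr // LINE_SIZE) % NUM_ROWS   # selects the row
    tag = addr // (LINE_SIZE * NUM_ROWS)     # remaining upper bits
    return tag, index, displacement

print(split_address(0x12345678))   # (18641, 89, 56)
```

The cache compares the stored tag of the selected row against the tag bits of the requested address; a match plus a set valid bit is a hit.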

The tag in a cache is basically the location of the data in the physical memory right?
Not if the tag and index are both based on the virtual address, since the data is accessed from the cache, not from memory. The physical address isn't required once the data is loaded into a virtually addressed cache. The cache would probably still need physical address information, though: for writes to the cache (so the writes can be flushed to physical memory) and for "snooping" for any writes to physically addressed memory, such as DMA-type memory writes, in order to invalidate or update cache entries.

The wiki article mentions using a virtually indexed, physically tagged scheme for level 1 caches. For a non-fully-associative cache, a portion of the virtual address is used to index the cache row(s) (n rows for an n-way cache), the physical tag data is then read from the cache row(s) in parallel with the TLB virtual-to-physical address translation, and then the cache and TLB physical tags are compared. Since the index into the cache is based on a virtual address, the physical tag in a cache row entry needs to include all address bits except for the displacement portion.
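A loose software model of that lookup (the TLB contents, addresses, and sizes below are all invented for illustration; also note that in hardware the cache row read and the TLB translation run in parallel, which sequential code can't really show):

```python
PAGE_SIZE = 4096   # 12-bit page offset (example value)
LINE_SIZE = 64
NUM_ROWS = 64      # 64 rows * 64 bytes = 4 KB per way

tlb = {0x400: 0x9A}   # virtual page number -> physical page number (made up)
cache = {}            # index -> (physical tag, data)

def vipt_lookup(vaddr):
    index = (vaddr // LINE_SIZE) % NUM_ROWS   # from the *virtual* address
    ppn = tlb[vaddr // PAGE_SIZE]             # TLB translation (parallel in hardware)
    paddr = ppn * PAGE_SIZE + vaddr % PAGE_SIZE
    ptag = paddr // LINE_SIZE                 # all physical bits above the displacement
    entry = cache.get(index)
    if entry is not None and entry[0] == ptag:
        return "hit", entry[1]
    return "miss", None

# Simulate a line already loaded by an earlier miss:
cache[(0x400123 // LINE_SIZE) % NUM_ROWS] = (0x9A123 // LINE_SIZE, b"example data")
print(vipt_lookup(0x400123))   # ('hit', b'example data')
```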

For a fully associative cache, the index isn't used, so the address tag needs to be all virtual or all physical.
 
  • #3
Thank you for your reply!

I was reading a paper about the Synonym Lookaside Buffer. If multiple virtual addresses map to the same physical address, they are synonyms. The ambiguity is resolved by designating one as the primary virtual address and the rest as secondary virtual addresses, and providing a synonym lookaside buffer to translate the secondary virtual addresses to the primary one.

The paper says that the Translation Lookaside Buffer has the information about whether a virtual address is primary or secondary. I can't understand how the TLB has this information. Can someone please explain?

Here's the link to the paper:
http://www.computer.org/portal/web/csdl/doi/10.1109/TC.2008.108
 
  • #4
computerorg said:
I was reading a paper about the Synonym Lookaside buffer. If multiple virtual addresses map to the same physical address, then they are synonyms.
I'm not a member of IEEE, so I don't have access to that paper. I was able to read the summary.

The paper says that the Translation Lookaside buffer has the information whether the virtual address is primary/secondary. I can't understand how the TLB has this information.
Not being able to read the paper, I'm not sure why this was stated.

The summary refers to a level 1 cache, which I assume refers to the level 1 cache as implemented on an x86-type CPU, which also uses a TLB. These use virtual address bits for indexing, but physical address bits (all but the displacement address bits) for the tags. You don't get synonyms with this scheme, and since there are no synonyms, there aren't any primary/secondary virtual addresses.
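One way to see why, at least when the index and displacement bits all fit within the page offset (the usual constraint for this to work; the numbers below are made up): two virtual addresses mapped to the same physical page necessarily select the same cache row and carry the same physical tag, so they can't create two separate copies of the block:

```python
PAGE_SIZE, LINE_SIZE, NUM_ROWS = 4096, 64, 64   # index + displacement = 12 bits = page offset

page_table = {0x111: 0x55, 0x222: 0x55}   # two virtual pages -> one physical page (synonyms)

def index_and_ptag(vaddr):
    index = (vaddr // LINE_SIZE) % NUM_ROWS   # virtual index
    paddr = page_table[vaddr // PAGE_SIZE] * PAGE_SIZE + vaddr % PAGE_SIZE
    return index, paddr // LINE_SIZE          # physical tag

# Both synonym addresses land on the same row with the same tag:
print(index_and_ptag(0x1112C0) == index_and_ptag(0x2222C0))   # True
```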

The proposed scheme appears to change this to one accessed solely by virtual address bits, which works, but you'd still need physical address information stored in the cache in order to maintain cache coherency by "snooping" or "snarfing" the memory bus for any external writes or paging activity.

http://en.wikipedia.org/wiki/Cache_coherence

What I don't understand is why you'd want a cache design that allows a synonym to be created. It seems this can only happen if you allow cache data block boundaries to overlap with each other, rather than keeping them separated at fixed-size blocks based on physical memory boundaries, perhaps based on the largest single RAM memory access, which would be triple-wide on a Core i7 CPU.
 
  • #5
I'd like to know what a "short miss" in a cache is. How is it different from a cache miss?
 
  • #6
computerorg said:
I'd like to know what a "short miss" in a cache is. How is it different from a cache miss?
A cache miss means that the data block requested by the virtual address isn't in the cache. A "short miss" apparently indicates some type of conflict restricting writes to a cached data block. I'm not familiar with this, and I don't know how consistently the term "short miss" is used. I did a web search and found a patent that includes the term "short miss":

http://www.freepatentsonline.com/5113514.html

In the patent description, the same data block can end up in two (or more) caches, and if so the cache state for that data block is marked as shared. A short miss occurs when a write is done to cached operand data marked as shared (because that same data is cached in a secondary cache as well). The patent describes how it deals with such situations.
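Very loosely, the distinction might be sketched like this (the state names are generic MESI-style terminology, and the patent's actual handling is more involved than this):

```python
def classify_write(line_state):
    """Classify a CPU write by the state of the cached line (None = not cached)."""
    if line_state is None:
        return "cache miss"    # block isn't in the cache at all
    if line_state == "shared":
        # Data is present, but other caches hold copies; those copies
        # must be invalidated before the write can proceed.
        return "short miss"
    return "write hit"         # exclusive/modified: write proceeds directly

print(classify_write("shared"))   # short miss
```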
 
