
When Is a CPU’s Cache Flushed Back to Main Memory?


If you are just starting to learn how multi-core CPUs, caching, cache coherency, and memory work, it may all seem a bit confusing at first. With that in mind, today’s SuperUser Q&A post has answers to a curious reader’s question.

Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

The Question

SuperUser reader CarmeloS wants to know when a CPU’s cache is flushed back to main memory:

If I have a CPU with two cores and each core has its own L1 cache, is it possible that Core1 and Core2 both cache the same part of memory at the same time? If it is possible, what will the value of main memory be if both Core1 and Core2 have edited their values in cache?

When is a CPU’s cache flushed back to main memory?

The Answer

SuperUser contributors David Schwartz, sleske, and Kimberly W have the answer for us. First up, David Schwartz:

If I have a CPU with two cores and each core has its own L1 cache, is it possible that Core1 and Core2 both cache the same part of memory at the same time?

Yes, performance would be terrible if this were not the case. Consider two threads running the same code: you want that code in both L1 caches.

If it is possible, what will the value of main memory be if both Core1 and Core2 have edited their values in cache?

The old value will be in main memory, which will not matter since neither core will read it. Before a modified value is evicted from a cache, it must be written back to memory. Typically, some variant of the MESI protocol is used. In the traditional implementation of MESI, if a value is modified in one cache, it cannot be present at all in any other cache at that same level.
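The MESI behavior described above can be sketched as a toy simulation: when one core writes a line, any peer copy at the same level is invalidated, and a later read from the other core forces the dirty line to be written back first. This is a minimal illustrative model, not a real hardware interface; all names and addresses here are made up.

```python
# Toy two-core MESI sketch: Modified / Exclusive / Shared / Invalid.
MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

class ToyCache:
    def __init__(self):
        self.lines = {}                        # addr -> (state, value)

    def read(self, addr, memory, peers):
        state, value = self.lines.get(addr, (INVALID, None))
        if state != INVALID:
            return value                       # cache hit
        for peer in peers:                     # miss: snoop the peer caches
            pstate, pvalue = peer.lines.get(addr, (INVALID, None))
            if pstate == MODIFIED:
                memory[addr] = pvalue          # force a write-back first
                peer.lines[addr] = (SHARED, pvalue)
            elif pstate in (EXCLUSIVE, SHARED):
                peer.lines[addr] = (SHARED, pvalue)
        value = memory[addr]
        shared = any(peer.lines.get(addr, (INVALID, None))[0] != INVALID
                     for peer in peers)
        self.lines[addr] = (SHARED if shared else EXCLUSIVE, value)
        return value

    def write(self, addr, value, peers):
        for peer in peers:                     # invalidate every peer copy
            if addr in peer.lines:
                peer.lines[addr] = (INVALID, None)
        self.lines[addr] = (MODIFIED, value)   # sole valid copy

memory = {0x40: 1}
core1, core2 = ToyCache(), ToyCache()
core1.read(0x40, memory, [core2])   # core1 caches the line (Exclusive)
core2.read(0x40, memory, [core1])   # both copies now Shared
core2.write(0x40, 7, [core1])       # core1's copy is invalidated
# memory[0x40] is still the old value 1: the stale value sits in main
# memory until the Modified line is written back.
```

Note how the stale value lingers in main memory, exactly as the answer describes: it is only overwritten once the Modified line is written back (here, when the other core next reads the address).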

Followed by the answer from sleske:

Yes, having two caches cache the same memory region can happen and is actually a problem that occurs a lot in practice. There are various solutions, for example:

  • The two caches can communicate to make sure they do not disagree
  • You can have some sort of supervisor which monitors all caches and updates them accordingly
  • Each processor monitors the memory areas that it has cached, and when it detects a write, it throws out its (now invalid) cache

The problem is called cache coherency and the Wikipedia article on the topic has a nice overview of the problem and possible solutions.
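The "supervisor" idea from the second bullet can be sketched as a directory that records which caches hold a copy of each address and pushes updates to them on a write (a write-update scheme). This is a deliberately simplified illustration; the class and method names are invented for this sketch.

```python
class Directory:
    """Tracks, per address, which caches currently hold a copy."""
    def __init__(self):
        self.holders = {}                     # addr -> set of caches

    def note_read(self, addr, cache):
        self.holders.setdefault(addr, set()).add(cache)

    def note_write(self, addr, value, writer):
        # Push the new value to every other holder (write-update),
        # so no cache is left with a stale copy.
        for cache in self.holders.get(addr, set()):
            if cache is not writer:
                cache.data[addr] = value

class DirCache:
    def __init__(self, directory):
        self.directory = directory
        self.data = {}                        # addr -> value

    def read(self, addr, memory):
        if addr not in self.data:
            self.data[addr] = memory[addr]    # fill on miss
            self.directory.note_read(addr, self)
        return self.data[addr]

    def write(self, addr, value):
        self.data[addr] = value
        self.directory.note_read(addr, self)
        self.directory.note_write(addr, value, self)

directory = Directory()
memory = {0x10: 5}
c1, c2 = DirCache(directory), DirCache(directory)
c1.read(0x10, memory)
c2.read(0x10, memory)
c1.write(0x10, 9)   # the directory updates c2's copy as well
```

Real directory-based coherence protocols are far more involved, but the core idea is the same: a central record of who holds what, consulted on every write.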

And our final answer from Kimberly W:

To answer the question in your post’s title, it depends on the cache’s write policy. If it is write-back, a modified block is only flushed to main memory when the cache controller has no choice but to put a new cache block into already occupied space. The block that previously occupied the space is evicted, and its value is written back to main memory.

The other policy is write-through. In that case, any time a cache block is written at level n, the corresponding block at level n+1 is updated as well. It is similar in concept to filling out a form with carbon paper underneath; whatever you write on the top sheet is copied onto the sheet below. This is slower because it involves more write operations, but the values across cache levels stay consistent. In the write-back scheme, only the cache closest to the core (for example, L1) is guaranteed to hold the most up-to-date value for a particular memory block.
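The contrast between the two policies can be shown with a toy one-level cache, where a plain dictionary stands in for the next level (n+1). This is an illustrative sketch under those assumptions, not a model of any real controller.

```python
class Cache:
    def __init__(self, policy, memory):
        self.policy = policy            # "write-through" or "write-back"
        self.memory = memory            # stands in for level n+1
        self.data = {}                  # addr -> value
        self.dirty = set()              # addrs not yet flushed

    def write(self, addr, value):
        self.data[addr] = value
        if self.policy == "write-through":
            self.memory[addr] = value   # carbon copy: propagate at once
        else:
            self.dirty.add(addr)        # write-back: defer until eviction

    def evict(self, addr):
        if addr in self.dirty:
            self.memory[addr] = self.data[addr]   # flush the dirty block
            self.dirty.discard(addr)
        self.data.pop(addr, None)

mem_wt, mem_wb = {0x20: 0}, {0x20: 0}
wt = Cache("write-through", mem_wt)
wb = Cache("write-back", mem_wb)
wt.write(0x20, 3)   # mem_wt sees 3 immediately
wb.write(0x20, 3)   # mem_wb still holds 0
wb.evict(0x20)      # only now is the dirty block flushed to mem_wb
```

The write-through cache pays for an extra memory write on every store, while the write-back cache batches its flush into the eviction, which is exactly the trade-off the answer describes.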


Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

Image Credit: Lemsipmatt (Flickr)

Akemi Iwaya is a devoted Mozilla Firefox user who enjoys working with multiple browsers and occasionally dabbling with Linux. She also loves reading fantasy and sci-fi stories as well as playing “old school” role-playing games. You can visit her on Twitter.


