
OMAP3530 L2 cache

I am struggling to understand the L2 cache. I know I have a lot more reading to do, but I was wondering if someone could give a simple explanation.

The way I understand it, we have a 16k L1 data cache and a 16k L1 instruction cache.

The TLBs are separate altogether?

Is the 256k L2 cache basically a "data" cache that has nothing to do with caching "instructions"?

What does the term "unified" mean when referring to a cache?

 

DV

  • DV said:

    I am struggling to understand the L2 cache. I know I have a lot more reading to do, but I was wondering if someone could give a simple explanation.

    I assume from your later mention of TLB that you're talking about the Cortex A8, right?

    DV said:

    The way I understand it, we have a 16k L1 data cache and a 16k L1 instruction cache.

    The TLBs are separate altogether?

    Correct.

    DV said:

    Is the 256k L2 cache basically a "data" cache that has nothing to do with caching "instructions"?

    What does the term "unified" mean when referring to a cache?

    The L2 cache is "unified" in the sense that it will cache both data and instructions.

     

  • Which leads to my next question: on a context switch, my OS notifies me of data or instruction lines to invalidate and/or flush. I know how to handle this with just the L1 caches enabled. The question is how to handle it with an L2 added to the mix, given that the L2 is "unified".

     

     

  • I have much more experience on the 64x+ core than I do on the Cortex A8, so I can only give you a generic answer.  I'm trying to find someone with Cortex A8 expertise to give a more specific response.  The 64x+ core also has L1P, L1D, and L2 cache, with the L2 being a "unified" cache.  In that scenario, to perform a user-initiated cache operation you simply write the base address and size of the memory region into a couple of registers and the cache controller does all the work.  You do not need to pay attention to whether the range holds data or instructions; all you need to know is the address and size.  The L2 cache controller will in turn pass that same information to the L1D and L1P cache controllers so they operate on the L1 caches as well.  I *suspect* the same is true on the Cortex A8 but could not find that stated definitively anywhere.  (Rough sketches of that block operation, and of the equivalent address-based operations on an ARMv7 core, are appended at the end of this post.)

    Also, you might want to double-check whether you need to do these cache operations during a context switch at all.  I recall hearing that, beginning with the ARM11, the cache/MMU architecture was changed significantly so that the caches work with PHYSICAL addresses instead of virtual addresses.  The impetus for this change was to avoid the need for cache operations during a context switch.  Hopefully someone with more ARM expertise can give better details there, but I wanted to mention it.

    Brad
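
    To make the 64x+ description above concrete, here is a rough sketch of the block operation using the Chip Support Library cache calls. The header and function names (csl_cacheAux.h, CACHE_wbInvL2, CACHE_WAIT) are assumed from a CSL 3.x install and should be checked against the CSL version you actually have; this is an illustration, not a drop-in routine.

        /* Sketch only: write back and invalidate a buffer through the unified L2
         * on a 64x+ core via the CSL cache API.  CACHE_wbInvL2() takes a base
         * address and a byte count; the L2 controller then performs the matching
         * operations on L1D and L1P for the same range, so the caller never has
         * to say whether the range held data or instructions. */
        #include <csl_cache.h>
        #include <csl_cacheAux.h>

        void flush_buffer(void *buf, Uint32 byteCnt)
        {
            /* CACHE_WAIT blocks until the cache controller reports completion. */
            CACHE_wbInvL2(buf, byteCnt, CACHE_WAIT);
        }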
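
    On the ARM side, the address-based CP15 maintenance operations behave the same way in spirit on the Cortex A8: the data-side "clean and invalidate by MVA to Point of Coherency" operation (DCCIMVAC) acts through both the L1 data cache and the unified L2, while ICIMVAU invalidates the matching instruction-cache lines. The loop below is only a sketch, assuming privileged bare-metal code, GCC inline assembly, and the Cortex A8's 64-byte cache lines; an OS kernel would normally provide its own routines for this.

        /* Sketch, not production code: clean and invalidate an address range on an
         * ARMv7-A core such as the Cortex A8.  The "to PoC" data operation reaches
         * the unified L2 as well as L1D, so one loop covers both levels. */
        #include <stdint.h>

        #define CACHE_LINE  64u   /* Cortex A8 L1/L2 cache line size in bytes */

        static void cache_flush_range(uintptr_t start, uintptr_t end)
        {
            uintptr_t mva = start & ~(uintptr_t)(CACHE_LINE - 1u);   /* align down */

            for (; mva < end; mva += CACHE_LINE) {
                /* DCCIMVAC: clean & invalidate data/unified line by MVA to PoC */
                __asm__ volatile ("mcr p15, 0, %0, c7, c14, 1" :: "r" (mva) : "memory");
                /* ICIMVAU: invalidate instruction cache line by MVA to PoU */
                __asm__ volatile ("mcr p15, 0, %0, c7, c5, 1"  :: "r" (mva) : "memory");
            }

            __asm__ volatile ("dsb" ::: "memory");   /* wait for the maintenance ops */
            __asm__ volatile ("isb" ::: "memory");   /* resynchronize the pipeline   */
        }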