
IPU UNICACHE and AMMU

Hello,

Can you please give a more detailed explanation of the fields of the IPU UNICACHE CACHE_OCP register and the AMMU page POLICY registers?

More specifically, for each of the following: what is it, how does it affect behavior, and when should it be enabled?

-) EXCLUSION

-) PRELOAD (when will the preload happen if enabled?)

-) VOLATILE

-) L1_ALLOCATE - what are the sidebands?

-) Posted / non-posted - when is it advisable to use posted (no confirmation) instead of non-posted?

If possible, a small usage example for each to illustrate the usage and reasoning would be appreciated.

Thanks

  • Hi Guy,

    Please find below the answers:

    IPU UNICACHE CACHE_OCP is the Unicache interface's configuration register. You do not need to modify this register when you set up the AMMU or perform any cache maintenance operation.

    1. Exclusion: This is a sideband signal that specifies whether a particular translation is cached or non-cached. I have asked the IP expert for more details on the use of this signal.

    2. Preload: There are three prefetch buffers. The preload bit allows data to be preloaded into these buffers.

    3. Volatile: It determines cache behavior on a cache miss. Please refer to the attached table for a description of the volatile behavior:

    4. L1 Allocate: This is the write-allocate policy, i.e. data at the missed-write location is loaded into the cache, followed by a write-hit operation.

    5. Posted/Non-posted: Posted writes are preferred in areas where you want higher write speed and do not need confirmation that each write has completed.
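    To make the fields above concrete, here is a minimal C sketch of composing a page POLICY value from named bit fields. The bit positions and register name below are hypothetical placeholders, not taken from the TRM; please check the device TRM for the real field layout before using anything like this.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical bit positions for the AMMU page POLICY fields
     * discussed above -- placeholders only, NOT the real TRM layout. */
    #define POLICY_ENABLE      (1u << 0)  /* page translation enabled      */
    #define POLICY_VOLATILE    (1u << 1)  /* volatile sideband on accesses */
    #define POLICY_EXCLUSION   (1u << 2)  /* cached/non-cached sideband    */
    #define POLICY_PRELOAD     (1u << 3)  /* allow prefetch-buffer preload */
    #define POLICY_L1_ALLOCATE (1u << 4)  /* write-allocate on write miss  */
    #define POLICY_POSTED      (1u << 5)  /* posted (unconfirmed) writes   */

    int main(void)
    {
        /* Example: a cacheable page with prefetch and write-allocate
         * enabled, using posted writes for speed. */
        uint32_t policy = POLICY_ENABLE | POLICY_PRELOAD |
                          POLICY_L1_ALLOCATE | POLICY_POSTED;

        assert(policy == 0x39u);
        /* In real code this value would be written to the page's POLICY
         * register, e.g. *(volatile uint32_t *)AMMU_POLICY_n = policy;
         * where AMMU_POLICY_n is the register address from the TRM. */
        return 0;
    }
    ```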

    Regards,

    Rishabh

  • Hi,
    I am not sure I understand how I am supposed to read the table you attached.
    I assume the top horizontal line (below the blue heading) refers to the AMMU page setting (the POLICY register), but I am not sure what the left vertical bar refers to.
    It also appears that, whatever the table is trying to describe, the volatile setting only affects whether the request will be posted or non-posted - this is really not clear to me.

    As for the preload - you mentioned prefetch buffers, but when is this preload supposed to happen, and on what? The cache should bring in an entire line on a miss, so does the preload also read more lines - what is it supposed to do?

    Thanks
    Guy
  • Hi Guy,

    Your assumption about the top horizontal line is correct.
    The left vertical bar represents the cache-miss type, i.e. whether the miss is for volatile or non-volatile data. Volatile affects whether the request will be posted/non-posted as well as cached/non-cached.

    There are three prefetch buffers, i.e. the cache stores three lines for speculative instruction requests. They are kept to service hits if required. If there is another allocation that does not hit in the prefetch buffers, they are discarded.
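    The behavior described above can be sketched as a tiny software model: three speculative line buffers that service hits and are discarded when a new allocation misses them. This is an illustrative model only, not the actual hardware logic; the buffer-fill policy (next three sequential lines) is an assumption.

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_PREFETCH_BUFFERS 3   /* three speculative line buffers */

    static uint32_t prefetch_tag[NUM_PREFETCH_BUFFERS];
    static bool     prefetch_valid[NUM_PREFETCH_BUFFERS];

    /* Assumed fill policy: preload the three lines after a fetched line. */
    static void preload(uint32_t line)
    {
        for (int i = 0; i < NUM_PREFETCH_BUFFERS; i++) {
            prefetch_tag[i]   = line + 1u + (uint32_t)i;
            prefetch_valid[i] = true;
        }
    }

    /* On an allocation: a hit is serviced from a buffer; a miss
     * discards all speculative buffers, as described above. */
    static bool allocate(uint32_t line)
    {
        for (int i = 0; i < NUM_PREFETCH_BUFFERS; i++) {
            if (prefetch_valid[i] && prefetch_tag[i] == line)
                return true;              /* hit in prefetch buffer */
        }
        for (int i = 0; i < NUM_PREFETCH_BUFFERS; i++)
            prefetch_valid[i] = false;    /* miss: discard buffers  */
        return false;
    }

    int main(void)
    {
        preload(100);                 /* lines 101..103 preloaded      */
        assert(allocate(102));        /* sequential fetch hits         */
        assert(!allocate(500));       /* non-sequential allocation...  */
        assert(!allocate(103));       /* ...has discarded the buffers  */
        return 0;
    }
    ```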

    Regards,
    Rishabh
  • Hi, Thanks.

    Regarding Volatile:
    What do you mean by a volatile miss - how is a miss identified by the HW as volatile or non-volatile (the volatile keyword on a C variable is only for the compiler)?
    I assume that a table entry marked N/A means don't-care (the setting has no effect on the result)?
    If volatile refers only to misses, then how does this relate to the cache - the memory will need to be accessed anyway. Does this mean it will be write/read no-allocate, i.e. the cache will be skipped for that specific miss?
    Can you give a usage example that illustrates the usefulness of volatile?

    Preload: does this mean that by setting the preload attribute, the HW will occupy the bus for longer, as it will always bring three times as many lines from memory? (So in effect it will be as if I have a "longer line size" for consecutive memory accesses?)

    Thanks
    Guy
  • Hi Guy,

    As per my understanding, volatile is like a sideband signal and is not only used by the compiler. N/A means don't-care.
    Yes, the cache will be skipped for a specific volatile miss, as per the table.
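    In other words, assuming the behavior stated above, a volatile miss is serviced directly from memory with no line allocation, while a non-volatile miss allocates a line as usual. A toy sketch (illustrative only, not the hardware logic):

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* Toy model of the rule above: on a volatile miss the cache is
     * skipped (no line allocated); on a non-volatile miss a line is
     * allocated as usual. */
    static bool miss_allocates_line(bool volatile_access)
    {
        return !volatile_access;   /* volatile miss: cache is skipped */
    }

    int main(void)
    {
        assert(!miss_allocates_line(true));   /* volatile: no allocation */
        assert(miss_allocates_line(false));   /* normal: line allocated  */
        return 0;
    }
    ```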

    Prefetch happens in the background, so the time taken will be less than, or at most equal to, the case when prefetch is disabled.

    Regards,
    Rishabh