MCU-PLUS-SDK-AM263X: OCRAM distribution for lockstep?

Part Number: MCU-PLUS-SDK-AM263X
Other Parts Discussed in Thread: UNIFLASH, AM263P4, AM2634

Hi

How should the OCRAM memory regions be defined for the lockstep case?

From the documentation I understood that the TCM doubles and that, if there is no image for the second CPU, the QSPI SBL will switch to lockstep mode automatically.

In this case, do both CPUs in lockstep have the same OCRAM zone?

I checked the example which shows the lockstep error mechanism (CCM), but there the OCRAM is not shared and the target configuration seems to be incomplete (at least no secondary or other flags are set anywhere, and all four cores are active).

Is there a better example or application note which shows the RAM zones when the CPUs are running in lockstep?
External memory is a topic as well, since we might be forced to use the GPMC.

Best regards,

Barna Cs.

  • Hello Barna,

    When the CPUs of a given cluster are operating in lockstep mode, Core0 can utilize the TCM space of Core1, providing 64KB of TCMA and 64KB of TCMB, while in dual-core (split) mode each core has access to 32KB of TCMA and 32KB of TCMB. The memory zone accessible to Core0 is still local to the core (same base address), just doubled in length.

    The OCSRAM (2MB, 1MB, or 0.5MB, depending on the OPN) is still shared between Core0 of each cluster.
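
    As a rough illustration, the difference shows up only in the LENGTH of the TCM regions in the linker command file (a sketch only; the region names and exact defaults should be checked against the linker.cmd generated for your device):

    ```
    /* Sketch of the R5F TCM MEMORY regions on AM263x (base addresses as in the SDK linker files) */
    MEMORY
    {
        /* Lockstep: Core0 sees the combined TCMs, 64KB each */
        R5F_TCMA : ORIGIN = 0x00000000, LENGTH = 0x00010000 /* would be 0x8000 (32KB) in dual-core/split mode */
        R5F_TCMB : ORIGIN = 0x00080000, LENGTH = 0x00010000 /* would be 0x8000 (32KB) in dual-core/split mode */
    }
    ```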

    Can you elaborate more on your comment below? What would be the ideal solution to provide here?

    "the OCRAM is not shared and the target configuration seems to be incomplete (at least no secondary or other flags are set anywhere, and all four cores are active)"

    Best Regards,

    Zackary Fleenor

  • Hi Zackary

    The TCM behavior is clear from the documentation (Core0 uses both A and B when in lockstep).

    What I am interested in is the OCRAM: in lockstep mode, does the second core from the same cluster share the same region with Core0?

    Example:

    Will both Core0_0 and Core0_1 use the same RAM section starting at 0x70040000 with a length of 0xC8000?

    I want to maximize the OCRAM sections, and since we are running in lockstep I would like to know if I can simply cut the RAM section between 0x70040000 (the SBL starts at 0x70000000) and 0x701D0000 (the user_shared_mem section) in half, and assign one half to the first cluster (as in the previous image) and the other half to the second cluster:

    The memory map would look like:
    256KB SBL (starting at 0x70000000)
    ~800KB 1st cluster (0x70040000)
    ~800KB 2nd cluster (0x70108000)
    ~144KB shared (shared memory, logger, IPC, starting at 0x701D0000)
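
    Expressed as linker command file MEMORY regions, my intent is roughly this (the region names are made up by me; only the addresses and lengths matter):

    ```
    /* Cluster 0 application (R5FSS0 in lockstep) */
    MEMORY
    {
        OCRAM_CLUSTER0 : ORIGIN = 0x70040000, LENGTH = 0x000C8000 /* ~800KB, after the 256KB SBL at 0x70000000 */
    }

    /* Cluster 1 application (R5FSS1 in lockstep) */
    MEMORY
    {
        OCRAM_CLUSTER1 : ORIGIN = 0x70108000, LENGTH = 0x000C8000 /* ~800KB, ends at 0x701D0000 */
    }

    /* Shared regions for both clusters (shared memory, logger, IPC) start at 0x701D0000 */
    ```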

    "Can you elaborate more on your comment below? What would be the ideal solution to provide here?"

    I am new to lockstep on Sitara and was searching for any relevant settings; I was not sure whether the stack and the heap are common within the same cluster or not.
    The target configuration when a new project is created always includes all four cores by default, even if the project targets core 0_0 or 0_1 (like the empty example), and this is a bit confusing for a newcomer :)

    In the meantime I have managed to modify the empty project to run in lockstep; at least SOC_rcmIsR5FInLockStepMode returns 1 if I write the test project + SBL QSPI image to the dev board with UNIFLASH.
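
    For reference, the check itself is trivial (a minimal sketch; I am assuming the single-argument signature and the cluster-group constant from the SDK headers, so verify the exact names against your SDK version):

    ```c
    #include <drivers/soc.h>
    #include <kernel/dpl/DebugP.h>

    void print_lockstep_status(void)
    {
        /* Assumed SDK API: returns non-zero when the R5F cluster runs in lockstep.
           CSL_ARM_R5_CLUSTER_GROUP_ID_0 selects cluster 0 - check soc.h for the exact name. */
        if (SOC_rcmIsR5FInLockStepMode(CSL_ARM_R5_CLUSTER_GROUP_ID_0) != 0U)
        {
            DebugP_log("R5FSS0 is running in lockstep mode\r\n");
        }
        else
        {
            DebugP_log("R5FSS0 is running in dual-core (split) mode\r\n");
        }
    }
    ```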

    Best regards,
    Barna

  • Hello Barna,

    I want to follow up with some additional experts regarding this use-case and the expectations. From my understanding of your questions, allocating half of the OCRAM to cluster 1 and half to cluster 2 would not be an issue; each half would be associated with Core0 of the respective cluster. Core1 also receives/transmits data in parallel to achieve lockstep between the two cores, so I believe the stack/heap will be in a common memory space per cluster.

    That is great to hear that you've made progress with the empty project running in lockstep. Looking forward to hearing about further progress and any additional questions after the comments above.

    Best Regards,

    Zackary Fleenor

  • Hi Zackary

    "I want to follow up with some additional experts regarding this use-case and the expectations"

    We will have quite a lot of code; libraries can take a toll on program memory (we will need several CAN libraries and lots of output control), so I am looking to have as much OCRAM as possible to fit everything into. This is the reason I need to use every byte I have. The use case is two clusters in lockstep.

    I was also checking the GPMC (general purpose memory controller), but I did not find anything about how this memory is mapped, if it is mapped into the memory space at all. From the description and the code in gpmc_psram_io, I rather see it as a read/write type of external memory to store/retrieve data, not one intended for storing code. Correct?
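
    What I see in that example is essentially the pattern below (a sketch from memory of the gpmc_psram_io example; CONFIG_RAM0 is the SysConfig-generated instance name, and the exact Ram_* signatures are my assumption, so check them against the SDK):

    ```c
    #include <string.h>
    #include <board/ram.h>
    #include <kernel/dpl/DebugP.h>
    #include "ti_board_config.h" /* SysConfig-generated; assumed to define CONFIG_RAM0 */

    #define TEST_LEN (1024U)

    void gpmc_psram_rw_sketch(void)
    {
        static uint8_t txBuf[TEST_LEN], rxBuf[TEST_LEN];
        Ram_Handle handle = Ram_getHandle(CONFIG_RAM0); /* assumed accessor, as in the example */

        for (uint32_t i = 0U; i < TEST_LEN; i++)
        {
            txBuf[i] = (uint8_t)i;
        }

        /* Plain data in / data out via the driver - nothing executes from the PSRAM itself */
        Ram_write(handle, 0x0U /* offset */, txBuf, TEST_LEN);
        Ram_read(handle, 0x0U, rxBuf, TEST_LEN);

        DebugP_assert(memcmp(txBuf, rxBuf, TEST_LEN) == 0);
    }
    ```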

    As a last option we are checking the AM263P4, which has 3MB of RAM, is pin-compatible, and also offers XIP, but we could not find any stock of it anywhere. Do you have any dates for when it will be more widely available, or is it so new that there is no roadmap yet for production stock?

    Best regards,
    Barna

  • Hello Barna,

    I understand your need for a large amount of program memory. GPMC is an option, but it does not support any type of XIP functionality; it would only be used to store data until it can be loaded into on-chip memory and executed from there.

    I will loop in another expert to provide timeline expectations for AM263Px devices.

    Best Regards,

    Zackary Fleenor

  • Hello Barna,

    There are AM263P4 devices available on TI.com.

    If you need production-level quantities/orders, that is not something we can help with on E2E; you'd need to reach out to your local sales rep or to the customer support center.

    Best Regards,

    Ralph Jacobi

  • Hi

    It was not available two weeks ago, but it is good to see it is available now; the first batch is probably enough to get 100 or so units for prototyping. I will bring this up in our meetings to see if this is something we can swap to.

    I am interested in the interchangeability of the AM263P4 and the AM2634: is there any known issue which could prevent it?

    I checked the portability, and what I am aware of is the QSPI vs. OptiFlash OSPI interface (I don't know the differences yet; we'll have to look into it), the TCM and SRAM becoming bigger, and XIP becoming an option.

    Other than this, it is pin compatible and can be switched?

    best regards,

    Barna

  • Hello Barna,

    There are definitely changes between QSPI and OSPI that require software changes. The devices are pin compatible if you use the AM263P4AC package, where the C designator marks the pin-compatible package.

    The following migration guide covers the high-level details of all the differences: https://www.ti.com/lit/pdf/spradb3

    Best Regards,

    Ralph Jacobi