Sharing DSP L1 SRAM between algorithms on DM8168

Hi, I have an L1 SRAM buffer that is allocated statically for the DSP and is accessible to the 2 different codecs that I have built into the same codec server. Is it possible to share that buffer between the 2 codecs so that each codec can have full use of it?

I'm using Codec Engine, of course, to run the codecs on the DSP, controlled from the ARM.

i.e. when a codec has finished, or has been told by SYS/BIOS to do a context switch, it would save the contents of the L1 SRAM on its stack so that the next codec can use the L1 SRAM (the next algorithm would have to have its own saved contents put back into L1 SRAM before it ran).
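
A rough sketch of the kind of save/restore I mean (illustrative only; the section name, the backup arrays, and the acquire function are placeholders I made up, not anything Codec Engine or SYS/BIOS provides, and for simplicity it keeps the backups in static arrays rather than on the stack):

#include <string.h>

#define L1_SCRATCH_SIZE  (16 * 1024)

/* Shared scratch buffer, placed in L1D SRAM via a placeholder section
 * that the linker command file maps to the L1D SRAM address range. */
#pragma DATA_SECTION(l1Scratch, ".l1dScratch")
static unsigned char l1Scratch[L1_SCRATCH_SIZE];

/* Per-algorithm backup copies kept in slower memory (L2 or DDR). */
static unsigned char algBackup[2][L1_SCRATCH_SIZE];

static int activeAlg = -1;

/* Call before algorithm 'alg' starts using the shared L1 scratch. */
void l1ScratchAcquire(int alg)
{
    if (activeAlg == alg) {
        return;                                     /* it already owns the scratch */
    }
    if (activeAlg >= 0) {
        /* Save the outgoing algorithm's L1 contents */
        memcpy(algBackup[activeAlg], l1Scratch, L1_SCRATCH_SIZE);
    }
    /* Restore the incoming algorithm's previously saved contents */
    memcpy(l1Scratch, algBackup[alg], L1_SCRATCH_SIZE);
    activeAlg = alg;
}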

Thanks,

Ralph

  • Ralph,

    It is rare that the best allocation for an application is to use L1D as SRAM. Why have you chosen to do this instead of using it as cache and allocating the buffers in L2 or other memory?

    Using L1D as cache gives each algorithm access to that space as fast memory while that algorithm is active; another algorithm can then use all or part of the L1D cache as it needs to. This is handled automatically by the hardware, and there is no context-switching overhead. There is cache-loading overhead, but an equivalent cost would be paid each time you did a memory save/restore, so the save/restore would not gain you anything in terms of L1D performance.
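
    For example, a minimal sketch, assuming the SYS/BIOS ti.sysbios.family.c64p Cache module that serves the C674x DSP on the DM8168 (please verify the exact module and enum names against the cdoc of your SYS/BIOS release; the buffer and section names here are placeholders):

    /* Module and function names assumed from the SYS/BIOS c64p Cache support. */
    #include <ti/sysbios/family/c64p/Cache.h>

    /* Algorithm working buffer allocated in L2 (or DDR); the placeholder
     * section would be mapped to L2RAM in the linker command file. */
    #pragma DATA_SECTION(workBuf, ".l2Buffers")
    static unsigned char workBuf[64 * 1024];

    void configureL1dAsCache(void)
    {
        Cache_Size size;

        Cache_getSize(&size);              /* keep the current L1P/L2 settings */
        size.l1dSize = Cache_L1Size_32K;   /* give all of L1D to the cache */
        Cache_setSize(&size);

        /* From here on, whichever algorithm touches workBuf has it pulled into
         * L1D cache automatically; no explicit save/restore is needed. */
    }

    (The cache sizes can normally also be fixed statically in the application's .cfg file instead of at run time.)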

    SYS/BIOS has no mechanism for automatically doing what you ask to do.

    Regards,
    RandyP

  • Hi Randy,

    I based my decision to use L1 as SRAM instead of L2 on the fact that L1 is much faster than L2: I can process data in one part of the L1 SRAM while using DMA to transfer data into/out of a different part of it. Also, if I don't use L1 as cache, I have much better control over how my algorithm manages memory.
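
    To illustrate the pattern I mean, here is a rough ping-pong sketch (the DMA start/wait calls are hypothetical wrappers standing in for the real EDMA interface, and the section name is a placeholder):

    #include <stddef.h>

    #define HALF_SIZE  (8 * 1024)

    /* Two halves of the statically placed L1D SRAM working area */
    #pragma DATA_SECTION(l1Work, ".l1dScratch")
    static unsigned char l1Work[2][HALF_SIZE];

    /* Hypothetical wrappers around the actual EDMA driver calls */
    extern void dmaStartIn(unsigned char *dst, size_t len);    /* external memory -> L1 */
    extern void dmaWaitIn(void);                                /* wait for the last transfer */
    extern void processBlock(unsigned char *buf, size_t len);   /* the codec's inner loop */

    void processStream(size_t numBlocks)
    {
        size_t i;
        int cur = 0;

        dmaStartIn(l1Work[cur], HALF_SIZE);             /* prime the first half */

        for (i = 0; i < numBlocks; i++) {
            int next = cur ^ 1;

            dmaWaitIn();                                /* current half has arrived */
            if (i + 1 < numBlocks) {
                dmaStartIn(l1Work[next], HALF_SIZE);    /* fill the other half in parallel */
            }
            processBlock(l1Work[cur], HALF_SIZE);       /* CPU works out of fast L1D SRAM */
            cur = next;
        }
    }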

    I haven't tried using L2; however, given that I am finding that the processing (and not the DMA) is the limiting factor, I think things would be slower if I used L2.

    Good to know that the L1 cache I do have (only 4K left :-( ) is handled correctly by the hardware across context switches. At the moment I'm not sharing L1 buffers between algorithms, so all is fine.

    Thanks,
    Ralph