This thread has been locked.

Questions about simdm6437.cfg

Hi,

I got an evaluation version of CCS to test how accurate its DM6437 simulator is. When I tried to profile my algorithm (video processing stuff), CCS crashed before finishing the first frame. So I took a closer look at 'simdm6437.cfg', and found something interesting. My questions are listed below:

1. The file 'simdm6437.cfg' itself doesn't have a revision history, but the file I have is dated '1/23/2008 12:40 AM'. Is this the latest version? If not, where can I get the latest one?

2. I want the simulator to simulate the DM6437 EVM as accurately as possible, so I changed SRAM_START_ADDRESS from 0x00F04000 to 0x10F04000 in module DMC. Likewise, I changed SRAM_START_ADDRESS from 0x00E08000 to 0x10E08000 in module PMC. However, I am not so sure about module UMC. Does it control the L2 cache/SRAM? If yes, what's the proper number for RAM_WAIT_STATES? It's currently set to 0, which seems too optimistic.

3. For the same purpose, in module DDR_EMIF, I believe I should change cpu_clk_freq from 600 to 594, mem_clk_freq from 266 to 162, and system_bus_width from 64 to 32. Are these changes correct?

4. What other changes are needed?

Thanks in advance

  • Just a note: I did some tinkering with the DM6437 simulator, and I would say buyer beware if you are trying to get exact cycle accuracy.  I saw it do a few flaky things that make me lean towards not trusting it.

    As far as the memory locations, if you check the memory map summary on the dm6437 datasheet, you will notice that the various internal ram blocks are mapped to multiple address locations, and that the original settings in the .cfg file match one of those areas.  That being said, I left them as they were and didn't have any problems with them.

    I believe the ram wait state of 0 is correct, the only other option is 1.  Somebody from TI could verify this.

    As far as the DDR2 goes, put in the actual parameters for your device.  I left the system bus width at 64, and the mem clk freq is 2x the actual DDR2 clock.

    The only other changes I made were to the cache configuration since I don't use the default max cache sizes.

  • Hi,

      The SRAM_START_ADDRESS values for DMC & PMC are correct; they represent the internal addresses, so don't change them. Module UMC represents the L2 cache/SRAM, and RAM_WAIT_STATES is 0 for the fast cache, which is set as per the spec.

    You can change cpu_clk_freq & mem_clk_freq, but not the system bus width. The system bus width represents the bus width between the SCR & the EMIF, which is set as per the spec; if you want to change the EMIF-to-DDR bus width, that can be achieved programmatically via the EMIF (narrow mode).
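    As a sketch of the narrow-mode switch mentioned above, assuming the DDR2 EMIF base address 0x20000000, the SDBCR register at offset 0x08, and the NM bit at bit position 14 (all three taken from my reading of the TI DDR2 memory-controller documentation; verify them against the DM6437 datasheet before use):

    ```c
    #include <stdint.h>

    #define SDBCR_NM (1u << 14)  /* assumed narrow-mode (16-bit) bit position */

    /* Pure helper: return an SDBCR value with narrow mode enabled. */
    static inline uint32_t sdbcr_with_narrow_mode(uint32_t sdbcr)
    {
        return sdbcr | SDBCR_NM;
    }

    /* On the target this would be applied as a read-modify-write:
     *
     *   volatile uint32_t *sdbcr = (volatile uint32_t *)0x20000008u;
     *   *sdbcr = sdbcr_with_narrow_mode(*sdbcr);
     */
    ```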

    mem_clk_freq is the DDR2 clock; once you change it, you have to program the EMIF registers for the correct latency corresponding to the DDR. This can be done using a GEL script or from your program.

    The crash you faced seems to be a different issue; can you give details of the CCS version & CGT version you are using?

    regards,

    Mani

  • Thank you Matt and Mani for the replies.

    I have two versions of CCS. The evaluation CCS is version 3.3.82.13; its CGT version is v6.1.9. The CCS DSK (it came with the DM6437 EVM) is 3.3.38.2; its CGT version is v6.0.8.

    I used the evaluation CCS to build the code and generate an OUT file, then used the CCS DSK to load the OUT file onto the board. It runs without any problem. The evaluation CCS, in its simulator mode, also runs the OUT file without any problem, at least for the first several frames; I didn't wait for it to finish.

    I have to change SRAM_START_ADDRESS for DMC & PMC. I have some data sections in L1D. In my 'linker.cmd', if I specify L1D's starting address the same as in the original CFG file, those sections can't be loaded into the DSP. But after I changed the 'linker.cmd' and 'simdm6437.cfg', my code runs on both HW and the simulator, as described above.
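    For reference, a minimal linker.cmd fragment for placing data at the global L1D address discussed above (the section name .l1d_data and the length are hypothetical; take the actual L1D SRAM size and base from the DM6437 datasheet):

    ```
    MEMORY
    {
        L1DSRAM : origin = 0x10F04000, length = 0x00008000  /* length: check datasheet */
    }

    SECTIONS
    {
        .l1d_data > L1DSRAM
    }
    ```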

  • After limiting the profiling scope to CPU, L1P, and L1D only, I was able to run profiling to the finish. However, I'm seeing 100% L1D cache miss rate. The L1P miss/hit rate is as expected. I believe something is wrong with the configuration, so I tried both the original and the modified configuration files. In both cases, L1D cache miss is 100%.

    I added code to explicitly enable the L1D cache, i.e., writing 7 into the L1DCFG register and 1 into the L1DINV register; that seems to make no difference.
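    A sketch of that cache setup (the register addresses L1DCFG = 0x01840040 and L1DINV = 0x01845048 are taken from my reading of the C64x+ megamodule documentation and should be double-checked; the registers are passed in as pointers here so the logic can be exercised off-target):

    ```c
    #include <stdint.h>

    /* Write the L1D mode and request a global invalidate, as described above. */
    static void enable_l1d_cache(volatile uint32_t *l1dcfg,
                                 volatile uint32_t *l1dinv)
    {
        *l1dcfg = 7u;  /* mode 7 = maximum L1D cache size              */
        *l1dinv = 1u;  /* global L1D invalidate; on hardware this bit  */
                       /* self-clears, so poll it before continuing    */
    }

    /* On the target:
     *   enable_l1d_cache((volatile uint32_t *)0x01840040u,
     *                    (volatile uint32_t *)0x01845048u);
     */
    ```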

    What might be the cause?

  • Have you allowed the address you are accessing to be cached via the MAR registers?
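    For reference, a sketch of that MAR setup, assuming the C64x+ layout where MAR0 sits at 0x01848000, each MARn controls the cacheability of the 16 MB region starting at n * 16 MB, and bit 0 is the permit-caching bit (verify all of this in the C64x+ megamodule reference guide; not every MAR is writable on a given device):

    ```c
    #include <stdint.h>

    #define MAR_BASE 0x01848000u  /* assumed MAR0 address (C64x+ megamodule) */

    /* Each MAR register covers a 16 MB region, so the index is addr / 16 MB. */
    static inline unsigned mar_index(uint32_t addr)
    {
        return addr >> 24;
    }

    /* Address of the MAR register covering 'addr'. */
    static inline volatile uint32_t *mar_reg(uint32_t addr)
    {
        return (volatile uint32_t *)(MAR_BASE + 4u * mar_index(addr));
    }

    /* Permit caching of the region containing 'addr' (bit 0 = PC). */
    static inline void mar_enable_caching(uint32_t addr)
    {
        *mar_reg(addr) |= 1u;
    }
    ```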

  • Not explicitly, will check. Thanks for the help!

  • Yes, that solves my problem; now I'm seeing L1D cache hits. Thanks a lot for your help!