I have a system where I allocate some buffers and DMA data into them. When I go to read a buffer, the memory view shows that I am looking at cached data, not the underlying data in RAM.
The system is a 6433, with all the latest libraries (SYS/BIOS 6, EDMA3, etc.).
In the config file I am loading the 64P-specific cache controls.
I have aligned the start of the data block on a 128-byte boundary and made sure the allocation size is rounded up to a multiple of 128 bytes too.
After allocating the blocks I call wbInv.
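Roughly, the setup looks like this; the buffer name and payload size are invented for illustration, and the BCACHE call is a sketch rather than my exact code:

    #include <std.h>
    #include <bcache.h>

    #define CACHE_LINE   128   /* L2 line size on the 64P */
    #define PAYLOAD_SIZE 1500  /* hypothetical payload size */
    /* Round the allocation up to a whole number of cache lines. */
    #define BUF_SIZE (((PAYLOAD_SIZE) + (CACHE_LINE) - 1) & ~((CACHE_LINE) - 1))

    #pragma DATA_ALIGN(dmaBuf, 128)  /* start on a 128-byte boundary */
    static unsigned char dmaBuf[BUF_SIZE];

    void bufInit(void)
    {
        /* Evict any stale lines before the DMA first writes the buffer. */
        BCACHE_wbInv(dmaBuf, BUF_SIZE, TRUE);  /* TRUE = wait for completion */
    }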
Upon the DMA IRQ/callback (configured for a single completion interrupt, no intermediates) I issue a BCACHE_inv with the wait option. I then do some endian twiddling and pass the buffer on for processing. After processing I do a wbInv and give the buffer back to the DMA controller.
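The per-buffer sequence is essentially this; the callback signature and the process/requeue helpers are placeholders standing in for my real code, not the actual EDMA3 driver API:

    #include <std.h>
    #include <stddef.h>
    #include <bcache.h>

    /* Hypothetical helpers standing in for my real code. */
    extern void byteSwapAndProcess(unsigned char *buf, size_t len);
    extern void requeueToEdma(unsigned char *buf, size_t len);

    void onDmaComplete(unsigned char *buf, size_t len)
    {
        /* Discard cached copies so the CPU reads what the DMA just wrote. */
        BCACHE_inv(buf, len, TRUE);          /* TRUE = wait for completion */

        byteSwapAndProcess(buf, len);        /* endian twiddling + processing */

        /* Flush CPU writes to RAM and drop the lines before DMA reuse. */
        BCACHE_wbInv(buf, len, TRUE);
        requeueToEdma(buf, len);
    }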
This mostly works, but about 1 time in 20 I get invalid data, so I put in some code to detect the invalid data and trigger a breakpoint.
At the breakpoint the memory viewer merrily tells me that the data in L1, L2 and RAM all differ: the RAM copy is a big-endian (BE) version of valid data, while L2 and L1 hold the BE and LE versions of rubbish.
So have I misunderstood the purpose of the cache invalidate? I expect it to discard those lines from the cache, throwing away any changes that had not yet been written back to main RAM (faults caused by that would be my own fault), and then to direct the next read of those addresses to main RAM, repopulating the L2 and L1 caches as appropriate.
Am I using the right API? I have a choice of CSL, SYS/BIOS or BCACHE; I used BCACHE as I believe it is the recommended path, automatically mapping onto the 64P-specialised cache operations.
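For reference, my understanding is that under SYS/BIOS 6 the native route would be the Cache module (BCACHE being the BIOS 5-era name). A minimal sketch of the same two operations, assuming the ti.sysbios.hal flavour of that API:

    #include <xdc/std.h>
    #include <ti/sysbios/hal/Cache.h>

    void invForRead(Ptr buf, SizeT len)
    {
        /* Invalidate every cache level and wait before the CPU reads. */
        Cache_inv(buf, len, Cache_Type_ALL, TRUE);
    }

    void wbInvForDma(Ptr buf, SizeT len)
    {
        /* Write back dirty lines and invalidate before the DMA takes over. */
        Cache_wbInv(buf, len, Cache_Type_ALL, TRUE);
    }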
Any other buried workarounds/errata I have not found?
Ta
Chris
ps Is it still not advisable to use the L2 cache as RAM?
pps When I query the cache size it tells me that the L2 is 256K; is that a bonus or a symptom?
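For what it's worth, this is roughly how I read the size back; the struct field and enum names here are from memory of the BIOS bcache.h and may not be exact:

    #include <std.h>
    #include <bcache.h>

    /* Returns nonzero if all 256K of L2 is configured as cache. */
    int l2FullyCached(void)
    {
        BCACHE_Size size;

        BCACHE_getSize(&size);
        return (size.l2size == BCACHE_L2_256K);
    }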