
MSPM0G3507: UART TX DMA data corruption depending on MCLK/ULPCLK configuration when running >32MHz CPUCLKs

Part Number: MSPM0G3507
Other Parts Discussed in Thread: SYSCONFIG

Hello,

I would like to report a problem with UART TX DMA data getting corrupted when running the CPU/MCLK clocks above 32 MHz. I first hit the issue on a custom hardware and firmware project I am currently working on, but I was able to narrow it down to what seems to be a problem in the clocking domain of the microcontroller. I can reliably reproduce the faulty behaviour on a LP-MSPM0G3507 LaunchPad (marked Rev. A) using the TI-provided UART TX DMA example ("uart_tx_multibyte_fifo_dma_interrupts_LP_MSPM0G3507_nortos_ticlang"), only slightly modified so that it repeatedly busy-waits on the EOT and DMADONE flags and outputs an unchanging array of ASCII characters in an infinite while loop at 115200 baud.

The unmodified example clocks CPUCLK/MCLK/ULPCLK directly from the SYSOSC and works flawlessly. When I source those clocks from the SYSPLL but keep CPUCLK = MCLK = ULPCLK = 32 MHz, everything still works without problems. It also works without data corruption if I change the UDIV so that ULPCLK = MCLK/2 = 16 MHz. But as soon as I try to clock CPUCLK/MCLK any higher than 32 MHz, I start running into problems with the data received on the PC side being corrupted.

It looks to me that with MCLK > 32 MHz, data gets corrupted in exactly those cases where MCLK != ULPCLK. For example, with MCLK = 40 MHz and ULPCLK = 40 MHz (UDIV = /1), everything still works without corruption. If I then change the UDIV to /2 so that ULPCLK = MCLK/2 = 20 MHz, I start to see corruption. The same happens if, as in the real project I am working on, I clock MCLK = 80 MHz and ULPCLK = 40 MHz. At the moment this would limit the usable CPUCLK/MCLK to at most 40 MHz without corruption, because ULPCLK_max is 40 MHz.

I have also tried supplying a different clock source to the UART0 peripheral (MFCLK = 4 MHz) while keeping the MCLK speed way up. This still leads to corruption of the received TX data.

As mentioned, I modified the provided UART TX DMA example for this purpose while tracking the changes and the fault-introducing configuration states described above in git. That way you can backtrack my changes, diff the commits, and easily bring the MSP into the faulty, data-corrupting states I have described.

Edit: In my real project with 80 MHz clocks the UART data does not get corrupted if I use blocking UART writes, so I assume the problem is related to the DMA clocking.

Any help and information on this matter is highly appreciated. Thanks in advance,
Jonas

uart_tx_multibyte_fifo_dma_interrupts_LP_MSPM0G3507_nortos_ticlang-DATA-CORRUPTION.tar.gz

  • Hi Jonas,

    I have just run the code from our SDK (uart_tx_multibyte_fifo_dma_interrupts) with MCLK at 80MHz and ULPCLK at 40 MHz, and the UART output looks correct on my oscilloscope. I then tried the same code, but adjusted MCLK to 40MHz and ULPCLK to 20MHz, and I am still not seeing this issue. 

    So it sounds like the issue may come from some of the changes you have made. I know you said the changes were very small, but could you verify by opening the un-edited version of the example and running the test again? If you still see the issue, could you please show me the settings you are using for MCLK, ULPCLK, and the clock that the UART is using?

  • Hi Dylan,

    I'm working with Jonas on the project. The UART output looks correct on the scope because the corruption does not happen very often: you get a missing/corrupted byte maybe once per 1000 bytes. So you can only observe it by receiving the data on the PC and checking that everything arrived over a longer period of time.

    Best

    Cornelius

  • Thanks for the update, Cornelius. I will do some additional testing today to try to replicate and will get back to you then with my findings.

  • Hi Cornelius, Hi Jonas,

    I have been able to run your project and I do now see the problematic UART behavior.

    After reviewing this for a bit, it looks like the problem is in our Sysconfig tool: When you utilize the "Clock tree enable" option, and choose to use MFCLK, it appears that this clock does not actually get enabled.

    My suggestion for now would be to open the SYSCTL tab in SysConfig, uncheck the "Use Clock Tree" feature, then scroll down to the MFCLK section, expand it, and click to enable it. Enter any other clock settings you want in the SYSCTL tab instead of in the clock tree. When you rebuild the project and reprogram your device, you should see the UART data being sent properly; I've tried this on my end and no longer see corruption.
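    For cross-checking the generated code: with the clock tree disabled and MFCLK enabled this way, the relevant parts of ti_msp_dl_config.c should look roughly like the following (a sketch using MSPM0 DriverLib naming from memory; the exact symbols depend on your SDK version, so verify against your generated file):

    ```c
    /* Sketch of the relevant generated clock setup (MSPM0 DriverLib style).
     * The key point: MFCLK must be explicitly enabled before a peripheral
     * can select it as its functional clock. */
    DL_SYSCTL_enableMFCLK();  /* fixed 4 MHz derived from SYSOSC */

    /* UART0 functional clock selection, as SysConfig would emit it. */
    static const DL_UART_Main_ClockConfig gUART0ClockConfig = {
        .clockSel    = DL_UART_MAIN_CLOCK_MFCLK,
        .divideRatio = DL_UART_MAIN_CLOCK_DIVIDE_RATIO_1,
    };
    /* ...later, in the UART init:
     * DL_UART_Main_setClockConfig(UART_0_INST,
     *     (DL_UART_Main_ClockConfig *)&gUART0ClockConfig); */
    ```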

    Please let me know when you've had the chance to try making this change, and let me know if there are additional issues. I'll be working with our software team to get this feature functioning properly. Thanks for pointing this out.

  • Hello Dylan, thank you for your answer. If I deactivate the clock tree, enable the MFCLK as you suggest, and keep the rest at defaults, it works. The problem is that those defaults give MCLK = ULPCLK = SYSOSC = 32 MHz, which also works with the clock tree enabled and the same settings. If I then continue setting up the clocking in SYSCTL without the clock tree feature and go for SYSPLL with MCLK = 80 MHz and ULPCLK = 80 MHz (leaving the default ULPCLK divider at 1), it strangely works without problems, even though the reference manual for the MSPM0G states that the PD0/ULPCLK clock is 40 MHz max, and I don't know which side effects running out of spec would introduce later. So I set the ULPCLK divider back to 2, giving MCLK = 80 MHz, ULPCLK = 40 MHz, MFCLK = 4 MHz, and the same data corruption as before is happening. This therefore does not look like a fix.

  • Thanks for pointing that out. I actually thought I had checked that, but I had left my UART clock source as BUSCLK. When I changed it to MFCLK as in your project, I once again saw the errors that you mention.

    I made a lot of changes to your project that slowly started to alleviate the corruption, and while some of them helped, there was still corruption on the line. This became quite frustrating when I got to the point of having two projects, my starting project and yours, and I had altered both of them to the point that they were nearly the exact same, while one was working and one was not.

    Finally, I looked at the project properties and checked the optimization level, and realized that your project used an optimization level of 2, while mine had optimization off. So I turned optimization off in your project and the corruption stopped, even when I fully reverted all of your initial settings, including using the clock tree.

    So for one thing, I think that if you turn optimization off, you should be able to transfer the UART data without corruption. I tried this on my end by completely re-importing your project, and changing nothing but the optimization, so you should not have issues here.

    Secondly, optimization should not be affecting whether UART data is sent without corruption. This is something else I'll have to look at with our software team. 

    Let me know when you've had a chance to try this. You can find this setting by right clicking on your project, clicking properties, expanding the "Arm Compiler" tab, clicking optimization, then setting the "Select optimization paradigm/level (-O)" setting to the blank selection. Then click apply and close.

  • Hi Dylan,

    I have changed the optimization to -O0 (and also tested with leaving it blank), the problem still remains.

  • Hi Jonas,

    On your end, are you setting your gTxDone and gDmaDone flags back to false after they are polled in the wait_dma_ready() function? After exploring some of the other possibilities of timing issues, I realized that the original project that you sent me does not set these flags back to false, so the function does not wait for them to be set after the first iteration of the main body of the loop.
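    In other words (flag and function names per the description above; a sketch, not the project's actual code), the wait function needs to consume the flags so that the next call really blocks until the next completion pair:

    ```c
    #include <stdbool.h>

    /* Flags set by the UART EOT and DMA-done interrupt handlers. */
    static volatile bool gTxDone  = false;
    static volatile bool gDmaDone = false;

    /* Fixed wait: clear the flags once both are seen, so the next call
     * blocks until the NEXT EOT/DMA-done pair instead of returning
     * immediately on stale flags and letting a new transfer start while
     * the previous one is still draining the TX FIFO. */
    static void wait_dma_ready(void)
    {
        while (!(gTxDone && gDmaDone)) { } /* busy-wait on both completions */
        gTxDone  = false; /* consume the flags for the next iteration */
        gDmaDone = false;
    }

    int main(void)
    {
        gTxDone = gDmaDone = true; /* pretend the ISRs already fired */
        wait_dma_ready();
        return (gTxDone || gDmaDone) ? 1 : 0; /* 0 = flags were consumed */
    }
    ```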

    If you have already taken care of this and are still seeing the issue, could you please verify which version of CCS, the SDK, Sysconfig, and the compiler you are using? I want to verify that I am using the same tool set as you to continue looking at this potential timing issue.