TMS320F2812 SCIA interface

Hi,

I'm trying to implement a high-speed memory dump solution using the TMS320F2812 SCIA interface at 921000 bits/s, but I am missing interrupts, and the delay between the end of byte reception and the interrupt trigger is quite high. Here is how I tested it.


I hooked a digital line monitoring device (a DigiView logic analyzer) up to the RX line.

I configured my serial interface to receive a specific sequence sent to the embedded system from a Python script. Standard SCI mode is used and the FIFO is disabled.
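The configuration is roughly the following (a simplified sketch, register names per TI's DSP281x header files; baud-rate and PIE/IER setup are omitted here and the function name is only for illustration):

#include "DSP281x_Device.h"

// Standard (non-FIFO) SCI mode, 8-N-1, receive interrupt enabled.
void sciaInitRx(void)
{
    SciaRegs.SCICCR.all  = 0x0007;        // 1 stop bit, no parity, 8 data bits, idle-line protocol
    SciaRegs.SCICTL1.all = 0x0003;        // enable RX and TX while SW RESET is still held
    SciaRegs.SCICTL2.bit.RXBKINTENA = 1;  // enable the RX/BK interrupt
    SciaRegs.SCIFFTX.bit.SCIFFENA   = 0;  // FIFO disabled (standard mode)
    SciaRegs.SCICTL1.all = 0x0023;        // release the SCI from software reset
}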

In my interrupt service routine, I toggled a digital I/O pin monitored with the DigiView, read RXBUF into a dummy variable, and acknowledged the interrupt.
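The ISR is essentially this (again a simplified sketch with placeholder names; SCIRXINTA is PIE group 9, interrupt 1):

#include "DSP281x_Device.h"

interrupt void sciaRxIsr(void)
{
    Uint16 dummy;

    GpioDataRegs.GPATOGGLE.bit.GPIOA0 = 1;   // toggle the pin watched by the DigiView
    dummy = SciaRegs.SCIRXBUF.all;           // reading SCIRXBUF also clears RXRDY
    PieCtrlRegs.PIEACK.all = 0x0100;         // acknowledge PIE group 9 (SCIRXINTA)
}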

Here is the result:

At 576000 bits/s I got all my interrupts; by that I mean that the I/O pin toggled within the interrupt service routine toggled as expected. Because I was monitoring both the RX line and the pin, I could calculate the delta between the end of a received byte (as seen in the DigiView) and the beginning of my ISR (the pin toggle). This was about 600 ns and looked great.

At 921000 bits/s the trouble began. The time delay (measured as described above) between the end of reception of the first byte and the beginning of the ISR was about 4.5 us. Furthermore, I did not get all my interrupts.

I tried many things, including disabling all interrupts except the SCIA RX interrupt, and still got the same result.

Note that I was using a 60 MHz system clock and a 15 MHz low-speed peripheral clock (LSPCLK) to generate the communication data rate. I'm also using the DSP/BIOS RTOS for my application.
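In case it matters, the baud divisor works out like this (sketch, same header-file assumptions as above):

// SCI baud = LSPCLK / ((BRR + 1) * 8), with LSPCLK = 15 MHz here.
// The ideal divisor for 921000 bits/s is 15e6 / (8 * 921000) - 1 ≈ 1.04,
// so the nearest integer, BRR = 1, actually gives 15e6 / 16 = 937500 bits/s
// (about 1.8 % fast).
void sciaSetBaud(Uint16 brr)
{
    SciaRegs.SCIHBAUD = (brr >> 8) & 0xFF;   // baud divisor, high byte
    SciaRegs.SCILBAUD = brr & 0xFF;          // baud divisor, low byte
}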

Has anyone ever tried using one of the TMS320F2812 SCI interfaces at 921000 bits/s?

Thanks.

  • Hi,

    I managed to set up and run the SCI test code from TI. The major difference between that code and mine is that the TI project was created to run from H0 SARAM, whereas mine was created to run from flash. The sample code runs fine at high speed (up to 937500 bits/s), with an interrupt response time of about 2 microseconds and an interrupt service routine execution time of about 1.5 microseconds, which is perfectly suitable for my application.

    When I run the same interrupt service routine from my project, however, I end up with a much longer interrupt response time (about 60 microseconds) and an interrupt service routine execution time of 20 microseconds. I took the time to check all the clock configuration against the working code and it looks similar. I also compared the ISR assembly code generated by the compiler in both projects. The code is exactly the same, and I cannot figure out why I end up with such a high interrupt response time and so much ISR overhead in my project. Do you guys have any idea how I can troubleshoot this? (A copy-to-RAM sketch I am considering is at the end of this post.)

    It is another story when it comes to using the code in my final DSP/BIOS project. The interrupt response time is about the same (about 60 microseconds), while the ISR execution time blows up to about 60 microseconds. It looks to me like CPU Timer 2, which DSP/BIOS uses, is preempting my ISR since it has higher priority. This is only a guess and I have not started looking into this part yet. I think there is a code sample on how to modify hardware interrupt priorities, but I want to resolve the problem of interrupt response time and overhead first.
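    One standard thing I still want to try, following the conventions of TI's DSP281x flash examples (the "ramfuncs" load/run split and its linker-generated symbols), is to run the ISR from SARAM and copy it out of flash at startup. A rough, untested sketch:

    #include <string.h>
    #include "DSP281x_Device.h"

    // Symbols created by the linker for a "ramfuncs" output section with
    // LOAD = flash and RUN = SARAM, as in TI's DSP281x flash examples.
    extern Uint16 RamfuncsLoadStart;
    extern Uint16 RamfuncsLoadEnd;
    extern Uint16 RamfuncsRunStart;

    // Build the ISR into the ramfuncs section so it executes from SARAM.
    #pragma CODE_SECTION(sciaRxIsr, "ramfuncs")
    interrupt void sciaRxIsr(void);

    void copyRamFuncs(void)
    {
        // Copy the section image from its flash load address to its RAM run
        // address before enabling the SCI interrupt.
        memcpy(&RamfuncsRunStart, &RamfuncsLoadStart,
               &RamfuncsLoadEnd - &RamfuncsLoadStart);
    }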

  • Thanks.

    Yannick Sereckissy said:

    [...] The code is exactly the same and I cannot figure out why I end up with a high interrupt response time and too much ISR overhead with my project. Do you guys have any idea how I can troubleshoot this?

    I figured out what the problem was. I was using a buffer to store the received bytes, and in the linker command file of the project running from flash, the .ebss section was mapped to external memory. That is what made the execution so slow.
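    For anyone who runs into the same thing, the fix amounts to pointing .ebss at on-chip SARAM in the linker command file. A rough fragment (memory name and range follow the F2812 L0/L1 SARAM map; adapt to your own .cmd file):

    MEMORY
    {
    PAGE 1 :
       RAML0L1 : origin = 0x008000, length = 0x002000   /* on-chip L0/L1 SARAM */
    }

    SECTIONS
    {
       /* Keep .ebss (which holds the receive buffer) in internal RAM
          instead of external XINTF memory.                            */
       .ebss : > RAML0L1, PAGE = 1
    }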