TMS570LC4357: TMS570LC4357 RTP Direct Data mode overflow / buffer status not correctly detecting overflow

Part Number: TMS570LC4357


Hi,

I am trying to get direct data mode working between the RTP on one TMS570 MCU and the DMM on a second TMS570 MCU. I wrote a simple program to fill 1K of memory through the DDMW register, running the interface at the full data rate. My program keeps a uint32 counter value, which gets written to the DDMW register. Each time I write the counter directly to DDMW, I increment it and try to write the new value. I only write an updated value when no overflow has occurred (RTPGSR bit 0 = 0) and FIFO1 is empty (RTPGSR bit 8 = 1).

When I examine the memory on the slave device (I allow updates during debug mode in the Global Control Register), the data in memory does not increase sequentially, but by a varying count. However, if I add some wait cycles to burn processor time, I get a nice, cleanly incrementing, wrapping memory array. I have enabled the RTP and DMM enable pins to hold off the bus in case it is busy, and I monitor the two status bits above. Am I missing something? I figured monitoring the FIFO-empty bit or the overflow bit would be sufficient to transfer data to the DMM without loss, but that is not the case. It appears it's possible to write DDMW many times over before FifoEmpty changes to not empty. The only way I've been able to get a nice linear counter is to insert open-loop wait states, such as a for loop to burn cycles. What else can I look at to make sure the data written to DDMW ends up properly on the DMM side (the second processor)? Is this a DMM issue or an RTP issue?
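
Roughly, my write loop looks like the sketch below. This is simplified; rtpREG and the header name come from my HALCoGen-generated project, and the RTPGSR bit positions are the ones I listed above, so adjust to your own register definitions.

    #include "rtp.h"                        /* HALCoGen RTP header in my project (name may differ in yours) */

    #define RTP_GSR_OVF1    (1U << 0)       /* RTPGSR bit 0: FIFO1 overflow flag (per my reading of the TRM) */
    #define RTP_GSR_EMPTY1  (1U << 8)       /* RTPGSR bit 8: FIFO1 empty flag                                */

    void fill_counter_block(void)
    {
        uint32 counter = 0U;
        uint32 i;

        for (i = 0U; i < 256U; i++)         /* 256 x 4 bytes = 1K of slave memory */
        {
            /* Only write when no overflow is flagged and FIFO1 reports empty. */
            while (((rtpREG->GSR & RTP_GSR_OVF1) != 0U) ||
                   ((rtpREG->GSR & RTP_GSR_EMPTY1) == 0U))
            {
                /* wait */
            }

            rtpREG->DDMW = counter;         /* Direct Data Mode write */
            counter++;
        }
    }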

Best,

Josh Karch

  • A bit more information: I found that bit 12 of the RTP GSR indicates when the serializer is transferring data. Is that what I should be looking at before writing rtpREG->DDMW? Also note that even with the "COS" bit of DMMGLBCTRL cleared, the DMM still fills memory while I'm in debug mode viewing the memory.

    It looks like some of my issues may be related to data overflow that happens while reading in debug mode. Since the data is being read in the debugger, it's also being overwritten by the master processor. Byte-aligning the received data made the counts come out sequentially. So I guess my question morphs into:

    (1) If I want to efficiently transfer data over the DDMW register to the slave processor, should I be using bit 12 of the GSR (SerializerEmpty), should I be looking at FifoEmpty (bit 8), or should I be looking at overflow bit 1 of the GSR and stop sending data once the FIFO is full?

    (2) Does the "COS" bit even work? When I pause CCS after clearing "COS", data still appears to be modified on the DMM port.

  • It appears that debug mode caused aliasing, which made the numbers written to memory jump around. Is there a way to freeze the DMM when I set a breakpoint or pause the code?
  • The datasheet states the maximum data rate available is 100 Mbit/s, whereas the TRM states 100 Mbit/s per pin. Experimentally, if I write DDMW and insert 5 wait cycles (for example, incrementing a memory value 5 times), I can get 185 Mbit/s before the DMM nENABLE pin goes high to stop the transfer. Basically, I have a 75 MHz clock (HCLK/2) driving the RTP. After two full clock pulses with 32 bits of data sent (16-bit RTP-DMM link), if I don't wait about 150 ns before writing DDMW again, I get an overflow condition that causes nENABLE to go high. What is the maximum continuous data rate possible between the RTP and the DMM with 16 pins and maximum clock speed, given that the TMS570LC4357 runs at 300 MHz with HCLK set to 150 MHz? Can anything be done to speed up the deserializer on the DMM end? Which manual is correct and which is incorrect? The TMS570 datasheet says "100MBPS Terminal Rate", the TRM says the RTP supports "up to 100MBPS/Pin, up to 100MB/s", and the DMM chapter says up to 100 Mbit/s per pin. What's right, what's wrong, what's the fastest I can get data from the RTP to the DMM, and how can I make that happen?
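
    For what it's worth, here is the rough arithmetic behind my 185 Mbit/s figure. All inputs are my own settings and measurements from above, so treat the result as approximate:

        #include <stdio.h>

        int main(void)
        {
            /* Approximate throughput seen before DMM nENABLE halts the transfer. */
            const double rtpclk_hz = 75.0e6;                   /* RTPCLK = HCLK/2, HCLK = 150 MHz                        */
            const double wire_ns   = 2.0 / rtpclk_hz * 1.0e9;  /* 2 RTPCLK cycles to move 32 bits over 16 pins: ~26.7 ns */
            const double gap_ns    = 150.0;                    /* delay needed before the next DDMW write                */
            const double rate_mbps = 32.0 / (wire_ns + gap_ns) * 1.0e3;   /* bits per ns -> Mbit/s */

            printf("effective rate: ~%.0f Mbit/s\n", rate_mbps);          /* ~181 Mbit/s, close to the ~185 Mbit/s I measure */
            return 0;
        }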

  • Another note: the TRM, on the DMM side (section 36.2.2), says it takes 4 VCLK cycles (at VCLK = 75 MHz, about 53 ns) to receive and process one complete packet. At 32 bits on a 16-bit-wide bus, doesn't this correspond to a maximum continuous data rate of approximately 600 Mbit/s? What I see instead is DMMnENA going high and staying high for a lot longer than 53 ns after three packets are sent from the master RTP device. In some cases, the delay is in the microsecond range. Incrementing a dummy value four times between writes to DDMW vastly shortened the time that DMMnENA stays high, to about 350 ns. In this case, four complete Direct Data Mode packets get sent to the DMM and then DMMnENA goes high for 350 ns. Is this expected behavior? By adding six dummy increments between each DDMW write, I get transfers without DMMnENA going high, and it takes 182 ns, which is approximately 150 ns more than documented.

  • Hello,

    I am looking into your observations and comments and will get back to you soon. For any differences in the timings specified in the datasheet versus the TRM, please use the datasheet numbers.

    Regards,
    Sunil
  • Sunil, does that include behavioral descriptions too? For example, the DMM and RTP chapters in the TRM talk about 100 Mbit/s-per-pin capability, but the datasheet just says 100 Mbit/s. That's an order of magnitude difference in performance... Plus, I'm getting faster-than-datasheet performance at 185 Mbit/s, but not 100 Mbyte/s. So the question is: what's the consistent answer, and how do I make optimal use of this device as a result? Best, Josh Karch
  • Another thing to note: if I enable Trace on reads in DDM, the data rate becomes 120 Mbit/s.
  • Sunil, any updates from your team on this issue? DMM/RTP could be a powerful interface set if it were correctly and consistently documented. While I'm able to make communication work, it's unclear what the actual maximum data rates are, etc.
  • Hi Josh,

    Unfortunately I am unable to locate the bandwidth/throughput data on these interfaces, and may have to rely on others to collect the data again. The most critical timing aspect of this RTP-DMM combination is the overflow error that causes the DMM to drive the nENA signal high. This timing becomes even more critical on the LC4357 compared to the other TMS570 MCUs, because the DMM on the LC4357 typically writes to the L2RAM, which is "further away", in terms of the number of cycles needed for a write, than the TCRAM on the other devices.

    Your data shows that having 6 dummy increments (x ns delay) between successive writes allows you to sustain continuous transfers from RTP to DMM. You can use this data to calculate the actual bandwidth of the interface, given the number of bits transferred each cycle (16) and the max RTPCLK frequency that you can use.
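
    For example, taking the ~182 ns per 32-bit packet that you measured with six dummy increments as the full packet period (a rough estimate, since the exact delay per increment depends on your compiler settings), that works out to roughly 32 bits / 182 ns ≈ 176 Mbit/s, or about 22 Mbyte/s sustained.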

    Regards,
    Sunil
  • Sunil, that's basically what I did: I tried to figure out the bandwidth by varying the number of wait cycles needed. But I wanted to note that my experimental results did not line up with the datasheet, which in turn is inconsistent with the TRM. It would be helpful to have the folks who designed the DMM/RTP interfaces verify the correct information and update the datasheet/TRM with the right data. Thank you, Josh
  • In addition, Sunil, I'm interested in finding out whether the mechanism responsible for generating RTP packets in Direct Data Mode via the DDMW register is reliable enough for use in safety-critical systems, i.e., thoroughly tested and verified as an interface the way other interfaces such as SPI or SCI are. The same question applies to the DMM.
  • Josh,

    The DMM and RTP interfaces are certainly verified for functionality. However, they are defined to be used strictly for calibration purposes and are not included in the FIT estimation, as stated in the LC4357 safety manual.

    Regards,
    Sunil