
AM3357: EtherCAT mailbox issue

Part Number: AM3357


Hello,

We are using the PRU-ICSS-EtherCAT_Slave on the AM3357.
The design is based on the code from TI, but the object dictionary has been extended over the last four years.
The device has been working for years with several masters such as TwinCAT, Acontis, 3S, and others.
The slave stack has been updated to PRU-ICSS-EtherCAT_Slave_01.00.05.00 and the TI-ESC reports build 04EC.

With the new TI-ESC we noticed that some requests get lost during mailbox communication.
This happens for a few per mille of all mailbox requests and hits any object at random times.

First we debugged the SSC and set traces on every mailbox protocol failure in the slave stack.
However, we never trapped any error or abnormal handling, yet the master stack reports timeouts.
As the next step of the examination we used Wireshark and found protocol errors exactly at the timestamps at which the master stack reports timeout errors.

It turned out that hundreds of mailbox requests are handled correctly, with both request and response.
But those that are not responded to are sent by the master yet obviously not received by the PRU, because the request gets no working-counter increment.
With this knowledge it is clear why we never saw any mailbox protocol error inside the slave SSC: the request on the EtherCAT bus was never passed to the Sitara by the PRUs.
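The check we performed in Wireshark can be sketched as follows. This is a minimal illustration of the working-counter criterion, not our actual tooling; the trace tuples, object indices, and the helper name `find_lost_requests` are hypothetical:

```python
# Sketch of the working-counter check: each entry models one mailbox-write
# datagram from the master as (timestamp_s, object_index, wkc).
# A WKC of 0 means the slave hardware never acknowledged the write, so the
# request never reached the Sitara and the master eventually times out.

def find_lost_requests(datagrams):
    """Return the datagrams whose working counter was not incremented."""
    return [d for d in datagrams if d[2] == 0]

# Hypothetical example trace: most requests succeed (wkc == 1), one is lost.
trace = [
    (0.001, 0x6040, 1),
    (0.002, 0x6041, 1),
    (0.003, 0x6040, 0),   # lost: no WKC increment, no mailbox response follows
    (0.004, 0x6041, 1),
]

lost = find_lost_requests(trace)
```

With real capture data the same filter shows that the lost requests are byte-identical to hundreds of acknowledged ones, which is why we suspect the PRU rather than the master or the SSC.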

Attached you find a Wireshark snapshot that shows
* mailbox requests that are acknowledged by the PRU (green marking) and responded to
* one identical mailbox request that has not been acknowledged by the PRU (red marking)
We use only a single AM3357 slave connected directly to the master.


Given this abnormal working-counter behaviour, we came to the conclusion that this is an internal PRU problem.
Is this correct?
Under which circumstances does the PRU fail to receive a request that has been received hundreds of times before and after this loss?
How can we fix this problem?

Thank you.