Folks,
Does ICE v1 support Fast Hot Connect for EtherCAT?
I see.
I am asking these questions because I have a big problem with my EtherCAT slaves.
I have a system where my master is connected to four slaves:
1) Port 0 of my first slave is connected to my master
2) Port 1 of my last slave is connected to my master
3) Redundancy is enabled in TwinCAT 2.11
4) My last three slaves are in a Hot Connect Group, while my first slave is not
This is the issue:
When I disconnect port 0 of my first slave or port 1 of my last slave, I may lose between 10 and 200 frames at once. This does not happen with the slaves/connections in the middle; it only happens with the two end slaves.
Could this be a configuration issue in TwinCAT or in my ecat_def.h file (macros such as DIAGNOSIS_ENABLED)? Could it be a hardware (ICE v1) issue? Or could it be an issue in my software/firmware?
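For context, the relevant switches in my ecat_def.h look roughly like this. DIAGNOSIS_ENABLED is the macro I mentioned above; the other names are from memory and may differ between SSC versions, so treat this as a sketch of my configuration rather than an exact copy:

/* Sketch of my ecat_def.h configuration (macro names other than
 * DIAGNOSIS_ENABLED are from memory and may vary per SSC version). */
#define DIAGNOSIS_ENABLED     1   /* diagnosis/emergency messages */
#define DC_SUPPORTED          1   /* distributed clocks */
#define AL_EVENT_ENABLED      1   /* ESC interrupt (synchronous) mode */
#define ESC_SM_WD_SUPPORTED   1   /* SyncManager watchdog */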
I've been reading Application Notes 1i2 from ETG for further information, as well as the hardware data sheet from ETG. My slaves are all using ICE v1 (ETG.5003).
Are you aware of this issue, or is it just happening in my ICE v1 slaves?
I am using AM335x SDK 01.01.00.10 from December 2015. I looked at the release notes for the newer version (01.01.01.01, from March 2016), but did not see anything regarding this problem.
I can also ask this question in another forum if you let me know which one is appropriate.
Pototo,
Do you have a clear description of what 'hot connect' means? I couldn't find a spec for it yet. Even Beckhoff seems to support 'hot connect' only on special devices (the ports are marked with dotted lines...).
Anyway, from what you describe, I assume this is related to link detection. We usually support slow and fast variants of link detection; the fast one currently requires a TLK1xx PHY and the appropriate software configuration. I think this should be enabled by default on ICE v1, but I am not sure, as we moved to ICE v2 internally a long time ago. ICE v1 may even carry an old silicon PG, and I am not sure how much system testing we still run on that old hardware. I definitely recommend changing to ICE v2 for evaluation purposes.
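To illustrate the difference: slow link detection typically means periodically polling the PHY's link status over MDIO, while fast link detection uses a dedicated PHY signal/interrupt so the stack can close the port within microseconds. Here is a minimal polling sketch; the register/bit usage follows the standard IEEE 802.3 BMSR layout, but the mdio_read() helper and the function name are hypothetical, not an actual SDK API:

#include <stdint.h>
#include <stdbool.h>

#define PHY_BMSR       0x01u        /* IEEE 802.3 Basic Mode Status Register */
#define BMSR_LINK_UP   (1u << 2)    /* link status bit (latched low) */

/* Hypothetical MDIO read helper; on ICE v1 this would go through the
 * PRU-ICSS MDIO peripheral. */
extern uint16_t mdio_read(uint8_t phy_addr, uint8_t reg);

/* Slow link detection: poll the BMSR once per polling period. Worst-case
 * reaction time is one polling period plus the PHY's own detection time,
 * which is why a cable pull can cost many EtherCAT frames before the
 * port is closed and traffic is looped back. */
bool phy_link_up(uint8_t phy_addr)
{
    /* The link bit latches low on a link loss; read twice to get the
     * current state rather than a stale latched value. */
    (void)mdio_read(phy_addr, PHY_BMSR);
    return (mdio_read(phy_addr, PHY_BMSR) & BMSR_LINK_UP) != 0;
}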
You also need to check the link detection time on your master side. If the issue only occurs on the ports connected directly to the master, then the delay in detecting the lost link (and therefore switching over to the other port) may simply be on the master. Obviously, the longer that takes, the more packets will be lost. You didn't mention any (packet) timing yet; have you tried the same test with other EtherCAT devices?
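As a quick sanity check (assuming one cyclic frame per cycle, which is an assumption on my part), the lost-frame count maps directly onto the detection/switchover delay:

#include <stdio.h>

/* Back-of-the-envelope check, assuming one cyclic frame per cycle:
 * detection_delay = lost_frames * cycle_time. With a 1 ms cycle,
 * losing 10..200 frames (the numbers from your post) suggests a
 * 10..200 ms detection+switchover delay somewhere in the path. */
int main(void)
{
    const double cycle_time_ms = 1.0;  /* assumed cycle time */
    const int lost_min = 10, lost_max = 200;

    printf("implied delay: %.0f..%.0f ms\n",
           lost_min * cycle_time_ms, lost_max * cycle_time_ms);
    return 0;
}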
Regards,