Part Number: TMS320C6678
Hi,
We have an embedded platform using the TMS320C6678 and several TI packages (SYS/BIOS bios_6_76_02_02 + XDC, NDK_3_61_01_01, IPC_3_50_04_07, PDK_c667x_2_0_10, DSPLIB_c66x_3_4_0_4, ...).
Our project is quite large, with a lot going on in several other threads (e.g. signal analysis and PCIe transfers, which involve many memcpy calls).
All of these computations run on Core 0; the other seven cores are essentially idle.
We need to transfer computed data to our host. We do this via TCP using the NDK functions: we register a daemon handler with NDK's DaemonNew(...), receive data with NDK_recv(...), and send data with NDK_send(...).
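For context, the receive side is set up roughly like this (a simplified sketch; the port number, buffer size, and task parameters are illustrative, not our exact values):

#include <ti/ndk/inc/netmain.h>

/* Connection handler spawned by the daemon for each accepted socket. */
int dtask_tcp_rx(SOCKET s, uint32_t unused)
{
    char buf[1024];
    int  n;

    (void)unused;

    /* Read until the peer closes the connection or an error occurs. */
    while ((n = NDK_recv(s, buf, sizeof(buf), 0)) > 0) {
        /* ... hand the received bytes to the application ... */
    }

    fdClose(s);
    return 0;  /* socket was closed by this handler */
}

/* Called once after the stack comes up (e.g. from our network-open hook). */
void StartRxDaemon(void)
{
    /* SOCK_STREAM daemon on TCP port 5000 (host byte order per the NDK
       docs), normal priority/stack, at most 3 concurrent handlers. */
    DaemonNew(SOCK_STREAM, 0, 5000, dtask_tcp_rx,
              OS_TASKPRINORM, OS_TASKSTKNORM, 0, 3);
}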
For the NDK stack we use the following options (see the configuration sketch after this list):
- NC_SystemOpen(NC_PRIORITY_HIGH, NC_OPMODE_INTERRUPT)
- TCP transmit buffer size: 64000
- TCP receive buffer size: 64000
- TCP receive limit: 64000
- UDP receive limit: 8192
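For reference, these buffer sizes are set via the NDK configuration API, roughly like this (hCfg is the configuration handle from our stack-init code; error handling omitted):

uint32_t v;

v = 64000;  /* TCP transmit buffer size */
CfgAddEntry(hCfg, CFGTAG_IP, CFGITEM_IP_SOCKTCPTXBUF,
            CFG_ADDMODE_UNIQUE, sizeof(v), (unsigned char *)&v, 0);

v = 64000;  /* TCP receive buffer size */
CfgAddEntry(hCfg, CFGTAG_IP, CFGITEM_IP_SOCKTCPRXBUF,
            CFG_ADDMODE_UNIQUE, sizeof(v), (unsigned char *)&v, 0);

v = 64000;  /* TCP receive limit */
CfgAddEntry(hCfg, CFGTAG_IP, CFGITEM_IP_SOCKTCPRXLIMIT,
            CFG_ADDMODE_UNIQUE, sizeof(v), (unsigned char *)&v, 0);

v = 8192;   /* UDP receive limit */
CfgAddEntry(hCfg, CFGTAG_IP, CFGITEM_IP_SOCKUDPRXLIMIT,
            CFG_ADDMODE_UNIQUE, sizeof(v), (unsigned char *)&v, 0);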
Without the TCP communication, our Core 0 utilization is around 20%. Enabling the Ethernet communication (sending around 50 Mbit/s) drives Core 0 to 100% utilization. When we randomly pause the program with the JTAG debugger, the core is most of the time in TcpPrSend(), which calls SBWrite(), which calls mmCopy(). This "kills" Core 0, and we are not able to send enough data to our host, even though we are nowhere near the limit of the Ethernet port (theoretically capable of 1 Gbit/s).
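For completeness, our send path boils down to a loop like the following (names and chunking are illustrative, not our exact code); each NDK_send() call copies the payload into the socket transmit buffer, which matches the TcpPrSend() -> SBWrite() -> mmCopy() chain we observe:

/* Send one block, retrying on partial writes. */
int SendBlock(SOCKET s, unsigned char *data, int len)
{
    int sent = 0;

    while (sent < len) {
        int n = NDK_send(s, data + sent, len - sent, 0);
        if (n <= 0)
            return -1;  /* error or connection closed */
        sent += n;
    }
    return sent;
}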
Do you have any idea why sending TCP packets via the NDK causes such high CPU usage?
Best
Paul