
TMS320C6678: c6678 multicast Rx not working

Part Number: TMS320C6678

Hi,

I'm unable to receive multicast packets using NDK 3_61_01_01, which came with Processor SDK 6.3.0.106.

The project network stack configuration is derived from the pdk_c667x_2_0_16\packages\ti\transport\ndk\nimu\example\helloworld project.

As a sanity check, I ran the telnet console program delivered with the NDK and executed its 'test multicast' command. When I send a multicast packet to 224.1.2.5 port 4040 from a Windows host, there is no response from the multicast test program.
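For reference, a minimal Winsock sender along the following lines reproduces the test from the Windows side (an illustrative sketch, not the exact tool I used; any program that emits a UDP datagram to the group behaves the same):

/* Illustrative Windows-host sender: one UDP datagram to 224.1.2.5:4040.
 * Link against ws2_32.lib. */
#include <winsock2.h>
#include <ws2tcpip.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    WSADATA wsa;
    SOCKET  s;
    struct sockaddr_in dst;
    const char *msg = "multicast test";
    int ttl = 4;   /* default multicast TTL is 1; raise it if routing is involved */

    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s == INVALID_SOCKET)
        return 1;

    setsockopt(s, IPPROTO_IP, IP_MULTICAST_TTL, (const char *)&ttl, sizeof(ttl));

    memset(&dst, 0, sizeof(dst));
    dst.sin_family      = AF_INET;
    dst.sin_port        = htons(4040);
    dst.sin_addr.s_addr = inet_addr("224.1.2.5");

    if (sendto(s, msg, (int)strlen(msg), 0,
               (struct sockaddr *)&dst, sizeof(dst)) == SOCKET_ERROR)
        printf("sendto failed: %d\n", WSAGetLastError());

    closesocket(s);
    WSACleanup();
    return 0;
}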

The console multicast test is implemented by the function MulticastTest() in the file ndk_3_61_01_01\packages\ti\ndk\tools\console\contest.c.

The c6678 project I'm running was ported from a c6654-target project. I tried running the telnet console multicast test on the c6654 EVM, and the test reported that the multicast packet was successfully received on both sockets.

To see whether the received packet was forwarded to the NIMU, I put a breakpoint in the function EmacRxPktISR() (in file pdk_c667x_2_0_16\packages\ti\transport\ndk\nimu\src\v1\nimu_eth.c):

...

/* Is it a standard ethernet type? */
if (protocol != ETHERTYPE_IP && protocol != ETHERTYPE_IPv6 && protocol != ETHERTYPE_VLAN
    && protocol != ETHERTYPE_PPPOECTL && protocol != ETHERTYPE_PPPOEDATA)
{
    /* This is a raw packet, enqueue in Raw Rx Queue */
    PBMQ_enq( &ptr_pvt_data->pdi.PBMQ_rawrx, (PBM_Handle) hPkt );
}
else
{   /* This is a normal IP packet. Enqueue in Rx Queue */
    PBMQ_enq( &ptr_pvt_data->pdi.PBMQ_rx, (PBM_Handle) hPkt );  // -------------- SET BREAKPOINT HERE
}

The breakpoint is never hit when the multicast packet is transmitted from the windows host.

I've read numerous TI forum posts reporting this same issue - some are very old - but nowhere have I seen a response from TI describing how to resolve the problem.

Thanks in advance for your help.

Jim

  • Hi Jim,

    I will look into it and post a response ASAP.

    Thanks

  • Hi - This response is not relevant to the Rx multicast issue reported in this thread... did you send the wrong reply?

  • Hi Jim,

    I apologize for the inconvenience. I would like to give some clarification.

    The "Multicast" project which was derived from the PDK source is a custom project. I intended to give information regarding the NIMU example client which uses the NDK library (Same as the "Multicast" project). This NIMU example project comes with the PDK.

    I am working on the multicast query you posted, and I will point you to example projects in the PDK that might help.

    Thanks

    Rajarajan U

  • Rajarajan,

    I'm not sure which 'multicast' project you're referring to. I did not run a multicast-specific project. I tried NIMU_emacClientExample_EVMC6678C66BiosExampleProject, located at ti\pdk_c667x_2_0_16\packages\ti\transport\ndk\nimu\example\helloWorld\src.

    I updated the stack configuration in the StackTest() task thread in helloWorld.c to enable the telnet service.

    #if 1 // added telnet
        /* Specify TELNET service for our Console example */
        bzero( &telnet, sizeof(telnet) );
        telnet.cisargs.IPAddr = INADDR_ANY;
        telnet.cisargs.pCbSrv = &ServiceReport;
        telnet.param.MaxCon   = 2;
        telnet.param.Callback = &ConsoleOpen;
        CfgAddEntry( hCfg, CFGTAG_SERVICE, CFGITEM_SERVICE_TELNET, 0,
                     sizeof(telnet), (uint8_t *)&telnet, 0 );
    #endif

    After the example is up and running, I connect to the C66x telnet console and run the multicast test as described in my original post. When I send a multicast UDP packet to the multicast IP:port specified in the telnet console, no multicast IP Rx packets are received in EmacRxPktISR().

    As an experiment, I modified the EmacStart() function in the file ti\pdk_c667x_2_0_16\packages\ti\transport\ndk\nimu\src\v1\nimu_eth.c to change the receive filter to ETH_PKTFLT_ALL (see below); this didn't change anything.

    Do I need to modify the EmacStart() function to enable multicast packet forwarding from the PA, and if so, what changes are required? (The sketch after the function listing below shows the kind of change I have in mind.)

    Thanks,

    Jim

    static int EmacStart (NETIF_DEVICE* ptr_net_device)
    {
        EMAC_DATA*    ptr_pvt_data;
        paMacAddr_t   broadcast_mac_addr = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
        paEthInfo_t   ethInfo   = { { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },  /* Src mac = don't care */
                                    { 0x10, 0x11, 0x12, 0x13, 0x14, 0x15 },  /* Default Dest mac */
                                    0,            /* vlan = don't care */
                                    0,            /* ignore ether type */
                                    0             /* MPLS tag = don't care */
                                  };
        paRouteInfo_t routeInfo = { pa_DEST_HOST, /* Route a match to the host */
                                    0,            /* Flow ID 0 */
                                    0,            /* Destination queue */
                                    -1,           /* Multi route disabled */
                                    0xaaaaaaaa,   /* SwInfo 0 */
                                    0,            /* SwInfo 1 is don't care */
                                    0,            /* customType = pa_CUSTOM_TYPE_NONE */
                                    0,            /* customIndex: not used */
                                    0,            /* pktType: for SRIO only */
                                    NULL          /* No commands */
                                  };

        /* Get the pointer to the private data */
        ptr_pvt_data = (EMAC_DATA *)ptr_net_device->pvt_data;

        /* Setup Tx */
        if (Setup_Tx () != 0)
        {
            NIMU_drv_log ("Tx setup failed \n");
            return -1;
        }

        /* Setup Rx */
        if (Setup_Rx (ptr_net_device) != 0)
        {
            NIMU_drv_log ("Rx setup failed \n");
            return -1;
        }

        memcpy (&ethInfo.dst[0], ptr_pvt_data->pdi.bMacAddr, sizeof(paMacAddr_t));

        /* Set up the MAC Address LUT */
        if (Add_MACAddress (&ethInfo, &routeInfo) != 0)
        {
            NIMU_drv_log ("Add_MACAddress failed \n");
            return -1;
        }

        memcpy (&ethInfo.dst[0], broadcast_mac_addr, sizeof(paMacAddr_t));

        /* Set up the MAC Address LUT for Broadcast */
        if (Add_MACAddress (&ethInfo, &routeInfo) != 0)
        {
            NIMU_drv_log ("Add_MACAddress failed \n");
            return -1;
        }

        /* Verify the Tx and Rx Initializations */
        if (Verify_Init () != 0)
        {
            NIMU_drv_log ("Warning: Queue handler Verification failed \n");
        }

        /* Copy the MAC Address into the network interface object here. */
        mmCopy(&ptr_net_device->mac_address[0], &ptr_pvt_data->pdi.bMacAddr[0], 6);

    #if 0 // original
        /* Set the 'initial' Receive Filter */
        ptr_pvt_data->pdi.Filter = ETH_PKTFLT_MULTICAST;
    #else
        ptr_pvt_data->pdi.Filter = ETH_PKTFLT_ALL;
    #endif
        ptr_pvt_data->pdi.TxFree = 1;

        return 0;
    }
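    To make the question concrete, here is the kind of change I have in mind: an untested sketch that maps the test group 224.1.2.5 to its multicast MAC 01:00:5E:01:02:05 and adds a third LUT entry the same way the broadcast entry is added above.

    /* Untested sketch: route the group's multicast MAC to the host via the
     * PA, alongside the unicast and broadcast entries. The MAC is the
     * 01:00:5E prefix plus the low 23 bits of the group address. */
    paMacAddr_t mcast_mac_addr = { 0x01, 0x00, 0x5E, 0x01, 0x02, 0x05 };

    memcpy (&ethInfo.dst[0], mcast_mac_addr, sizeof(paMacAddr_t));
    if (Add_MACAddress (&ethInfo, &routeInfo) != 0)
    {
        NIMU_drv_log ("Add_MACAddress (multicast) failed \n");
        return -1;
    }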

  • Hi James,

    You wrote: "I'm not sure what 'multicast' project you're referring to... I did not run a multicast-specific project. I've tried NIMU_emacClientExample_EVMC6678C66BiosExampleProject located at: ti\pdk_c667x_2_0_16\packages\ti\transport\ndk\nimu\example\helloWorld\src"

    I was referring to the project that you mentioned in your query as the "Multicast" project.

    I have found some documents regarding NIMU multicast:

    Network_Developers_Kit_FAQ.pdf

    Unfortunately, we cannot help debug the code, but we can point you to resources regarding this query:

    https://software-dl.ti.com/processor-sdk-rtos/esd/docs/latest/rtos/index_Foundational_Components.html#ndk

    Thanks

    Rajarajan U

  • Hi Rajarajan,

    I'm astonished that an important feature like multicast does not work on the c6678 when it works fine on the c6654. In the multiple forum posts I reviewed regarding this issue (going back 10 years), none indicate a resolution. We have a c6678 customer project whose entire application depends on multicast availability. Are there no engineers at TI who can point us in the right direction?

    If the issue is that the packet accelerator is filtering multicast packets, how can that be determined and/or disabled?

    Thanks,

    Jim

  • James,

    When you say it works fine on the C6654, would you please tell us the name and version of the software package? Is it the Processor SDK or the MCSDK?

    We ask because we do not find any examples for the c6654 in the latest Processor SDK 6.3.

    Regards

    Shankari G  

  • Shankari,

    The c6654 project uses NDK 3.40.01.01 and pdk c6654 2.0.13. Both components came with c6654 SDK RTOS 5.0.2.

    The network stack initialization for my c6654 project was derived from the PDK transport\ndk\nimu\helloworld example project. When I run the telnet console's 'test multicast' command on the c6654 and send a multicast packet from a Windows host to the DSP, the multicast receiver indicates that the packet was received.

    Previous forum posts mention that since the c6654 does not have a packet accelerator, the EMAC driver handles multicast reception. On the c6678, I suspect the PA is filtering out the received multicast packets. How can I disable this?

    Thanks,

    Jim

  • Hi Jim,

    E2E Multicast Post

    Kindly refer to this E2E post, in which the "EMAC_setMulticast" function is discussed. Please check whether it helps.

    Thanks,

    Rajarajan

  • Hi James Conway,

    To receive a multicast packet, code along the following lines must be used:
     
    char *MulticastAddr = "224.1.2.3";   /* Multicast group address */
    struct ip_mreq mc_group;             /* Multicast group membership request */
    SOCKET sRecv;                        /* Receiver socket */

    /* Create the receiver socket (it must also be bound to the UDP port
     * before multicast data can be read from it). */
    sRecv = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    /* Join the group on the local interface; NA.IPAddr is the interface IP
     * from the NDK configuration (CI_IPNET NA). */
    mc_group.imr_multiaddr.s_addr = inet_addr(MulticastAddr);
    mc_group.imr_interface.s_addr = NA.IPAddr;
    setsockopt (sRecv, IPPROTO_IP, IP_ADD_MEMBERSHIP, (void *)&mc_group, sizeof(mc_group));

    The explanation is given on page 68 of the NDK 1.94 Programmer's Guide (SPRU524E) and on the NDK wiki page (PDF).

    A multicast packet follows the IGMP protocol, which is a type of IP protocol with EtherType = ETHERTYPE_IP. Hence, it never reaches the breakpoint.
     
     
    /* Is it a standard ethernet type? */
    if (protocol != ETHERTYPE_IP && protocol != ETHERTYPE_IPv6 && protocol != ETHERTYPE_VLAN
        && protocol != ETHERTYPE_PPPOECTL && protocol != ETHERTYPE_PPPOEDATA)
    {
        /* This is a raw packet, enqueue in Raw Rx Queue */
        PBMQ_enq( &ptr_pvt_data->pdi.PBMQ_rawrx, (PBM_Handle) hPkt );  // -------------- BUT IT ACTUALLY GOES HERE
    }
    else
    {   /* This is a normal IP packet. Enqueue in Rx Queue */
        PBMQ_enq( &ptr_pvt_data->pdi.PBMQ_rx, (PBM_Handle) hPkt );     // -------------- BREAKPOINT WAS SET HERE
    }

     
    You are sending the multicast test packet at L3 (the network layer) and detecting it at L2 (the data link layer). The best way to check multicast packets from the EVM is to capture with Wireshark using the display filter "(eth.dst[0] & 1)", which matches any frame whose destination MAC has the multicast bit set.
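    For reference, the group-to-MAC mapping behind that filter is fixed by RFC 1112: the low 23 bits of the IPv4 group address are appended to the 01:00:5E prefix. A small illustrative helper (the function name is ours, not an NDK API):

    #include <stdint.h>

    /* Derive the Ethernet multicast MAC for an IPv4 group address given in
     * network byte order. For 224.1.2.5 this yields 01:00:5E:01:02:05. */
    static void ipv4_mcast_to_mac(uint32_t group_be, uint8_t mac[6])
    {
        const uint8_t *ip = (const uint8_t *)&group_be;
        mac[0] = 0x01; mac[1] = 0x00; mac[2] = 0x5E;
        mac[3] = ip[1] & 0x7F;   /* keep only the low 23 bits of the group */
        mac[4] = ip[2];
        mac[5] = ip[3];
    }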

    The following URLs helped me:

    https://en.wikipedia.org/wiki/Internet_Protocol_version_4#Options

    https://en.wikipedia.org/wiki/EtherType

    Thanks & Regards,

    Rajarajan U

  • Hi James,

    Please refer to the multicast test code in "contest.c", which uses the multicast send and receive functions:

    /*---------------------------------------------------------------------- */
    /* MulticastTest() */
    /* Test the Multicast socket API. */
    /*---------------------------------------------------------------------- */
    static void MulticastTest (void)
    {
        SOCKET          sudp1 = INVALID_SOCKET;
        SOCKET          sudp2 = INVALID_SOCKET;
        struct sockaddr_in sin1;
        char            buffer[1000];
        int             reuse = 1;
        struct ip_mreq  group;
        NDK_fd_set      msockets;
        int             iterations = 0;
        int             cnt;
        CI_IPNET        NA;
    
        ConPrintf ("=== Executing Multicast Test on Interface 1 ===\n");
    
        /* Create our UDP Multicast socket1 */
        sudp1 = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        if( sudp1 == INVALID_SOCKET )
        {
            ConPrintf ("Error: Unable to create socket\n");
            return;
        }
    
        /* Create our UDP Multicast socket2 */
        sudp2 = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        if( sudp2 == INVALID_SOCKET )
        {
            ConPrintf ("Error: Unable to create socket\n");
            return;
        }
    
        /* Set Port = 4040, leaving IP address = Any */
        memset( &sin1, 0, sizeof(struct sockaddr_in) );
        sin1.sin_family = AF_INET;
        sin1.sin_port   = NDK_htons(4040);
    
        /* Print the IP address information only if one is present. */
        if (CfgGetImmediate( 0, CFGTAG_IPNET, 1, 1, sizeof(NA), (unsigned char *)&NA) != sizeof(NA))
        {
            ConPrintf ("Error: Unable to get IP Address Information\n");
            fdClose (sudp1);
            fdClose (sudp2);
            return;
        }
    
        /* Set the Reuse Ports Socket Option for both the sockets.  */
        if (setsockopt(sudp1, SOL_SOCKET, SO_REUSEPORT, (char *)&reuse, sizeof(reuse)) < 0)
        {
            ConPrintf ("Error: Unable to set the reuse port socket option\n");
            fdClose (sudp1);
            fdClose (sudp2);
            return;
        }
        /* Reuse the ports; since multiple multicast clients will be executing. */
        if (setsockopt(sudp2, SOL_SOCKET, SO_REUSEPORT, (char *)&reuse, sizeof(reuse)) < 0)
        {
            ConPrintf ("Error: Unable to set the reuse port socket option\n");
            fdClose (sudp1);
            fdClose (sudp2);
            return;
        }
    
        /* Now bind both the sockets. */
        if (bind (sudp1, (struct sockaddr *) &sin1, sizeof(sin1)) < 0)
        {
            ConPrintf ("Error: Unable to bind the socket.\n");
            fdClose (sudp1);
            fdClose (sudp2);
            return;
        }
        if (bind (sudp2, (struct sockaddr *) &sin1, sizeof(sin1)) < 0)
        {
            ConPrintf ("Error: Unable to bind the socket.\n");
            fdClose (sudp1);
            fdClose (sudp2);
            return;
        }
    
        /* Now we join the groups for socket1
         *  Group: 224.1.2.4
         *  Group: 224.1.2.5 */
        group.imr_multiaddr.s_addr = inet_addr("224.1.2.4");
        group.imr_interface.s_addr = NA.IPAddr;
        if (setsockopt (sudp1, IPPROTO_IP, IP_ADD_MEMBERSHIP, (void *)&group, sizeof(group)) < 0)
        {
            ConPrintf ("Error: Unable to join multicast group\n");
            fdClose (sudp1);
            fdClose (sudp2);
            return;
        }
        group.imr_multiaddr.s_addr = inet_addr("224.1.2.5");
        group.imr_interface.s_addr = NA.IPAddr;
        if (setsockopt (sudp1, IPPROTO_IP, IP_ADD_MEMBERSHIP, (void *)&group, sizeof(group)) < 0)
        {
            ConPrintf ("Error: Unable to join multicast group\n");
            fdClose (sudp1);
            fdClose (sudp2);
            return;
        }
        ConPrintf ("-----------------------------------------\n");
        ConPrintf ("Socket Identifier %d has joined the following:-\n", sudp1);
        ConPrintf (" - Group 224.1.2.4\n");
        ConPrintf (" - Group 224.1.2.5\n");
        ConPrintf ("-----------------------------------------\n");
    
        /* Now we join the groups for socket2
         *  Group: 224.1.2.5
         *  Group: 224.1.2.6 */
        group.imr_multiaddr.s_addr = inet_addr("224.1.2.5");
        group.imr_interface.s_addr = NA.IPAddr;
        if (setsockopt (sudp2, IPPROTO_IP, IP_ADD_MEMBERSHIP, (void *)&group, sizeof(group)) < 0)
        {
            ConPrintf ("Error: Unable to join multicast group\n");
            fdClose (sudp1);
            fdClose (sudp2);
            return;
        }
        group.imr_multiaddr.s_addr = inet_addr("224.1.2.6");
        group.imr_interface.s_addr = NA.IPAddr;
        if (setsockopt (sudp2, IPPROTO_IP, IP_ADD_MEMBERSHIP, (void *)&group, sizeof(group)) < 0)
        {
            ConPrintf ("Error: Unable to join multicast group\n");
            fdClose (sudp1);
            fdClose (sudp2);
            return;
        }
        ConPrintf ("-----------------------------------------\n");
        ConPrintf ("Socket Identifier %d has joined the following:-\n", sudp2);
        ConPrintf (" - Group 224.1.2.5\n");
        ConPrintf (" - Group 224.1.2.6\n");
        ConPrintf ("-----------------------------------------\n");
    
        while (iterations < 4)
        {
            /* Initialize the FD Set. */
            NDK_FD_ZERO(&msockets);
            NDK_FD_SET(sudp1, &msockets);
            NDK_FD_SET(sudp2, &msockets);
    
            /* Wait for the multicast packets to arrive. */
            /* fdSelect's 1st arg is a don't care; pass 0 for 64-bit compatibility */
            cnt = fdSelect( 0, &msockets, 0, 0 , 0);
    
            if(NDK_FD_ISSET(sudp1, &msockets))
            {
                cnt = (int)recv (sudp1, (void *)&buffer, sizeof(buffer), 0);
                if( cnt >= 0 )
                    ConPrintf ("Socket Identifier %d received %d bytes of multicast data\n", sudp1, cnt);
                else
                    ConPrintf ("Error: Unable to receive data\n");
    
                /* Increment the iterations. */
                iterations++;
            }
            if(NDK_FD_ISSET(sudp2, &msockets))
            {
                cnt = (int)recv (sudp2, (void *)&buffer, sizeof(buffer), 0);
                if( cnt >= 0 )
                    ConPrintf ("Socket Identifier %d received %d bytes of multicast data\n", sudp2, cnt);
                else
                    ConPrintf ("Error: Unable to receive data\n");
    
                /* Increment the iterations. */
                iterations++;
            }
        }
    
        /* Once the packets have been received, leave the multicast group. */
        if (setsockopt (sudp2, IPPROTO_IP, IP_DROP_MEMBERSHIP, (void *)&group, sizeof(group)) < 0)
        {
            ConPrintf ("Error: Unable to leave multicast group\n");
            fdClose (sudp1);
            fdClose (sudp2);
            return;
        }
    
        /* Leave only one of the multicast groups through the proper API. */
        NtIPN2Str (group.imr_multiaddr.s_addr, &buffer[0]);
        ConPrintf ("Leaving group %s through IP_DROP_MEMBERSHIP\n", buffer);
    
        /* Once we get out of the loop, close the sockets; this should internally leave all the remaining groups. */
        fdClose (sudp1);
        fdClose (sudp2);
        ConPrintf("== End Multicast Test ==\n\n");
    }

    The file is located in "C:\ti\ndk_3_61_01_01\packages\ti\ndk\tools\console".

    Further NDK API reference for multicast is given in the NDK API Guide. Using "NDK_setsockopt()" you can set the "SO_REUSEPORT" option for UDP multicast send and receive. Please use "NDK_recvfrom()" (an NDK API function) to receive multicast data; reception is interrupt-driven.
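    As a minimal sketch of such a receive, assuming 'sudp' has already been created, bound to the group port, and joined via IP_ADD_MEMBERSHIP as in MulticastTest() above (illustrative only, not tested):

    struct sockaddr_in from;
    int    fromlen = sizeof(from);
    char   rxbuf[1024];
    int    cnt;

    /* Blocks until a datagram addressed to the joined group arrives. */
    cnt = (int)NDK_recvfrom (sudp, (void *)rxbuf, sizeof(rxbuf), 0,
                             (struct sockaddr *)&from, &fromlen);
    if (cnt >= 0)
        ConPrintf ("Received %d bytes of multicast data\n", cnt);
    else
        ConPrintf ("Error: Unable to receive data\n");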

    Also, there is a configuration option that controls whether the stack replies to ICMP requests sent to multicast addresses:

    1. Ip.xdc
          /*!
           *  Enable or disable replies to multicast.
           *  
           *  When enabled, the stack *does not* reply to ICMP echo request packets
           *  sent to multicast addresses.
           */ 
          config Bool icmpDontReplyToMcast = defaultIcmpMcastReply;
    2. Ip.xml
      <tr>
               <td colspan="2"><control type="checkbox" 
                   label="Disable multicast replies"
                   value="value:ti.ndk.config.Ip.icmpDontReplyToMcast" 
                   tooltip="value:ti.ndk.config.Ip.icmpDontReplyToMcast.$summary"/></td>
            </tr>

    The files are located in the "C:\ti\ndk_3_61_01_01\packages\ti\ndk\config" folder. There is no multicast example project available in the Processor SDK. Please try these examples and provide your feedback.

    Thanks,

    Rajarajan U

  • Hi James,
    I have analyzed it further, and the following are my observations:

    1. Though the NIMU_EMAC client has no multicast test, the "PA_UnitTest_evmc6678_C66BiosTestProject" example contains multiple tests, such as:

            { paTestL4Routing,       "Pa_addPort and L4 Routing",         },
            { paTestPatchRoute,      "Blind patch and route",             },
            { paTestTxFmtRt,         "Tx checksum and routing",           },
            { paTestCustom,          "Custom routing",                    },
            { paTestMultiRouting,    "Multi-routing",                     },
            { paTestIPv4FragReassem, "IPv4 Fragmentation and Reassembly", },
            { paTestIPv6FragReassem, "IPv6 Fragmentation and Reassembly", },
    2.  In "paTestMultiRouting" test seems to have the code sequence for multicast.
    3.  Able to experiment this example on C6678 EVM and all the tests including "Multi-routing" run successfully.
    4. I would recommend the customer to look into this "paTestMultiRouting" and check whether this would suffice their requirement.
     

    The PA unit test example project can be found after installing the Processor SDK (https://www.ti.com/tool/PROCESSOR-SDK-C667X) and generating the PDK examples (https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1082251/faq-tms320c6678-how-to-generate-the-ccs-pdk-examples-for-c6678).

    Thanks,
    Rajarajan U