
AM263P4: CPSW latency improvements

Part Number: AM263P4
Other Parts Discussed in Thread: DP83869, SYSCONFIG

Hi, we are evaluating AM263Px for an application that requires low latency communication with a remote AM64x. Due to the distances involved and bandwidth required, we are limited to using Ethernet for communication (versus FSI or SPI or CAN). On the AM64x side we use custom firmware with the PRU-ICSSG to transmit and receive at gigabit speeds. This works great between AM64x boards and provides very low latency.

Unfortunately, the PWM/ADC peripherals on the AM64x are limited, so we are considering using AM263Px to communicate with the remote AM64x (instead of a 2nd AM64x). Since the AM263Px doesn't include a gigabit-capable PRU we are evaluating the CPSW. See below:

AM64x PRU-ICSSG -> Gigabit Ethernet link -> AM263Px CPSW

Our bandwidth needs are very modest; we need to send one 64-byte packet every 10 microseconds, or approx. 51 Mbps. We also need the total latency between TX and RX to be <1.5 microseconds, which requires gigabit speeds. The MCU+ SDK v9.01 has a benchmark showing the CPSW can handle <15 Mbps (with 64-byte UDP datagrams) before it begins dropping packets. Obviously there's a tradeoff between packet size and bandwidth, but even so this is remarkably poor performance. We considered using the PRU Ethernet, but because it's limited to 100 Mbps there is just too much latency (~5 microseconds just to transmit 64 bytes, not including PHY or processing latencies).

Since AM263Px is a new product, is it possible the low performance is an SDK latency limitation that will be improved in the future? Or is this a hardware bottleneck due to the DMA engine?

Said differently, does ENET-LLD have any APIs that bypass the DMA engine and allow us to interact with the raw packet stream directly to improve latency? Our product doesn't require general networking capabilities and only needs to receive a fixed size packet from a remote AM64x (direct connection, no switch or other devices in between).

  • Hi Steven,

    We have taken this topic up internally and are discussing it. I will get back to you with an update early next week.

    Regards,

    Shaunak

  • Hi Steven,

    It would be helpful if you could share the performance (bandwidth, latency) you were able to achieve when you tested the link between two AM64x devices.

    Your performance requirement is ~51.2 Mbps at Layer 2, correct?

    Regards,

    Shaunak

  • Hi Steven,

    I do have some information about the performance, and also some questions that would help me answer you better:

    we are limited to using Ethernet for communication (versus FSI or SPI or CAN)

    Since you are looking for a replacement for SPI/FSI/CAN, can you please confirm whether your Ethernet frames use the standard IEEE 802.3 format or a modified format?

    Since AM263Px is a new product, is it possible the low performance is an SDK latency limitation that will be improved in the future?

    The out-of-box examples in the MCU_PLUS_SDK are not optimized for the best performance, but rather for a lower memory footprint and good performance at larger packet sizes. The out-of-box application is written to accommodate large packet buffer use cases. If you check the same SDK datasheet for larger datagram lengths, you will see lower or no packet loss.

    The application can be configured to have more buffers of smaller sizes to help increase the throughput.

    Our product doesn't require general networking capabilities and only needs to receive a fixed size packet from a remote AM64x
    • I would like to know if you only want to process an L2 packet without IP headers. Would that be enough here? Basically, are you directly connecting the AM64x and AM263Px and forwarding packets based on the MAC address?
    • I would also like to discuss the bandwidth: 1 packet sent every 10 microseconds. Is batch processing of network packets acceptable for you? For example, if 4 packets are processed every 40 microseconds, the required bandwidth would still be met, but the processing can be more efficient.
    does ENET-LLD have any APIs that bypass the DMA engine and allow us to interact with the raw packet stream directly to improve latency?

    I'm afraid this might not lead to improved latency or bandwidth.

    Regards,

    Shaunak

  • Hi Shaunak,

    We've been running AM64x to AM64x with larger frame sizes (268 bytes L2) and correspondingly longer latencies (~4 microseconds from TX to RX into PRU SRAM). We are working to set up a test to decrease the packet size to 64 byte payload (76 bytes L2) to see what the lower bound on latency is.

    We are using a modified frame format to pack more data (no MAC addresses, no ETH type), although we could modify our PRU code to comply with IEEE802.3 if required.

    Yes, L2 packet without IP headers is what we are looking for. Unfortunately batch processing does not work for this application (which I suspect means more, smaller buffers wouldn't help either) since we are performing closed-loop control based on the packet information.

    We are taking a second look at ICSS on AM263Px with a 100Mbps link. Earlier I said 100Mbps is too slow, but that's only true if we have to send 64 bytes. Technically we only need an 8 byte payload (20 bytes L2). But I'm unsure if PRU-ICSS or PHY (DP83869 in our case) requires a minimum frame size. This info doesn't appear to be in the TRM or datasheet for AM263Px. Does PRU-ICSS on AM263Px require a minimum frame size for RMII RX?

  • Another question is whether or not we can select different mux modes for PRU0/PRU1 on AM263Px.

    For example, can we set PR0_PRU0_GP_MUX_SEL= 2 (MII mode) and PR0_PRU1_GP_MUX_SEL = 0 (GPIO mode)?

    On AM64x you cannot do this (both must be MUX=RMII to use RMII). But we need to use the 2nd PRU core to interface with external A/Ds, so we must be able to select different mux modes.

  • Hi Steven,

    I'm a bit confused about the end use case. If I'm correct, we are using PRU-ICSS on AM64x.

    On AM263Px, are you using PRU-ICSS or CPSW? To meet the performance requirement with CPSW at L2, we will have to follow the standard IEEE 802.3 Ethernet frame format, and I will be able to help you achieve the required bandwidth. Unfortunately I'm not an expert on PRU-ICSS and will have to take this thread to the PRU-ICSS experts before commenting on it. Can you please confirm what the use case on AM263Px is, CPSW or PRU-ICSS?

    Thanks and regards,
    Shaunak

  • Hi Shaunak,

    Apologies for the confusion. We haven't decided whether to use PRU-ICSS or CPSW yet. We are trying to determine which one has the lowest latency. In this case, the latency we care about is the time from packet being received to packet payload being transferred to R5F TCM.

    AM64x => PHY TX => Ethernet => PHY RX => AM263Px (1000Mbps CPSW or 100Mbps PRU-ICSS) => R5F TCM

    We can use IEEE 802.3 standard frames with CPSW, but my concern is that the latency is too high even with small packets. As previously mentioned, the UDP benchmark in MCU+ SDK v9.01 claims the max bandwidth without loss for a 64-byte payload is ~10 Mbps. We are still working to set up a test to measure the latency ourselves on the AM263Px ControlCard.

    Do you have any recommendations for reducing latency with CPSW?

  • Hi Shaunak,

    We were able to run benchmarks sending a 64 byte UDP payload between processors.

    From AM64x (ICSSG) to AM64x (ICSSG) @ 1Gbps we measure ~2.0 microseconds of total latency.

    From AM64x (ICSSG) to AM263Px (CPSW) @ 1Gbps we measure ~16.2 microseconds of total latency.

    (These numbers include the time to transfer 64 bytes to/from R5F TCM on both sides).

    In addition to higher latency with AM263Px CPSW, we had to reduce the packet send rate from every 10us to every 100us. Even at this slower rate we still see some packet loss with 64 byte payloads. Can you provide any guidance for improving the performance with CPSW on AM263Px?

    To measure latency, we are using GPIO pins on both boards to indicate when data is sent (or received). On AM263Px specifically we toggle a GPIO pin in the rxIsrFxn callback after any packet is received and transferred to R5F TCM (via EnetDma_retrieveRxPktQ).

  • Hi Steven,

    Thanks for sharing the numbers you benchmarked. A few questions and points:

    1. I think your use case does not require the LwIP stack and UDP (please correct me if I'm wrong). The LwIP stack adds a lot of overhead, and since the out-of-box application is not optimized for smaller datagram lengths, we see a lot of packet loss. We may have to try increasing the Rx buffer pools and memory pools for smaller datagram lengths. I'm still running some experiments on my end to try to reduce the packet loss and will share my findings once I have them.

    2. Since we are only concerned about packet Rx on AM263Px and processing it, and don't need other general networking capabilities, can we just test Layer-2 performance and then analyze that? I will try to get some information on Layer-2 performance from my end by next week. Since you already have a good test setup to measure latency, can we set up a test which measures Layer-2 performance instead of measuring with the LwIP stack and UDP overheads? Those overheads might send us in the wrong direction here.

    Thanks for your patience and hope we make more progress soon!

    Regards,
    Shaunak

  • Hi Shaunak,

    Thanks for your help so far.

    1. I was actually incorrect in my last post. The numbers I shared were for 64 byte L2 frames, not UDP. I was originally testing with UDP when I wrote the post, but then switched to L2 and forgot to edit the text before submitting.

    2. Since the last post I realized my benchmark was compiled in debug mode. Switching to release reduced the latency to 7-8 microseconds. That is promising, but I need the worst case latency to be <4 microseconds for this application.

    Also, I've been unable to make the CPSW benchmark stable. For some packets the AM263Px receives and copies to R5F TCM within 7-8 microseconds, but often it will drop the packet entirely. This is true even if I back off the send rate to one 64 byte L2 packet every 100 microseconds.

    As other people on the forums have suggested, it would be helpful for TI to provide bare metal examples for all peripherals, including CPSW. I am working to strip down the "enet_l2_cpsw" example to make it less complicated. Hopefully that will help me better understand the sources of latency here.

  • Hi Steven,

    1. Can you please share the CPSW stats? The ENET L2 CPSW example has a function to print them. Regarding the bare metal example, I will initiate the discussion internally; it would definitely be helpful.

    2. About the instability of the AM263Px-to-TCM path: how often do the packet drops occur? Is there any specific pattern, like 1 out of every 10 packets?

    Regards,

    Shaunak

  • Hi Shaunak,

    CPSW statistics below after running for a short time:

    Cortex_R5_0: GEL Output: --------------------------------
    Cortex_R5_0: GEL Output:           PORT0 STATS          
    Cortex_R5_0: GEL Output: --------------------------------
    Cortex_R5_0: GEL Output: STAT_0_RXGOODFRAMES              = 0x0001C0BE
    Cortex_R5_0: GEL Output: STAT_0_RXOCTETS                  = 0x00773278
    Cortex_R5_0: GEL Output: STAT_0_TXGOODFRAMES              = 0x0001CB0F
    Cortex_R5_0: GEL Output: STAT_0_TXMULTICASTFRAMES         = 0x0001CB38
    Cortex_R5_0: GEL Output: STAT_0_TXOCTETS                  = 0x007A0030
    Cortex_R5_0: GEL Output: STAT_0_OCTETFRAMES65T127         = 0x00038C33
    Cortex_R5_0: GEL Output: STAT_0_NETOCTETS                 = 0x00F1581C
    Cortex_R5_0: GEL Output: --------------------------------
    Cortex_R5_0: GEL Output:           PORT1 STATS          
    Cortex_R5_0: GEL Output: --------------------------------
    Cortex_R5_0: GEL Output: --------------------------------
    Cortex_R5_0: GEL Output:           PORT2 STATS          
    Cortex_R5_0: GEL Output: --------------------------------
    Cortex_R5_0: GEL Output: STAT_2_RXGOODFRAMES              = 0x0007A349
    Cortex_R5_0: GEL Output: STAT_2_RXMULTICASTFRAMES         = 0x0003D1CA
    Cortex_R5_0: GEL Output: STAT_2_RXFRAGMENTS               = 0x0004CCEB
    Cortex_R5_0: GEL Output: STAT_2_ALE_DROP                  = 0x0005D274
    Cortex_R5_0: GEL Output: STAT_2_RXOCTETS                  = 0x0207E368
    Cortex_R5_0: GEL Output: STAT_2_TXGOODFRAMES              = 0x0001C0BE
    Cortex_R5_0: GEL Output: STAT_2_TXOCTETS                  = 0x00773278
    Cortex_R5_0: GEL Output: STAT_2_OCTETFRAMES65T127         = 0x00096876
    Cortex_R5_0: GEL Output: STAT_2_NETOCTETS                 = 0x02917E6E
    Cortex_R5_0: GEL Output: STAT_2_PORTMASK_DROP             = 0x0005D472
    Cortex_R5_0: GEL Output: STAT_2_ALE_UNKN_UNI              = 0x0003D512
    Cortex_R5_0: GEL Output: STAT_2_ALE_UNKN_UNI_BCNT         = 0x01049E5C
    Cortex_R5_0: GEL Output: STAT_2_ALE_UNKN_MLT              = 0x0003D534
    Cortex_R5_0: GEL Output: STAT_2_ALE_UNKN_MLT_BCNT         = 0x0104A764

    I was able to reduce the receive latency by another 0.5 microseconds by modifying rxIsrFxn to call EnetDma_retrieveRxPkt() instead of EnetDma_retrieveRxPktQ() followed by EnetQueue_deq(). With this config I see that roughly every other packet is dropped. Before making this change the majority of packets were dropped.

    void EnetApp_rxIsrFxn(void *appData) {
        EnetApp_PerCtxt *perCtxt = (EnetApp_PerCtxt *)appData;
    //    EnetDma_PktQ rxReadyQ;
        EnetDma_PktQ rxFreeQ;
        EnetDma_Pkt *rxPktInfo;
        uint8_t *rxFrame;
        int32_t i;
        uint32_t data[16];
    
        /* All peripherals have single hardware RX channel,
         * so we only need to retrieve packets from a single flow.*/
    //    EnetQueue_initQ(&rxReadyQ);
        EnetQueue_initQ(&rxFreeQ);
    
        /* Get the packets received so far */
    //    EnetDma_retrieveRxPktQ(perCtxt->hRxCh, &rxReadyQ);
        EnetDma_retrieveRxPkt(perCtxt->hRxCh, &rxPktInfo);
    
        /* Consume the received packets */
    //    rxPktInfo = (EnetDma_Pkt *)EnetQueue_deq(&rxReadyQ);
        while (rxPktInfo != NULL) {
            rxFrame = rxPktInfo->sgList.list[0].bufPtr;
            if( rxFrame != NULL ) {
                rxRecvd++;
                for( i=0; i<16; i++ ) data[i] = ((volatile uint32_t *)rxFrame)[i];
                GPIO_pinWriteHigh(gpioBaseAddr, pinNum);
                GPIO_pinWriteLow(gpioBaseAddr, pinNum);
            }
    
            /* Release the received packet */
            EnetQueue_enq(&rxFreeQ, &rxPktInfo->node);
    //        rxPktInfo = (EnetDma_Pkt *)EnetQueue_deq(&rxReadyQ);
            EnetDma_retrieveRxPkt(perCtxt->hRxCh, &rxPktInfo);
        }
    
        /* Submit now processed buffers */
        EnetDma_submitRxPktQ(perCtxt->hRxCh, &rxFreeQ);
        SemaphoreP_post(&perCtxt->rxSemObj);
    }

    I have given up on porting the ENET-LLD to bare metal for now. Unfortunately, the driver makes heavy use of threaded tasks via TaskP and would require a major rewrite to work without FreeRTOS. It would still be great if TI could provide a simple bare metal example showing how to send/receive with CPSW.

  • Hi Steven,

    We really appreciate the effort and patience you have put into the evaluation so far. I took up the requirement with the internal networking team for a bare metal packet-by-packet processing example to be part of the SDK, and we are considering it for the MCU_PLUS_SDK 09.02 release (April 2024 timeframe). Will that be okay for you?

    I went through the stats you posted. I could not see packet drops on the host port, and I'm still trying to understand where the packets are being dropped. Interrupt pacing might have helped here, but you already made it clear that batch processing of packets is not an option.

    I would also really appreciate it if you could share the stripped-down application code you have right now, so I can evaluate it on our end before the official example is released as part of the SDK.

    Regards,

    Shaunak

  • Hi Shaunak,

    April 2024 should work. We aren't blocked yet since we're planning to use a Gigabit capable PHY and can decide whether to use CPSW or ICSSM later (luckily the pinmux on AM263Px makes this flexible). Either way a bare metal example for CPSW would be much appreciated.

    I was able to figure out why packets were dropping and now have a stable benchmark showing ~6-7us of latency (10us including outliers). The problem was that the "enet_l2_cpsw" example hardcodes the ALE config, which overrides any ALE settings configured in SysConfig. After figuring this out I was able to make a few changes (mostly enabling ALE bypass) which prevented packet drops.
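
    For anyone else who finds this thread, the change was along these lines (a sketch from memory; the exact flag and structure field names should be checked against the ENET-LLD headers in your SDK version, and the SysConfig-generated init code determines where the Cpsw_Cfg is populated):

```c
/* Hypothetical sketch: in the application's CPSW init hook (wherever the
 * SysConfig-generated code lets you adjust Cpsw_Cfg before Enet_open),
 * enable ALE bypass so received frames are forwarded straight to the host
 * port without address lookup. The flag name below is my reading of the
 * ENET-LLD ALE header and may differ between SDK versions. */
cpswCfg.aleCfg.modeFlags |= CPSW_ALE_CFG_BYPASS_EN;
```

    The key point is that the example's hardcoded ALE setup silently overrides whatever you configure in SysConfig, so the change has to be made in the application code itself.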

    I've only changed 3 files from the default enet_l2_cpsw example in the AM263Px MCU+ SDK (see below). Note I'm using the ControlCard with the HSEC dock, with pin 49 as the GPIO that toggles each time a packet is received. To transmit packets to the AM263Px I'm using an AM64x configured to send one 64-byte L2 frame every ~18 microseconds (the real application needs to TX every 10 microseconds).

    Scope capture below:

    Rising edge on top trace is when AM64x begins copying 64 bytes from R5F TCM to PRU and starts transmitting. Falling edge on bottom trace is when packet has been received and copied to R5F TCM on AM263Px.

    Files changed from "enet_l2_cpsw" example below:

    l2_cpsw_dataflow.c:

    /*
     *  Copyright (c) Texas Instruments Incorporated 2021
     *
     *  Redistribution and use in source and binary forms, with or without
     *  modification, are permitted provided that the following conditions
     *  are met:
     *
     *    Redistributions of source code must retain the above copyright
     *    notice, this list of conditions and the following disclaimer.
     *
     *    Redistributions in binary form must reproduce the above copyright
     *    notice, this list of conditions and the following disclaimer in the
     *    documentation and/or other materials provided with the
     *    distribution.
     *
     *    Neither the name of Texas Instruments Incorporated nor the names of
     *    its contributors may be used to endorse or promote products derived
     *    from this software without specific prior written permission.
     *
     *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
     *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
     *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
     *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
     *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
     *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
     *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
     *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
     *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
     *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
     */
    
    /*!
     * \file  l2_cpsw_dataflow.c
     *
     * \brief This file contains the implementation of the APIs for data flow for L2 cpsw example.
     */
    
    /* ========================================================================== */
    /*                             Include Files                                  */
    /* ========================================================================== */
    #include "l2_cpsw_common.h"
    #include "l2_cpsw_dataflow.h"
    
    #include "ti_enet_config.h"
    #include <drivers/gpio.h>
    
    /* ========================================================================== */
    /*                           Macros & Typedefs                                */
    /* ========================================================================== */
    
    /* ========================================================================== */
    /*                         Structure Declarations                             */
    /* ========================================================================== */
    
    /* ========================================================================== */
    /*                          Function Declarations                             */
    /* ========================================================================== */
    
    /* ========================================================================== */
    /*                            Global Variables                                */
    /* ========================================================================== */
    static uint64_t numPacketMisses = 0;
    static volatile uint32_t __attribute__ ((aligned(32), section(".bss:TCM_MEMORY"))) tcmData[16];
    
    /* ========================================================================== */
    /*                          Function Definitions                              */
    /* ========================================================================== */
    
    void EnetApp_rxIsrFxn(void *appData) {
        EnetApp_PerCtxt *perCtxt = (EnetApp_PerCtxt *)appData;
        EnetDma_PktQ rxFreeQ;
        EnetDma_Pkt *rxPktInfo;
        uint8_t *rxFrame;
        int32_t i;
    
        /* All peripherals have single hardware RX channel,
         * so we only need to retrieve packets from a single flow.*/
        EnetQueue_initQ(&rxFreeQ);
    
        /* Get one packet from receive queue */
        EnetDma_retrieveRxPkt(perCtxt->hRxCh, &rxPktInfo);
    
        /* Loop over all packets */
        while ( rxPktInfo != NULL ) {
            rxFrame = rxPktInfo->sgList.list[0].bufPtr;
            if( rxFrame != NULL ) {
                // copy packet data to TCM (NOTE: TCM must be marked as strongly ordered/uncached)
                for( i=0; i<16; i++ ) tcmData[i] = ((volatile uint32_t *)rxFrame)[i];
    
                // toggle pin to show we received packet and data is present in TCM
                GPIO_pinWriteHigh(CONFIG_GPIO0_BASE_ADDR, CONFIG_GPIO0_PIN);
                GPIO_pinWriteLow(CONFIG_GPIO0_BASE_ADDR, CONFIG_GPIO0_PIN);
            }
    
            /* Release the received packet */
            EnetQueue_enq(&rxFreeQ, &rxPktInfo->node);
    
            /* Check for another packet received */
            EnetDma_retrieveRxPkt(perCtxt->hRxCh, &rxPktInfo);
    
            // if more than one packet was received here ==> we didn't get around in time;
            // the goal is to receive and process 1 packet with minimal latency before the
            // next packet comes in
            if( rxPktInfo != NULL ) numPacketMisses++;
        }
    
        /* Submit now processed buffers */
        EnetDma_submitRxPktQ(perCtxt->hRxCh, &rxFreeQ);
        SemaphoreP_post(&perCtxt->rxSemObj);
    }
    
    
    int32_t EnetApp_openDma(EnetApp_PerCtxt *perCtxt, uint32_t perCtxtIndex)
    {
        int32_t status = ENET_SOK;
        EnetApp_GetDmaHandleInArgs     txInArgs;
        EnetApp_GetTxDmaHandleOutArgs  txChInfo; 
    
        /* Open the TX channel */
        txInArgs.cbArg   = NULL;
        txInArgs.notifyCb = NULL;
    
        EnetApp_getTxDmaHandle((ENET_DMA_TX_CH0 + perCtxtIndex),
                               &txInArgs,
                               &txChInfo);
    
        perCtxt->txChNum = txChInfo.txChNum;
        perCtxt->hTxCh   = txChInfo.hTxCh;
    
        /* Allocate TX packets and keep them locally enqueued */
        if (status == ENET_SOK)
        {
            EnetApp_initTxFreePktQ();
        }
    
        if (perCtxt->hTxCh == NULL)
        {
    #if FIX_RM
            /* Free the channel number if open Tx channel failed */
            EnetAppUtils_freeTxCh(gEnetApp.hEnet,
                                  perCtxt->coreKey,
                                  gEnetApp.coreId,
                                  gEnetApp.txChNum);
    #endif
            EnetAppUtils_print("EnetApp_openDma() failed to open TX channel\r\n");
            status = ENET_EFAIL;
            EnetAppUtils_assert(perCtxt->hTxCh != NULL);
        }
        else
        {
            status = EnetDma_enableTxEvent(perCtxt->hTxCh);
        }
    
    
    
        /* Open the RX flow for Regular frames */
        if (status == ENET_SOK)
        {
            EnetApp_GetDmaHandleInArgs     rxInArgs;
            EnetApp_GetRxDmaHandleOutArgs  rxChInfo; 
    
            rxInArgs.notifyCb = EnetApp_rxIsrFxn;
            rxInArgs.cbArg   = perCtxt;
    
            EnetApp_getRxDmaHandle((ENET_DMA_RX_CH0  + perCtxtIndex),
                                   &rxInArgs,
                                   &rxChInfo);
            perCtxt->rxChNum = rxChInfo.rxChNum;
            perCtxt->hRxCh  = rxChInfo.hRxCh;
            EnetAppUtils_assert(rxChInfo.numValidMacAddress == 1);
            EnetUtils_copyMacAddr(perCtxt->macAddr, rxChInfo.macAddr[rxChInfo.numValidMacAddress - 1]);
    
            if (perCtxt->hRxCh == NULL)
            {
                EnetAppUtils_print("EnetApp_openRxCh() failed to open RX flow\r\n");
                status = ENET_EFAIL;
                EnetAppUtils_assert(perCtxt->hRxCh != NULL);
            }
            else
            {
                /* Submit all ready RX buffers to DMA.*/
                EnetApp_initRxReadyPktQ(perCtxt->hRxCh);
    		}
        }
    
        return status;
    }
    
    void EnetApp_closeDma(EnetApp_PerCtxt *perCtxt, uint32_t perCtxtIndex)
    {
        EnetDma_PktQ fqPktInfoQ;
        EnetDma_PktQ cqPktInfoQ;
    
        EnetQueue_initQ(&fqPktInfoQ);
        EnetQueue_initQ(&cqPktInfoQ);
    
        /* Close Regular RX channel */
        EnetApp_closeRxDma((ENET_DMA_RX_CH0 + perCtxtIndex),
                           perCtxt->hEnet,
                           perCtxt->coreKey,
                           gEnetApp.coreId,
                           &fqPktInfoQ,
                           &cqPktInfoQ);
    
        EnetAppUtils_freePktInfoQ(&fqPktInfoQ);
        EnetAppUtils_freePktInfoQ(&cqPktInfoQ);
    
        /* Close TX channel */
        EnetQueue_initQ(&fqPktInfoQ);
        EnetQueue_initQ(&cqPktInfoQ);
    
        /* Retrieve any pending TX packets from driver */
        EnetApp_retrieveFreeTxPkts(perCtxt);
    
        EnetApp_closeTxDma((ENET_DMA_TX_CH0 + perCtxtIndex),
                           perCtxt->hEnet,
                           perCtxt->coreKey,
                           gEnetApp.coreId,
                           &fqPktInfoQ,
                           &cqPktInfoQ);
    
        EnetAppUtils_freePktInfoQ(&fqPktInfoQ);
        EnetAppUtils_freePktInfoQ(&cqPktInfoQ);
        EnetAppUtils_freePktInfoQ(&gEnetApp.txFreePktInfoQ);
    }
    
    void EnetApp_initTxFreePktQ(void)
    {
        EnetDma_Pkt *pPktInfo;
        uint32_t i;
        uint32_t scatterSegments[] = { ENET_MEM_LARGE_POOL_PKT_SIZE };
    
        /* Initialize TX EthPkts and queue them to txFreePktInfoQ */
        for (i = 0U; i < ENET_SYSCFG_TOTAL_NUM_TX_PKT; i++)
        {
            pPktInfo = EnetMem_allocEthPkt(&gEnetApp,
                                           ENETDMA_CACHELINE_ALIGNMENT,
                                           ENET_ARRAYSIZE(scatterSegments),
                                           scatterSegments);
            EnetAppUtils_assert(pPktInfo != NULL);
            ENET_UTILS_SET_PKT_APP_STATE(&pPktInfo->pktState, ENET_PKTSTATE_APP_WITH_FREEQ);
    
            EnetQueue_enq(&gEnetApp.txFreePktInfoQ, &pPktInfo->node);
        }
    
        EnetAppUtils_print("initQs() txFreePktInfoQ initialized with %d pkts\r\n",
                           EnetQueue_getQCount(&gEnetApp.txFreePktInfoQ));
    }
    
    void EnetApp_initRxReadyPktQ(EnetDma_RxChHandle hRxCh)
    {
        EnetDma_PktQ rxReadyQ;
        EnetDma_PktQ rxFreeQ;
        EnetDma_Pkt *pPktInfo;
        uint32_t i;
        int32_t status;
        uint32_t scatterSegments[] = { ENET_MEM_MEDIUM_POOL_PKT_SIZE };
    
        EnetQueue_initQ(&rxFreeQ);
    
        for (i = 0U; i < ENET_SYSCFG_TOTAL_NUM_RX_PKT; i++)
        {
            pPktInfo = EnetMem_allocEthPkt(&gEnetApp,
                                           ENETDMA_CACHELINE_ALIGNMENT,
                                           ENET_ARRAYSIZE(scatterSegments),
                                           scatterSegments);
            EnetAppUtils_assert(pPktInfo != NULL);
    
            ENET_UTILS_SET_PKT_APP_STATE(&pPktInfo->pktState, ENET_PKTSTATE_APP_WITH_FREEQ);
    
            EnetQueue_enq(&rxFreeQ, &pPktInfo->node);
        }
    
        /* Retrieve any packets which are ready */
        EnetQueue_initQ(&rxReadyQ);
        status = EnetDma_retrieveRxPktQ(hRxCh, &rxReadyQ);
        EnetAppUtils_assert(status == ENET_SOK);
    
        /* There should not be any packet with DMA during init */
        EnetAppUtils_assert(EnetQueue_getQCount(&rxReadyQ) == 0U);
    
        EnetAppUtils_validatePacketState(&rxFreeQ,
                                         ENET_PKTSTATE_APP_WITH_FREEQ,
                                         ENET_PKTSTATE_APP_WITH_DRIVER);
    
        EnetDma_submitRxPktQ(hRxCh, &rxFreeQ);
    
        /* Assert here, as during init, the number of DMA descriptors should be equal to
         * the number of free Ethernet buffers available with app */
        EnetAppUtils_assert(EnetQueue_getQCount(&rxFreeQ) == 0U);
    }
    
    uint32_t EnetApp_retrieveFreeTxPkts(EnetApp_PerCtxt *perCtxt)
    {
        EnetDma_PktQ txFreeQ;
        EnetDma_Pkt *pktInfo;
        uint32_t txFreeQCnt = 0U;
        int32_t status;
    
        EnetQueue_initQ(&txFreeQ);
    
        /* Retrieve any packets that may be free now */
        status = EnetDma_retrieveTxPktQ(perCtxt->hTxCh, &txFreeQ);
        if (status == ENET_SOK)
        {
            txFreeQCnt = EnetQueue_getQCount(&txFreeQ);
    
            pktInfo = (EnetDma_Pkt *)EnetQueue_deq(&txFreeQ);
            while (NULL != pktInfo)
            {
                EnetDma_checkPktState(&pktInfo->pktState,
                                        ENET_PKTSTATE_MODULE_APP,
                                        ENET_PKTSTATE_APP_WITH_DRIVER,
                                        ENET_PKTSTATE_APP_WITH_FREEQ);
    
                EnetQueue_enq(&gEnetApp.txFreePktInfoQ, &pktInfo->node);
                pktInfo = (EnetDma_Pkt *)EnetQueue_deq(&txFreeQ);
            }
        }
        else
        {
            EnetAppUtils_print("retrieveFreeTxPkts() failed to retrieve pkts: %d\r\n", status);
        }
    
        return txFreeQCnt;
    }
    
    void EnetApp_createRxTask(EnetApp_PerCtxt *perCtxt)
    {
        TaskP_Params taskParams;
        int32_t status;
        status = SemaphoreP_constructBinary(&perCtxt->rxSemObj, 0);
        DebugP_assert(SystemP_SUCCESS == status);
    
        status = SemaphoreP_constructCounting(&perCtxt->rxDoneSemObj, 0, COUNTING_SEM_COUNT);
        DebugP_assert(SystemP_SUCCESS == status);
    
        TaskP_Params_init(&taskParams);
        taskParams.priority       = 5U;
        taskParams.stack          = gEnetAppTaskStackRx;
        taskParams.stackSize      = sizeof(gEnetAppTaskStackRx);
        taskParams.args           = (void*)perCtxt;
        taskParams.name           = "Rx Task";
        taskParams.taskMain           = &EnetApp_rxTask;
    
        status = TaskP_construct(&perCtxt->rxTaskObj, &taskParams);
        DebugP_assert(SystemP_SUCCESS == status);
    
    }
    
    void EnetApp_destroyRxTask(EnetApp_PerCtxt *perCtxt)
    {
        SemaphoreP_destruct(&perCtxt->rxSemObj);
        SemaphoreP_destruct(&perCtxt->rxDoneSemObj);
        TaskP_destruct(&perCtxt->rxTaskObj);
    }
    
    void EnetApp_rxTask(void *args)
    {
        EnetApp_PerCtxt *perCtxt = (EnetApp_PerCtxt *)args;
    
        while (gEnetApp.run)
        {
            SemaphoreP_pend(&perCtxt->rxSemObj, SystemP_WAIT_FOREVER);
        }
    
        SemaphoreP_post(&perCtxt->rxDoneSemObj);
        TaskP_exit();
    }
    

    l2_cpsw_cfg.c:

    /*
     *  Copyright (c) Texas Instruments Incorporated 2021
     *
     *  Redistribution and use in source and binary forms, with or without
     *  modification, are permitted provided that the following conditions
     *  are met:
     *
     *    Redistributions of source code must retain the above copyright
     *    notice, this list of conditions and the following disclaimer.
     *
     *    Redistributions in binary form must reproduce the above copyright
     *    notice, this list of conditions and the following disclaimer in the
     *    documentation and/or other materials provided with the
     *    distribution.
     *
     *    Neither the name of Texas Instruments Incorporated nor the names of
     *    its contributors may be used to endorse or promote products derived
     *    from this software without specific prior written permission.
     *
     *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
     *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
     *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
     *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
     *  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
     *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
     *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
     *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
     *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
     *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
     *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
     */
    
    /*!
     * \file  l2_cpsw_cfg.c
     *
     * \brief This file implements the peripheral configuration APIs for the L2 CPSW example.
     */
    
    /* ========================================================================== */
    /*                             Include Files                                  */
    /* ========================================================================== */
    #include "l2_cpsw_common.h"
    #include "l2_cpsw_cfg.h"
    #include "l2_cpsw_dataflow.h"
    #include "ti_enet_open_close.h"
    #include "ti_enet_config.h"
    #include <networking/enet/utils/include/enet_apputils.h>
    
    /* ========================================================================== */
    /*                           Macros & Typedefs                                */
    /* ========================================================================== */
    
    /* ========================================================================== */
    /*                         Structure Declarations                             */
    /* ========================================================================== */
    
    /* ========================================================================== */
    /*                          Function Declarations                             */
    /* ========================================================================== */
    
    static EnetApp_PerCtxt * EnetApp_getPerCtxt(Enet_Type enetType,
                                                uint32_t instId);
    
    /* ========================================================================== */
    /*                            Global Variables                                */
    /* ========================================================================== */
    
    /* ========================================================================== */
    /*                          Function Definitions                              */
    /* ========================================================================== */
    
    int32_t EnetApp_init(void)
    {
        int32_t status = ENET_SOK;
    
        gEnetApp.coreId = EnetSoc_getCoreId();
    
        /* Initialize all queues */
        EnetQueue_initQ(&gEnetApp.txFreePktInfoQ);
    
        return status;
    }
    
    void EnetApp_deinit(void)
    {
        EnetAppUtils_print("Deinit complete\r\n");
    
    }
    
    void EnetApp_showMenu(void)
    {
        EnetAppUtils_print("\nEnet L2 cpsw Menu:\r\n");
        EnetAppUtils_print(" 's'  -  Print statistics\r\n");
        EnetAppUtils_print(" 'r'  -  Reset statistics\r\n");
        EnetAppUtils_print(" 'm'  -  Show allocated MAC addresses\r\n");
        EnetAppUtils_print(" 'x'  -  Stop the test\r\n\n");
    }
    
    void EnetApp_portLinkStatusChangeCb(Enet_MacPort macPort,
                                              bool isLinkUp,
                                              void *appArg)
    {
        EnetAppUtils_print("MAC Port %u: link %s\r\n",
                           ENET_MACPORT_ID(macPort), isLinkUp ? "up" : "down");
    }
    
    void EnetApp_mdioLinkStatusChange(Cpsw_MdioLinkStateChangeInfo *info,
                                                 void *appArg)
    {
        if (info->linkChanged)
        {
            EnetAppUtils_print("Link Status Changed. PHY: 0x%x, state: %s\r\n",
                    info->phyAddr,
                    info->isLinked? "up" : "down");
        }
    }
    
    void EnetApp_initAleConfig(CpswAle_Cfg *aleCfg)
    {
        aleCfg->modeFlags = CPSW_ALE_CFG_MODULE_EN | CPSW_ALE_CFG_BYPASS_EN;
        aleCfg->agingCfg.autoAgingEn = false;
        aleCfg->agingCfg.agingPeriodInMs = 1000;
    
        aleCfg->nwSecCfg.vid0ModeEn                = true;
        aleCfg->vlanCfg.aleVlanAwareMode           = false;
        aleCfg->vlanCfg.cpswVlanAwareMode          = false;
        aleCfg->vlanCfg.unknownUnregMcastFloodMask = CPSW_ALE_ALL_PORTS_MASK;
        aleCfg->vlanCfg.unknownRegMcastFloodMask   = CPSW_ALE_ALL_PORTS_MASK;
        aleCfg->vlanCfg.unknownVlanMemberListMask  = CPSW_ALE_ALL_PORTS_MASK;
        aleCfg->policerGlobalCfg.policingEn        = false;
        aleCfg->policerGlobalCfg.yellowDropEn      = false;
        /* ALE dropping of red-colored packets (disabled here) */
        aleCfg->policerGlobalCfg.redDropEn         = false;
        /* Policing match mode */
        aleCfg->policerGlobalCfg.policerNoMatchMode = CPSW_ALE_POLICER_NOMATCH_MODE_GREEN;
    }
    
    void EnetApp_updateCpswInitCfg(Enet_Type enetType,  uint32_t instId,   Cpsw_Cfg *cpswCfg)
    {
        EnetApp_PerCtxt *perCtxt = EnetApp_getPerCtxt(enetType, instId);
        EnetCpdma_Cfg *dmaCfg;
        EnetBoard_EthPort ethPort;
        const EnetBoard_PhyCfg *boardPhyCfg;
    
        EnetAppUtils_assert(perCtxt != NULL);
        /* Prepare init configuration for all peripherals */
        EnetAppUtils_print("\nInit all configs\r\n");
        EnetAppUtils_print("----------------------------------------------\r\n");
        EnetAppUtils_print("%s: init config\r\n", perCtxt->name);
    
        cpswCfg->vlanCfg.vlanAware          = false;
        cpswCfg->hostPortCfg.removeCrc      = true;
        cpswCfg->hostPortCfg.padShortPacket = true;
        cpswCfg->hostPortCfg.passCrcErrors  = true;
    
    #if (ENET_SYSCFG_ENABLE_MDIO_MANUALMODE == 1U)
        cpswCfg->mdioLinkStateChangeCb     = NULL;
        cpswCfg->mdioLinkStateChangeCbArg  = NULL;
    #else
        cpswCfg->mdioLinkStateChangeCb     = EnetApp_mdioLinkStatusChange;
        cpswCfg->mdioLinkStateChangeCbArg  = &gEnetApp;
    #endif
        cpswCfg->portLinkStatusChangeCb    = &EnetApp_portLinkStatusChangeCb;
        cpswCfg->portLinkStatusChangeCbArg = &gEnetApp;
    
        EnetApp_initAleConfig(&cpswCfg->aleCfg);
    
        /* Set the enChOverrideFlag to enable the channel override feature of CPDMA */
        dmaCfg = (EnetCpdma_Cfg *)cpswCfg->dmaCfg;
        dmaCfg->enChOverrideFlag = true;
    
        /* Setup board for requested Ethernet port */
        ethPort.instId   = instId;
        ethPort.macPort  = perCtxt->macPort;
        ethPort.boardId  = EnetBoard_getId();
        ethPort.enetType = enetType;
        ethPort.mii.layerType      = ENET_MAC_LAYER_GMII;
        ethPort.mii.sublayerType   = ENET_MAC_SUBLAYER_REDUCED;
        ethPort.mii.variantType    = ENET_MAC_VARIANT_FORCED;
        boardPhyCfg = EnetBoard_getPhyCfg(&ethPort);
        EnetAppUtils_assert(boardPhyCfg != NULL);
    
        cpswCfg->mdioCfg.mode = MDIO_MODE_NORMAL;
        cpswCfg->mdioCfg.pollEnMask = boardPhyCfg->phyAddr;
    }
    
    int32_t EnetApp_open(EnetApp_PerCtxt *perCtxts,
                               uint32_t numPerCtxts)
    {
        uint32_t i;
        int32_t status = ENET_SOK;
    
        /* Do peripheral-dependent initialization */
        EnetAppUtils_print("\nInit all peripheral clocks\r\n");
        EnetAppUtils_print("----------------------------------------------\r\n");
        for (i = 0U; i < numPerCtxts; i++)
        {
            EnetApp_PerCtxt *perCtxt = &perCtxts[i];
            EnetAppUtils_enableClocks(perCtxt->enetType, perCtxt->instId);
        }
    
        /* Create RX tasks for each peripheral */
        if (status == ENET_SOK)
        {
            EnetAppUtils_print("\nCreate RX tasks\r\n");
            EnetAppUtils_print("----------------------------------------------\r\n");
            for (i = 0U; i < numPerCtxts; i++)
            {
                EnetApp_PerCtxt *perCtxt = &perCtxts[i];
    
                EnetAppUtils_print("%s: Create RX task\r\n", perCtxt->name);
    
                EnetApp_createRxTask(perCtxt);
            }
        }
    
        /* Open Enet driver for all peripherals */
        EnetAppUtils_print("\nOpen all peripherals\r\n");
        EnetAppUtils_print("----------------------------------------------\r\n");
    
        EnetApp_driverInit();
        for (i = 0U; i < numPerCtxts; i++)
        {
            EnetApp_PerCtxt *perCtxt = &perCtxts[i];
            EnetApp_HandleInfo handleInfo;
    
            EnetAppUtils_print("%s: Open enet\r\n", perCtxt->name);
            status = EnetApp_driverOpen(perCtxt->enetType, perCtxt->instId);
            if (status != ENET_SOK)
            {
                EnetAppUtils_print("%s: failed to open enet\r\n", perCtxt->name);
                break;
            }
            EnetApp_acquireHandleInfo(perCtxt->enetType, perCtxt->instId, &handleInfo);
            perCtxt->hEnet = handleInfo.hEnet;
        }
    
        /* Start PHY tick timer */
        if (status == ENET_SOK)
        {
            EnetAppUtils_print("\nAttach core id %u on all peripherals\r\n", gEnetApp.coreId);
            EnetAppUtils_print("----------------------------------------------\r\n");
            for (i = 0U; i < numPerCtxts; i++)
            {
                EnetApp_PerCtxt *perCtxt = &perCtxts[i];
                EnetPer_AttachCoreOutArgs attachCoreOutArgs;
    
                EnetAppUtils_print("%s: Attach core\r\n", perCtxt->name);
    
                EnetApp_coreAttach(perCtxt->enetType, perCtxt->instId, gEnetApp.coreId, &attachCoreOutArgs);
                perCtxt->coreKey = attachCoreOutArgs.coreKey;
            }
        }
    
        /* Open DMA for peripheral/port */
        if (status == ENET_SOK)
        {
            for (i = 0U; i < numPerCtxts; i++)
            {
                EnetApp_PerCtxt *perCtxt = &perCtxts[i];
    
                EnetAppUtils_print("%s: Open DMA\r\n", perCtxt->name);
    
                status = EnetApp_openDma(perCtxt, i);
                if (status != ENET_SOK)
                {
                    EnetAppUtils_print("%s: failed to open DMA: %d\r\n", perCtxt->name, status);
                }
            }
        }
    
        if (status == ENET_SOK)
        {
            status = EnetApp_waitForLinkUp(perCtxts);
            if (status != ENET_SOK)
            {
                EnetAppUtils_print("%s: Failed to wait for link up: %d\r\n", perCtxts->name, status);
            }
        }
    
        EnetAppUtils_print("%s: MAC port addr: ", perCtxts->name);
    
        EnetAppUtils_printMacAddr(&perCtxts->macAddr[0U]);
    
        return status;
    }
    
    static int32_t EnetApp_getPerIdx(Enet_Type enetType, uint32_t instId, uint32_t *perIdx)
    {
        uint32_t i;
        int32_t status = ENET_SOK;
    
        /* Find the peripheral context that matches enetType and instId */
        for (i = 0U; i < gEnetApp.numPerCtxts; i++)
        {
            EnetApp_PerCtxt *perCtxt = &(gEnetApp.perCtxt[i]);
            if ((perCtxt->enetType == enetType) && (perCtxt->instId == instId))
            {
                break;
            }
        }
        if (i < gEnetApp.numPerCtxts)
        {
            *perIdx = i;
            status = ENET_SOK;
        }
        else
        {
            status = ENET_ENOTFOUND;
        }
        return status;
    }
    
    static EnetApp_PerCtxt * EnetApp_getPerCtxt(Enet_Type enetType,
                                                uint32_t instId)
    {
        uint32_t perIdx;
        int32_t status;
    
        status = EnetApp_getPerIdx(enetType, instId, &perIdx);
        EnetAppUtils_assert(status == ENET_SOK);
        EnetAppUtils_assert(perIdx < ENET_ARRAYSIZE(gEnetApp.perCtxt));
        return (&gEnetApp.perCtxt[perIdx]);
    }
    
    void EnetApp_close(EnetApp_PerCtxt *perCtxts,
                             uint32_t numPerCtxts)
    {
        uint32_t i;
    
        EnetAppUtils_print("\nClose Ports for all peripherals\r\n");
        EnetAppUtils_print("----------------------------------------------\r\n");
        for (i = 0U; i < numPerCtxts; i++)
        {
            EnetApp_PerCtxt *perCtxt = &perCtxts[i];
    
            EnetAppUtils_print("%s: Close Port\r\n", perCtxt->name);
    
            EnetApp_closePort(perCtxt);
        }
    
        EnetAppUtils_print("\nClose DMA for all peripherals\r\n");
        EnetAppUtils_print("----------------------------------------------\r\n");
        for (i = 0U; i < numPerCtxts; i++)
        {
            EnetApp_PerCtxt *perCtxt = &perCtxts[i];
    
            EnetAppUtils_print("%s: Close DMA\r\n", perCtxt->name);
    
            EnetApp_closeDma(perCtxt, i);
        }
    
        /* Delete RX tasks created for all peripherals */
        EnetAppUtils_print("\nDelete RX tasks\r\n");
        EnetAppUtils_print("----------------------------------------------\r\n");
        for (i = 0U; i < numPerCtxts; i++)
        {
            EnetApp_destroyRxTask(&perCtxts[i]);
        }
    
        /* Detach core */
        EnetAppUtils_print("\nDetach core from all peripherals\r\n");
        EnetAppUtils_print("----------------------------------------------\r\n");
        for (i = 0U; i < numPerCtxts; i++)
        {
            EnetApp_PerCtxt *perCtxt = &perCtxts[i];
    
            EnetAppUtils_print("%s: Detach core\r\n", perCtxt->name);
    
            EnetApp_coreDetach(perCtxt->enetType, perCtxt->instId,
                                gEnetApp.coreId,
                                perCtxt->coreKey);
        }
    
        /* Release the Enet handles acquired for each peripheral */
        EnetAppUtils_print("\nClose all peripherals\r\n");
        EnetAppUtils_print("----------------------------------------------\r\n");
        for (i = 0U; i < numPerCtxts; i++)
        {
            EnetApp_PerCtxt *perCtxt = &perCtxts[i];
            EnetAppUtils_print("%s: Close enet\r\n", perCtxt->name);
            EnetApp_releaseHandleInfo(perCtxt->enetType, perCtxt->instId);
            perCtxt->hEnet = NULL;
        }
    
        /* Do peripheral-dependent deinitialization */
        EnetAppUtils_print("\nDeinit all peripheral clocks\r\n");
        EnetAppUtils_print("----------------------------------------------\r\n");
        for (i = 0U; i < numPerCtxts; i++)
        {
            EnetApp_PerCtxt *perCtxt = &perCtxts[i];
            EnetAppUtils_disableClocks(perCtxt->enetType, perCtxt->instId);
        }
    }
    
    void EnetApp_printStats(EnetApp_PerCtxt *perCtxts,
                                  uint32_t numPerCtxts)
    {
        Enet_IoctlPrms prms;
        Enet_MacPort macPort;
        uint32_t i;
        int32_t status;
    
        EnetAppUtils_print("\nPrint statistics\r\n");
        EnetAppUtils_print("----------------------------------------------\r\n");
        for (i = 0U; i < numPerCtxts; i++)
        {
            EnetApp_PerCtxt *perCtxt = &gEnetApp.perCtxt[i];
    
            ENET_IOCTL_SET_OUT_ARGS(&prms, &gEnetApp_cpswStats);
    
            ENET_IOCTL(perCtxt->hEnet, gEnetApp.coreId, ENET_STATS_IOCTL_GET_HOSTPORT_STATS, &prms, status);
            if (status != ENET_SOK)
            {
                EnetAppUtils_print("%s: Failed to get port stats\r\n", perCtxt->name);
                continue;
            }
            EnetAppUtils_printHostPortStats9G((CpswStats_HostPort_Ng *)&gEnetApp_cpswStats);
    
    
            macPort = perCtxt->macPort;
    
            EnetAppUtils_print("\n %s - Port %u statistics\r\n", perCtxt->name, ENET_MACPORT_ID(macPort));
            EnetAppUtils_print("--------------------------------\r\n");
    
            ENET_IOCTL_SET_INOUT_ARGS(&prms, &macPort, &gEnetApp_cpswStats);
    
            ENET_IOCTL(perCtxt->hEnet, gEnetApp.coreId, ENET_STATS_IOCTL_GET_MACPORT_STATS, &prms, status);
            if (status != ENET_SOK)
            {
                EnetAppUtils_print("%s: Failed to get port %u stats\r\n", perCtxt->name, ENET_MACPORT_ID(macPort));
                continue;
            }
    
            EnetAppUtils_printMacPortStats9G((CpswStats_MacPort_Ng *)&gEnetApp_cpswStats);
    
            EnetAppUtils_print("\n");
    
        }
    }
    
    void EnetApp_resetStats(EnetApp_PerCtxt *perCtxts,
                                  uint32_t numPerCtxts)
    {
        Enet_IoctlPrms prms;
        Enet_MacPort macPort;
        uint32_t i;
        int32_t status;
    
        EnetAppUtils_print("\nReset statistics\r\n");
        EnetAppUtils_print("----------------------------------------------\r\n");
        for (i = 0U; i < numPerCtxts; i++)
        {
            EnetApp_PerCtxt *perCtxt = &gEnetApp.perCtxt[i];
    
            EnetAppUtils_print("%s: Reset statistics\r\n", perCtxt->name);
    
            ENET_IOCTL_SET_NO_ARGS(&prms);
            ENET_IOCTL(perCtxt->hEnet, gEnetApp.coreId, ENET_STATS_IOCTL_RESET_HOSTPORT_STATS, &prms, status);
            if (status != ENET_SOK)
            {
            EnetAppUtils_print("%s: Failed to reset host port stats\r\n", perCtxt->name);
                continue;
            }
    
            macPort = perCtxt->macPort;
    
            ENET_IOCTL_SET_IN_ARGS(&prms, &macPort);
            ENET_IOCTL(perCtxt->hEnet, gEnetApp.coreId, ENET_STATS_IOCTL_RESET_MACPORT_STATS, &prms, status);
            if (status != ENET_SOK)
            {
                EnetAppUtils_print("%s: Failed to reset port %u stats\r\n", perCtxt->name, ENET_MACPORT_ID(macPort));
                continue;
            }
    
        }
    }
    
    void EnetApp_showMacAddrs(EnetApp_PerCtxt *perCtxts,
                                    uint32_t numPerCtxts)
    {
        uint32_t i;
    
        EnetAppUtils_print("\nAllocated MAC addresses\r\n");
        EnetAppUtils_print("----------------------------------------------\r\n");
        for (i = 0U; i < numPerCtxts; i++)
        {
            EnetApp_PerCtxt *perCtxt = &gEnetApp.perCtxt[i];
    
            EnetAppUtils_print("%s: \t", perCtxt->name);
            EnetAppUtils_printMacAddr(&perCtxt->macAddr[0U]);
        }
    }
    
    void EnetApp_initLinkArgs(Enet_Type enetType,
                              uint32_t instId,
                              EnetPer_PortLinkCfg *linkArgs,
                              Enet_MacPort macPort)
    {
        EnetBoard_EthPort ethPort;
        const EnetBoard_PhyCfg *boardPhyCfg;
        EnetMacPort_LinkCfg *linkCfg = &linkArgs->linkCfg;
        EnetMacPort_Interface *mii = &linkArgs->mii;
        EnetPhy_Cfg *phyCfg = &linkArgs->phyCfg;
        int32_t status = ENET_SOK;
        EnetApp_PerCtxt *perCtxt;
    
        perCtxt = EnetApp_getPerCtxt(enetType, instId);
    
        EnetAppUtils_print("%s: Open port %u\r\n", perCtxt->name, ENET_MACPORT_ID(macPort));
    
        /* Setup board for requested Ethernet port */
        ethPort.enetType = perCtxt->enetType;
        ethPort.instId   = perCtxt->instId;
        ethPort.macPort  = macPort;
        ethPort.boardId  = EnetBoard_getId();
        EnetApp_macMode2MacMii(RGMII, &ethPort.mii);
    
        status = EnetBoard_setupPorts(&ethPort, 1U);
        if (status != ENET_SOK)
        {
            EnetAppUtils_print("%s: Failed to setup MAC port %u\r\n", perCtxt->name, ENET_MACPORT_ID(macPort));
            EnetAppUtils_assert(false);
        }
    
        /* Set port link params */
        linkArgs->macPort = macPort;
    
        mii->layerType     = ethPort.mii.layerType;
        mii->sublayerType  = ethPort.mii.sublayerType;
        mii->variantType   = ENET_MAC_VARIANT_FORCED;
    
        linkCfg->speed     = ENET_SPEED_AUTO;
        linkCfg->duplexity = ENET_DUPLEX_AUTO;
    
        boardPhyCfg = EnetBoard_getPhyCfg(&ethPort);
        if (boardPhyCfg != NULL)
        {
            EnetPhy_initCfg(phyCfg);
            phyCfg->phyAddr     = boardPhyCfg->phyAddr;
            phyCfg->isStrapped  = boardPhyCfg->isStrapped;
            phyCfg->loopbackEn  = false;
            phyCfg->skipExtendedCfg = boardPhyCfg->skipExtendedCfg;
            phyCfg->extendedCfgSize = boardPhyCfg->extendedCfgSize;
            memcpy(phyCfg->extendedCfg, boardPhyCfg->extendedCfg, phyCfg->extendedCfgSize);
    
        }
        else
        {
            EnetAppUtils_print("%s: No PHY configuration found\r\n", perCtxt->name);
            EnetAppUtils_assert(false);
        }
    }
    
    void EnetApp_closePort(EnetApp_PerCtxt *perCtxt)
    {
        Enet_IoctlPrms prms;
        Enet_MacPort macPort;
        int32_t status;
    
        macPort = perCtxt->macPort;
    
        EnetAppUtils_print("%s: Close port %u\r\n", perCtxt->name, ENET_MACPORT_ID(macPort));
    
        /* Close port link */
        ENET_IOCTL_SET_IN_ARGS(&prms, &macPort);
    
        EnetAppUtils_print("%s: Close port %u link\r\n", perCtxt->name, ENET_MACPORT_ID(macPort));
        ENET_IOCTL(perCtxt->hEnet, gEnetApp.coreId, ENET_PER_IOCTL_CLOSE_PORT_LINK, &prms, status);
        if (status != ENET_SOK)
        {
            EnetAppUtils_print("%s: Failed to close port link: %d\r\n", perCtxt->name, status);
        }
    }
    
    int32_t EnetApp_waitForLinkUp(EnetApp_PerCtxt *perCtxt)
    {
        Enet_IoctlPrms prms;
        Enet_MacPort macPort;
        bool linked;
        int32_t status = ENET_SOK;
    
        EnetAppUtils_print("%s: Waiting for link up...\r\n", perCtxt->name);
    
    
        macPort = perCtxt->macPort;
        linked = false;
    
        while (gEnetApp.run && !linked)
        {
            ENET_IOCTL_SET_INOUT_ARGS(&prms, &macPort, &linked);
    
            ENET_IOCTL(perCtxt->hEnet, gEnetApp.coreId, ENET_PER_IOCTL_IS_PORT_LINK_UP, &prms, status);
            if (status != ENET_SOK)
            {
                EnetAppUtils_print("%s: Failed to get port %u link status: %d\r\n",
                                    perCtxt->name, ENET_MACPORT_ID(macPort), status);
                linked = false;
                break;
            }
    
            if (!linked)
            {
                ClockP_sleep(1);
            }
        }
        return status;
    }
    
    void EnetApp_macMode2MacMii(emac_mode macMode,
                                      EnetMacPort_Interface *mii)
    {
        switch (macMode)
        {
            case RMII:
                mii->layerType    = ENET_MAC_LAYER_MII;
                mii->sublayerType = ENET_MAC_SUBLAYER_REDUCED;
                mii->variantType  = ENET_MAC_VARIANT_NONE;
                break;
    
            case RGMII:
                mii->layerType    = ENET_MAC_LAYER_GMII;
                mii->sublayerType = ENET_MAC_SUBLAYER_REDUCED;
                mii->variantType  = ENET_MAC_VARIANT_FORCED;
                break;
            default:
                EnetAppUtils_print("Invalid MAC mode: %u\r\n", macMode);
                EnetAppUtils_assert(false);
                break;
        }
    }
    

    example.syscfg:

    /**
     * These arguments were used when this file was generated. They will be automatically applied on subsequent loads
     * via the GUI or CLI. Run CLI with '--help' for additional information on how to override these arguments.
     * @cliArgs --device "AM263Px" --package "ZCZ_S" --part "AM263P4" --context "r5fss0-0" --product "MCU_PLUS_SDK_AM263Px@09.01.00"
     * @versions {"tool":"1.18.0+3266"}
     */
    
    /**
     * Import the modules used in this configuration.
     */
    const eeprom     = scripting.addModule("/board/eeprom/eeprom", {}, false);
    const eeprom1    = eeprom.addInstance();
    const gpio       = scripting.addModule("/drivers/gpio/gpio", {}, false);
    const gpio1      = gpio.addInstance();
    const i2c        = scripting.addModule("/drivers/i2c/i2c", {}, false);
    const i2c1       = i2c.addInstance();
    const i2c2       = i2c.addInstance();
    const debug_log  = scripting.addModule("/kernel/dpl/debug_log");
    const mpu_armv7  = scripting.addModule("/kernel/dpl/mpu_armv7", {}, false);
    const mpu_armv71 = mpu_armv7.addInstance();
    const mpu_armv72 = mpu_armv7.addInstance();
    const mpu_armv73 = mpu_armv7.addInstance();
    const mpu_armv74 = mpu_armv7.addInstance();
    const mpu_armv75 = mpu_armv7.addInstance();
    const general    = scripting.addModule("/memory_configurator/general", {}, false);
    const general1   = general.addInstance();
    const region     = scripting.addModule("/memory_configurator/region", {}, false);
    const region1    = region.addInstance();
    const section    = scripting.addModule("/memory_configurator/section", {}, false);
    const section1   = section.addInstance();
    const section2   = section.addInstance();
    const section3   = section.addInstance();
    const section4   = section.addInstance();
    const section5   = section.addInstance();
    const section6   = section.addInstance();
    const section7   = section.addInstance();
    const section8   = section.addInstance();
    const section9   = section.addInstance();
    const section10  = section.addInstance();
    const enet_cpsw  = scripting.addModule("/networking/enet_cpsw/enet_cpsw", {}, false);
    const enet_cpsw1 = enet_cpsw.addInstance();
    
    /**
     * Write custom configuration values to the imported modules.
     */
    eeprom1.$name = "CONFIG_EEPROM0";
    
    gpio1.$name                = "CONFIG_GPIO0";
    gpio1.pinDir               = "OUTPUT";
    gpio1.GPIO.gpioPin.$assign = "EPWM0_A";
    
    i2c1.$name               = "CONFIG_I2C0";
    eeprom1.peripheralDriver = i2c1;
    i2c1.I2C.SCL.$assign     = "I2C0_SCL";
    i2c1.I2C.SDA.$assign     = "I2C0_SDA";
    
    i2c2.$name           = "CONFIG_I2C1";
    i2c2.I2C.SCL.$assign = "UART0_RTSn";
    i2c2.I2C.SDA.$assign = "UART0_CTSn";
    
    debug_log.enableUartLog            = true;
    debug_log.enableCssLog             = false;
    debug_log.uartLog.$name            = "CONFIG_UART0";
    debug_log.uartLog.UART.$assign     = "UART0";
    debug_log.uartLog.UART.RXD.$assign = "UART0_RXD";
    debug_log.uartLog.UART.TXD.$assign = "UART0_TXD";
    
    mpu_armv71.$name             = "CONFIG_MPU_REGION0";
    mpu_armv71.size              = 31;
    mpu_armv71.attributes        = "Device";
    mpu_armv71.accessPermissions = "Supervisor RD+WR, User RD";
    mpu_armv71.allowExecute      = false;
    
    mpu_armv72.$name             = "CONFIG_MPU_REGION1";
    mpu_armv72.size              = 15;
    mpu_armv72.accessPermissions = "Supervisor RD+WR, User RD";
    
    mpu_armv73.$name             = "CONFIG_MPU_REGION2";
    mpu_armv73.baseAddr          = 0x80000;
    mpu_armv73.size              = 15;
    mpu_armv73.accessPermissions = "Supervisor RD+WR, User RD";
    
    mpu_armv74.$name             = "CONFIG_MPU_REGION3";
    mpu_armv74.accessPermissions = "Supervisor RD+WR, User RD";
    mpu_armv74.baseAddr          = 0x70000000;
    mpu_armv74.size              = 21;
    
    mpu_armv75.$name      = "CONFIG_MPU_REGION4";
    mpu_armv75.baseAddr   = 0x70080000;
    mpu_armv75.size       = 14;
    mpu_armv75.attributes = "NonCached";
    
    general1.$name        = "CONFIG_GENERAL0";
    general1.stack_size   = 8192;
    general1.heap_size    = 1024;
    general1.linker.$name = "TIARMCLANG0";
    
    region1.$name                               = "MEMORY_REGION_CONFIGURATION0";
    region1.memory_region.create(7);
    region1.memory_region[0].type               = "TCMA";
    region1.memory_region[0].$name              = "R5F_VECS";
    region1.memory_region[0].auto               = false;
    region1.memory_region[0].size               = 0x40;
    region1.memory_region[1].type               = "TCMA";
    region1.memory_region[1].$name              = "R5F_TCMA";
    region1.memory_region[1].size               = 0x7FC0;
    region1.memory_region[2].type               = "TCMB";
    region1.memory_region[2].size               = 0x8000;
    region1.memory_region[2].$name              = "R5F_TCMB";
    region1.memory_region[3].$name              = "CPPI_DESC";
    region1.memory_region[3].auto               = false;
    region1.memory_region[3].manualStartAddress = 0x70080000;
    region1.memory_region[3].size               = 0x4000;
    region1.memory_region[4].$name              = "OCRAM";
    region1.memory_region[4].auto               = false;
    region1.memory_region[4].manualStartAddress = 0x70084000;
    region1.memory_region[4].size               = 0x17C000;
    region1.memory_region[5].type               = "FLASH";
    region1.memory_region[5].$name              = "FLASH";
    region1.memory_region[5].auto               = false;
    region1.memory_region[5].manualStartAddress = 0x60100000;
    region1.memory_region[5].size               = 0x80000;
    region1.memory_region[6].$name              = "SBL";
    region1.memory_region[6].size               = 0x40000;
    region1.memory_region[6].auto               = false;
    
    section1.load_memory                  = "R5F_VECS";
    section1.group                        = false;
    section1.$name                        = "Vector Table";
    section1.output_section.create(1);
    section1.output_section[0].$name      = ".vectors";
    section1.output_section[0].palignment = true;
    
    section2.load_memory                  = "OCRAM";
    section2.$name                        = "Text Segments";
    section2.output_section.create(5);
    section2.output_section[0].$name      = ".text.hwi";
    section2.output_section[0].palignment = true;
    section2.output_section[1].$name      = ".text.cache";
    section2.output_section[1].palignment = true;
    section2.output_section[2].$name      = ".text.mpu";
    section2.output_section[2].palignment = true;
    section2.output_section[3].$name      = ".text.boot";
    section2.output_section[3].palignment = true;
    section2.output_section[4].$name      = ".text:abort";
    section2.output_section[4].palignment = true;
    
    section3.load_memory                  = "OCRAM";
    section3.$name                        = "Code and Read-Only Data";
    section3.output_section.create(2);
    section3.output_section[0].$name      = ".text";
    section3.output_section[0].palignment = true;
    section3.output_section[1].$name      = ".rodata";
    section3.output_section[1].palignment = true;
    
    section4.load_memory                  = "OCRAM";
    section4.$name                        = "Data Segment";
    section4.output_section.create(1);
    section4.output_section[0].$name      = ".data";
    section4.output_section[0].palignment = true;
    
    section5.load_memory                             = "OCRAM";
    section5.$name                                   = "Memory Segments";
    section5.output_section.create(3);
    section5.output_section[0].$name                 = ".bss";
    section5.output_section[0].output_sections_start = "__BSS_START";
    section5.output_section[0].output_sections_end   = "__BSS_END";
    section5.output_section[0].palignment            = true;
    section5.output_section[1].$name                 = ".sysmem";
    section5.output_section[1].palignment            = true;
    section5.output_section[2].$name                 = ".stack";
    section5.output_section[2].palignment            = true;
    
    section6.load_memory                              = "OCRAM";
    section6.$name                                    = "Stack Segments";
    section6.output_section.create(5);
    section6.output_section[0].$name                  = ".irqstack";
    section6.output_section[0].output_sections_start  = "__IRQ_STACK_START";
    section6.output_section[0].output_sections_end    = "__IRQ_STACK_END";
    section6.output_section[0].input_section.create(1);
    section6.output_section[0].input_section[0].$name = ". = . + __IRQ_STACK_SIZE;";
    section6.output_section[1].$name                  = ".fiqstack";
    section6.output_section[1].output_sections_start  = "__FIQ_STACK_START";
    section6.output_section[1].output_sections_end    = "__FIQ_STACK_END";
    section6.output_section[1].input_section.create(1);
    section6.output_section[1].input_section[0].$name = ". = . + __FIQ_STACK_SIZE;";
    section6.output_section[2].$name                  = ".svcstack";
    section6.output_section[2].output_sections_start  = "__SVC_STACK_START";
    section6.output_section[2].output_sections_end    = "__SVC_STACK_END";
    section6.output_section[2].input_section.create(1);
    section6.output_section[2].input_section[0].$name = ". = . + __SVC_STACK_SIZE;";
    section6.output_section[3].$name                  = ".abortstack";
    section6.output_section[3].output_sections_start  = "__ABORT_STACK_START";
    section6.output_section[3].output_sections_end    = "__ABORT_STACK_END";
    section6.output_section[3].input_section.create(1);
    section6.output_section[3].input_section[0].$name = ". = . + __ABORT_STACK_SIZE;";
    section6.output_section[4].$name                  = ".undefinedstack";
    section6.output_section[4].output_sections_start  = "__UNDEFINED_STACK_START";
    section6.output_section[4].output_sections_end    = "__UNDEFINED_STACK_END";
    section6.output_section[4].input_section.create(1);
    section6.output_section[4].input_section[0].$name = ". = . + __UNDEFINED_STACK_SIZE;";
    
    section7.load_memory                  = "OCRAM";
    section7.$name                        = "Initialization and Exception Handling";
    section7.output_section.create(3);
    section7.output_section[0].$name      = ".ARM.exidx";
    section7.output_section[0].palignment = true;
    section7.output_section[1].$name      = ".init_array";
    section7.output_section[1].palignment = true;
    section7.output_section[2].$name      = ".fini_array";
    section7.output_section[2].palignment = true;
    
    section8.group                       = false;
    section8.load_memory                 = "CPPI_DESC";
    section8.type                        = "NOLOAD";
    section8.$name                       = "ENET_CPPI_DESC";
    section8.output_section.create(1);
    section8.output_section[0].$name     = ".bss:ENET_CPPI_DESC";
    section8.output_section[0].alignment = 128;
    
    section9.$name                       = "ENET_DMA_PKT_MEMPOOL";
    section9.load_memory                 = "OCRAM";
    section9.group                       = false;
    section9.type                        = "NOLOAD";
    section9.output_section.create(1);
    section9.output_section[0].$name     = ".bss:ENET_DMA_PKT_MEMPOOL";
    section9.output_section[0].alignment = 128;
    
    section10.group                       = false;
    section10.type                        = "NOLOAD";
    section10.$name                       = "TCM_MEMORY";
    section10.load_memory                 = "R5F_TCMA";
    section10.output_section.create(1);
    section10.output_section[0].$name     = ".bss:TCM_MEMORY";
    section10.output_section[0].alignment = 32;
    
    enet_cpsw1.$name                         = "CONFIG_ENET_CPSW0";
    enet_cpsw1.BoardType                     = "am263px-cc";
    enet_cpsw1.DisableMacPort1               = true;
    enet_cpsw1.LargePoolPktCount             = 16;
    enet_cpsw1.MediumPoolPktCount            = 32;
    enet_cpsw1.cptsHostRxTsEn                = false;
    enet_cpsw1.BypassEnable                  = true;
    enet_cpsw1.UnknownUnicastFloodToHost     = true;
    enet_cpsw1.autoAgingEn                   = false;
    enet_cpsw1.aleVlanAwareMode              = false;
    enet_cpsw1.txDmaChannel[0].$name         = "ENET_DMA_TX_CH0";
    enet_cpsw1.rxDmaChannel[0].$name         = "ENET_DMA_RX_CH0";
    enet_cpsw1.pinmux[0].$name               = "ENET_CPSW_PINMUX0";
    enet_cpsw1.pinmux[0].RGMII1.$assign      = "RGMII2";
    enet_cpsw1.pinmux[0].RGMII2.$assign      = "RGMII1";
    enet_cpsw1.pinmux[0].RGMII2.RD0.$used    = false;
    enet_cpsw1.pinmux[0].RGMII2.RD1.$used    = false;
    enet_cpsw1.pinmux[0].RGMII2.RD2.$used    = false;
    enet_cpsw1.pinmux[0].RGMII2.RD3.$used    = false;
    enet_cpsw1.pinmux[0].RGMII2.RX_CTL.$used = false;
    enet_cpsw1.pinmux[0].RGMII2.RXC.$used    = false;
    enet_cpsw1.pinmux[0].RGMII2.TD0.$used    = false;
    enet_cpsw1.pinmux[0].RGMII2.TD1.$used    = false;
    enet_cpsw1.pinmux[0].RGMII2.TD2.$used    = false;
    enet_cpsw1.pinmux[0].RGMII2.TD3.$used    = false;
    enet_cpsw1.pinmux[0].RGMII2.TX_CTL.$used = false;
    enet_cpsw1.pinmux[0].RGMII2.TXC.$used    = false;
    
    /**
     * Pinmux solution for unlocked pins/peripherals. This ensures that minor changes to the automatic solver in a future
     * version of the tool will not impact the pinmux you originally saw.  These lines can be completely deleted in order to
     * re-solve from scratch.
     */
    gpio1.GPIO.$suggestSolution                         = "GPIO";
    i2c1.I2C.$suggestSolution                           = "I2C0";
    i2c2.I2C.$suggestSolution                           = "I2C2";
    enet_cpsw1.pinmux[0].MDIO.$suggestSolution          = "MDIO";
    enet_cpsw1.pinmux[0].MDIO.MDC.$suggestSolution      = "MDIO_MDC";
    enet_cpsw1.pinmux[0].MDIO.MDIO.$suggestSolution     = "MDIO_MDIO";
    enet_cpsw1.pinmux[0].RGMII1.RD0.$suggestSolution    = "PR0_PRU0_GPIO0";
    enet_cpsw1.pinmux[0].RGMII1.RD1.$suggestSolution    = "PR0_PRU0_GPIO1";
    enet_cpsw1.pinmux[0].RGMII1.RD2.$suggestSolution    = "PR0_PRU0_GPIO2";
    enet_cpsw1.pinmux[0].RGMII1.RD3.$suggestSolution    = "PR0_PRU0_GPIO3";
    enet_cpsw1.pinmux[0].RGMII1.RX_CTL.$suggestSolution = "PR0_PRU0_GPIO4";
    enet_cpsw1.pinmux[0].RGMII1.RXC.$suggestSolution    = "PR0_PRU0_GPIO6";
    enet_cpsw1.pinmux[0].RGMII1.TD0.$suggestSolution    = "PR0_PRU0_GPIO11";
    enet_cpsw1.pinmux[0].RGMII1.TD1.$suggestSolution    = "PR0_PRU0_GPIO12";
    enet_cpsw1.pinmux[0].RGMII1.TD2.$suggestSolution    = "PR0_PRU0_GPIO13";
    enet_cpsw1.pinmux[0].RGMII1.TD3.$suggestSolution    = "PR0_PRU0_GPIO14";
    enet_cpsw1.pinmux[0].RGMII1.TX_CTL.$suggestSolution = "PR0_PRU0_GPIO15";
    enet_cpsw1.pinmux[0].RGMII1.TXC.$suggestSolution    = "PR0_PRU0_GPIO16";
    

  • Forgot to mention there are 2 tests I haven't tried yet for reducing latency:

    1. Enable cut-through on CPSW. Is this something you can help with?

    I didn't see any option for cut-through in SysConfig, so I tried the "easy" way of changing the CPSW register directly. However, I couldn't find an equivalent to "CPSW_PN_CUT_THRU_REG_k" in the AM263Px TRM (in fact, none of the CPSW registers are currently documented in the TRM).

    On AM64x this register is located at CPSW_BASE + 0x223C0, so I tried writing to 0x528223C0 on AM263Px. Unfortunately, that register remained at 0x00000000 regardless of the value I wrote (written via the Memory Browser in CCS).
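
    For reference, the probe amounts to a write-then-read-back check. The helper below is hypothetical (not an SDK API, written just to illustrate the test); on target it would be called with the register address cast to a pointer, which returned false in my case since the register stayed at 0:

    ```c
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical helper: write a 32-bit value to a memory-mapped
     * register and report whether the write "stuck" on read-back. */
    static bool probe_reg32(volatile uint32_t *reg, uint32_t val)
    {
        *reg = val;
        return *reg == val;
    }

    int main(void)
    {
        /* On target this would be something like:
         *   probe_reg32((volatile uint32_t *)0x528223C0u, 1u);
         * Demonstrated here with an ordinary variable instead. */
        uint32_t fake_reg = 0;
        printf("stuck: %d\n", (int)probe_reg32(&fake_reg, 0xA5u));
        return 0;
    }
    ```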

    Do you know how to enable cut-through on CPSW?

    2. Move RX packet buffer from OCRAM to TCM so Ethernet DMA writes packet directly to TCM.
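
    One way to sketch that placement, assuming the `.bss:TCM_MEMORY` output section already mapped to `R5F_TCMA` in the SysConfig above (the array name and size below are illustrative, not the SDK's actual pool symbol):

    ```c
    #include <stdint.h>

    /* Hypothetical packet pool placed in TCM via the linker output
     * section that the SysConfig above maps to R5F_TCMA. The 128-byte
     * alignment matches the DMA pool section alignment used elsewhere
     * in the config; the pool size (16 buffers) is illustrative. */
    __attribute__((section(".bss:TCM_MEMORY"), aligned(128)))
    static uint8_t gEnetRxPktPool[16U * 1664U];
    ```

    Equivalently, in the SysConfig file above, section9.load_memory could presumably be changed from "OCRAM" to a TCM region so the generated linker command file places ENET_DMA_PKT_MEMPOOL there directly.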

    As for the jitter I saw on the scope trace in the previous post, I'm hoping a bare-metal implementation will do significantly better, since the timing will be more consistent without random interrupts and background tasks.

  • Hi Steven,

    Thanks for sharing the files.

    1. Regarding the ALE config being overwritten regardless of syscfg settings, I shall look into it and raise a bug if needed.

    2. Cut-through is not available on AM263Px.

    As mentioned previously, you can expect the bare-metal example in the next SDK release (version 09.02). Hopefully that will resolve the issue we are facing here.

    Regards,

    Shaunak

  • Hi Shaunak,

    That's a shame that cut-through switching isn't available. Note that the TRM (Sept. 2023) calls out support for cut-through switching on page 1111.

    This is a problem we've struggled with a few times: TI has many chips with similar peripherals, but often reuses the same names even when the peripherals differ. In this case, CPSW3G on AM64x supports cut-through switching, but CPSW3G on AM263Px does not. Another example: the eHRPWM on AM64x is not actually "high resolution", unlike the eHRPWM on C2000 and other Sitara processors. Very confusing. At the least, a version number for each peripheral would help, e.g. CPSW3G_V1 and CPSW3G_V2, with the feature differences between _V1 and _V2 called out. This might also help with the cut-and-paste errors that seem to occur frequently in TRMs.

    I expect a bare-metal implementation will reduce jitter significantly, but probably won't do much for latency. Unfortunately, we need a ~50% reduction in latency to use CPSW3G for this application, and without cut-through I'm not aware of any other CPSW3G settings that reduce latency. Either way, we'd appreciate a bare-metal example in the next SDK release so we can test this.
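
    For context, the wire-time arithmetic behind those numbers (plain serialization delay, nothing SDK-specific):

    ```c
    #include <stdio.h>

    /* Serialization delay in microseconds: bits on the wire / link rate.
     * Frame bytes only; preamble/SFD and inter-packet gap are ignored
     * for simplicity. */
    static double wire_time_us(int frame_bytes, double link_bps)
    {
        return frame_bytes * 8.0 / link_bps * 1e6;
    }

    int main(void)
    {
        /* 64-byte frame: ~5.12 us at 100 Mbps, ~0.512 us at 1 Gbps */
        printf("100M: %.2f us\n", wire_time_us(64, 100e6));
        printf("1G:   %.3f us\n", wire_time_us(64, 1e9));
        /* A store-and-forward path must buffer the whole frame before
         * forwarding, so each such hop adds roughly one full wire time.
         * Against a 1.5 us total budget, the extra ~0.5 us at 1 Gbps is
         * significant, which is why cut-through matters here. */
        return 0;
    }
    ```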

  • Hi Steven,

    Apologies for the confusion caused by the inconsistency between what the TRM states and the functionality that is actually available. Cut-through is not available on AM263Px. I will take measures on my end to get the TRM updated. You are right, a lot of cut-and-paste errors can be spotted in the TRM, and this needs to be addressed.

    Regarding the bare-metal example, it will be available in the next SDK release. The example flow can be expected to be similar to the AM64x Enet loopback NORTOS-based example. Once the bare-metal example is available, we should be able to better evaluate the performance for your use case.

    Let me know if I can help for anything else.

    Regards,
    Shaunak