
AM5728: Ethernet SMP performance

Part Number: AM5728
Other Parts Discussed in Thread: SYSBIOS

Hello.

I have run an experiment evaluating the performance of the A15 cores in SMP mode on the AM5728. Up to 24 streams with a packet size of 128 bytes are sent to a gigabit port over a period of 250 ms. With SMP disabled and 24 streams, the core load is ~98%. With SMP enabled and all threads placed on core 0, 100% core load is reached with just 6 threads. With automatic distribution of the threads across the cores and 6 threads, the loads are: core 0 at 98%, core 1 at 19%. As the number of threads increases, the load on core 1 grows while the load on core 0 decreases: with 15 threads, core 0 is at 89% and core 1 at 50%; with 24 threads, core 0 is at 78% and core 1 at 83%. What explains this behavior, and is it the result of a bug? Why is the behavior of the program without SMP not the same as with SMP enabled and all threads placed on core 0?

  • Please post which SDK you are using - Linux or RTOS? Which version?

  • Hello,

    I am using the RTOS SDK processor_sdk_rtos_am57xx_6_01_00_08.
  • Hi,

    By "threads" I assume you mean SYSBIOS tasks? What are the core affinities for these tasks (core0, core1, none)? What are the task priorities? Are some task higher priority than others? Are you dynamically or statically creating the tasks for the difference scenarios you mention?

    Regards,
    Frank

  • Hi,

    Yes, by "threads" I mean SYSBIOS tasks.

    The screenshot below shows the number of tasks, their affinity, and priority. Tasks a15_eth_taskFxn and Benchmark_task are created dynamically.

    a15_eth_taskFxn - the task that receives Ethernet packets

    Benchmark_task - the task that measures core load and writes it to the logs

    To place all threads on core 0 I use Task.defaultAffinity = 0; in the configuration.
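
    For the dynamically created tasks, the affinity can also be set per task. A minimal sketch, assuming SMP SYS/BIOS with ti.sysbios.knl.Task; the priority and stack size values here are placeholders:

        #include <xdc/std.h>
        #include <xdc/runtime/Error.h>
        #include <xdc/runtime/System.h>
        #include <ti/sysbios/knl/Task.h>

        Void a15_eth_taskFxn(UArg arg0, UArg arg1);    /* the receive task mentioned above */

        Void createEthTask(Void)
        {
            Task_Params params;
            Error_Block eb;

            Error_init(&eb);
            Task_Params_init(&params);
            params.priority  = 5;        /* placeholder priority */
            params.stackSize = 0x2000;   /* placeholder stack size */
            params.affinity  = 0;        /* run this task on core 0 only (SMP mode) */

            if (Task_create(a15_eth_taskFxn, &params, &eb) == NULL) {
                System_abort("Task_create failed");
            }
        }

    As far as I know, Task.defaultAffinity in the .cfg sets the default for tasks that do not specify an affinity, and params.affinity overrides it per task.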

    Regards,
    Alex

  • Alex,

    To answer your last question, many of the APIs used when SMP is enabled have much more complicated implementations than when SMP is not enabled.

    An entirely different task scheduler is used.

    All critical section code that would otherwise be protected using a very lightweight Hwi_disable/Hwi_restore protection mechanism when SMP is disabled must instead be protected with a much heavier inter-core-lock mechanism when SMP is enabled. As there are MANY instances of critical section code within SYS/BIOS, this adds up to a considerable amount of extra overhead.
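
    As a minimal sketch of the pattern (Hwi_disable/Hwi_restore are the standard SYS/BIOS calls; the function and variable names here are purely illustrative), a typical critical section looks like this:

        #include <xdc/std.h>
        #include <ti/sysbios/hal/Hwi.h>

        volatile UInt32 sharedCounter = 0;   /* some state shared with an ISR or another task */

        Void updateShared(Void)
        {
            UInt key;

            key = Hwi_disable();   /* non-SMP: just masks interrupts on the local core       */
                                   /* SMP: uses the heavier inter-core-lock mechanism above  */
            sharedCounter++;       /* critical section */
            Hwi_restore(key);
        }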

    All of this additional overhead results in greater CPU load when compared to non-SMP enabled BIOS.

    Alan