
AM625-Q1: Not able to run IPC RP Message Linux Echo Demo Application on R5F

Part Number: AM625-Q1
Other Parts Discussed in Thread: SK-AM62-LP


Hi TI Team,

I am trying to run the IPC RP Message Linux Echo demo application to test the RPMsg and LPM features on the SK-AM62-LP board.

I am following the steps mentioned in this link:

https://dev.ti.com/tirex/explore/content/mcu_plus_sdk_am62x_11_00_00_16/docs/api_guide_am62x/EXAMPLES_DRIVERS_IPC_RPMESSAGE_LINUX_ECHO.html

Steps followed:

1. Loaded the R5F image as part of tispl.bin

2. Let Linux boot up and run

3. Entered the commands, and I can see some logs on the Linux terminal

But as mentioned in the link, I don't see the expected output on the WakeUp UART. Also, nothing happens when I type on the UART terminal.

Did I miss something or make a mistake somewhere?

Regards

Mayank

  • Hello Mayank,

    Your output for RPMsg looks the way I would expect. I am sending your thread over to our low power owner for comment. Feel free to ping the thread if you have not received another response within a day or so.

    Regards,

    Nick

  • Actually, just to double-check:

    Are you planning on writing custom code to run alongside the DM task on the DM R5F core? (there is no other R5 core on AM62x)

    Developing on the DM R5F

    If so, please keep in mind that development on the DM R5F would be supported, but low power modes would NOT be supported in that case. Please refer to this page in the MCU+ SDK docs:
    https://software-dl.ti.com/mcu-plus-sdk/esd/AM62X/10_01_00_33/exports/docs/api_guide_am62x/DEVELOP_AND_DEBUG_DMR5.html 

    Additional important notes for DM R5F development 

    I am going to get this additional information added to that page at a later point in time:

    Non-DM code running on the DM R5F should not interfere with the DM task in any way. That means the non-DM task should not crash the DM R5F core, block the DM task from running, corrupt DM memory, etc.

    If you are programming the DM R5F, it is suggested to add a way for the system to recover if the DM R5F locks up or otherwise enters a bad state. Remember that if the DM task is non-functional, code that is normally used for recovery may not work (e.g., the Linux shutdown command). The DM R5F watchdog timer is one recovery tool. There is an example in the MCU+ SDK at examples/drivers/watchdog/watchdog_interrupt
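    The recovery idea can be sketched as a small simulation (this is a software illustration of the watchdog pattern, not the MCU+ SDK watchdog API; all names and the timeout value are made up):

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define WDT_TIMEOUT_TICKS 5   /* made-up timeout for illustration */

    /* Simulate n_ticks periodic watchdog ticks. A healthy task kicks
     * (clears) the counter every tick; a hung task never does.
     * Returns true if the watchdog would trigger recovery. */
    bool wdt_simulate(int n_ticks, bool task_healthy)
    {
        int counter = 0;
        for (int i = 0; i < n_ticks; i++) {
            counter++;                     /* periodic timer tick */
            if (task_healthy)
                counter = 0;               /* task kicks the watchdog */
            if (counter > WDT_TIMEOUT_TICKS)
                return true;               /* would reset/recover the core */
        }
        return false;
    }

    int main(void)
    {
        assert(!wdt_simulate(100, true));  /* healthy task: no recovery */
        assert(wdt_simulate(100, false));  /* hung task: recovery fires */
        printf("watchdog sketch OK\n");
        return 0;
    }
    ```

    In real firmware, the "recovery" branch would be the hardware watchdog expiring and resetting or interrupting the core, as in the watchdog_interrupt example mentioned above.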

    Non-DM code should not crash the DM R5F core:

    The function DebugP_assert() is used to freeze the operation of a core and preserve the state when an error is detected. However, that means that DebugP_assert() halts the entire DM R5F core, including the DM task. In most situations, for non-DM tasks, we suggest using different error handling mechanisms other than DebugP_assert().
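    As a loose illustration of that pattern (not the MCU+ SDK API; the status type, function, and ADC range here are made up), a non-DM task can report errors through return codes instead of halting the core:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical status codes: the error is reported to the caller
     * rather than freezing the whole core the way DebugP_assert() does. */
    typedef enum { APP_OK = 0, APP_ERR_BAD_INPUT = -1 } app_status_t;

    /* Example non-DM task step that validates input instead of asserting. */
    app_status_t process_sample(int sample)
    {
        if (sample < 0 || sample > 4095)   /* e.g., a 12-bit ADC range */
            return APP_ERR_BAD_INPUT;      /* report the error, don't halt */
        /* ... normal processing would go here ... */
        return APP_OK;
    }

    int main(void)
    {
        assert(process_sample(1000) == APP_OK);
        assert(process_sample(-5) == APP_ERR_BAD_INPUT); /* handled; core keeps running */
        printf("error-return sketch OK\n");
        return 0;
    }
    ```

    The supervisor (or a logging task) can then decide how to react, while the DM task keeps running.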

    Non-DM code should not block the DM task from running:

    The DM task should always be the highest priority task running on the DM R5F. That ensures that other tasks cannot block the DM task from running with while() loops, if() conditions, etc.
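    A toy simulation of fixed-priority preemptive scheduling (made-up names, not FreeRTOS or SDK code) shows why this matters: if the DM task has the highest priority, a lower-priority busy loop cannot starve it.

    ```c
    #include <assert.h>
    #include <stdio.h>

    enum { N_TICKS = 100 };

    /* Both tasks are always ready; the app task models a while(1) busy
     * loop. Each tick, the higher-priority task gets the CPU. Returns
     * how many ticks the DM task ran out of N_TICKS. */
    int dm_run_ticks(int dm_prio, int app_prio)
    {
        int dm_ran = 0;
        for (int t = 0; t < N_TICKS; t++) {
            if (dm_prio > app_prio)
                dm_ran++;          /* DM preempts the busy-looping task */
            /* else: the busy loop keeps the CPU and the DM task starves */
        }
        return dm_ran;
    }

    int main(void)
    {
        assert(dm_run_ticks(2, 1) == N_TICKS); /* DM highest: always runs */
        assert(dm_run_ticks(1, 2) == 0);       /* DM starved by busy loop */
        printf("priority sketch OK\n");
        return 0;
    }
    ```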

    Non-DM code should not corrupt the DM memory:

    Static analysis tools and other methods can be used to catch potential memory issues.
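    One runtime method is to bounds-check every write from non-DM code so it can never land in a region reserved for the DM task. A minimal sketch, with made-up region addresses (real region boundaries would come from the linker map):

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical region reserved for non-DM application data. */
    #define APP_REGION_START 0x1000u
    #define APP_REGION_END   0x2000u   /* exclusive */

    /* Returns 0 on success, -1 if the write would leave the app region
     * (and therefore potentially corrupt DM memory). */
    int app_safe_write(uintptr_t dst, const void *src, size_t len, uint8_t *mem)
    {
        if (dst < APP_REGION_START || dst + len > APP_REGION_END)
            return -1;                  /* refuse to touch DM memory */
        memcpy(mem + dst, src, len);
        return 0;
    }

    int main(void)
    {
        static uint8_t mem[0x2000];
        const uint8_t data[4] = {1, 2, 3, 4};

        assert(app_safe_write(0x1800, data, sizeof data, mem) == 0);  /* in range */
        assert(app_safe_write(0x1FFE, data, sizeof data, mem) == -1); /* overruns */
        assert(app_safe_write(0x0800, data, sizeof data, mem) == -1); /* DM region */
        printf("bounds-check sketch OK\n");
        return 0;
    }
    ```

    On a core with an MPU, the same policy can also be enforced in hardware by configuring the DM regions as inaccessible to the non-DM task.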

  • Hi Nick, yes, the plan is to write custom code that runs alongside the DM task. To give you a general overview: we need the R5 as a housekeeping processor that can process some key inputs via an external ADC, plus power-moding functionality where, on shutdown, the A53 shuts peripherals down and then fixed code in the R5/M4 completes the shutdown and enters deep sleep. We are still exploring whether to go with the R5 or the M4; since I see a great deal of limitations with the R5 for the LPM feature when using custom code, I think we might need to check the M4 as well.

    Also, kindly check the response from the LPM owner; we're still waiting to hear how to run the application.

  • Hello Mayank,

    Understood, thank you for confirming. If deep sleep is needed, I would suggest evaluating M4F for your task.

    Is this a TI ADC?

    AM62x also has the option to use PRU cores in the PRU Subsystem (PRUSS). We have some examples of controlling TI ADCs with the PRU cores on AM64x PRU_ICSSG here:
    https://software-dl.ti.com/mcu-plus-sdk/esd/AM64X/10_01_00_32/exports/docs/api_guide_am64x/DRIVERS_PRU_ADC.html
    https://software-dl.ti.com/mcu-plus-sdk/esd/AM64X/10_01_00_32/exports/docs/api_guide_am64x/EXAMPLES_PRU_ADC.html 

    There has been some discussion on porting these examples from AM64x, where R5F cores (NOT DM R5F cores) are used to control the PRU subsystem, to AM62x, where Linux would control the PRU subsystem. If you are interested, I could check to see if there are firm plans for porting that stuff to AM62x.

    Regards,

    Nick

  • Hello Mayank,

    Regarding the MCU Only kernel logs, it's unclear what the error is referring to, but here is the RCU stall warning page: https://docs.kernel.org/RCU/stallwarn.html

    We can quickly test the suspend sequence using 'rtcwake -m mem -s 10', which puts the device to sleep for 10 seconds. This will confirm that the suspend/resume sequence is working correctly.

    Then we can add 'echo 100000 > /sys/devices/system/cpu/cpu0/power/pm_qos_resume_latency_us' to enter MCU Only mode, then run the same rtcwake command. This will confirm that the MCU remains online and works correctly.

    If both of these work, then we can look closer at the IPC wakeup source.

    Best Regards,

    Anshu

  • Hi Anshu

    Using rtcwake -m mem -s 10, the logs below were seen on the terminal, and then the terminal becomes unresponsive. Attaching the logs from the terminal:

    root@am62xx-lp-evm:~# rtcwake -m mem -s 10
    rtcwake: assuming RTC uses UTC ...
    rtcwake: wakeup from "mem" using /dev/rtc0 at Thu Jan  1 00:08:53 1970
    [  515.881601] PM: suspend entry (deep)
    [  515.885816] Filesystems sync: 0.000 seconds
    [  536.899219] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
    [  536.905340] rcu:     3-...0: (0 ticks this GP) idle=56a4/1/0x4000000000000000 softirq=8027/8027 fqs=2438
    [  536.914550] rcu:     (detected by 1, t=5255 jiffies, g=12681, q=554 ncpus=4)
    [  536.921329] Sending NMI from CPU 1 to CPUs 3:
    [  538.291247] tps6598x 0-003f: tps6598x_interrupt: failed to read version (-110)
    [  540.371248] tps6598x 0-003f: tps6598x_interrupt: failed to read event1
    [  541.907250] tps6598x 0-003f: tps6598x_interrupt: failed to read version (-110)
    [  543.955247] tps6598x 0-003f: tps6598x_interrupt: failed to read event1
    [  545.491253] tps6598x 0-003f: tps6598x_interrupt: failed to read version (-110)
    [  546.922255] rcu: rcu_preempt kthread starved for 2498 jiffies! g12681 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=0
    [  546.932595] rcu:     Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
    [  546.941711] rcu: RCU grace-period kthread stack dump:
    [  546.946749] task:rcu_preempt     state:I stack:0     pid:16    tgid:16    ppid:2      flags:0x00000008
    [  546.956050] Call trace:
    [  546.958487]  __switch_to+0xe4/0x140
    [  546.961987]  __schedule+0x268/0xa84
    [  546.965474]  schedule+0x34/0x104
    [  546.968698]  schedule_timeout+0x84/0xfc
    [  546.972529]  rcu_gp_fqs_loop+0x118/0x4c8
    [  546.976449]  rcu_gp_kthread+0x134/0x160
    [  546.980280]  kthread+0x110/0x114
    [  546.983506]  ret_from_fork+0x10/0x20
    [  546.987077] rcu: Stack dump where RCU GP kthread last ran:
    [  546.992549] Sending NMI from CPU 1 to CPUs 0:
    [  546.996905] NMI backtrace for cpu 0 skipped: idling at default_idle_call+0x28/0x3c
    [  610.015219] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
    [  610.021340] rcu:     3-...0: (0 ticks this GP) idle=56a4/1/0x4000000000000000 softirq=8027/8027 fqs=9683
    [  610.030551] rcu:     (detected by 0, t=23533 jiffies, g=12681, q=1404 ncpus=4)
    [  610.037502] Sending NMI from CPU 0 to CPUs 3:
    [  611.443256] tps6598x 0-003f: tps6598x_interrupt: failed to read version (-110)
    [  613.523251] tps6598x 0-003f: tps6598x_interrupt: failed to read event1
    [  615.103253] tps6598x 0-003f: tps6598x_interrupt: failed to read version (-110)
    [  617.203245] tps6598x 0-003f: tps6598x_interrupt: failed to read event1
    [  618.783257] tps6598x 0-003f: tps6598x_interrupt: failed to read version (-110)
    [  620.038426] rcu: rcu_preempt kthread starved for 2495 jiffies! g12681 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=2
    [  620.048768] rcu:     Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
    [  620.057886] rcu: RCU grace-period kthread stack dump:
    [  620.062927] task:rcu_preempt     state:I stack:0     pid:16    tgid:16    ppid:2      flags:0x00000008
    [  620.072233] Call trace:
    [  620.074672]  __switch_to+0xe4/0x140
    [  620.078175]  __schedule+0x268/0xa84
    [  620.081661]  schedule+0x34/0x104
    [  620.084887]  schedule_timeout+0x84/0xfc
    [  620.088717]  rcu_gp_fqs_loop+0x118/0x4c8
    [  620.092637]  rcu_gp_kthread+0x134/0x160
    [  620.096467]  kthread+0x110/0x114
    [  620.099692]  ret_from_fork+0x10/0x20
    [  620.103264] rcu: Stack dump where RCU GP kthread last ran:
    [  620.108735] Sending NMI from CPU 0 to CPUs 2:
    [  620.113094] NMI backtrace for cpu 2 skipped: idling at default_idle_call+0x28/0x3c
    [  681.373123] kauditd_printk_skb: 8 callbacks suppressed
    [  681.373145] audit: type=1701 audit(1736475398.396:20): auid=4294967295 uid=993 gid=989 ses=4294967295 subj=kernel pid=499 comm="systemd-network" exe="/usr/lib/systemd/systemd-networkd" sig=6 res=1
    [  683.131220] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
    [  683.137346] rcu:     3-...0: (0 ticks this GP) idle=56a4/1/0x4000000000000000 softirq=8027/8027 fqs=17079
    [  683.146644] rcu:     (detected by 0, t=41813 jiffies, g=12681, q=3496 ncpus=4)
    [  683.153597] Sending NMI from CPU 0 to CPUs 3:
    [  684.627925] tps6598x 0-003f: tps6598x_interrupt: failed to read version (-110)
    [  686.707254] tps6598x 0-003f: tps6598x_interrupt: failed to read event1
    [  688.243266] tps6598x 0-003f: tps6598x_interrupt: failed to read version (-110)
    [  690.291251] tps6598x 0-003f: tps6598x_interrupt: failed to read event1
    [  691.827251] tps6598x 0-003f: tps6598x_interrupt: failed to read version (-110)
    [  693.154519] rcu: rcu_preempt kthread timer wakeup didn't happen for 2505 jiffies! g12681 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
    [  693.165900] rcu:     Possible timer handling issue on cpu=0 timer-softirq=11278
    [  693.172934] rcu: rcu_preempt kthread starved for 2511 jiffies! g12681 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
    [  693.183350] rcu:     Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
    [  693.192462] rcu: RCU grace-period kthread stack dump:
    [  693.197499] task:rcu_preempt     state:I stack:0     pid:16    tgid:16    ppid:2      flags:0x00000008
    [  693.206799] Call trace:
    [  693.209238]  __switch_to+0xe4/0x140
    [  693.212737]  __schedule+0x268/0xa84
    [  693.216224]  schedule+0x34/0x104
    [  693.219448]  schedule_timeout+0x84/0xfc
    [  693.223278]  rcu_gp_fqs_loop+0x118/0x4c8
    [  693.227198]  rcu_gp_kthread+0x134/0x160
    [  693.231027]  kthread+0x110/0x114
    [  693.234251]  ret_from_fork+0x10/0x20
    [  693.237823] rcu: Stack dump where RCU GP kthread last ran:
    [  693.243302] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Tainted: G           O       6.12.17-ti-00771-gc85877d40f8e #1
    [  693.253375] Tainted: [O]=OOT_MODULE
    [  693.256853] Hardware name: Texas Instruments AM62x LP SK (DT)
    [  693.262586] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
    [  693.269534] pc : default_idle_call+0x28/0x3c
    [  693.273797] lr : default_idle_call+0x24/0x3c
    [  693.278060] sp : ffff800081403d90
    [  693.281364] x29: ffff800081403d90 x28: 00000000fde97490 x27: 0000000000000000
    [  693.288493] x26: ffff000077bc4f00 x25: ffff800081413480 x24: 0000000000000000
    [  693.295621] x23: 0000000000000000 x22: ffff800081409d68 x21: ffff800081413480
    [  693.302749] x20: ffff800081409c48 x19: 0000000000000000 x18: 0000000000000000
    [  693.309876] x17: 0000000000000000 x16: 0000000000000000 x15: ffff000077b76240
    [  693.317005] x14: 0000000000000000 x13: 0000000000000287 x12: ffff000077bb82c0
    [  693.324132] x11: 0000000000000396 x10: 00000000000009e0 x9 : ffff800081403cc0
    [  693.331260] x8 : ffff800081413ec0 x7 : 00000000000000c0 x6 : ffff8000812107d8
    [  693.338387] x5 : ffff000077b727d8 x4 : ffff8000812107e8 x3 : 0000000000000000
    [  693.345514] x2 : 00000000000a3cfc x1 : 0000000000000001 x0 : 4000000000000000
    [  693.352641] Call trace:
    [  693.355078]  default_idle_call+0x28/0x3c
    [  693.358996]  do_idle+0x200/0x258
    [  693.362223]  cpu_startup_entry+0x38/0x3c
    [  693.366140]  kernel_init+0x0/0x1d4
    [  693.369536]  start_kernel+0x554/0x6a0
    [  693.373195]  __primary_switched+0x80/0x88
    ^Z^Z^H^Hrtcwake -m mem -s 10[  756.395219] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
    [  756.401345] rcu:     3-...0: (0 ticks this GP) idle=56a4/1/0x4000000000000000 softirq=8027/8027 fqs=24331
    [  756.410642] rcu:     (detected by 2, t=60126 jiffies, g=12681, q=4374 ncpus=4)
    [  756.417595] Sending NMI from CPU 2 to CPUs 3:
    [  766.418516] rcu: rcu_preempt kthread starved for 2491 jiffies! g12681 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=0
    [  766.433185] rcu:     Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
    [  766.442299] rcu: RCU grace-period kthread stack dump:
    [  766.447337] task:rcu_preempt     state:I stack:0     pid:16    tgid:16    ppid:2      flags:0x00000008
    [  766.456638] Call trace:
    [  766.459076]  __switch_to+0xe4/0x140
    [  766.462575]  __schedule+0x268/0xa84
    [  766.466063]  schedule+0x34/0x104
    [  766.469287]  schedule_timeout+0x84/0xfc
    [  766.473118]  rcu_gp_fqs_loop+0x118/0x4c8
    [  766.477037]  rcu_gp_kthread+0x134/0x160
    [  766.480867]  kthread+0x110/0x114
    [  766.484091]  ret_from_fork+0x10/0x20
    [  766.487663] rcu: Stack dump where RCU GP kthread last ran:
    [  766.493135] Sending NMI from CPU 2 to CPUs 0:
    [  766.497489] NMI backtrace for cpu 0 skipped: idling at default_idle_call+0x28/0x3c
    [  829.499219] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
    [  829.505344] rcu:     3-...0: (0 ticks this GP) idle=56a4/1/0x4000000000000000 softirq=8027/8027 fqs=31712
    [  829.514641] rcu:     (detected by 1, t=78404 jiffies, g=12681, q=5214 ncpus=4)
    [  829.521593] Sending NMI from CPU 1 to CPUs 3:
    [  839.522519] rcu: rcu_preempt kthread starved for 586 jiffies! g12681 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=2
    [  839.537119] rcu:     Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
    [  839.546236] rcu: RCU grace-period kthread stack dump:
    [  839.551276] task:rcu_preempt     state:I stack:0     pid:16    tgid:16    ppid:2      flags:0x00000008
    [  839.560580] Call trace:
    [  839.563020]  __switch_to+0xe4/0x140
    [  839.566521]  __schedule+0x268/0xa84
    [  839.570008]  schedule+0x34/0x104
    [  839.573233]  schedule_timeout+0x84/0xfc
    [  839.577062]  rcu_gp_fqs_loop+0x118/0x4c8
    [  839.580981]  rcu_gp_kthread+0x134/0x160
    [  839.584812]  kthread+0x110/0x114
    [  839.588035]  ret_from_fork+0x10/0x20
    [  839.591607] rcu: Stack dump where RCU GP kthread last ran:
    [  839.597078] Sending NMI from CPU 1 to CPUs 2:
    [  839.601438] NMI backtrace for cpu 2 skipped: idling at default_idle_call+0x28/0x3c
    [  849.783220] sched: DL replenish lagged too much
    [  902.619218] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
    [  902.625333] rcu:     3-...0: (0 ticks this GP) idle=56a4/1/0x4000000000000000 softirq=8027/8027 fqs=39155
    [  902.634628] rcu:     (detected by 1, t=96685 jiffies, g=12681, q=6460 ncpus=4)
    [  902.641580] Sending NMI from CPU 1 to CPUs 3:
    [  904.051264] tps6598x 0-003f: tps6598x_interrupt: failed to read version (-110)
    [  906.131253] tps6598x 0-003f: tps6598x_interrupt: failed to read event1
    [  907.667256] tps6598x 0-003f: tps6598x_interrupt: failed to read version (-110)
    [  909.715250] tps6598x 0-003f: tps6598x_interrupt: failed to read event1
    [  911.251271] tps6598x 0-003f: tps6598x_interrupt: failed to read version (-110)
    [  912.642505] rcu: rcu_preempt kthread starved for 2498 jiffies! g12681 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=0
    [  912.652848] rcu:     Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
    [  912.661965] rcu: RCU grace-period kthread stack dump:
    [  912.667005] task:rcu_preempt     state:I stack:0     pid:16    tgid:16    ppid:2      flags:0x00000008
    [  912.676306] Call trace:
    [  912.678746]  __switch_to+0xe4/0x140
    [  912.682243]  __schedule+0x268/0xa84
    [  912.685730]  schedule+0x34/0x104
    [  912.688955]  schedule_timeout+0x84/0xfc
    [  912.692784]  rcu_gp_fqs_loop+0x118/0x4c8
    [  912.696703]  rcu_gp_kthread+0x134/0x160
    [  912.700534]  kthread+0x110/0x114
    [  912.703758]  ret_from_fork+0x10/0x20
    [  912.707330] rcu: Stack dump where RCU GP kthread last ran:
    [  912.712802] Sending NMI from CPU 1 to CPUs 0:
    [  912.717159] NMI backtrace for cpu 0 skipped: idling at default_idle_call+0x28/0x3c
    [  975.735219] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
    [  975.741339] rcu:     3-...0: (0 ticks this GP) idle=56a4/1/0x4000000000000000 softirq=8027/8027 fqs=46683
    [  975.750635] rcu:     (detected by 0, t=114964 jiffies, g=12681, q=7854 ncpus=4)
    [  975.757673] Sending NMI from CPU 0 to CPUs 3:
    [  977.907248] ti-sci 44043000.system-controller: Mbox timedout in resp(caller: ti_sci_cmd_put_device+0x18/0x24)
    [  977.921554] ti-sci 44043000.system-controller: Mbox send fail -110
    [  985.758596] rcu: rcu_preempt kthread timer wakeup didn't happen for 2506 jiffies! g12681 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
    [  985.769980] rcu:     Possible timer handling issue on cpu=0 timer-softirq=31230
    [  985.777014] rcu: rcu_preempt kthread starved for 2512 jiffies! g12681 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
    [  985.787431] rcu:     Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
    [  985.796542] rcu: RCU grace-period kthread stack dump:
    [  985.801579] task:rcu_preempt     state:I stack:0     pid:16    tgid:16    ppid:2      flags:0x00000008
    [  985.810881] Call trace:
    [  985.813319]  __switch_to+0xe4/0x140
    [  985.816817]  __schedule+0x268/0xa84
    [  985.820304]  schedule+0x34/0x104
    [  985.823529]  schedule_timeout+0x84/0xfc
    [  985.827360]  rcu_gp_fqs_loop+0x118/0x4c8
    [  985.831278]  rcu_gp_kthread+0x134/0x160
    [  985.835106]  kthread+0x110/0x114
    [  985.838330]  ret_from_fork+0x10/0x20
    [  985.841900] rcu: Stack dump where RCU GP kthread last ran:
    [  985.847380] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Tainted: G           O       6.12.17-ti-00771-gc85877d40f8e #1
    [  985.857451] Tainted: [O]=OOT_MODULE
    [  985.860931] Hardware name: Texas Instruments AM62x LP SK (DT)
    [  985.866663] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
    [  985.873612] pc : default_idle_call+0x28/0x3c
    [  985.877877] lr : default_idle_call+0x24/0x3c
    [  985.882140] sp : ffff800081403d90
    [  985.885444] x29: ffff800081403d90 x28: 00000000fde97490 x27: 0000000000000000
    [  985.892574] x26: ffff000077bc4f00 x25: ffff800081413480 x24: 0000000000000000
    [  985.899704] x23: 0000000000000000 x22: ffff800081409d68 x21: ffff800081413480
    [  985.906832] x20: ffff800081409c48 x19: 0000000000000000 x18: 0000000000000000
    [  985.913958] x17: 0000000000000000 x16: 0000000000000000 x15: ffff000077b76240
    [  985.921086] x14: 0000000000000000 x13: 0000000000000067 x12: ffff000077bb82c0
    [  985.928213] x11: 0000000000000800 x10: 00000000000009e0 x9 : ffff800081403cc0
    [  985.935340] x8 : ffff800081413ec0 x7 : 00000000000000c0 x6 : ffff8000812107d8
    [  985.942468] x5 : ffff000077b727d8 x4 : ffff8000812107e8 x3 : 0000000000000000
    [  985.949595] x2 : 000000000012c77c x1 : 0000000000000001 x0 : 4000000000000000
    [  985.956723] Call trace:
    [  985.959160]  default_idle_call+0x28/0x3c
    [  985.963078]  do_idle+0x200/0x258
    [  985.966305]  cpu_startup_entry+0x34/0x3c
    [  985.970222]  kernel_init+0x0/0x1d4
    [  985.973619]  start_kernel+0x554/0x6a0
    [  985.977278]  __primary_switched+0x80/0x88
    [  985.981400] ti-sci 44043000.system-controller: Message for 0 is not expected!
    
    

  • Hi Anshu, kindly look into the response and suggest the next steps to run this application.

  • Hi Mayank,

    Can you provide details of your setup so I can replicate it on my end?

    What is plugged into the SK-AM62-LP board?

    What software changes have been made on the Linux side and M4F IPC firmware?

    As Nick mentioned, low power modes are not supported if the DM R5F core is used for general purpose applications. Additionally, low power modes are only supported using SPL boot.

    Best Regards,

    Anshu

  • Hi Anshu

    Yes, we're using the SPL boot flow and integrating ipc_rpmsg_echo_linux.wkup-r5f0_0.strip.out from the example into tispl.bin for the R5, as suggested in this link:

    https://dev.ti.com/tirex/explore/node?node=A__AZNhqJdyJ3LM.YBw-Z2UAw__AM62-ACADEMY__uiYMDcq__LATEST
    There is no firmware on the M4F side, and we're using the TI U-Boot from the TI-processor-sdk-linux-am62xx-evm-11.00.09.04 to load the default Linux.

    That's our setup.

    Regards

    Mayank 

  • Hi Mayank,

    There is a bug in the PMIC driver for AM62x on SDK 11.0 which impacts SK-AM62-LP: https://sir.ext.ti.com/jira/browse/EXT_EP-12340

    Can you try on SDK 10.1? This bug is planned to be fixed in SDK 11.1.

    Best Regards,

    Anshu