
AM5726: Rpmsg length table is full

Part Number: AM5726

Referring to the related post, is it possible to increase the Linux kernel RPMsg FIFO queue depth from its current limit of 32 messages?

The reason I ask is that we occasionally see these messages in our logs on a long-running system. Adding some buffering would help account for the Linux host not being hard real-time.

The data is being fed from the PRU, which is pulling it from a bunch of ADCs.

The PRU code is based on the RPMSG_Echo example included in SDK 3.

[ 100.890535] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 100.922527] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 100.954526] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 100.986530] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 101.018526] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 101.050528] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 101.082528] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 101.114529] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 101.146529] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 101.178530] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 101.210531] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 101.242531] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 101.274543] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full
[ 101.306533] rpmsg_pru virtio0.rpmsg-pru.-1.30: Message length table is full

  • I found the relevant code in the driver, in drivers/rpmsg/rpmsg_pru.c.

    It looks like we would have to rebuild the rpmsg driver and the kernel:

    #define PRU_MAX_DEVICES                (8)
    /* Matches the definition in virtio_rpmsg_bus.c */
    #define RPMSG_BUF_SIZE                (512)
    #define MAX_FIFO_MSG                (32)
    #define FIFO_MSG_SIZE                RPMSG_BUF_SIZE
    
    if ((prudev->msg_idx_wr + 1) % MAX_FIFO_MSG == prudev->msg_idx_rd) {
            dev_err(&rpdev->dev, "Message length table is full\n");
            return;
    }
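    The check above is a classic circular-buffer full test: one slot is always left empty, so the table actually holds at most MAX_FIFO_MSG - 1 = 31 messages before the driver starts logging the error. A minimal userspace sketch of the same index arithmetic (the `msg_fifo` struct and helper names here are illustrative, not from the driver):

    ```c
    #include <assert.h>
    #include <stdio.h>

    #define MAX_FIFO_MSG 32 /* matches drivers/rpmsg/rpmsg_pru.c */

    /* Hypothetical stand-in for the driver's write/read index pair. */
    struct msg_fifo {
        int idx_wr;
        int idx_rd;
    };

    /* Same condition the driver uses before printing
     * "Message length table is full". */
    static int fifo_is_full(const struct msg_fifo *f)
    {
        return (f->idx_wr + 1) % MAX_FIFO_MSG == f->idx_rd;
    }

    /* Number of messages currently queued. */
    static int fifo_count(const struct msg_fifo *f)
    {
        return (f->idx_wr - f->idx_rd + MAX_FIFO_MSG) % MAX_FIFO_MSG;
    }

    int main(void)
    {
        struct msg_fifo f = { 0, 0 };
        int stored = 0;

        /* Fill until the driver would report the table as full. */
        while (!fifo_is_full(&f)) {
            f.idx_wr = (f.idx_wr + 1) % MAX_FIFO_MSG;
            stored++;
        }

        /* One slot stays empty, so capacity is MAX_FIFO_MSG - 1. */
        printf("messages stored before full: %d\n", stored);
        assert(stored == MAX_FIFO_MSG - 1);
        assert(fifo_count(&f) == MAX_FIFO_MSG - 1);
        return 0;
    }
    ```

    So raising MAX_FIFO_MSG would add headroom, but only linearly; it does not change the underlying producer/consumer rate mismatch.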



  • Hello Mark,

    Quick check - let's make sure this is an issue of Linux being non-real-time as opposed to an overall throughput issue.

    What is your application's data throughput? I.e., how frequently are you going to want to send RPMsg packets?

    Are you sending data to a Linux userspace application?

    Regards,

    Nick

  • Hi Nick,

    Thanks for responding.

The PRU is processing 8 KSPS across 12 channels with a 32-bit data width.

    We are using CMEM for the data transfer, with the following config

    #define CMEM_SAMPLES_IN_BLK 256   // 256 ADC samples in a buffer block
    #define NUM_CMEM_BLKS        25   // Number of circular buffer blocks available

    if (sample_count > cmem_nsamples) {
        pru_rpmsg_send( ... );
    }

    The receiving application runs in user space under the SCHED_FIFO scheduling policy.

    The user space application processes the channel data in 1s blocks

    Based on your question, I think we can double the block size to 512 samples (or more) and halve the number of RPMsgs sent to Linux, instead of tweaking the RPMsg driver code.

    Thanks,

    Mark
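  • A quick back-of-envelope check of the numbers in this thread (a sketch; the helper names are illustrative, and it assumes each block fills at the per-channel sample rate): at 8 KSPS per channel, a 256-sample block fills every 32 ms, which matches the 32 ms spacing of the kernel log lines above, and doubling the block size to 512 samples halves the RPMsg signalling rate.

    ```c
    #include <assert.h>
    #include <stdio.h>

    #define SAMPLE_RATE_SPS  8000 /* 8 KSPS per channel */
    #define NUM_CHANNELS     12
    #define BYTES_PER_SAMPLE 4    /* 32-bit data width */

    /* Aggregate ADC data rate across all channels, in bytes/s. */
    static int throughput_bytes_per_sec(void)
    {
        return SAMPLE_RATE_SPS * NUM_CHANNELS * BYTES_PER_SAMPLE;
    }

    /* Time (ms) to fill one CMEM block, assuming the block fills at
     * the per-channel sample rate (an assumption; adjust if blocks
     * interleave all channels). */
    static int block_period_ms(int samples_in_blk)
    {
        return samples_in_blk * 1000 / SAMPLE_RATE_SPS;
    }

    int main(void)
    {
        printf("aggregate throughput: %d bytes/s\n",
               throughput_bytes_per_sec());
        printf("256-sample block period: %d ms\n", block_period_ms(256));
        printf("512-sample block period: %d ms\n", block_period_ms(512));
        return 0;
    }
    ```

    At 64 ms per notification instead of 32 ms, the SCHED_FIFO consumer gets twice as long to drain the 31-entry length table between wakeups.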

  • Hello Mark,

    Ok, RPMsg as a signalling mechanism to kick off data transfer sounds fine. Doubling the block size sounds easier than tweaking the driver if you have the system resources.

    The fastest that a PRU RPMsg can be sent and received seems to be about one millisecond, so depending on data throughput, customers are more likely to run into problems if they try to use the RPMsg packets themselves as the actual data transfer method.

    Regards,

    Nick