
A question about PCIe video transport on the DM8168

Hi,

I am using two TI DM8168 devices and want to transfer YUV video data from one to the other over PCIe. As far as I know, the link itself is very fast.

The demo program just sends 510 bytes every 10 seconds, and I changed it to send 4 MB (just an example, close to one frame of 1080p) every second. In the RC program, I find that the time cost of the following line is about 110 ms:

memset((buf + off_st), value, 4*1024*1024);   /* in put_data_in_local_buffer(), used by push_data() in tixx_rc_app.c */

110 ms is too long if I want to send video at 30 fps. I think it's because memset is an A8-side (CPU) memory operation. The EP side uses DMA, so its time cost is much lower (my test shows less than 15 ms).

How can I improve the RC-side send speed? The limit seems to come from software, not hardware.

Regards,

Xiaotao

  • I have learned that the memset is so slow because memory mapped from I/O space is not cached.

    So could the RC side use DMA as well? The demo program uses DMA only on the EP side. We need bi-directional transfers.

    Regards,

    Xiaotao
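For context, the generic Linux dmaengine framework is one way an RC-side driver could offload such copies from the CPU. A sketch, assuming a memcpy-capable DMA channel is available; the helper names follow current kernels, and on the 2.6.37 kernel used in this thread the equivalent call is chan->device->device_prep_dma_memcpy():

```c
#include <linux/dmaengine.h>

/* Illustrative: offload a memcpy to a DMA engine instead of the CPU.
 * dst and src must already be DMA (bus) addresses. */
static int dma_copy(dma_addr_t dst, dma_addr_t src, size_t len)
{
        dma_cap_mask_t mask;
        struct dma_chan *chan;
        struct dma_async_tx_descriptor *tx;
        dma_cookie_t cookie;

        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY, mask);
        chan = dma_request_channel(mask, NULL, NULL);
        if (!chan)
                return -ENODEV;

        tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
                                       DMA_PREP_INTERRUPT);
        if (!tx) {
                dma_release_channel(chan);
                return -EIO;
        }
        cookie = dmaengine_submit(tx);
        dma_async_issue_pending(chan);
        dma_sync_wait(chan, cookie);   /* block until the copy completes */
        dma_release_channel(chan);
        return 0;
}
```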

  • Xiaotao,

    The demo application is part of the EP s/w package and doesn't assume any specific RC (it could be x86, PPC, DM8168, etc.). That is the reason it relies on EDMA only on the EP side. But one can add DMA functionality on the RC side too.

    Btw, if the target on the peer side is memory (RAM), you can even mmap it without marking the mapping uncached (by avoiding pgprot_noncached() in the mmap function, for example), but then you also need to ensure cache coherency explicitly.
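A minimal sketch of what "avoiding pgprot_noncached()" can look like in a driver's mmap handler. The names here (my_drv_mmap, buf_phys) are illustrative, not from the actual ti81xx driver, and explicit coherency maintenance (e.g. dma_sync_single_for_cpu()/dma_sync_single_for_device() around each transfer) becomes the driver's responsibility:

```c
#include <linux/mm.h>

/* Illustrative mmap fop: map a physically contiguous buffer cached.
 * buf_phys is assumed to be a driver-private physical base address. */
static int my_drv_mmap(struct file *filp, struct vm_area_struct *vma)
{
        unsigned long size = vma->vm_end - vma->vm_start;

        /* Note: no pgprot_noncached() applied to vma->vm_page_prot,
         * so CPU access through this mapping is fast (cached) -- but
         * caches must now be flushed/invalidated around DMA. */
        return remap_pfn_range(vma, vma->vm_start,
                               buf_phys >> PAGE_SHIFT,
                               size, vma->vm_page_prot);
}
```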

  • Hi, another question is here.

    I am trying to send a decoded buffer to another DM8168 over PCIe, from EP to RC. In the vdec_vdis demo, I mmap the decoded buffer and then call send_data_by_dma in ti81xx_ep.c.

    The result is like:

    Unable to handle kernel paging request at virtual address 620e66a0
    pgd = c8d5c000
    [620e66a0] *pgd=8756b031, *pte=00000000, *ppte=00000000
    Internal error: Oops: 17 [#1]
    last sysfs file: /sys/kernel/uevent_seqnum
    Modules linked in: ti81xxhdmi ti81xxfb vpss syslink ti81xx_pcie_epdrv ti81xx_edma ipv6
    CPU: 0 Not tainted (2.6.37 #2)
    PC is at memcpy+0x48/0x330
    LR is at ti81xx_ep_dma_ioctl+0x74/0x240 [ti81xx_edma]
    pc : [<c0186b48>] lr : [<bf0494f0>] psr: 20000013
    sp : c74b3eac ip : 00000000 fp : c74b3ef4
    r10: 00000000 r9 : c74b2000 r8 : 51cf5dac
    r7 : 51cf5dac r6 : 00300000 r5 : bf049b10 r4 : bf0499dc
    r3 : 00300000 r2 : 002fff80 r1 : 620e66a0 r0 : e8800000
    Flags: nzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user
    Control: 10c5387d Table: 88d5c019 DAC: 00000015
    Process dvr_rdk_demo_mc (pid: 940, stack limit = 0xc74b22e8)
    Stack: (0xc74b3eac to 0xc74b4000)
    3ea0: bf049b10 00300000 51cf5dac 51cf5dac e8800000
    3ec0: bf0499dc bf0494f0 c74b3ef4 c74b3ed8 c01ab93c 00000000 c8ed8f00 00000004
    3ee0: 00000004 51cf5dac c74b3f04 c74b3ef8 c00c9bac bf049488 c74b3f74 c74b3f08
    3f00: c00ca2bc c00c9b90 40422000 c8cbca00 c01a83cc c8eeef40 4da2d4d1 36d61600
    3f20: c0084f80 c8cbca00 40422000 c74b3f70 0000002b 40422000 c74b2000 00000000
    3f40: 00000001 0000002b 00000000 00000000 51cf5dac c0045001 00000004 c8ed8f00
    3f60: c74b2000 00000000 c74b3fa4 c74b3f78 c00ca354 c00c9dc8 00000000 00000001
    3f80: 4da2d4d1 00000000 40d4f000 0021c000 00000036 c0041f48 00000000 c74b3fa8
    3fa0: c0041da0 c00ca308 00000000 40d4f000 00000004 c0045001 51cf5dac 20001050
    3fc0: 00000000 40d4f000 0021c000 00000036 20000000 00000000 00000002 0007d394
    3fe0: 00001050 51cf5da0 00013a50 40305aec 20000010 00000004 00000000 ffffffff
    Backtrace:
    [<bf04947c>] (ti81xx_ep_dma_ioctl+0x0/0x240 [ti81xx_edma]) from [<c00c9bac>] (vfs_ioctl+0x28/0x44)
    r8:51cf5dac r7:00000004 r6:00000004 r5:c8ed8f00 r4:00000000
    [<c00c9b84>] (vfs_ioctl+0x0/0x44) from [<c00ca2bc>] (do_vfs_ioctl+0x500/0x540)
    [<c00c9dbc>] (do_vfs_ioctl+0x0/0x540) from [<c00ca354>] (sys_ioctl+0x58/0x7c)
    [<c00ca2fc>] (sys_ioctl+0x0/0x7c) from [<c0041da0>] (ret_fast_syscall+0x0/0x30)
    r8:c0041f48 r7:00000036 r6:0021c000 r5:40d4f000 r4:00000000
    Code: ba000002 f5d1f03c f5d1f05c f5d1f07c (e8b151f8)
    ---[ end trace 030a84c59b780966 ]---

    I've checked that the mmapped buffer address is exactly 620e66a0, and the physical address is a875c680.

    However, if I define a buffer like
    static char dataAddr[DATA_TRANS_SIZE] in ti81xx_ep.c
    and use memcpy like this (where address is 620e66a0):
    memcpy((char*)dataAddr, (char*)address, datasize);

    no error occurs and the received buffer is correct. But the memcpy is very slow, as mentioned before.

    Is the mmapped buffer at 620e66a0 not recognized by the PCIe EP?
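One plausible reading of the oops above: the faulting address (620e66a0) equals the user-space mmap address, and kernel code cannot dereference a user-space virtual address directly in a plain memcpy(). A generic sketch of the safe baseline pattern, assuming the ioctl is handed a user pointer (illustrative, not the actual ti81xx_edma code):

```c
#include <linux/uaccess.h>
#include <linux/slab.h>

/* Illustrative: copy a user buffer into kernel memory before DMA.
 * user_addr/len are assumed to arrive via the ioctl argument. */
static char *bounce_user_buffer(unsigned long user_addr, size_t len)
{
        char *kbuf = kmalloc(len, GFP_KERNEL);

        if (!kbuf)
                return NULL;
        if (copy_from_user(kbuf, (void __user *)user_addr, len)) {
                kfree(kbuf);
                return NULL;
        }
        return kbuf;  /* kernel memory; can be dma_map'd and handed to EDMA */
}
```

A bounce copy like this reintroduces a CPU memcpy, so it trades safety for speed; a zero-copy path would instead pin the user pages (get_user_pages()) and program EDMA with their physical addresses.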

  • Hi, I also want to do PCIe video transport, but I cannot find ti81xx_ep.c and tixx_rc_app.c in EZSDK 5_04_00_11.

    Could this be a problem with my version?

    Regards.