
Keystone II SRIO interface with Linux.

Other Parts Discussed in Thread: 66AK2H12

Hi!

I followed this guide: http://processors.wiki.ti.com/index.php/MCSDK_HPC_3.x_MPI_over_RIONET to enable the SRIO interface on the Keystone II EVM board.

I followed the instructions, and after some work I got the drivers compiled and probed:

[ 12.921009] keystone-rapidio 2900000.rapidio: KeyStone RapidIO driver v1.2
[ 12.926641] keystone-rapidio 2900000.rapidio: initializing 3.125 Gbps interface with port configuration 0
[ 12.934816] keystone-rapidio 2900000.rapidio: enabling packet forwarding to port 0 for DestID 0xffff - 0xffff
[ 12.942928] keystone-rapidio 2900000.rapidio: enabling packet forwarding to port 0 for DestID 0xffff - 0xffff
[ 12.951039] keystone-rapidio 2900000.rapidio: enabling packet forwarding to port 0 for DestID 0xffff - 0xffff
[ 12.959149] keystone-rapidio 2900000.rapidio: enabling packet forwarding to port 0 for DestID 0xffff - 0xffff
[ 12.967263] keystone-rapidio 2900000.rapidio: enabling packet forwarding to port 0 for DestID 0xffff - 0xffff
[ 12.975374] keystone-rapidio 2900000.rapidio: enabling packet forwarding to port 0 for DestID 0xffff - 0xffff
[ 12.983484] keystone-rapidio 2900000.rapidio: enabling packet forwarding to port 0 for DestID 0xffff - 0xffff
[ 12.991595] keystone-rapidio 2900000.rapidio: enabling packet forwarding to port 0 for DestID 0xffff - 0xffff
[ 13.088282] keystone-rapidio 2900000.rapidio: SerDes for lane mask 0x2 on 3.125 Gbps not locked
[ 13.096246] keystone-rapidio 2900000.rapidio: port 0 not ready

uio_module_drv srio.4: registered misc device srio

But the port does not get initialized. dmesg gives:

keystone-rapidio 2900000.rapidio: port 0 is not initialized - PORT_OK not set
keystone-rapidio 2900000.rapidio: port 0 not ready

until it times out with:

keystone-rapidio 2900000.rapidio: RIO port register timeout, port mask 0x1 not ready.
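For reference, on a kernel with the RapidIO subsystem enabled, the enumeration state can be sanity-checked from userspace via the standard Linux RapidIO sysfs entries (these paths come from the generic RapidIO core, not anything KeyStone-specific); a minimal sketch:

```shell
#!/bin/sh
# Sanity-check the Linux RapidIO subsystem state from userspace.
# /sys/bus/rapidio is created by the RapidIO core; device entries
# appear only after the port reaches port_ok and enumeration runs.

check_rio() {
    if [ -d /sys/bus/rapidio/devices ]; then
        echo "RapidIO bus registered; enumerated devices:"
        ls /sys/bus/rapidio/devices   # stays empty until the link partner answers
    else
        echo "RapidIO bus not registered (driver missing or subsystem disabled)"
    fi
}

check_rio
# The keystone-rapidio probe messages themselves are in the kernel log:
#   dmesg | grep -i rapidio
```

On a board in the state shown above, the bus would be registered but no devices would be listed, since PORT_OK was never reached.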

          

How do I initialize the port correctly? I'm using a 66AK2H.

br,

jv

  • Which version of the MCSDK-HPC are you using? The GA release of this package will be available tomorrow, by the way.

    How do you have SRIO connected on the EVM? Are you using a breakout card? The above log indicates that the SerDes is not locked. For SRIO to establish port_ok, the two link partners have to be communicating and exchanging error-free control symbols and idles. port_ok is the very first step in establishing communication, and if you can't achieve it, it usually means something is physically wrong, e.g. mismatched data rates/port widths, bad reference clocks, or a bad board connection.
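    For what it's worth, PORT_OK is a standard RapidIO status bit (bit 1 of the Port n Error and Status CSR, defined as RIO_PORT_N_ERR_STS_PORT_OK in the Linux kernel's include/linux/rio_regs.h), and the driver essentially polls it until a timeout. A self-contained sketch of that poll loop, with the register read stubbed out for illustration (the stub and the poll budget are assumptions for the example, not the actual keystone-rapidio code):

    ```c
    #include <stdio.h>
    #include <stdint.h>
    #include <assert.h>

    /* Bit 1 of the RapidIO Port n Error and Status CSR -- same value as
     * RIO_PORT_N_ERR_STS_PORT_OK in the Linux kernel's rio_regs.h. */
    #define PORT_OK 0x00000002u

    /* Stub standing in for a read of the port's Error and Status CSR.
     * On real hardware this would be an MMIO read; here it pretends the
     * link partner starts answering after a few polls. */
    static uint32_t read_err_status_csr(void)
    {
        static int polls;
        return (++polls >= 3) ? PORT_OK : 0;
    }

    /* Poll until PORT_OK is set or max_polls attempts have been made.
     * Returns 0 on success, -1 on timeout (mirroring the driver's
     * "port 0 not ready" / "RIO port register timeout" behaviour). */
    static int wait_port_ok(int max_polls)
    {
        for (int i = 0; i < max_polls; i++) {
            if (read_err_status_csr() & PORT_OK)
                return 0;
            /* a real driver would delay between polls here */
        }
        return -1; /* link partner never answered: SerDes/link problem */
    }

    int main(void)
    {
        printf("port %s\n", wait_port_ok(10) == 0 ? "ok" : "not ready");
        return 0;
    }
    ```

    With no link partner connected, as in your log, the real register never reports PORT_OK and the loop can only time out, which is exactly what you are seeing.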

     

    Regards,

    Travis

     

  • Hi Travis!

    Thank you for your answer.

    We are using MCSDK 3.0, and I have added patches to include the SRIO interface from the TI git (3.10-rio-dev-dio branch). Is this included in the upcoming release?

    Actually, SRIO is not connected yet, so the unlocked SerDes is expected and I assume the driver is working correctly. We plan to connect two EVMs together and test SRIO functionality. Within the next month we will use the 66AK2H12 on a custom PCB, with the SRIO interface carrying communication between the CPU and an FPGA. I'll get back to this thread if we face any problems.

    br,

    jv 

  • JV,

    A couple of things to mention. First, to get quicker feedback, this MCSDK-HPC question should really go in the HPC forum: http://e2e.ti.com/support/applications/high-performance-computing/f/952.aspx. The folks who really know the HPC release monitor that forum.

    The GA release of the MCSDK-HPC will be available here later today:

    http://software-dl.ti.com/sdoemb/sdoemb_public_sw/mcsdk_hpc/latest/index_FDS.html

    You raise a valid question: the SRIO Linux package must still be downloaded from git. The package in the GA release is slightly different, and you can get the details here:

    http://processors.wiki.ti.com/index.php/MCSDK_HPC_3.x_MPI_over_SRIO

    See chapter 7 (click Expand on the right) for the git download instructions.

    MPI now runs directly over SRIO, so there is no TCP overhead. The two-node restriction is also gone; we now support multiple hops (packet forwarding), as shown in the topologies section of the guide above.

    Regards,

    Travis

     

  • Thank you, Travis, for pointing out the right forum and documentation. I had followed the MPI_over_RIONET guide previously; it was actually the only sensible example I found. I will go through these links and get more familiar with SRIO.

    br,

    jv