
OpenMPI example on two K2H boards via hyperlink transport fails

Hi,

we've got two TI K2H EVMs connected through Hyperlink using two breakout cards from Mistral. We followed all the instructions in http://processors.wiki.ti.com/index.php/MCSDK_HPC_3.x_Getting_Started_Guide#EVM_Setup word for word. We even used the u-boot FDT command from the MCSDK UG to make sure Hyperlink is enabled. Additionally, we downscaled the hlink clock using mpm-config.json as advised.

We then checked the testmpi example application. Running

    /opt/ti-openmpi/bin/mpirun --mca btl_base_verbose 100 --mca btl self,tcp -np 2 -host k2hnode1,k2hnode2 ./testmpi

works fine. On the other hand,

    /opt/ti-openmpi/bin/mpirun --mca btl_base_verbose 100 --mca btl self,hlink -np 2 -host k2hnode1,k2hnode2 ./testmpi

fails. Has anyone ever experienced this error? Any help would be much appreciated!


The output of the preceding command can be found below.

Version info (not the first set we tried...):

    BMC_ver: 1.0.2.5
    EVM type: 0.0.0.1
    EVM Superset: K2KH-EVM
    one EVM is rev 3.0 and the other is rev 4.0
    boot mode ARM-SPI

    imglib_c66x_3_1_1_0
    mcsdk-hpc_03_00_01_08
    mcsdk_linux_3_01_01_04
    ndk_2_22_02_16
    openem_1_10_0_0
    openmp_dsp_2_01_16_02
    pdk_keystone2_3_01_01_04
    ti-cgt-c6000_8.0.0
    ti-llvm-3.3-3.3
    ti-opencl_1.0.0
    ti-openmpacc_1.2.0
    ti-openmpi-1.0.0.18
    transport_net_lib_1_1_0_2
    uia_1_03_02_10
    xdctools_3_25_06_96
    xdctools_3_25_06_96_core6x

Output:

    [k2hnode2:01877] mca: base: components_open: Looking for btl components
    [k2hnode1:01954] mca: base: components_open: Looking for btl components
    [k2hnode2:01877] mca: base: components_open: opening btl components
    [k2hnode2:01877] mca: base: components_open: found loaded component hlink
    [k2hnode2:01877] BTL_HLINK TIMPIDBG: hlink_component_register!!!
    [k2hnode2:01877] This is EVM, using hl0 only!
    [k2hnode2:01877] mca: base: components_open: component hlink register function successful
    [k2hnode2:01877] BTL_HLINK TIMPIDBG: hlink_component_open!!!
    [k2hnode2:01877] BTL_HLINK BTL HLINK start of HYPLNKINITCFG: 0xb6a63dfc
    [k2hnode2:01877] BTL_HLINK [0x21400000]
    [k2hnode2:01877] BTL_HLINK [0x40000000]
    [k2hnode2:01877] BTL_HLINK [0x21400100]
    [k2hnode2:01877] BTL_HLINK [0x28000000]
    [k2hnode2:01877] BTL_HLINK [(nil)]
    [k2hnode2:01877] BTL_HLINK [(nil)]
    [k2hnode2:01877] BTL_HLINK [(nil)]
    [k2hnode2:01877] BTL_HLINK [(nil)]
    [k2hnode2:01877] BTL_HLINK BTL HLINK end of HYPLNKINITCFG
    [k2hnode2:01877] BTL_HLINK: CMEM_init OK!
    [k2hnode2:01877] mca: base: components_open: component hlink open function successful
    [k2hnode2:01877] mca: base: components_open: found loaded component self
    [k2hnode2:01877] mca: base: components_open: component self has no register function
    [k2hnode2:01877] mca: base: components_open: component self open function successful
    [k2hnode1:01954] mca: base: components_open: opening btl components
    [k2hnode1:01954] mca: base: components_open: found loaded component hlink
    [k2hnode1:01954] BTL_HLINK TIMPIDBG: hlink_component_register!!!
    [k2hnode1:01954] This is EVM, using hl0 only!
    [k2hnode1:01954] mca: base: components_open: component hlink register function successful
    [k2hnode1:01954] BTL_HLINK TIMPIDBG: hlink_component_open!!!
    [k2hnode1:01954] BTL_HLINK BTL HLINK start of HYPLNKINITCFG: 0xb6afcdfc
    [k2hnode1:01954] BTL_HLINK [0x21400000]
    [k2hnode1:01954] BTL_HLINK [0x40000000]
    [k2hnode1:01954] BTL_HLINK [0x21400100]
    [k2hnode1:01954] BTL_HLINK [0x28000000]
    [k2hnode1:01954] BTL_HLINK [(nil)]
    [k2hnode1:01954] BTL_HLINK [(nil)]
    [k2hnode1:01954] BTL_HLINK [(nil)]
    [k2hnode1:01954] BTL_HLINK [(nil)]
    [k2hnode1:01954] BTL_HLINK BTL HLINK end of HYPLNKINITCFG
    [k2hnode1:01954] BTL_HLINK: CMEM_init OK!
    [k2hnode1:01954] mca: base: components_open: component hlink open function successful
    [k2hnode1:01954] mca: base: components_open: found loaded component self
    [k2hnode1:01954] mca: base: components_open: component self has no register function
    [k2hnode1:01954] mca: base: components_open: component self open function successful
    [k2hnode2:01877] select: initializing btl component hlink
    [k2hnode2:01877] BTL_HLINK TIMPIDBG: hlink_component_init!!!
    [k2hnode1:01954] select: initializing btl component hlink
    [k2hnode1:01954] BTL_HLINK TIMPIDBG: hlink_component_init!!!
    [k2hnode2:01877] BTL_HLINK shmem open successfull!!
    [k2hnode2:01877] BTL_HLINK: CMEM physAddr: 22000000 (to a2000000) userAddr:0xb59a8000
    [k2hnode2:01877] BTL_HLINK shmem MSMC0 mmap successfull!!
    [k2hnode2:01877] BTL_HLINK shmem MSMC0 mmap successfull!!
    [k2hnode2:01877] BTL_HLINK attempt HyperLink0 then HyperLink1
    [k2hnode2:01877] BTL_HLINK hyplnk0 attempt opening
    [k2hnode1:01954] BTL_HLINK shmem open successfull!!
    [k2hnode1:01954] BTL_HLINK: CMEM physAddr: 22000000 (to a2000000) userAddr:0xb5a41000
    [k2hnode1:01954] BTL_HLINK shmem MSMC0 mmap successfull!!
    [k2hnode1:01954] BTL_HLINK shmem MSMC0 mmap successfull!!
    [k2hnode1:01954] BTL_HLINK attempt HyperLink0 then HyperLink1
    [k2hnode1:01954] BTL_HLINK hyplnk0 attempt opening
    [k2hnode2:01877] BTL_HLINK hyplnk0 open failed
    [k2hnode2:01877] BTL_HLINK hyplnk1 attempt opening
    [k2hnode1:01954] BTL_HLINK hyplnk0 open failed
    [k2hnode1:01954] BTL_HLINK hyplnk1 attempt opening
    [k2hnode1:01954] BTL_HLINK hyplnk1 open failed
    [k2hnode1:01954] BTL_HLINK hyplnk0=(nil) hyplnk1=(nil)
    [k2hnode1:01954] HLINK turned off !!!
    [k2hnode1:01954] select: init of component hlink returned failure
    [k2hnode1:01954] select: module hlink unloaded
    [k2hnode1:01954] select: initializing btl component self
    --------------------------------------------------------------------------
    At least one pair of MPI processes are unable to reach each other for
    MPI communications. This means that no Open MPI device has indicated
    that it can be used to communicate between these processes. This is
    an error; Open MPI requires that all MPI processes be able to reach
    each other. This error can sometimes be the result of forgetting to
    specify the "self" BTL.

    Process 1 ([[62988,1],1]) is on host: k2hnode2
    Process 2 ([[62988,1],0]) is on host: k2hnode1
    BTLs attempted: self

    Your MPI job is now going to abort; sorry.
    --------------------------------------------------------------------------
    --------------------------------------------------------------------------
    MPI_INIT has failed because at least one MPI process is unreachable
    from another. This *usually* means that an underlying communication
    plugin -- such as a BTL or an MTL -- has either not loaded or not
    allowed itself to be used. Your MPI job will now abort.

    You may wish to try to narrow down the problem;

    * Check the output of ompi_info to see which BTL/MTL plugins are available.
    * Run your application with MPI_THREAD_SINGLE.
    * Set the MCA parameter btl_base_verbose to 100 (or mtl_base_verbose, if
      using MTL-based communications) to see exactly which communication
      plugins were considered and/or discarded.
    --------------------------------------------------------------------------
    [k2hnode1:1954] *** An error occurred in MPI_Init
    [k2hnode1:1954] *** reported by process [4127981569,0]
    [k2hnode1:1954] *** on a NULL communicator
    [k2hnode1:1954] *** Unknown error
    [k2hnode1:1954] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
    [k2hnode1:1954] ***    and potentially your MPI job)
    --------------------------------------------------------------------------
    An MPI process is aborting at a time when it cannot guarantee that all
    of its peer processes in the job will be killed properly. You should
    double check that everything has shut down cleanly.

    Reason:     Before MPI_INIT completed
    Local host: k2hnode1
    PID:        1954
    --------------------------------------------------------------------------
    --------------------------------------------------------------------------
    mpirun has exited due to process rank 0 with PID 1954 on
    node k2hnode1 exiting improperly. There are three reasons this could occur:

    1. this process did not call "init" before exiting, but others in
    the job did. This can cause a job to hang indefinitely while it waits
    for all processes to call "init". By rule, if one process calls "init",
    then ALL processes must call "init" prior to termination.

    2. this process called "init", but exited without calling "finalize".
    By rule, all processes that call "init" MUST call "finalize" prior to
    exiting or it will be considered an "abnormal termination"

    3. this process called "MPI_Abort" or "orte_abort" and the mca parameter
    orte_create_session_dirs is set to false. In this case, the run-time cannot
    detect that the abort call was an abnormal termination. Hence, the only
    error message you will receive is this one.

    This may have caused other processes in the application to be
    terminated by signals sent by mpirun (as reported here).

    You can avoid this message by specifying -quiet on the mpirun command line.
    --------------------------------------------------------------------------
    [k2hnode2:01875] 1 more process has sent help message help-mca-bml-r2.txt / unreachable proc
    [k2hnode2:01875] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
    [k2hnode2:01875] 1 more process has sent help message help-mpi-runtime / mpi_init:startup:pml-add-procs-fail
    [k2hnode2:01875] 1 more process has sent help message help-mpi-errors.txt / mpi_errors_are_fatal unknown handle
    [k2hnode2:01875] 1 more process has sent h[k2hnode1:01954] select: init of component self returned success
    [k2hnode2:01877] BTL_HLINK hyplnk1 open failed
    [k2hnode2:01877] BTL_HLINK hyplnk0=(nil) hyplnk1=(nil)
    [k2hnode2:01877] HLINK turned off !!!
    [k2hnode2:01877] select: init of component hlink returned failure
    [k2hnode2:01877] select: module hlink unloaded
    [k2hnode2:01877] select: initializing btl component self
    [k2hnode2:01877] select: init of component self returned success
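
A quick way to confirm which BTL components are available in this build (ompi_info is the tool suggested by the help text above; the install prefix is the same one used for mpirun) is:

    /opt/ti-openmpi/bin/ompi_info | grep btl

which should list hlink alongside self and tcp if the Hyperlink BTL component is present.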
  • Hi,

    I have successfully tested the MCSDK Hyperlink example between two K2H EVMs, but I have not tested the OpenMPI demos. Please see my test setup image below and confirm that the same setup is configured for your test.

    Thanks,

  • Hi,

    thanks for the immediate response, I hadn't even noticed it!
    Your setup is the same as ours, although ours is connected with only a single cable while you've got two. According to the manuals, though, that shouldn't make any difference.
    Have you had any success with the factory examples so far with your setup?

    Regards,

    Janos
  • Hi,

    The K2H device has two Hyperlink ports. If you have only one cable, make sure to enable only the one Hyperlink port your cable is connected to in your test code, and then run the test.

    I have successfully tested the TI-provided MCSDK 3.0 Hyperlink example between two K2H EVMs; it supports both Hyperlink ports.

    MCSDK path: \ti\pdk_keystone2_3_01_01_04\packages\exampleProjects\hyplnk_K2HC66BiosExampleProject

    Thanks,
  • This is great news!
    Could you maybe try it with the openMPI example I mentioned in the top post as well?
  • I don't have a ready setup to test the OpenMPI example. I will check with my team about testing it.
  • Meanwhile we've managed to make some progress:

    We noticed that during mpmsrv startup it logs in '/var/log/mpmsrv.log'
    that it couldn't find slave devices. Tracing the details of this,
    it seems that it looks for a particular device in /sys/uio.. by searching
    for a specific name in the 'name' files. It turns out the Hyperlink devices (uio8-9)
    _do not_ follow the naming convention that mpmsrv relies on.
    By editing the mpmconfig.json file we could guide it to find the Hyperlink device for
    hyperlink0 (that's the one connected); a quick way to inspect the uio names is sketched below.
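
    For reference, a quick way to list the uio device names mpmsrv would match against (this assumes the standard Linux sysfs layout under /sys/class/uio; adjust the path if your kernel exposes the devices elsewhere):

        # print each uio device together with the contents of its 'name' file
        for d in /sys/class/uio/uio*; do
            echo "$d: $(cat $d/name)"
        done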
    However, we're stuck again with the following errors (unable to map the remote MSMC):

    [k2hnode1:02907] BTL_HLINK TIMPIDBG: hlink_component_init!!!
    [k2hnode1:02907] BTL_HLINK shmem open successfull!!
    [k2hnode1:02907] BTL_HLINK: CMEM physAddr: 22000000 (to a2000000)
    userAddr:0xb5a05000
    [k2hnode1:02907] BTL_HLINK shmem MSMC0 mmap successfull!!
    [k2hnode1:02907] BTL_HLINK shmem MSMC0 mmap successfull!!
    [k2hnode1:02907] BTL_HLINK attempt HyperLink0 then HyperLink1
    [k2hnode1:02907] BTL_HLINK hyplnk0 attempt opening
    [k2hnode1:02907] BTL_HLINK hyplnk0 open successfull!!
    [k2hnode1:02907] BTL_HLINK hyplnk1 attempt opening
    [k2hnode2:01828] select: initializing btl component hlink
    [k2hnode2:01828] BTL_HLINK TIMPIDBG: hlink_component_init!!!
    [k2hnode2:01828] BTL_HLINK shmem open successfull!!
    [k2hnode2:01828] BTL_HLINK: CMEM physAddr: 22000000 (to a2000000)
    userAddr:0xb59a1000
    [k2hnode2:01828] BTL_HLINK shmem MSMC0 mmap successfull!!
    [k2hnode2:01828] BTL_HLINK shmem MSMC0 mmap successfull!!
    [k2hnode2:01828] BTL_HLINK attempt HyperLink0 then HyperLink1
    [k2hnode2:01828] BTL_HLINK hyplnk0 attempt opening
    [k2hnode2:01828] BTL_HLINK hyplnk0 open successfull!!
    [k2hnode2:01828] BTL_HLINK hyplnk1 attempt opening
    [k2hnode1:02907] BTL_HLINK hyplnk1 open failed
    [k2hnode1:02907] BTL_HLINK hyplnk0=0x5c4e8 hyplnk1=(nil)
    [k2hnode1:02907] mmap_failed_hl_win_msmc_rmt (MSMC over hyplnk0)!
    [k2hnode1:02907] select: init of component hlink returned failure
    [k2hnode1:02907] select: module hlink unloaded
    [k2hnode1:02907] select: initializing btl component self
    [k2hnode1:02907] select: init of component self returned success
    [k2hnode2:01828] BTL_HLINK hyplnk1 open failed
    [k2hnode2:01828] BTL_HLINK hyplnk0=0x5c500 hyplnk1=(nil)
    [k2hnode2:01828] mmap_failed_hl_win_msmc_rmt (MSMC over hyplnk0)!
    [k2hnode2:01828] select: init of component hlink returned failure
    [k2hnode2:01828] select: module hlink unloaded

    We reckon the core problem isn't with the Hyperlink physical connection; it's more likely an ARM Linux / OpenMPI Hyperlink transport issue, so the DSP LLD checks don't help us much here.

    Regards,

    Janos
  • Hi,

    thanks for taking this on, it is a huge help for us! Meanwhile we've also managed to run the factory DSP-to-DSP Hyperlink test, but we still have no success with the ARM-to-ARM Hyperlink connection.

    The results of the successful DSP2DSP test:

    [C66xx_0] Version #: 0x02010001; string HYPLNK LLD Revision:
    02.01.00.01:Mar 30 2015:11:04:06
    About to do system setup (PLL, PSC, and DDR)
    Constructed SERDES configs: PLL=0x00000228; RX=0x0046c485; TX=0x000cc305
    system setup worked
    About to set up HyperLink Peripheral
    ============================Hyperlink Testing Port 0
    ========================================== begin registers before
    initialization ===========
    Revision register contents:
      Raw    = 0x4e902101
    Status register contents:
      Raw        = 0x00003004
    Link status register contents:
      Raw       = 0x00000000
    Control register contents:
      Raw             = 0x00000000
    Control register contents:
      Raw        = 0x00000000
    ============== end registers before initialization ===========
    Waiting for other side to come up (       0)
    Version #: 0x02010001; string HYPLNK LLD Revision: 02.01.00.01:Mar 30
    2015:11:04:06
    About to do system setup (PLL, PSC, and DDR)
    Constructed SERDES configs: PLL=0x00000228; RX=0x0046c485; TX=0x000cc305
    system setup worked
    About to set up HyperLink Peripheral
    ============================Hyperlink Testing Port 0
    ========================================== begin registers before
    initialization ===========
    Revision register contents:
      Raw    = 0x4e902101
    Status register contents:
      Raw        = 0x00003004
    Link status register contents:
      Raw       = 0x00000000
    Control register contents:
      Raw             = 0x00000000
    Control register contents:
      Raw        = 0x00000000
    ============== end registers before initialization ===========
    ============== begin registers after initialization ===========
    Status register contents:
      Raw        = 0x04402005
    Link status register contents:
      Raw       = 0xccf00cf0
    Control register contents:
      Raw             = 0x00006204
    ============== end registers after initialization ===========
    Waiting 5 seconds to check link stability
    ============== begin registers after initialization ===========
    Status register contents:
      Raw        = 0x04402005
    Link status register contents:
      Raw       = 0xccf00cff
    Control register contents:
      Raw             = 0x00006204
    ============== end registers after initialization ===========
    Waiting 5 seconds to check link stability
    Precursors 0
    Postcursors: 19
    Link seems stable
    About to try to read remote registers
    ============== begin REMOTE registers after initialization ===========
    Status register contents:
      Raw        = 0x0440200b
    Link status register contents:
      Raw       = 0xfdf0bdf0
    Control register contents:
      Raw             = 0x00006204
    ============== end REMOTE registers after initialization ===========
    Peripheral setup worked
    About to read/write once
    Precursors 0
    Postcursors: 19
    Link seems stable
    About to try to read remote registers
    ============== begin REMOTE registers after initialization ===========
    Status register contents:
      Raw        = 0x0440000b
    Link status register contents:
      Raw       = 0xfdf0bdf0
    Control register contents:
      Raw             = 0x00006200
    ============== end REMOTE registers after initialization ===========
    Peripheral setup worked
    About to read/write once
    Single write test passed
    About to pass 65536 tokens; iteration = 0
    Single write test passed
    About to pass 65536 tokens; iteration = 0
    === this is not an optimized example ===
    === this is not an optimized example ===
    Link Speed is 4 * 6.25 Gbps
    Link Speed is 4 * 6.25 Gbps
    Passed 65536 tokens round trip (read+write through hyplnk) in 16117 Mcycles
    Passed 65536 tokens round trip (read+write through hyplnk) in 16117 Mcycles
    Approximately 245938 cycles per round-trip
    Approximately 245938 cycles per round-trip
    === this is not an optimized example ===
    === this is not an optimized example ===
    Checking statistics
    Checking statistics
    About to pass 65536 tokens; iteration = 1
    About to pass 65536 tokens; iteration = 1
    === this is not an optimized example ===
    === this is not an optimized example ===
    Link Speed is 4 * 6.25 Gbps
    Link Speed is 4 * 6.25 Gbps
    Passed 65536 tokens round trip (read+write through hyplnk) in 16117 Mcycles
    Passed 65536 tokens round trip (read+write through hyplnk) in 16117 Mcycles
    Approximately 245938 cycles per round-trip
    Approximately 245938 cycles per round-trip
    === this is not an optimized example ===
    === this is not an optimized example ===
    Checking statistics
    Checking statistics
    ...
    


    Regards,

    Janos

  • Hi,

    Thanks for your update. Have you modified the test code based on your test setup? Please share your debugging experience; it will be helpful for other E2E community members.

    Thanks,
  • Janos Balazs,
    I think the Open MPI demos are part of the MCSDK HPC package. Please start a new thread in the MCSDK HPC forum for a faster response on demo issues. Please find the HPC forum link below.

    e2e.ti.com/.../high-performance-computing

    Thank you.
  • This issue has been reproduced on MCSDK HPC drop 8 + MCSDK 3.0.4.18, with two K2H EVMs connected together; our developer is looking into it.

    Regards, Eric

  • Janos,

    We have here a K2H EVM to K2H EVM connection via Hyperlink to reproduce/debug the issue. The HW setup was first verified by loading the MCSDK Hyperlink BIOS examples onto the DSP cores via CCS/JTAG. Then we moved on to the OpenMPI-over-Hyperlink test:

    • The issue was reproduced with MCSDK HPC drop 8, which is an intermediate release.
    • The issue was also reproduced on the GA candidate, MCSDK HPC drop 12 (internal only).
    • I have been debugging with the OpenMPI developer on drop 12 and found two issues:

    1)     OpenMPI relies on the MPM transport library (/usr/lib/libmpmtransport.so.1.0.0) to open the Hyperlink port, through a client (/usr/bin/mpmcl) and a server (/usr/bin/mpmsrv). The library reports that the open is successful, and the server log (/var/log/mpmsrv.log) looks normal. However, the Hyperlink port is not opened at all (checked by CCS JTAG and the Linux command devmem2).

    2)     To bypass issue 1), we used the DSP BIOS program referred to above to open the Hyperlink port instead, and then ran the OpenMPI test. At the beginning of the test, a fixed string is read from a remote MSMC region over Hyperlink, again using the MPM transport library. However, the read returns a garbage value, so the test aborts. We used JTAG to confirm that the string is indeed in the memory region and that the Hyperlink mapping is set up correctly. (A rough command-line sketch of these checks follows below.)
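
    For anyone wanting to repeat these checks from the Linux side, a rough sketch (the port name and addresses are the ones that appear in the traces earlier in this thread; devmem2 must be present on the EVM filesystem, and the exact offsets may differ on your setup):

        # ask the MPM transport layer to open Hyperlink port 0 (the client/server pair mentioned in 1)
        /usr/bin/mpmcl transport arm-remote-hyplnk-0 open

        # peek at the local Hyperlink 0 MMR space (0x21400000 in the HYPLNKINITCFG trace above);
        # if the peripheral was never really brought up, this read typically fails
        devmem2 0x21400000 w

        # peek at the start of the remote data window (0x40000000 in the same trace) and compare
        # the value against what JTAG shows in the remote MSMC, as described in 2)
        devmem2 0x40000000 w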

    So the current issues point to the underlying library used by OpenMPI.

    Next plan:

    • Debug with the MPM transport library developer to resolve the two issues found
    • Then test OpenMPI again to see if it works

    Regards, Eric

  • Hi,

    The hyperlink ports were not automatically turned on after booting up the EVMs.

    Can you try manually turning them on with the steps below?

     After booting up both EVMs.

     1)      On both EVMs, issue the command below (almost simultaneously on both):

                /usr/bin/mpmcl transport arm-remote-hyplnk-0 open

                The above command should output "open succeeded for arm-remote-hyplnk-0" on both sides.

     2)      Run the MPI test with Hyperlink. (A rough way to script both steps from one of the EVMs is sketched below.)
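
     For convenience, one rough way to script both steps from one of the EVMs (k2hnode1 here), assuming passwordless ssh to the other node, as mpirun itself already relies on:

         # step 1: open Hyperlink port 0 on both EVMs at (nearly) the same time
         ssh k2hnode2 /usr/bin/mpmcl transport arm-remote-hyplnk-0 open &
         /usr/bin/mpmcl transport arm-remote-hyplnk-0 open
         wait    # both commands should report: open succeeded for arm-remote-hyplnk-0

         # step 2: run the MPI test over the hlink BTL
         /opt/ti-openmpi/bin/mpirun --mca btl self,hlink -np 2 -host k2hnode1,k2hnode2 ./testmpi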

     Can you try the above and let us know?

    Please note that this is not a final solution, but a step to confirm our findings while we work on a proper one.

    PS: I was using the latest MCSDK HPC 3.0.1.12 to reproduce this issue. If you haven't migrated to this version, please do so.

    http://software-dl.ti.com/sdoemb/sdoemb_public_sw/mcsdk_hpc/03_00_01_12/index_FDS.html

    Regards

    Mahesh

     

  • Thank you Mahesh, the MPI test was successful!