If I build our system using Yocto from SDK 10.1 (ti-processor-sdk-linux-edgeai-j721s2-evm-10_01_00_04-Linux-x86-Install.bin), the resulting system is unable to perform inferencing. I've tried both the config file shipped with the SDK release itself (configs/processor-sdk-analytics/processor-sdk-analytics-10_01-config.txt) and the one available via the repo https://git.ti.com/git/arago-project/oe-layersetup.git (configs/processor-sdk-analytics/processor-sdk-analytics-10.01.00-config.txt). I would expect these to be identical, but they are not: the meta-edgeai commit they reference differs.
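For reference, here's a small helper I used to isolate which layer entries differ between the two config files. The function itself is generic; the commented invocation assumes `$SDK_DIR` and `$LAYERSETUP_DIR` point at the extracted SDK installer and the oe-layersetup clone (placeholders, adjust to your setup):

```shell
# Print only the lines that exist in one config file but not the other.
# With the two configs from the question, the meta-edgeai entry stands out.
diff_layer_configs() {
    # $1, $2 = oe-layersetup config files
    # grep '^[<>]' keeps just the differing lines, dropping diff's chrome
    diff "$1" "$2" | grep '^[<>]'
}

# Example (paths are assumptions):
# diff_layer_configs \
#   "$SDK_DIR/configs/processor-sdk-analytics/processor-sdk-analytics-10_01-config.txt" \
#   "$LAYERSETUP_DIR/configs/processor-sdk-analytics/processor-sdk-analytics-10.01.00-config.txt"
```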
Building with either configuration results in errors when tensor data is pulled from the GST pipeline:
root@am68a-sk:/opt/edgeai-gst-apps# cd apps_python/
root@am68a-sk:/opt/edgeai-gst-apps/apps_python# ./app_edgeai.py -n -v ../configs/image_classification.yaml
Number of subgraphs:1 , 34 nodes delegated out of 34 nodes
APP: Init ... !!!
22603.275840 s: MEM: Init ... !!!
22603.275908 s: MEM: Initialized DMA HEAP (fd=6) !!!
22603.276082 s: MEM: Init ... Done !!!
22603.276110 s: IPC: Init ... !!!
22603.312964 s: IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
22603.319090 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
22603.321311 s: VX_ZONE_INIT:Enabled
22603.321544 s: VX_ZONE_ERROR:Enabled
22603.323104 s: VX_ZONE_WARNING:Enabled
22603.328488 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-0
22603.328804 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-1
22603.328938 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-2
22603.329067 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:124] Added target MPU-3
22603.329085 s: VX_ZONE_INIT:[tivxInitLocal:136] Initialization Done !!!
22603.331131 s: VX_ZONE_INIT:[tivxHostInitLocal:106] Initialization Done for HOST !!!
==========[INPUT PIPELINE(S)]==========
[PIPE-0]
multifilesrc location=/opt/edgeai-test-data/images/%04d.jpg index=1 stop-index=-1 loop=True ! jpegdec ! videoscale qos=True ! capsfilter caps="video/x-raw, width=(int)1280, height=(int)720;" ! tiovxdlcolorconvert ! capsfilter caps="video/x-raw, format=(string)NV12;" ! tiovxmultiscaler name=split_01
split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)1280, height=(int)720;" ! tiovxdlcolorconvert out-pool-size=4 ! capsfilter caps="video/x-raw, format=(string)RGB;" ! appsink max-buffers=2 drop=True name=sen_0
split_01. ! queue ! capsfilter caps="video/x-raw, width=(int)454, height=(int)256;" ! tiovxdlcolorconvert out-pool-size=4 ! capsfilter caps="video/x-raw, format=(string)RGB;" ! videobox qos=True left=115 right=115 top=16 bottom=16 ! tiovxdlpreproc out-pool-size=4 channel-order=1 data-type=3 ! capsfilter caps="application/x-tensor-tiovx;" ! appsink max-buffers=2 drop=True name=pre_0
==========[OUTPUT PIPELINE]==========
appsrc do-timestamp=True format=3 block=True name=post_0 ! tiovxdlcolorconvert ! capsfilter caps="video/x-raw, format=(string)NV12, width=(int)1280, height=(int)720;" ! queue ! mosaic_0.sink_0
tiovxmosaic target=1 background=/tmp/background_0 name=mosaic_0 src::pool-size=4
sink_0::startx="<320>" sink_0::starty="<150>" sink_0::widths="<1280>" sink_0::heights="<720>"
! capsfilter caps="video/x-raw, format=(string)NV12, width=(int)1920, height=(int)1080;" ! queue ! tiperfoverlay title=Image Classification ! kmssink sync=False max-lateness=5000000 qos=True processing-deadline=15000000 driver-name=tidss connector-id=40 plane-id=31 force-modesetting=True fd=44
[ERROR] Error pulling tensor from GST Pipeline
If I download the pre-built 10.01.00.04 image and use that, then inferencing works as expected.
Further, if I copy the contents of the pre-built image's `/usr/lib` dir to the SD card with the Yocto build, then inferencing works!
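To see exactly which libraries the copy replaced, I compared the two `/usr/lib` trees with a small md5sum sweep. A sketch, assuming the pre-built image and the Yocto SD card are mounted somewhere accessible (the mount points in the commented example are made up):

```shell
# Report shared libraries whose md5sums differ between two lib directories,
# plus any libraries present in the first tree but missing from the second.
diff_libs() {
    # $1 = first lib dir (e.g. pre-built image), $2 = second (e.g. Yocto build)
    for f in "$1"/*.so*; do
        [ -e "$f" ] || continue
        name=$(basename "$f")
        other="$2/$name"
        if [ -f "$other" ]; then
            a=$(md5sum "$f" | cut -d' ' -f1)
            b=$(md5sum "$other" | cut -d' ' -f1)
            [ "$a" != "$b" ] && echo "DIFFERS: $name"
        else
            echo "MISSING in $2: $name"
        fi
    done
}

# Example (mount points are assumptions):
# diff_libs /mnt/prebuilt/usr/lib /mnt/yocto/usr/lib
```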
Looking at the contents of the pre-built image, I see discrepancies in several TI-specific libraries. One example: the Yocto build has `/usr/lib/libtivision_apps.so.10.0.0`, while the pre-built image has `/usr/lib/libtivision_apps.so.10.1.0`. Other edgeai-related libraries also differ by md5sum. I'd expect everything to be identical, but it is not.
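A quick way to spot this kind of version skew on the target is to check the installed soname against the version you expect. This is just a diagnostic sketch (the expected version `10.1.0` comes from the pre-built image above); on the build host, something like `oe-pkgdata-util find-path /usr/lib/libtivision_apps.so*` inside the bitbake environment should also tell you which recipe produced the stale 10.0.0 copy:

```shell
# Report the newest installed soname version for a library and flag it if
# it doesn't match the expected version.
check_lib_version() {
    # $1 = library path without version suffix (e.g. /usr/lib/libtivision_apps.so)
    # $2 = expected version (e.g. 10.1.0)
    local found
    found=$(ls "$1".* 2>/dev/null | sed 's|.*\.so\.||' | sort -V | tail -n1)
    if [ "$found" = "$2" ]; then
        echo "OK: $1.$found"
    else
        echo "MISMATCH: found ${found:-none}, expected $2"
    fi
}

# Example:
# check_lib_version /usr/lib/libtivision_apps.so 10.1.0
```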
Any idea what is going on? Can someone try building from scratch with Yocto for the AM68A SK and confirm whether the result can perform inferencing?