Tool/software:
1. Captures live video from a camera.
2. Processes the video through a GStreamer pipeline that includes TI's inference elements to perform object detection.
3. Records the stream into 10-second MKV files.
4. Manages recorded files by keeping only the last three segments and deleting older ones.
5. Listens for user input so that if the user presses 'y', it copies the current, previous, and next segments.
6. Attempts to print a notification message on the terminal when a person is detected in the live stream.
Steps 1 to 5 work perfectly, but the 6th requirement does not work. Is this 6th feature possible on your AM68A board?
Hi Mohammed Niyas,
We do not have a tutorial for this, but you can take a look at the example edgeai-gst-apps code to see how this can be implemented in a similar application. The Python-based application is located in the /opt/edgeai-gst-apps/apps_python directory.
Thank you,
Fabiana
Hi Fabiana,
I have completed steps 1 to 5 above using C code.
Now, I need to trigger an action when a person is detected. Is it possible to implement this trigger in C code, or alternatively, how can I extract the metadata related to object detection?
Hi,
We do not offer application development support. I would suggest using the C++ based application located in the /opt/edgeai-gst-apps/apps_cpp directory as a reference. If you have any specific questions about the sample application code, please let me know.
Thank you,
Fabiana
gchar *pipeline_str = g_strdup_printf(
"v4l2src device=/dev/video-usb-cam0 io-mode=2 ! image/jpeg, width=1280, height=720 ! jpegdec ! tiovxdlcolorconvert ! video/x-raw, format=NV12 ! "
"tiovxmultiscaler name=split src_0::roi-startx=0 src_0::roi-starty=0 src_0::roi-width=1280 src_0::roi-height=720 target=0 ! "
"queue ! video/x-raw,width=320,height=320 ! "
"tiovxdlpreproc model=/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320 out-pool-size=4 ! application/x-tensor-tiovx ! "
"tidlinferer target=1 model=/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320 ! post.tensor "
"split. ! queue ! video/x-raw,width=480,height=480 ! post.sink "
"tidlpostproc name=post model=/opt/model_zoo/TFL-OD-2020-ssdLite-mobDet-DSP-coco-320x320 alpha=0.4 viz-threshold=0.6 top-N=5 display-model=true ! "
"tee name=video_tee ! queue ! v4l2h264enc ! h264parse ! queue ! mux. "
"alsasrc device=hw:0,0 ! audio/x-raw,format=S16LE,rate=16000,channels=1 ! audioconvert ! audioresample ! "
"avenc_aac bitrate=128 ! aacparse ! queue ! mux. "
"matroskamux name=mux ! filesink location=%s sync=true",
output_file
);
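For context, this string is launched with the usual gst_parse_launch() boilerplate; below is a minimal self-contained sketch (standard GStreamer calls, error handling trimmed, and a trivial placeholder pipeline standing in for the string above):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
    gst_init (&argc, &argv);

    /* Build the pipeline string as above; a placeholder is used here
     * so the sketch is self-contained. */
    gchar *pipeline_str = g_strdup ("videotestsrc num-buffers=100 ! autovideosink");

    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch (pipeline_str, &err);
    g_free (pipeline_str);
    if (pipeline == NULL) {
        g_printerr ("Failed to create pipeline: %s\n", err->message);
        g_clear_error (&err);
        return 1;
    }

    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    /* Block until EOS or an error is posted on the pipeline bus. */
    GstBus *bus = gst_element_get_bus (pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
            GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg != NULL)
        gst_message_unref (msg);
    gst_object_unref (bus);

    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    return 0;
}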
How can I extract the class ID and class name from the tidlinferer element?
Hi Mohammed Niyas,
This is not a feature in our sample applications that can be enabled, so you will have to implement this yourself. If you take a look at either the c++ or python GStreamer application, you can see where this data lies and make the changes required to achieve this. Please let me know if you have specific questions about our code.
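That said, a generic GStreamer technique for getting at this data is a pad probe on the inference element's src pad. Below is a minimal, untested sketch: it assumes you give the tidlinferer a name (e.g. name=infer) in your pipeline string, and the layout of the output tensor (boxes, scores, class IDs) is model-specific, so it must be taken from the model's parameter files rather than from this sketch.

#include <gst/gst.h>

/* Sketch: buffer probe on the inference element's src pad. The raw bytes
 * are the model's output tensor; decoding them into box/score/class
 * values depends on the model's output specification. */
static GstPadProbeReturn
tensor_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
    GstMapInfo map;

    if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
        /* Decode map.data here according to the model's output spec,
         * e.g. for an SSD head: per detection [x, y, w, h, score, class]. */
        g_print ("inference tensor: %" G_GSIZE_FORMAT " bytes\n", map.size);
        gst_buffer_unmap (buf, &map);
    }
    return GST_PAD_PROBE_OK;
}

/* Call once, after gst_parse_launch() and before setting PLAYING.
 * Assumes the pipeline string contains "tidlinferer name=infer ...". */
static void
attach_tensor_probe (GstElement *pipeline)
{
    GstElement *infer = gst_bin_get_by_name (GST_BIN (pipeline), "infer");
    GstPad *srcpad = gst_element_get_static_pad (infer, "src");
    gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER,
                       tensor_probe, NULL, NULL);
    gst_object_unref (srcpad);
    gst_object_unref (infer);
}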
Thank you,
Fabiana
What about OptiFlow? From what I observed, the apps_cpp and apps_python examples use OpenCV for post-processing instead of the TIDL elements. However, our requirement is to use GStreamer, where tidlpostproc is used for post-processing.
I also noticed that OptiFlow runs an end-to-end GStreamer pipeline when executing the demo. Our main goal is to achieve the same with GStreamer while extracting either the class name or at least the class ID, which we can then map to the COCO dataset. tidlpostproc must have the class names anyway; is there a way to access them?
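To be concrete about the mapping we have in mind: if we can get at least a numeric class ID, turning it into a name on our side is a plain lookup table, roughly as below (truncated sketch with the first ten COCO names; whether the model uses 0-based 80-class or 1-based 90-class COCO indexing is an assumption we would have to check against the model's dataset file):

/* First few COCO class names; extend to the full list used by the model.
 * NOTE: SSD-style COCO models often use 1-based 90-class labels, so the
 * 0-based indexing below is an assumption to verify against the model. */
static const char *coco_names[] = {
    "person", "bicycle", "car", "motorcycle", "airplane",
    "bus", "train", "truck", "boat", "traffic light",
};

static const char *class_name (int id)
{
    int n = sizeof (coco_names) / sizeof (coco_names[0]);
    return (id >= 0 && id < n) ? coco_names[id] : "unknown";
}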
When I tried running the C++ application (apps_cpp), I encountered a graph-related error. I previously faced this issue but couldn't resolve it at the time.
root@am68a-sk:/opt/edgeai-gst-apps/apps_cpp# ./bin/Release/app_edgeai ../configs/image_classification.yaml
Number of subgraphs:1 , 34 nodes delegated out of 34 nodes
APP: Init ... !!!
12257.155477 s: MEM: Init ... !!!
12257.155556 s: MEM: Initialized DMA HEAP (fd=6) !!!
12257.155752 s: MEM: Init ... Done !!!
12257.155779 s: IPC: Init ... !!!
12257.197586 s: IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
12257.206213 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
12257.206971 s: VX_ZONE_INFO: Globally Enabled VX_ZONE_ERROR
12257.207203 s: VX_ZONE_INFO: Globally Enabled VX_ZONE_WARNING
12257.207236 s: VX_ZONE_INFO: Globally Enabled VX_ZONE_INFO
12257.210168 s: VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-0
12257.210712 s: VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-1
12257.210892 s: VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-2
12257.212165 s: VX_ZONE_INFO: [tivxPlatformCreateTargetId:134] Added target MPU-3
12257.212225 s: VX_ZONE_INFO: [tivxInitLocal:126] Initialization Done !!!
12257.212260 s: VX_ZONE_INFO: Globally Disabled VX_ZONE_INFO
12257.229918 s: VX_ZONE_ERROR: [ownContextSendCmd:912] Command ack message returned failure cmd_status: -1
12257.230313 s: VX_ZONE_ERROR: [ownNodeKernelInit:604] Target kernel, TIVX_CMD_NODE_CREATE failed for node TIDLNode
12257.230442 s: VX_ZONE_ERROR: [ownNodeKernelInit:605] Please be sure the target callbacks have been registered for this core
12257.230531 s: VX_ZONE_ERROR: [ownNodeKernelInit:606] If the target callbacks have been registered, please ensure no errors are occurring within the create callback of this kernel
12257.230638 s: VX_ZONE_ERROR: [ownGraphNodeKernelInit:690] kernel init for node 0, kernel com.ti.tidl:1:1 ... failed !!!
12257.230696 s: VX_ZONE_ERROR: [ TIDL subgraph MobilenetV1/Predictions/Reshape_1 ] Node kernel init failed
12257.230794 s: VX_ZONE_ERROR: [ TIDL subgraph MobilenetV1/Predictions/Reshape_1 ] Graph verify failed
TIDL_RT_OVX: ERROR: Verifying TIDL graph ... Failed !!!
TIDL_RT_OVX: ERROR: Verify OpenVX graph failed
graph
[07:07:56.000.000000]:ERROR:[startPipeline:0141] gst_element_set_state() failed.
[07:07:56.000.000085]:ERROR:[setupFlows:0250] Failed to start GST pipelines.
terminate called after throwing an instance of 'std::runtime_error'
what(): EdgeAIDemoImpl object creation failed.
Aborted (core dumped)
Hello,
What SDK version are you using? Have you made any changes to the configuration file or anything else? Do you have a display connected to the board?
Thank you,
Fabiana
SDK version 10.0.1. No, I have not made any changes to the configuration file.
Thanks, connecting a monitor worked; the issue is gone.
Thank you very much for your reply. I ran the C++ application and modified the object_detection.yaml file to use a USB camera for detection.
I attempted to add a print statement in /common/src/post_process_image_object_detect.cpp to display the detected object's name in the terminal. However, the print statement is not appearing. This might be because the logs generated while the object detection application runs are overwhelming the terminal output and hiding our custom print statement.
Additionally, I tried to inspect the executable at bin/Release/app_edgeai, but it is a compiled binary and not human-readable, which makes further debugging difficult.
Could you please help me understand why the print statement isn't appearing in the terminal and suggest a way to resolve this issue?
Hi Nihal,
Did you rebuild after making the change to the C++ application? The C++ apps can be modified and rebuilt on the target using the steps below:
/opt/edgeai-gst-apps/apps_cpp# rm -rf build bin lib
/opt/edgeai-gst-apps/apps_cpp# mkdir build
/opt/edgeai-gst-apps/apps_cpp# cd build
/opt/edgeai-gst-apps/apps_cpp/build# cmake ..
/opt/edgeai-gst-apps/apps_cpp/build# make -j2
Thank you,
Fabiana
I appreciate your help! I was able to print the object name successfully. I also implemented logic to set a flag and print only when a person, and nothing else, is detected.
Now, I want to record video in 10-second segments and keep three of them in a FIFO manner: when a new recording starts, the oldest file should be deleted and replaced by the latest one. I have done this before using splitmuxsink, but I am unsure how to implement it in this case.
Currently, I can save and stream video simultaneously, but I need help achieving continuous recording while retaining only the last three files.
Below is the relevant section from edgeai_demo_config.cpp where the video is saved:
else if (sinkType == "video")
{
    string h264enc = gstElementMap["h264enc"]["element"].as<string>();
    string encoder_extra_ctrl = "";
    if (h264enc == "v4l2h264enc")
    {
        encoder_extra_ctrl = "controls"
                             ",frame_level_rate_control_enable=1"
                             ",video_bitrate=" + to_string(m_bitrate) +
                             ",video_gop_size=" + to_string(m_gopSize);
        m_gstElementProperty = {{"extra-controls", encoder_extra_ctrl.c_str()}};
    }
    makeElement(m_dispElements, h264enc.c_str(), m_gstElementProperty, NULL);

    // Add H.264 parser
    makeElement(m_dispElements, "h264parse", m_gstElementProperty, NULL);

    // Add MKV multiplexer
    makeElement(m_dispElements, "matroskamux", m_gstElementProperty, NULL);

    // Configure filesink to store the stream in output.mkv
    m_gstElementProperty = {{"location", "output.mkv"},
                            {"name", name.c_str()}};
    makeElement(m_dispElements, "filesink", m_gstElementProperty, NULL);
}
Additionally, in my object_detection.yaml, I've created two output flows:
outputs:
    output0:
        sink: kmssink
        width: 1280
        height: 720
        overlay-perf-type: graph
    output1:
        sink: /opt/edgeai-test-data/output/output_video0.mkv
        width: 1280
        height: 720
    output2:
        sink: /opt/edgeai-test-data/output/output_image_%04d.jpg
        width: 1920
        height: 1080
    output3:
        sink: remote
        width: 1920
        height: 1080
        port: 8081
        host: 127.0.0.1
        encoding: jpeg
        overlay-perf-type: graph

flows:
    flow0: [input1, model2, output0]
    flow1: [input1, model2, output1]
Here, input1 is a USB webcam video, and model2 is YOLOX.
Could you help me implement continuous recording in 10-second intervals while ensuring only the last three video files are retained, following a FIFO approach?
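For reference, GStreamer's stock splitmuxsink element handles this kind of segment rotation by itself: max-size-time splits the recording into fixed-length files, and max-files tells it to delete the oldest file once the limit is reached. Below is a hedged sketch of how the tail of my earlier C pipeline string could look (the upstream part is elided as "..."; max-size-time is in nanoseconds; whether matroskamux behaves well inside splitmuxsink on this target is something I would still have to verify):

/* Sketch: replace the "matroskamux name=mux ! filesink location=%s" tail
 * of the earlier pipeline string with splitmuxsink. The elided "..." part
 * (v4l2src ... tidlpostproc ! tee, plus the alsasrc audio branch) stays
 * unchanged. Note "%%05d" becomes "%05d" after g_strdup_printf expands
 * the format string. */
gchar *pipeline_str = g_strdup_printf(
    "... ! v4l2h264enc ! h264parse ! smux.video "
    "alsasrc device=hw:0,0 ! audio/x-raw,format=S16LE,rate=16000,channels=1 ! "
    "audioconvert ! audioresample ! avenc_aac bitrate=128 ! aacparse ! smux.audio_0 "
    "splitmuxsink name=smux muxer=matroskamux "
    "location=/opt/edgeai-test-data/output/seg_%%05d.mkv "
    "max-size-time=10000000000 max-files=3"
);

In the C++ app, the analogous change would presumably go in the edgeai_demo_config.cpp block above: replace the matroskamux/filesink pair with a single splitmuxsink carrying the same properties.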
Hi Nihal,
I am glad to hear that you were able to successfully print the desired object name!
As I have stated before, TI does not offer application development support. This use case is not a feature that is supported out of the box, nor do we have any tutorials on enabling it. Using a script is one way you can accomplish this. I suggest looking at online resources for tutorials and examples. If you encounter any errors that seem to be related to the board or SDK, you can ask a new question here on E2E. Because the initial question has been answered and resolved, I will be closing this thread.
Thank you,
Fabiana
Please mark my last response as having answered/resolved your question.
Thanks,
Fabiana