SK-AM62A-LP: Using edgeai-gst-apps-barcode-reader with SDK 10.0

Part Number: SK-AM62A-LP

Hello TI Team,

I’m currently working on a custom hardware platform for barcode detection and came across your repository:
https://github.com/TexasInstruments-Sandbox/edgeai-gst-apps-barcode-reader/tree/main

From the documentation and commits, it appears that the repo officially supports SDK versions 8.6 and 9.0. I am using the 10.0 SDK (Processor SDK Linux) and would like to know if there is a supported or recommended way to get the barcode reader solution working with SDK 10.0.

Could you please let me know how I can move ahead?

Your guidance will be highly appreciated.

Best regards,
Atharva Shende

  • Hello Atharva,

    Correct, that demo application has been validated for older SDKs, and there will be some delta effort required to port it to a newer SDK.

    The main porting step would be recompiling the trained model for the 10.0 SDK. This will be similar to what I wrote for another demo developed around the same time (automated retail checkout [0]).

    • You will need the ONNX model trained for barcode detection; see ./scripts/setup_model.sh [2] -- this will download a tarball that includes the .ONNX model
      • edit: looks like this is a 404 error now... I'll attach here:
      •  https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/791/barcode_2D00_model.onnx

    You can download edgeai-tidl-tools for your release (rel_10_00) [1] and follow the instructions to set it up on a Linux machine. The model architecture was YOLOX-NANO. If you change the path for the corresponding yolox-nano entry [3] in edgeai-tidl-tools/examples/osrt_python/model_configs.py to the barcode-trained model, you can compile without changing other settings.

    • I'm omitting a few steps for brevity here -- they are otherwise covered in the edgeai-tidl-tools documentation on how to set up and compile a model

    Once you have the compiled artifacts, transfer them to the EVM at the path that [2] was using (/opt/model_zoo/barcode-modelartifacts)

    The rest of the application is expected to work as-is.

    [0] https://github.com/TexasInstruments-Sandbox/edgeai-gst-apps-retail-checkout/blob/main/retail-shopping/doc/REPRODUCE.md#recompiling-the-model 

    [1] https://github.com/TexasInstruments/edgeai-tidl-tools/tree/rel_10_00 

    [2] https://github.com/TexasInstruments-Sandbox/edgeai-gst-apps-barcode-reader/blob/main/scripts/setup_model.sh 

    [3] https://github.com/TexasInstruments/edgeai-tidl-tools/blob/efae61031b31aa2ba5491c03bb808216d3baef14/examples/osrt_python/model_configs.py#L752 

  • Thanks Reese,

    Can you tell me the proper steps for training the model for 10.0?

  • Hi Atharva,

    I'll make a (pedantic) correction that it will not be training the model. That has been done already and produced a .ONNX model file -- this is still perfectly valid. The SDK- and SOC-specific portion here is to compile/import the model.

    Note that this is not a production-grade version. It was trained on a small set of online datasets and images taken on our campus.

    1. Clone edgeai-tidl-tools, rel_10_00 branch
    2. Run the setup.sh script to install components and TIDL tooling for importing/compiling the model. I recommend using a Python virtual environment
      1. https://github.com/TexasInstruments/edgeai-tidl-tools/?tab=readme-ov-file#setup-on-x86_pc
      2. You will need to export SOC=am62a beforehand to target this processor
    3. Copy the ONNX model file into models/public
    4. Edit edgeai-tidl-tools/examples/osrt_python/model_configs.py for the entry od-8200_onnxrt_coco_edgeai-mmdet_yolox_nano_lite_416x416_20220214_model_onnx
      1. Change the model_path to use the barcode-trained .ONNX model
      2. You can also copy this whole dictionary in Python and give it a separate (shorter) name; I'll refer to it generically as barcode-onnx-tidl (see the sketch after this list)
    5. Change directory to 'ort' and compile the model like so
      1. python3 onnxrt_ep.py -c -m barcode-onnx-tidl
    6. Your model artifacts will be at edgeai-tidl-tools/model-artifacts/barcode-onnx-tidl
    7. Copy this model-artifacts/barcode-onnx-tidl directory over to the EVM at /opt/model_zoo/barcode-modelartifacts
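
    As a sketch of step 4.2, something like the following should work (the barcode-onnx-tidl key is just a name of our choosing; models_configs and models_base_path come from model_configs.py, and the exact schema can differ between tool versions):

        import copy

        # Duplicate the existing yolox-nano entry under a shorter name and
        # point its model_path at the barcode-trained model
        models_configs["barcode-onnx-tidl"] = copy.deepcopy(
            models_configs[
                "od-8200_onnxrt_coco_edgeai-mmdet_yolox_nano_lite_416x416_20220214_model_onnx"
            ]
        )
        models_configs["barcode-onnx-tidl"]["session"]["model_path"] = os.path.join(
            models_base_path, "barcode-model.onnx"
        )
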
  • Hi Reese,

    Thank you for your guidance. I tried the steps you provided but encountered an error when running the command below. Still, when I checked the model-artifacts folder, I did get a barcode-onnx-tidl folder...

    python3 onnxrt_ep.py -c -m barcode-onnx-tidl

    Here are the logs:

    root@c32097e8b443:/home/root/examples/osrt_python/ort# python3 onnxrt_ep.py -c -m barcode-onnx-tidl
    Available execution providers : ['TIDLExecutionProvider', 'TIDLCompilationProvider', 'CPUExecutionProvider']

    Running 1 Models - ['barcode-onnx-tidl']


    Running_Model : barcode-onnx-tidl


    Running shape inference on model ../../../models/public/barcode-model.onnx

    ========================= [Model Compilation Started] =========================

    Model compilation will perform the following stages:
    1. Parsing
    2. Graph Optimization
    3. Quantization & Calibration
    4. Memory Planning

    ============================== [Version Summary] ==============================

    -------------------------------------------------------------------------------
    | TIDL Tools Version | 10_00_08_00 |
    -------------------------------------------------------------------------------
    | C7x Firmware Version | 10_00_02_00 |
    -------------------------------------------------------------------------------
    | Runtime Version | 1.14.0+10000005 |
    -------------------------------------------------------------------------------
    | Model Opset Version | 11 |
    -------------------------------------------------------------------------------

    NOTE: The runtime version here specifies ONNXRT_VERSION+TIDL_VERSION
    Ex: 1.14.0+1000XXXX -> ONNXRT 1.14.0 and a TIDL_VERSION 10.00.XX.XX

    ============================== [Parsing Started] ==============================

    [TIDL Import] WARNING: 'meta_layers_names_list' is not provided - running OD post processing in ARM mode
    Number of OD backbone nodes = 0
    Size of odBackboneNodeIds = 0

    ------------------------- Subgraph Information Summary -------------------------
    -------------------------------------------------------------------------------
    | Core | No. of Nodes | Number of Subgraphs |
    -------------------------------------------------------------------------------
    | C7x | 244 | 13 |
    | CPU | 39 | x |
    -------------------------------------------------------------------------------
    ---------------------------------------------------------------------------------------------------------------------------------------------
    | Node | Node Name | Reason |
    ---------------------------------------------------------------------------------------------------------------------------------------------
    | Gather | Gather_364 | Only line gather is supported |
    | ReduceMax | ReduceMax_369 | Reduction is only supported along height |
    | Less | Less_373 | Layer type not supported by TIDL |
    | Not | Not_374 | Layer type not supported by TIDL |
    | NonZero | NonZero_375 | Layer type not supported by TIDL |
    | Exp | Exp_327 | Layer type not supported by TIDL |
    | Gather | Gather_355 | Only line gather is supported |
    | Gather | Gather_353 | Only line gather is supported |
    | Unsqueeze | Unsqueeze_361 | Layer type not supported by TIDL |
    | Gather | Gather_349 | Only line gather is supported |
    | Gather | Gather_347 | Only line gather is supported |
    | Unsqueeze | Unsqueeze_360 | Layer type not supported by TIDL |
    | Sub | Sub_345 | Both inputs as variable are not supported in Sub/Div |
    | Unsqueeze | Unsqueeze_359 | Layer type not supported by TIDL |
    | Sub | Sub_339 | Both inputs as variable are not supported in Sub/Div |
    | Unsqueeze | Unsqueeze_358 | Layer type not supported by TIDL |
    | Gather | Gather_368 | Only line gather is supported |
    | GatherND | GatherND_377 | Layer type not supported by TIDL |
    | ReduceMax | ReduceMax_388 | Reducing in all dimensions is not supported in TIDL-RT |
    | ArgMax | ArgMax_370 | Only keepdims = 1 (default) is supported |
    | GatherND | GatherND_387 | Layer type not supported by TIDL |
    | Cast | Cast_389 | Only supported at the terminal nodes (Input/Output) of the network |
    | Mul | Mul_392 | The variable inputs in Add/Mul/Sub/Div/Max layer must of be same dimensions or broadcast-able |
    | Unsqueeze | Unsqueeze_393 | Layer type not supported by TIDL |
    | Add | Add_394 | The variable inputs in Add/Mul/Sub/Div/Max layer must of be same dimensions or broadcast-able |
    | Shape | Shape_398 | Layer type not supported by TIDL |
    | Gather | Gather_400 | Input dimensions must be greater than 1D |
    | GatherND | GatherND_383 | Layer type not supported by TIDL |
    | GatherND | GatherND_380 | Layer type not supported by TIDL |
    | Unsqueeze | Unsqueeze_396 | Layer type not supported by TIDL |
    | Unsqueeze | Unsqueeze_397 | Layer type not supported by TIDL |
    | Unsqueeze | Unsqueeze_395 | Layer type not supported by TIDL |
    | NonMaxSuppression | NonMaxSuppression_403 | Layer type not supported by TIDL |
    | Gather | Gather_405 | Only line gather is supported |
    | Squeeze | Squeeze_406 | Subgraph does not have any compute node |
    | Gather | Gather_417 | Input dimensions must be greater than 1D |
    | Gather | Gather_408 | Input dimensions must be greater than 1D |
    | Gather | Gather_414 | Only line gather is supported |
    | Unsqueeze | Unsqueeze_415 | Layer type not supported by TIDL |
    ---------------------------------------------------------------------------------------------------------------------------------------------
    ============================= [Parsing Completed] =============================

    Process Process-1:
    Traceback (most recent call last):
      File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
        self.run()
      File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
        self._target(*self._args, **self._kwargs)
      File "/home/root/examples/osrt_python/ort/onnxrt_ep.py", line 325, in run_model
        sess = rt.InferenceSession(
      File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 362, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 410, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Failed to find kernel for Split(11) (node Split). Kernel not found

  • Can you provide the entry for this model in your model_configs.py?

    What I see is that an object-detection meta-architecture file (.PROTOTXT) is either not found or not supplied.  

    [TIDL Import] WARNING: 'meta_layers_names_list' is not provided - running OD post processing in ARM mode

    This is related to the documentation here on meta-architectures, which encode object-detection post-processing heads (e.g. SSD) [0]

    [0] https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/tidl_fsg_od_meta_arch.md 

  • "barcode-onnx-tidl": create_model_config(
    #source=AttrDict(
    # model_url="">software-dl.ti.com/.../yolox_nano_lite_416x416_20220214_model.onnx",
    # meta_arch_url="">software-dl.ti.com/.../yolox_nano_lite_416x416_20220214_model.prototxt",
    # infer_shape=True,
    #),
    preprocess=AttrDict(
    resize=416,
    crop=416,
    data_layout="NCHW",
    pad_color=[114, 114, 114],
    resize_with_pad=[True, "corner"],
    reverse_channels=True,
    ),
    session=AttrDict(
    session_name="onnxrt",
    model_path=os.path.join(
    models_base_path, "barcode-model.onnx"
    ),
    #meta_layers_names_list=os.path.join(
    # models_base_path, "yolox_nano_lite_416x416_20220214_model.prototxt"
    #),
    meta_arch_type=6,
    input_mean=[0, 0, 0],
    input_scale=[1, 1, 1],
    input_optimization=True,
    ),
    postprocess=AttrDict(
    formatter="DetectionBoxSL2BoxLS",
    resize_with_pad=True,
    keypoint=False,
    object6dpose=False,
    normalized_detections=False,
    shuffle_indices=None,
    squeeze_axis=None,
    reshape_list=[(-1, 5), (-1, 1)],
    ignore_index=None,
    ),
    task_type="detection",
    extra_info=AttrDict(
    od_type="SSD",
    framework="MMDetection",
    num_images=numImages,
    num_classes=91,
    label_offset_type="80to90",
    label_offset=1,
    ),
    ),


    Above is the entry I've added for my model in model_configs.py. I only have the barcode-model.onnx file, which I downloaded; I do not have a corresponding .prototxt file.

    Should I reuse the .prototxt file that was associated with the previous model (od-8200_onnxrt_coco_edgeai-mmdet_yolox_nano_lite_416x416_20220214_model_onnx), or is a .prototxt even required in this case?

  • Also, after providing a .prototxt file, I still got that Split error (Failed to find kernel for Split(11) (node Split). Kernel not found):


    root@c32097e8b443:/home/root/examples/osrt_python/ort# python3 onnxrt_ep.py -c -m barcode-onnx-tidl
    Available execution providers : ['TIDLExecutionProvider', 'TIDLCompilationProvider', 'CPUExecutionProvider']

    Running 1 Models - ['barcode-onnx-tidl']


    Running_Model : barcode-onnx-tidl


    Running shape inference on model ../../../models/public/barcode-model.onnx

    ========================= [Model Compilation Started] =========================

    Model compilation will perform the following stages:
    1. Parsing
    2. Graph Optimization
    3. Quantization & Calibration
    4. Memory Planning

    ============================== [Version Summary] ==============================

    -------------------------------------------------------------------------------
    | TIDL Tools Version | 10_00_08_00 |
    -------------------------------------------------------------------------------
    | C7x Firmware Version | 10_00_02_00 |
    -------------------------------------------------------------------------------
    | Runtime Version | 1.14.0+10000005 |
    -------------------------------------------------------------------------------
    | Model Opset Version | 11 |
    -------------------------------------------------------------------------------

    NOTE: The runtime version here specifies ONNXRT_VERSION+TIDL_VERSION
    Ex: 1.14.0+1000XXXX -> ONNXRT 1.14.0 and a TIDL_VERSION 10.00.XX.XX

    ============================== [Parsing Started] ==============================

    yolox is meta arch name
    yolox
    Number of OD backbone nodes = 0
    Size of odBackboneNodeIds = 0

    ------------------------- Subgraph Information Summary -------------------------
    -------------------------------------------------------------------------------
    | Core | No. of Nodes | Number of Subgraphs |
    -------------------------------------------------------------------------------
    | C7x | 244 | 13 |
    | CPU | 39 | x |
    -------------------------------------------------------------------------------
    ---------------------------------------------------------------------------------------------------------------------------------------------
    | Node | Node Name | Reason |
    ---------------------------------------------------------------------------------------------------------------------------------------------
    | Gather | Gather_364 | Only line gather is supported |
    | ReduceMax | ReduceMax_369 | Reduction is only supported along height |
    | Less | Less_373 | Layer type not supported by TIDL |
    | Not | Not_374 | Layer type not supported by TIDL |
    | NonZero | NonZero_375 | Layer type not supported by TIDL |
    | Exp | Exp_327 | Layer type not supported by TIDL |
    | Gather | Gather_355 | Only line gather is supported |
    | Gather | Gather_353 | Only line gather is supported |
    | Unsqueeze | Unsqueeze_361 | Layer type not supported by TIDL |
    | Gather | Gather_349 | Only line gather is supported |
    | Gather | Gather_347 | Only line gather is supported |
    | Unsqueeze | Unsqueeze_360 | Layer type not supported by TIDL |
    | Sub | Sub_345 | Both inputs as variable are not supported in Sub/Div |
    | Unsqueeze | Unsqueeze_359 | Layer type not supported by TIDL |
    | Sub | Sub_339 | Both inputs as variable are not supported in Sub/Div |
    | Unsqueeze | Unsqueeze_358 | Layer type not supported by TIDL |
    | Gather | Gather_368 | Only line gather is supported |
    | GatherND | GatherND_377 | Layer type not supported by TIDL |
    | ReduceMax | ReduceMax_388 | Reducing in all dimensions is not supported in TIDL-RT |
    | ArgMax | ArgMax_370 | Only keepdims = 1 (default) is supported |
    | GatherND | GatherND_387 | Layer type not supported by TIDL |
    | Cast | Cast_389 | Only supported at the terminal nodes (Input/Output) of the network |
    | Mul | Mul_392 | The variable inputs in Add/Mul/Sub/Div/Max layer must of be same dimensions or broadcast-able |
    | Unsqueeze | Unsqueeze_393 | Layer type not supported by TIDL |
    | Add | Add_394 | The variable inputs in Add/Mul/Sub/Div/Max layer must of be same dimensions or broadcast-able |
    | Shape | Shape_398 | Layer type not supported by TIDL |
    | Gather | Gather_400 | Input dimensions must be greater than 1D |
    | GatherND | GatherND_383 | Layer type not supported by TIDL |
    | GatherND | GatherND_380 | Layer type not supported by TIDL |
    | Unsqueeze | Unsqueeze_396 | Layer type not supported by TIDL |
    | Unsqueeze | Unsqueeze_397 | Layer type not supported by TIDL |
    | Unsqueeze | Unsqueeze_395 | Layer type not supported by TIDL |
    | NonMaxSuppression | NonMaxSuppression_403 | Layer type not supported by TIDL |
    | Gather | Gather_405 | Only line gather is supported |
    | Squeeze | Squeeze_406 | Subgraph does not have any compute node |
    | Gather | Gather_417 | Input dimensions must be greater than 1D |
    | Gather | Gather_408 | Input dimensions must be greater than 1D |
    | Gather | Gather_414 | Only line gather is supported |
    | Unsqueeze | Unsqueeze_415 | Layer type not supported by TIDL |
    ---------------------------------------------------------------------------------------------------------------------------------------------
    ============================= [Parsing Completed] =============================

    Process Process-1:
    Traceback (most recent call last):
      File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
        self.run()
      File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
        self._target(*self._args, **self._kwargs)
      File "/home/root/examples/osrt_python/ort/onnxrt_ep.py", line 325, in run_model
        sess = rt.InferenceSession(
      File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 362, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 410, in _create_inference_session
        sess.initialize_session(providers, provider_options, disabled_optimizers)
    onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Failed to find kernel for Split(11) (node Split). Kernel not found

  • Hi Atharva,

    My sincere apologies -- I directed you to use a PROTOTXT file that was not 100% compatible with the barcode ONNX model. Please use the one linked below.

    I encountered the same issue with the prototxt I had recommended -- it had a few different layer names and a different number of classes.

    /cfs-file/__key/communityserver-discussions-components-files/791/barcode_2D00_yolox_2D00_nano_2D00_metaarch.prototxt

    Compilation is successful in the 10.0 SDK with this model_configs.py entry:

        'barcode-yolox': {
            'model_path': os.path.join(models_base_path, 'barcode-yolox.onnx'),
            'source': {'model_url': 'dummy', 'opt': True, 'infer_shape': True,
                       'meta_arch_url': 'dummy'},
            'mean': [0, 0, 0],
            'scale': [1, 1, 1],
            'num_images': numImages,
            'num_classes': 91,
            'model_type': 'od',
            'od_type': 'SSD',
            'framework': 'MMDetection',
            'session_name': 'onnxrt',
            'meta_layers_names_list': os.path.join(models_base_path, 'barcode-yolox.prototxt'),
            'meta_arch_type': 6
        },

    And the model is detecting barcodes as expected on basic test images. I will reaffirm that this is not a production model; it is meant as an example for demo purposes.
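
    For reference, the dictionary key is the model name you pass on the command line, so with the entry above the compile step becomes python3 onnxrt_ep.py -c -m barcode-yolox, and the artifacts land in model-artifacts/barcode-yolox.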

    BR,
    Reese

  • Hi Reese,

    Thank you for your support.

    The model has been compiled successfully. Could you please share a basic example (similar to onnxrt_ep.py) to run the model? I would also like to test barcode detection using my own camera (/dev/video3) on the board.

    Thanks and regards,
    Atharva Shende

  • Hi Atharva,

    If you want to run a full vision pipeline from capture-preprocess-TIDL-postproc-display, then I would suggest the edgeai-gst-apps (located under /opt).

    You can specify the input configuration and the model to use from a YAML config file [1]. The barcode-reader git repo is also based on these tools and has a preexisting config file that you could start from.  

    Note that the model will find barcodes but not decode them -- our demo application used the ZBar library for this.

    There was an issue in some of the previous SDKs (I struggle to recall if 10.0 was one of them) in which the model I/O description in the param.yaml of the artifacts used a different format than what edgeai-gst-apps expected. I have an FAQ on this topic in case your model does not initialize with the edgeai-gst-apps tool [2]

    [1] https://software-dl.ti.com/processor-sdk-linux/esd/AM62AX/latest/exports/docs/edgeai/configuration_file.html#inputs

    [2] [FAQ] How do I get a valid param.yaml file for my TIDL deep learning model? The demos in edgeai-gst-apps throw errors on my model  

  • Thanks, Reese,

    I was able to detect barcodes using the process you explained with the YAML config file. Now, could you please tell me how I can decode them using ZBar, or how I can integrate ZBar into the pipeline?

    Thanks and regards,
    Atharva Shende

  • Hi Atharva,

    You would want to take cropped images using the bounding-box output of the neural network and use these as input to ZBAR. I'll share where this is done in the demo application. 

    See here for Python [1] and CPP [2] versions of this being utilized. We convert the image to grayscale with OpenCV before attempting to decode. For more detailed information on ZBAR, please see their public documentation. Note that there are other implementations for 1D/2D decoding, but we chose ZBAR because it was fairly straightforward and open source with a permissive license.

    Some libraries like ZBAR can also detect multiple code types, but doing so will increase processing time -- the library must run some processing just to determine the code type (e.g. QR, Data Matrix). Specific usage of open-source libraries like ZBAR is outside TI's support provisions -- this demo application uses ZBAR as an example of this procedure.
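
    If it helps, here is a rough sketch of that crop-and-decode flow. The box format, margin size, and helper name are my own illustration rather than the demo's exact code (see [1] and [2] for that), and it assumes the classic zbar Python binding:

        import cv2
        import zbar

        def crop_and_decode(frame, box, scanner, margin=10):
            # box = (x1, y1, x2, y2) pixel coordinates from the detector (assumed format).
            # Expand the crop by a small margin so rotated codes keep their corners.
            x1, y1, x2, y2 = [int(v) for v in box]
            h, w = frame.shape[:2]
            x1, y1 = max(0, x1 - margin), max(0, y1 - margin)
            x2, y2 = min(w, x2 + margin), min(h, y2 + margin)
            crop = frame[y1:y2, x1:x2]
            if crop.size == 0:
                return []
            # ZBar expects 8-bit grayscale ('Y800')
            gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
            zimg = zbar.Image(gray.shape[1], gray.shape[0], 'Y800', gray.tobytes())
            scanner.scan(zimg)
            return [sym.data for sym in zimg]

        scanner = zbar.ImageScanner()
        scanner.parse_config('enable')  # default config: enable supported symbologies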

    [1] https://github.com/TexasInstruments-Sandbox/edgeai-gst-apps-barcode-reader/blob/df8f8116b4cf05a997468bcf6e27945ea699b5f9/apps_python/post_process.py#L278

    [2] https://github.com/TexasInstruments-Sandbox/edgeai-gst-apps-barcode-reader/blob/df8f8116b4cf05a997468bcf6e27945ea699b5f9/apps_cpp/common/src/post_process_image_object_detect.cpp#L127

    BR,
    Reese

  • Hello Reese,

    I integrated the scan_codes() function into my pipeline as suggested. Here is the function I added:

    def scan_codes(self, img):
        # img: cropped region (RGB) from a detected bounding box
        h, w, c = img.shape
        if h <= 0 or w <= 0:
            return ""
        if c == 3:
            # ZBar wants 8-bit grayscale, so convert before wrapping
            img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
        zbar_img = zbar.Image(w, h, 'Y800', img.tobytes())
        self.zbar_scanner.scan(zbar_img)
        found_text = []
        for sym in zbar_img:
            found_text.append(sym.data)

        return found_text

    With this, the bounding boxes from my detection model are correctly passed to ZBar, and it does detect that a barcode is present. However, when I try to decode, the result still shows as “undefined” instead of the actual barcode value.

    Could you please clarify:

    1. Is there a specific conversion step required (e.g., sym.data.decode("utf-8")) to extract the barcode text properly?

    2. Do I need to set up the zbar.ImageScanner() object in a particular way for decoding to work correctly?

    3. Any known issues with ZBar on the AM62A (Python 3.12 / Yocto SDK) that could cause decoding to return undefined?

    Thank you for your support,
    Atharva Shende

  • I would suggest saving the cropped images you run ZBAR on as image files so you can view them yourself. It is common that a barcode is rotated and that the cropped portion of the image cuts off part of the code -- this will make it hard, if not impossible, to read. I expect that the "img" passed here in the demo app was cropped from the bounding-box coordinates plus some buffer area to better capture corners.
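
    For example, a quick (hypothetical) debug hook inside scan_codes, just before the scan call (cv2 is already in use in that file; the crop_idx counter is my own addition):

        # Hypothetical debug dump: write each crop to disk for off-target viewing.
        # Assumes self.crop_idx is a counter initialized to 0 in __init__.
        cv2.imwrite("/tmp/zbar_crop_%06d.png" % self.crop_idx, img)
        self.crop_idx += 1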

    I'm not aware of any issues with this SDK that would cause you to get undefined results. In the repo here, we build ZBAR on the target itself from scratch, and we did not run into any further functional issues. I would suggest viewing the ZBAR documentation for more help on setup and usage -- it is outside my support purview.

    You can also view other portions of this demo repo for how we set this up.

    BR,
    Reese

  • Hi Reese,

    Thanks for the detailed suggestion. I tried saving the cropped images from the bounding boxes, and I can confirm the crops look correct and include the full QR/barcode (with some buffer). Detection is working fine, but the issue is that the decode result is consistently coming back as "undefined" instead of the actual barcode value.

    This makes me believe the issue isn’t with the crop being clipped, but rather with how ZBAR (or pyzbar) is being used inside the pipeline. Could you clarify if there are any specific parameters or decoding modes that need to be set for ZBAR in this demo?

    Appreciate your guidance — just wanted to highlight that the problem is not with detecting barcodes, but with decoding them to readable text (always undefined).

    Best regards,
    Atharva Shende

  • Hi Atharva,

    We used the ZBAR ImageScanner here with default configurations. The post_process.py Python file [1] contains a minimal reference usage of this library.

    Effectively, all we needed was to create the scanner object and enable the config. It is possible to change that config for different code types, but we left it as the default.

    I would suggest taking some of those output images (and maybe a few pulled directly from the web) as files and trying a simple Python script with those ZBAR function calls to decode data read from those image files. You can also do this on a PC; it is mainly a step to flush out the library and your usage of it.
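
    Something minimal like this (a sketch assuming the classic zbar Python binding that the demo uses; if you are on pyzbar instead, the API differs and its data field is bytes that needs .decode("utf-8")):

        import sys
        import cv2
        import zbar  # classic zbar Python binding, as used in the demo

        # Read a saved crop (or a known-good barcode image from the web) as grayscale
        img = cv2.imread(sys.argv[1], cv2.IMREAD_GRAYSCALE)
        if img is None:
            sys.exit("could not read image: " + sys.argv[1])

        scanner = zbar.ImageScanner()
        scanner.parse_config('enable')  # default: enable supported symbologies

        h, w = img.shape
        zimg = zbar.Image(w, h, 'Y800', img.tobytes())
        scanner.scan(zimg)

        for sym in zimg:
            print(sym.type, sym.data)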

    [1] github.com/.../post_process.py