TDA4AL-Q1: How to feed float input to a TIDL deep learning model

Part Number: TDA4AL-Q1

Tool/software:

I have developed a sequence model that does not use images as input.
Therefore, the network input is in float format.
However, TI's default deep learning flow is designed for image-based models.

To feed float values into the model in this environment, I used the following method:
I checked the range of the input values and normalized them to fit within 0 to 255.
For example, if the input float values range from 0 to 1, I set the std in the import configuration to 1/255.
Then, in the C code, I multiplied the input float values by 255 before feeding them into the network.
This works correctly because the std of 1/255 applied in the configuration (cfg) cancels that scaling.
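
A rough sketch of that pre-scaling step in C (the function and variable names are illustrative, not my actual code):

#include <stddef.h>
#include <stdint.h>

/* Scale float input known to lie in [0, 1] up to [0, 255], so that the
 * std = 1/255 applied at import time undoes it inside the network. */
static void prescale_input(const float *src, uint8_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float v = src[i] * 255.0f;
        if (v < 0.0f)   v = 0.0f;      /* clamp to the uint8 range */
        if (v > 255.0f) v = 255.0f;
        dst[i] = (uint8_t)(v + 0.5f);  /* round to nearest */
    }
}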

However, this 8-bit representation does not meet the range requirements of my float input, because the set of values expressible with a std of 1/255 is limited.

Please let me know if there is a way to handle this.

  • Hi Minwoo;

    Assuming you want to do some kind of quantization yourself first, you need to know your data range (even though the values are floating-point numbers). Let us say your range is [0, Xmax]; then you map this range to [0, 255].

    Let Q be your quantized value in [0, 255] and x the floating-point number you want to convert; then Q = (x / Xmax) * 255.
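
    For illustration, a minimal C sketch of that mapping (the function name and the clamping are my own additions, not TIDL code):

    #include <stdint.h>

    /* Map a float x in [0, Xmax] to a quantized value Q in [0, 255]. */
    static uint8_t quantize_u8(float x, float x_max)
    {
        float q = (x / x_max) * 255.0f;  /* Q = (x / Xmax) * 255 */
        if (q < 0.0f)   q = 0.0f;        /* clamp into [0, 255]  */
        if (q > 255.0f) q = 255.0f;
        return (uint8_t)(q + 0.5f);      /* round to nearest     */
    }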

    I hope I have understood your question correctly and answered it.

    In addition, if you want to know more about the quantization options of TIDL, here is the link.

    https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/tidl_fsg_quantization.md

    If you have any other questions, please feel free to submit a new ticket. 

    Thanks and regards

    Wen Li 

  • Hello,

    Thank you for your response.
    I’d like to provide more detailed information and ask for your feedback.

    For example, I want to input 1D values whose distribution ranges from 0 to 255. However, I need finer precision, such as 0.1, 0.2, or 0.6, and uint8 seems too coarse for that.
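
    To make the precision gap concrete (my own back-of-the-envelope arithmetic): over a [0, 255] range, uint8 gives a quantization step of 255/255 = 1.0, so 0.1 and 0.6 collapse to the same code, whereas uint16 would give a step of 255/65535 ≈ 0.0039.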

    Question 1

    Is it possible to use uint16 for the input, instead of uint8? Specifically, can I input values ranging from 0 to 65535 using uint16?

    Question 2

    If the above is possible, can I feed 16-bit data during PTQ (post-training quantization)?
    Even when I store the calibration data as raw 16-bit, the PTQ process does not seem to recognize it as 16-bit input. Can you clarify whether this is supported?

    Question 3

    This is a separate question. I trained a model with QAT (Quantization-Aware Training) and configured it using the settings below:

    https://github.com/TexasInstruments/edgeai-tensorlab/blob/main/edgeai-modelzoo/models/vision/classification/imagenet1k/edgeai-tv2/mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model_config.yaml
    input_mean:
    - 123.675
    - 116.28
    - 103.53
    input_scale:
    - 0.017125
    - 0.017507
    - 0.017429
    runtime_options:
      tensor_bits: 8
      accuracy_level: 0
      advanced_options:high_resolution_optimization: 0
      advanced_options:pre_batchnorm_fold: 1
      advanced_options:calibration_frames: 1
      advanced_options:calibration_iterations: 1
      advanced_options:quantization_scale_type: 4
      advanced_options:activation_clipping: 1
      advanced_options:weight_clipping: 1
      advanced_options:bias_calibration: 1
      advanced_options:output_feature_16bit_names_list: ''
      advanced_options:params_16bit_names_list: ''
      advanced_options:add_data_convert_ops: 3
      advanced_options:prequantized_model: 1
      info:
        prequantized_model_type: 2

    When configured this way, the model appears to accept float inputs, as shown in the attached image.

    In TI’s C code, is it possible to input float data? If so, could you provide guidance on how to achieve this?
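
    For context on the float input path: the input_mean/input_scale pair above corresponds to the usual per-channel normalization y = (x - mean) * scale. A generic C sketch of that computation (my illustration of what I expect the data-convert layers to perform, not actual TIDL code):

    /* Per-channel normalization for planar (CHW) float input, using the
     * mean/scale values from the config above. */
    static void normalize_chw(const float *src, float *dst,
                              int channels, int pixels_per_channel,
                              const float *mean, const float *scale)
    {
        for (int c = 0; c < channels; c++) {
            for (int i = 0; i < pixels_per_channel; i++) {
                int idx = c * pixels_per_channel + i;
                dst[idx] = (src[idx] - mean[c]) * scale[c];
            }
        }
    }

    /* e.g. with the values from the config:
     * const float mean[3]  = { 123.675f, 116.28f, 103.53f };
     * const float scale[3] = { 0.017125f, 0.017507f, 0.017429f }; */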

    Thank you for your support, and I look forward to your response.

    Best regards,


    <Related Questions>
    I think there are three ways to convert the model:
    github.com/.../edgeai-tidl-tools
    (TIDL-RT and OSRT are here)
    1) TIDL-RT: conversion via tidl_model_import.out
    2) OSRT: conversion via onnxrt_ep.py

    github.com/.../edgeai-tensorlab
    3) edgeai-benchmark is here (I haven't tried it)

    Which of these should I use to solve the problem described above?
    - I don't think tidl_model_import.out performs the QAT conversion itself. Is that right?
    - I converted the QAT sample model, and its input is float.
    On the TDA4AL board, is it possible to feed a float input in the TIDL QNX C code?

  • Hi Minwoo;

    Yes, you can use uint16 data. Please be aware of the performance and accuracy trade-off between uint16 and uint8.
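
    As an illustration, the 16-bit version of the earlier mapping (my sketch, not TI library code):

    #include <stdint.h>

    /* Map a float x in [0, Xmax] to a quantized value in [0, 65535]. */
    static uint16_t quantize_u16(float x, float x_max)
    {
        float q = (x / x_max) * 65535.0f;
        if (q < 0.0f)     q = 0.0f;       /* clamp into [0, 65535] */
        if (q > 65535.0f) q = 65535.0f;
        return (uint16_t)(q + 0.5f);      /* round to nearest      */
    }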

    And thank you for the many other questions as well. To get your questions answered quickly, though, please do not put multiple questions in one ticket; submit them as separate tickets, because they may be answered by different experts.

    I will close this one for now. Please feel free to submit the questions in new tickets. 

    Best regards

    Wen Li