TDA4VM: some problems

Part Number: TDA4VM

Hi team,

The customer has the following 12 questions that may need your help:

1) What is the SDE disparity output scale value? When checking the disparity output, the values are large and appear to have been multiplied by a scale factor. 

2) The PTK SDE demo has added a three-layer pyramid version; can it output the disparity directly? Configuring the output in the config file does not take effect. Is it necessary to add the output logic manually? 

3) A custom model is run on the PC simulator through the TIDL tools. Can it produce exactly the same results as a previous run on the board (after QAT)? 

4) Regarding QAT quantization of models with the TIDL tools: when training a model in FP32, is it required to wrap it with xnn.layers first? 

5) In the QAT quantization reference design, is the function xnn.model_surgery.create_lite_model, which converts a model to its lite form, used primarily to change the convolution layers to xnn.layers? 

6) When quantizing a model, if a block contains multiple conv+bn+relu structures, each conv, bn, and relu is named separately (self.conv1, self.conv2, ...). If a later stage then uses multiple such blocks, do those blocks also need to be named self.block1, self.block2, and so on? 

7) After QAT quantization, must the tensor output of the board-side model be INT8? Can it be multiplied by the scale again to recover an FP32/FP16 output? 

8) How does QAT handle ops in a custom model that are not supported? And how can intermediate results of model inference be printed and displayed? 

9) Camera access adapter: how is the I2C bus accessed? How is the deserializer on the daughter board accessed? How does CSI_RX_if configure the virtual channels? Is there any example code for reference? 

10) DSPLib: does the code that calls and controls the DSP module run on the Arm A72? Where in memory are the input and output data of the DSP stored? 

11) Can DSP and MMA modules pass data directly through pointers? 

12) Which libraries are needed if all the hardware units (including the various peripherals and on-chip hardware acceleration modules) are called directly through the SDK instead of through OpenVX? 

Could you help check this case? Thanks.

Best Regards,

Cherry

  • I can answer the questions related to QAT. How do you want to handle the other questions? If you create another thread with those questions, it can be assigned to the right person.

    ===============================

    3) A custom model is run on the PC simulator through the TIDL tools. Can it produce exactly the same results as a previous run on the board (after QAT)? 

    [Manu]: A TIDL run is repeatable. It is expected to give the same result every time you run it. If it doesn't, then it is a bug.
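
    As a quick way to verify this, the fixed-point outputs from the two runs can be dumped to binary files and compared on the host. A minimal sketch, assuming INT8 output dumps with hypothetical file names:

    ```python
    import numpy as np

    # Hypothetical dump files from the PC-simulator run and the board run.
    sim_out = np.fromfile("output_pc_sim.bin", dtype=np.int8)
    board_out = np.fromfile("output_board.bin", dtype=np.int8)

    # For a repeatable TIDL import, the fixed-point outputs should match exactly.
    mismatches = np.count_nonzero(sim_out != board_out)
    print(f"{mismatches} of {sim_out.size} values differ")
    ```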

    4) Regarding QAT quantization of models with the TIDL tools: when training a model in FP32, is it required to wrap it with xnn.layers first? 

    [Manu]: No.
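
    In other words, the model can be trained as ordinary FP32 PyTorch first, and the QAT wrapper is applied afterwards. A rough sketch, assuming the QuantTrainModule wrapper from edgeai-torchvision (the exact API may vary between releases):

    ```python
    import torch
    from torchvision import models
    from torchvision.edgeailite import xnn

    # An ordinary FP32 model, trained without any xnn.layers wrapping.
    model = models.mobilenet_v2()

    # Wrap it for QAT afterwards; the wrapper traces the model with a dummy
    # input and inserts the fake-quantization operations automatically.
    dummy_input = torch.rand(1, 3, 224, 224)
    qat_model = xnn.quantize.QuantTrainModule(model, dummy_input=dummy_input)
    ```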

    5) In the QAT quantization reference design, is the function xnn.model_surgery.create_lite_model, which converts a model to its lite form, used primarily to change the convolution layers to xnn.layers? 

    [Manu]: It changes the activation functions, removes squeeze-and-excitation blocks, etc. You can customize this behaviour by modifying this dictionary: https://github.com/TexasInstruments/edgeai-torchvision/blob/master/torchvision/edgeailite/xnn/model_surgery/__init__.py#L113
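
    For reference, a minimal usage sketch (the exact signature may differ between releases):

    ```python
    from torchvision import models
    from torchvision.edgeailite import xnn

    # Convert a stock torchvision model to its "lite" variant. The surgery
    # applies the replacement dictionary linked above, e.g. swapping
    # activation functions and removing squeeze-and-excitation blocks.
    lite_model = xnn.model_surgery.create_lite_model(models.mobilenet_v2)
    ```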

    6) When quantizing a model, if a block contains multiple conv+bn+relu structures, each conv, bn, and relu is named separately (self.conv1, self.conv2, ...). If a later stage then uses multiple such blocks, do those blocks also need to be named self.block1, self.block2, and so on? 

    [Manu]: The names do not matter. QAT is also intelligent enough to trace through the model and understand which module is called after which.
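
    For example, the following hypothetical module would be traced correctly even though neither the layers nor the blocks follow any particular naming scheme:

    ```python
    import torch.nn as nn

    class Block(nn.Module):
        def __init__(self, ch):
            super().__init__()
            # conv/bn/relu can be named anything; tracing follows forward().
            self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
            self.bn1 = nn.BatchNorm2d(ch)
            self.relu1 = nn.ReLU()

        def forward(self, x):
            return self.relu1(self.bn1(self.conv1(x)))

    class Stage(nn.Module):
        def __init__(self, ch):
            super().__init__()
            # Blocks need no special names either; a ModuleList works fine.
            self.blocks = nn.ModuleList([Block(ch) for _ in range(3)])

        def forward(self, x):
            for block in self.blocks:
                x = block(x)
            return x
    ```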

    7) After QAT quantization, must the tensor output of the board-side model be INT8? Can it be multiplied by the scale again to recover an FP32/FP16 output? 

    [Manu]: Are you asking about a TIDL-imported model? That depends on the model. For typical classification models, it is INT8, but for detection models the detection output typically has float values.
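
    When the output is a fixed-point tensor, it can be converted back to float on the host by applying the output tensor scale. A minimal sketch, assuming the convention that the float value is the quantized value divided by the scale (check the actual scale and its convention in your import artifacts):

    ```python
    import numpy as np

    # Hypothetical INT8 output dump and its tensor scale from the import.
    int8_out = np.fromfile("output.bin", dtype=np.int8)
    out_scale = 64.0  # example value only; use the real scale for your model

    float_out = int8_out.astype(np.float32) / out_scale
    ```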

    8) How does QAT handle ops in a custom model that are not supported? 

    [Manu]: This question is not clear. Can you clarify it further?

    8b) How can intermediate results of model inference be printed and displayed? 

    If you are using onnxruntime from edgeai-tidl-tools, then check onnxruntime's functionality to write out intermediate tensors.

    If you are using TIDL-RT from the RTOS SDK, then the following page gives information about writing out intermediate values:

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/08_04_00_06/exports/docs/tidl_j721e_08_04_00_16/ti_dl/docs/user_guide_html/md_tidl_fsg_steps_to_debug_mismatch.html
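
    With plain onnxruntime, one common approach is to expose an intermediate tensor as an extra graph output before creating the session. A sketch with a hypothetical tensor name and input shape (behaviour can vary across onnxruntime versions):

    ```python
    import numpy as np
    import onnx
    import onnxruntime as ort

    model = onnx.load("model.onnx")

    # Expose an intermediate tensor as an additional graph output.
    # "conv1_out" is a hypothetical name; pick a tensor from your own graph.
    model.graph.output.append(onnx.ValueInfoProto(name="conv1_out"))
    onnx.save(model, "model_debug.onnx")

    sess = ort.InferenceSession("model_debug.onnx")
    input_name = sess.get_inputs()[0].name
    dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # match your model
    outputs = sess.run(None, {input_name: dummy})
    print([o.shape for o in outputs])  # original output(s) plus conv1_out
    ```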

  • Hey Manu,

    Thanks for your support on the QAT questions. For the remaining questions, I will create a new thread, and we can continue the QAT discussion here.

    Thanks and regards,

    Cherry