
TDA4VM: TVM/Neo-AI-DLR: Inference steps (cont.)

Part Number: TDA4VM

Dear TI Team,

I have created this new thread as a continuation of the Old Query link, since I mistakenly marked that thread as "resolved".

Q1. Referring to the Old Query link, can you please elaborate on your statement that "deny_list is mainly provided as a debugging option, it may not bring value in real application"? Does this mean we cannot use this option manually in a real-time project, and that the choice of which layers are offloaded to TIDL is made automatically by the compiler?
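For illustration, here is roughly how I understand a deny_list would appear in the compile-time options. This is only a sketch; the option keys ("platform", "tidl_tools_path", "artifacts_folder", "deny_list") and the helper function are my assumptions, not taken from the actual tool documentation:

```python
# Hypothetical sketch of compile options with a deny_list entry.
# The key names below are assumptions for illustration only.
compile_options = {
    "platform": "J7",                        # TDA4VM belongs to the J7 family
    "tidl_tools_path": "/path/to/tidl_tools",
    "artifacts_folder": "./artifacts",
    # Comma-separated layer types to keep on the Arm core instead of TIDL
    "deny_list": "Concat,Reshape",
}

def parse_deny_list(options):
    """Split the comma-separated deny_list string into layer-type names."""
    raw = options.get("deny_list", "")
    return [name.strip() for name in raw.split(",") if name.strip()]

print(parse_deny_list(compile_options))  # ['Concat', 'Reshape']
```

My question is whether manually populating such a list is a supported workflow for production, or purely a debug aid.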

Q2. It would be helpful if you could give clear steps to run inference on the target, i.e. how to create the SD card and which commands/scripts to invoke during target inference, using semantic segmentation as an example.
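For reference, this is the kind of target-side flow I am expecting with the Neo-AI-DLR runtime — a minimal sketch, assuming the compiled artifacts folder and input layout; the paths, shapes, and input format are placeholders, so please correct me if the actual flow differs:

```python
# Hypothetical sketch of semantic-segmentation inference via Neo-AI-DLR.
# Paths, tensor shapes, and the output layout (1, classes, H, W) are
# assumptions for illustration, not verified against a real model.
import numpy as np

def run_segmentation(artifacts_dir, frame):
    """Run one preprocessed frame through a DLR-compiled model."""
    from dlr import DLRModel  # Neo-AI-DLR runtime on the target
    model = DLRModel(artifacts_dir, dev_type="cpu")
    logits = model.run([frame])[0]      # assumed shape: (1, classes, H, W)
    return logits_to_class_map(logits[0])

def logits_to_class_map(logits):
    """Post-processing only: per-pixel argmax over the class axis."""
    return np.argmax(logits, axis=0)
```

I would like to know where such a script fits in the overall target steps (SD card preparation, environment setup, invocation commands).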

Q3. Suppose we have a model in which one layer should execute on the MPU and the rest on the TIDL target. Will there be separate deliverables for TIDL and for the MPU, or a single one? If there are two, how do we differentiate them, and how do we use these generated deliverables in the project's main application (preprocessing, post-processing, and the model per the project specification, written using the TIDL APIs)?

Thanks