
RTOS/TDA3MV: details for 3D SRV in TDA3

Part Number: TDA3MV

Tool/software: TI-RTOS

Dear Champs,

My customer would like to understand how 3D SRV can be implemented on the DSP of the TDA3, and how the DSP implementation differs from a GPU implementation.

Could you please explain what the view point is in the TDA3 3D SRV? Is it the same as a 'virtual camera point'?

When a 'view turn-around' is implemented, can the intermediate images used in the turn-around be generated using a LUT for each view point?

My customer normally requires hundreds of views to implement 3D SRV, and wants to know how this can be implemented on the TDA3 and how much memory will be required.

When I checked the marketing slides, I found a 9-view-point implementation case and a 21-view-point implementation.

When the number of view points is increased, what is the benefit of having this many view points?

I found in the VisionSDK datasheet that about 1.3MB is required per view point, but 32MB is quoted for 9 view points and 64MB for 21 view points.

Then the actual size for the 21-view-point case should be 32MB (9 view points) + 1.3MB x 12 additional view points = 47.6MB, right?
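As a sanity check on that arithmetic, here is a quick sketch (assuming, as the datasheet figure suggests, that the ~1.3MB per-view-point cost scales linearly on top of the 9-view-point baseline):

```python
# Rough check of the view-point memory arithmetic, using the figures
# quoted above (1.3 MB/view point, 32 MB for the 9-view-point case).
MB_PER_VIEW = 1.3

base_9_views_mb = 32.0       # quoted size for 9 view points
extra_views = 21 - 9         # additional view points beyond 9
estimate_21_mb = base_9_views_mb + MB_PER_VIEW * extra_views
print(f"{estimate_21_mb:.1f} MB")  # 47.6 MB, vs. the quoted 64 MB
```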

And regarding auto calibration (online calibration):

My customer also wants to implement auto calibration, and wants to know whether additional resources are required to implement it.

In this case, is it possible to extract 'motion', such as optical flow, from the 4-channel input videos at run-time?

If so, could you please provide details on how this motion extraction can be implemented, its processing time, and how many motion vectors can be extracted?

Thanks and Best Regards,

SI.

  • Hi SI,

    On "what is the difference between DSP implementation and GPU's": there are a number of differences; two of the key ones are

    1. The DSP-based implementation requires pre-determined view points, whereas the GPU supports dynamic view points
    2. The DSP-based implementation requires storing a calibrated LUT for each view, which is not required for the GPU

    On "view point in the TDA3 3D SRV? is it same as 'virtual camera point'?": yes, that's correct.

    On "When 'view turn around' is implemented, does intermediate images used in this view turn around can be generated using LUT table in each view points?": the turn-around is implemented as a set of view points, and we generate a LUT for each view point.
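To illustrate the LUT-based approach conceptually: for a fixed view point, the LUT maps each output pixel to a source camera and source coordinate, so rendering a view reduces to pure table lookups. The sketch below is only a hypothetical minimal model (`render_view` and the 3-channel LUT layout are illustrative; the actual VisionSDK LUT format also carries blend weights and differs in layout):

```python
import numpy as np

def render_view(lut, cameras):
    """Render one SRV output frame from a precomputed LUT.

    lut: (H, W, 3) int array; per output pixel: (camera_index, src_y, src_x)
    cameras: list of (h, w) grayscale frames, one per fisheye camera
    """
    H, W, _ = lut.shape
    out = np.empty((H, W), dtype=cameras[0].dtype)
    for cam_idx, frame in enumerate(cameras):
        mask = lut[:, :, 0] == cam_idx          # output pixels fed by this camera
        ys = lut[:, :, 1][mask]
        xs = lut[:, :, 2][mask]
        out[mask] = frame[ys, xs]               # pure lookup, no re-projection math
    return out

# Tiny usage example: two 4x4 "cameras", a 2x2 output view
cams = [np.full((4, 4), 10, np.uint8), np.full((4, 4), 200, np.uint8)]
lut = np.array([[[0, 0, 0], [1, 0, 0]],
                [[0, 3, 3], [1, 3, 3]]])
view = render_view(lut, cams)
print(view)  # left column from camera 0, right column from camera 1
```

This also shows why each view point needs its own stored LUT on the DSP: changing the virtual camera position changes every entry of the table.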

    On "My customer normally requires hundreds views to implement 3D SRV, and want to know how it can be implemented on TDA3 and how much memory size will be required": the number of view points is limited by the non-volatile memory and/or RAM. Can you please clarify the use case for hundreds of view points?

    On memory size per view point, please check "VisionSDK_DataSheet.pdf"; refer to section 14.6.6 in VisionSDK v03.07.00.00.

    The view-point size specified is the size occupied in DDR. When the view-point data is stored in MMC/SD (or any non-volatile) memory, it is compressed using LZ4, which further reduces the size.
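As an illustration of why the on-flash size is smaller than the DDR size: calibrated remap LUTs are highly regular (smooth coordinate ramps), so they compress well. The sketch below uses `zlib` from the Python standard library purely as a stand-in compressor (the SDK itself uses LZ4, as noted above); the synthetic "LUT" is just a plausible stand-in for real calibration data:

```python
import zlib
import numpy as np

# Fake but LUT-like data: smooth per-pixel coordinate ramps,
# similar in character to a real remap table.
h, w = 480, 640
ys, xs = np.mgrid[0:h, 0:w].astype(np.uint16)
lut_blob = np.stack([ys, xs], axis=-1).tobytes()

packed = zlib.compress(lut_blob, level=6)
print(f"raw: {len(lut_blob)} bytes, compressed: {len(packed)} bytes")
assert zlib.decompress(packed) == lut_blob   # lossless round trip
```

The compression ratio on real calibrated LUTs will differ, but the principle (regular data shrinks substantially) is the same.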

    On "my customer also wants to implement auto calibration and want to know if there is an additional resource to implement this auto calibration(online calibration)": we have implemented a chart-based calibration method. The customer will have to update/implement dynamic calibration themselves.

    On "in this case, is it possible to extract 'Motion' like 'Optical Flow' from 4ch input videos at run-time?": optical flow requires significant processing MHz; please refer to the "Dense Optical Flow Usecase" in the datasheet.
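For intuition only, here is a toy per-block motion estimator (exhaustive block matching with a SAD cost). This is a hypothetical stand-in, not the optimized dense optical flow kernel the datasheet usecase runs on the device, but it shows the kind of per-region motion vectors that can be extracted from consecutive frames:

```python
import numpy as np

def block_motion(prev, curr, block=8, search=4):
    """Estimate per-block motion via exhaustive block matching (SAD cost).

    Returns an (H//block, W//block, 2) array; each entry is the (dy, dx)
    offset from the current block to its best match in the previous frame.
    """
    H, W = prev.shape
    mv = np.zeros((H // block, W // block, 2), dtype=np.int32)
    for by in range(H // block):
        for bx in range(W // block):
            y0, x0 = by * block, bx * block
            ref = curr[y0:y0 + block, x0:x0 + block].astype(np.int32)
            best = (1 << 60, 0, 0)          # (SAD, dy, dx)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue            # candidate window out of bounds
                    cand = prev[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(ref - cand).sum()
                    if sad < best[0]:
                        best = (sad, dy, dx)
            mv[by, bx] = best[1:]
    return mv

# Usage: a bright square moves by (+2, +3) pixels between frames,
# so its block matches the previous frame at offset (-2, -3).
prev = np.zeros((16, 16), np.uint8); prev[8:12, 7:11] = 255
curr = np.zeros((16, 16), np.uint8); curr[10:14, 10:14] = 255
print(block_motion(prev, curr)[1, 1])  # [-2 -3]
```

A real deployment would use the device's optimized dense optical flow rather than this O(blocks x search^2) Python loop, which is far too slow for 4-channel video at run-time.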

    I am not sure about the FPS of optical flow on TDA3x; I think it was around 5-8 FPS.

    Regards, Sujith

  • Let us know if you need more information on this.

    Regards

    Shashank