
TDA2EXEVM: TIDL_OD usecase performance

Part Number: TDA2EXEVM


Hi,

I have a follow-up question from this thread:

e2e.ti.com/support/arm/automotive_processors/f/1021/t/685804

I have two OD models: one is sparse, the other is non-sparse.

The sparse model should run faster on the EVE core.

From the use-case log:

The sparse model time is:
[IPU1-0] 99.542640 s: Local Link Latency : Avg = 150382 us, Min = 149820 us, Max = 151101 us,
[IPU1-0] 99.543158 s: Source to Link Latency : Avg = 164672 us, Min = 163698 us, Max = 166260 us,

and the non-sparse model time is:
[IPU1-0] 132.031720 s: Local Link Latency : Avg = 263246 us, Min = 262582 us, Max = 264137 us,
[IPU1-0] 132.032025 s: Source to Link Latency : Avg = 305553 us, Min = 276337 us, Max = 376929 us, 

We can see the difference in latency, but the FPS is almost the same.

I think the latency is the time until we can get the first output value.

Am I right?

How can the FPS be almost the same?
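
My guess is that the links in the use-case are pipelined, so per-frame latency and FPS are decoupled: latency is the sum of the per-link times one frame passes through, while FPS is limited by the slowest single element (source or slowest link), because the links work on different frames in parallel. Below is a minimal sketch of that idea; the stage times and the 66 ms source period are made up for illustration and are not the actual TIDL_OD link structure.

```python
# Minimal sketch of a pipelined use-case (made-up stage times, not the
# actual TIDL_OD links): each link works on a different frame at the same
# time, so latency and FPS are decoupled.

def usecase_stats(source_period_ms, stage_times_ms):
    # Latency: one frame must pass through every stage in turn.
    latency_ms = sum(stage_times_ms)
    # Throughput: set by the slowest element (source or slowest link),
    # because all links run concurrently on different frames.
    period_ms = max(source_period_ms, max(stage_times_ms))
    return latency_ms, 1000.0 / period_ms

# Hypothetical split of the network across pipelined EVE stages.
sparse     = usecase_stats(66, [30, 40, 40, 40])   # ~150 ms end-to-end
non_sparse = usecase_stats(66, [30, 65, 65, 65])   # ~225 ms end-to-end

print("sparse    : latency ~%d ms, fps ~%.1f" % sparse)
print("non-sparse: latency ~%d ms, fps ~%.1f" % non_sparse)
# Both print ~15.2 fps because no single stage exceeds the 66 ms source
# period, even though the end-to-end latency differs a lot.
```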

Below is the use-case log:

Thank you.

2845.Log_file.zip

Best Regards,

Eric Lai