Hi,
I have a question following up on this thread:
e2e.ti.com/support/arm/automotive_processors/f/1021/t/685804
I have two OD (object detection) models: one is sparse and one is non-sparse.
The sparse model should run faster on the EVE core.
From the use-case log, the sparse model's time is:
[IPU1-0] 99.542640 s: Local Link Latency : Avg = 150382 us, Min = 149820 us, Max = 151101 us,
[IPU1-0] 99.543158 s: Source to Link Latency : Avg = 164672 us, Min = 163698 us, Max = 166260 us,
and the non-sparse model's time is:
[IPU1-0] 132.031720 s: Local Link Latency : Avg = 263246 us, Min = 262582 us, Max = 264137 us,
[IPU1-0] 132.032025 s: Source to Link Latency : Avg = 305553 us, Min = 276337 us, Max = 376929 us,
We can see a clear difference in latency, but the FPS is almost the same.
I think the latency is the time until we can get the first output value.
Am I right? How can the FPS be almost the same?
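To make my question clearer, here is a rough sketch of how I currently picture a buffered, pipelined use-case. This is not Vision SDK code; all the stage times and the number of frames each stage can handle in parallel are made-up values, just for illustration. The idea is that FPS is set by the slowest effective stage, while one frame's latency is roughly the sum of all the stage times it passes through, so two models with different latencies could still show almost the same FPS. Please correct me if this picture is wrong.

/*
 * Rough sketch (not Vision SDK code) of how I picture a buffered,
 * pipelined use-case. Every value below is made up for illustration:
 * FPS is limited by the slowest *effective* stage (stage time divided
 * by how many frames that stage can work on at once), while the
 * latency of a single frame is roughly the sum of the raw stage times.
 */
#include <stdio.h>

typedef struct {
    double time_us;   /* time one frame spends in this stage            */
    int    parallel;  /* frames this stage can process concurrently     */
} Stage;

static double fps_of(const Stage *s, int n)
{
    double slowest = 0.0;
    for (int i = 0; i < n; i++) {
        double effective = s[i].time_us / s[i].parallel;
        if (effective > slowest)
            slowest = effective;      /* bottleneck stage sets the rate */
    }
    return 1e6 / slowest;
}

static double latency_us_of(const Stage *s, int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += s[i].time_us;        /* one frame still passes through every stage */
    return total;
}

int main(void)
{
    /* Hypothetical pipelines: capture -> algorithm -> display.
     * The two algorithm stages have different per-frame times, but with
     * enough frames in flight neither becomes the bottleneck, so both
     * pipelines end up capped by the 30 fps source. */
    Stage fast_model[] = { {33333.0, 1}, { 60000.0, 2}, {16667.0, 1} };
    Stage slow_model[] = { {33333.0, 1}, {120000.0, 4}, {16667.0, 1} };
    int n = 3;

    printf("fast model: latency ~%.0f us, fps ~%.1f\n",
           latency_us_of(fast_model, n), fps_of(fast_model, n));
    printf("slow model: latency ~%.0f us, fps ~%.1f\n",
           latency_us_of(slow_model, n), fps_of(slow_model, n));
    return 0;
}

With these made-up numbers both pipelines report about 30 fps even though their latencies differ, which is similar to what I think I am seeing in my log.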
The full use-case log is below.
Thank you.
Best Regards,
Eric Lai