Hello friends,
We are using DVRRDK_03.00_ on the DM8168, and we have implemented our own motion detection algorithm in the SCD link.
In our tests, there is a 600~800 us difference in processing time per frame between TI's algorithm and ours.
We tried to follow the optimization steps in "Introduction to TMS320C6000 DSP Optimization.pdf" to improve our performance, but we found no obvious difference in processing time.
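For reference, below is a minimal sketch of the kind of compiler hints that document recommends (restrict pointers, MUST_ITERATE), applied to an illustrative frame-difference loop. The function, buffer names, and loop are hypothetical and only stand in for our actual SCD code:

#include <stdint.h>

/* Illustrative only: a simple absolute frame-difference loop with the
 * C6000 compiler hints suggested in the optimization guide. */
void frame_diff(const uint8_t * restrict cur,   /* current frame   */
                const uint8_t * restrict prev,  /* previous frame  */
                uint8_t * restrict diff,        /* output          */
                int numPixels)
{
    int i;

    /* Tell the compiler the trip count is a non-zero multiple of 8
     * so it can unroll and software-pipeline the loop. */
    #pragma MUST_ITERATE(8, , 8)
    for (i = 0; i < numPixels; i++)
    {
        int d = cur[i] - prev[i];
        diff[i] = (uint8_t)(d < 0 ? -d : d);
    }
}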
Do you have any suggestions on what else we could try?