Hello,
I have the DM365 IPNC MT5 application source code.
From the code I found that the processing sequence is as follows: the video capture thread gets the video into a buffer and then passes it to different threads such as FD, VS, encode and display.
Buffers are passed in the form of queues. All of the above processes run in separate threads. The video capture thread just copies the buffer and puts it into each process's queue.
After that, each individual thread starts processing. Is this right?
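My guess at the queue mechanics is roughly the following (a minimal pthreads sketch with my own names, not taken from the IPNC code; if the put blocks when a queue is full, that would also explain the capture thread slowing down):

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>

#define QLEN 4
#define NCONSUMERS 2              /* e.g. FD and encode */

/* Tiny ring-buffer queue; a blocking put is what would couple the
   capture thread to a slow consumer. */
typedef struct {
    void *slot[QLEN];
    int head, tail, count;
    pthread_mutex_t mtx;
    pthread_cond_t  not_full, not_empty;
} frame_queue_t;

void fq_init(frame_queue_t *q) {
    memset(q, 0, sizeof *q);
    pthread_mutex_init(&q->mtx, NULL);
    pthread_cond_init(&q->not_full, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

void fq_put(frame_queue_t *q, void *frame) {
    pthread_mutex_lock(&q->mtx);
    while (q->count == QLEN)                       /* queue full: */
        pthread_cond_wait(&q->not_full, &q->mtx);  /* capture stalls here */
    q->slot[q->tail] = frame;
    q->tail = (q->tail + 1) % QLEN;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->mtx);
}

void *fq_get(frame_queue_t *q) {
    pthread_mutex_lock(&q->mtx);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->mtx);
    void *f = q->slot[q->head];
    q->head = (q->head + 1) % QLEN;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->mtx);
    return f;
}

/* Capture fan-out: hand the same frame to every consumer queue.
   One slow consumer then delays the puts to all the others. */
void capture_fanout(frame_queue_t qs[], int n, void *frame) {
    for (int i = 0; i < n; i++)
        fq_put(&qs[i], frame);
}
```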
Now I have inserted another thread, similar to the FD thread, which runs our own algorithm. The algorithm runs on the ARM itself and takes a long time to process each frame, so the framework FPS drops. In the profiling output, the average time per frame for the capture thread keeps increasing as our processing time increases.
Module  | Avg Time/Frame (ms) | Frame-rate (fps) | Total Time (ms) | Total Frames
--------|---------------------|------------------|-----------------|-------------
CAPTURE | 127.56              | 7.84             | 114933          | 901
ENCODE0 | 34.02               | 29.40            | 30648           | 901
STREAM  | 1.02                | 977.22           | 922             | 901
AEWB    | 0.32                | 3099.37          | 1107            | 3431
IV.D    | 105.19              | 9.51             | 93097           | 885
IV.D is the thread I added. Since the capture thread just copies the buffer into the queues, its processing time should not increase as the IV.D processing time increases.
Please let me know if my understanding is incorrect, and how I can manage this so that the framework stays at 30 FPS while my algorithm runs at a lower FPS. Whenever my algorithm finishes processing the current frame, it should fetch the latest available frame and process that.
Please let me know how I can implement this.
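The scheme I have in mind is something like a single-slot "latest frame" mailbox (a minimal pthreads sketch under my own names, not from the IPNC code): the capture thread overwrites the slot without ever blocking, and my algorithm thread picks up whatever is newest when it is ready.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Single-slot mailbox: capture overwrites, never waits; the slow
   algorithm thread always sees the most recent frame. */
typedef struct {
    void *latest;             /* most recently captured frame, or NULL */
    pthread_mutex_t mtx;
    pthread_cond_t  fresh;
} latest_frame_t;

void lf_init(latest_frame_t *m) {
    m->latest = NULL;
    pthread_mutex_init(&m->mtx, NULL);
    pthread_cond_init(&m->fresh, NULL);
}

/* Called from the capture thread: O(1) and never blocks, so the
   30 fps capture loop is unaffected. Returns the frame that was
   overwritten (if any) so its buffer can be recycled. */
void *lf_publish(latest_frame_t *m, void *frame) {
    pthread_mutex_lock(&m->mtx);
    void *dropped = m->latest;
    m->latest = frame;
    pthread_cond_signal(&m->fresh);
    pthread_mutex_unlock(&m->mtx);
    return dropped;
}

/* Called from the algorithm thread after it finishes a frame:
   waits only if nothing new has arrived since the last take. */
void *lf_take(latest_frame_t *m) {
    pthread_mutex_lock(&m->mtx);
    while (m->latest == NULL)
        pthread_cond_wait(&m->fresh, &m->mtx);
    void *f = m->latest;
    m->latest = NULL;
    pthread_mutex_unlock(&m->mtx);
    return f;
}
```

With this, the IV.D thread would no longer have a deep queue that can fill up and push back on capture; intermediate frames are simply dropped.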
Thanks
Harshada