Hi,
The TSU component of MCSDK video that is responsible for YUV resize has the following process call:
/**
* @brief Processing API
*
* @param[in] tsuInst Algorithm instance handle.
* @param[in] inputYUV Handle to the input YUV image
* @param[in] inputYUVWidth Width of input YUV Image
* @param[in] inputYUVHeight Height of input YUV Image
* @param[out] outputYUV Handle to the output YUV image
*
* @remarks tsuProcess() performs actual scaling when requested.
*
*/
tint tsuProcess(void *tsuInst, tword *inputYUV, tuint inputYUVWidth, tuint inputYUVHeight, tword *outputYUV);
Some decoders (like the H.264 HP decoder for C66x) generate their YUV output not as a single memory chunk, but split into separate Y, U, and V memory chunks.
The current resize component requires the YUV data to be a single memory chunk. Thus, when I need to resize the H.264 HP decoder output, I have to perform an extra memory concatenation step that consumes resources (EDMA controller, DDR3 controller, etc.).
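For reference, a rough sketch of that extra step, assuming an I420-style layout (contiguous Y plane followed by quarter-size U and V planes) and one sample per tword; plain memcpy() stands in here for the EDMA transfers, and resizeSplitYuv() and the buffer names are made up:

#include <string.h>  /* memcpy(), standing in for the EDMA copies */

/* tint, tuint, tword and tsuProcess() are assumed to come from the TSU headers. */

/* Hypothetical glue code: gather the decoder's separate Y/U/V planes into one
 * contiguous buffer so the current tsuProcess() can consume it. */
tint resizeSplitYuv(void *tsuInst,
                    tword *decY, tword *decU, tword *decV,  /* decoder output planes     */
                    tuint width, tuint height,
                    tword *scratchYUV,                      /* contiguous staging buffer */
                    tword *outputYUV)
{
    tuint lumaSize   = width * height;  /* samples in the Y plane    */
    tuint chromaSize = lumaSize / 4;    /* 4:2:0 subsampling assumed */

    /* The extra concatenation pass that costs EDMA/DDR3 bandwidth today. */
    memcpy(scratchYUV,                         decY, lumaSize   * sizeof(tword));
    memcpy(scratchYUV + lumaSize,              decU, chromaSize * sizeof(tword));
    memcpy(scratchYUV + lumaSize + chromaSize, decV, chromaSize * sizeof(tword));

    return tsuProcess(tsuInst, scratchYUV, width, height, outputYUV);
}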
Would it be reasonable to make the TSU able to work with split YUV buffers, so that the extra resource usage for concatenation can be avoided?
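For illustration only, the process call could hypothetically be extended along these lines (tsuProcessPlanar() is a made-up name, not an existing MCSDK API):

/**
 * @brief Hypothetical processing API for split (planar) YUV input
 *
 * @param[in] tsuInst Algorithm instance handle.
 * @param[in] inputY Handle to the input Y plane
 * @param[in] inputU Handle to the input U plane
 * @param[in] inputV Handle to the input V plane
 * @param[in] inputYUVWidth Width of input YUV Image
 * @param[in] inputYUVHeight Height of input YUV Image
 * @param[out] outputYUV Handle to the output YUV image
 *
 * @remarks Would avoid the separate concatenation pass described above.
 *
 */
tint tsuProcessPlanar(void *tsuInst, tword *inputY, tword *inputU, tword *inputV, tuint inputYUVWidth, tuint inputYUVHeight, tword *outputYUV);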
Regards,
Andrey Lisnevich