MCSDK_VIDEO: add my own codec

Other Parts Discussed in Thread: TMS320C6678

Hi, there are two problems confusing me.

1. I want to add my own codec (AVS) into the MCSDK Video framework, so I am referencing this wiki page (http://processors.wiki.ti.com/index.php/MCSDK_VIDEO_2.1_CODEC_TEST_FW_User_Guide). There are some steps that I don't know how to complete.

[Screenshot of the wiki's codec integration steps, with two items marked in red]

As shown in the screenshot above:

the red "1", I don't know how add my codec code to the existing code, and get the Codec library with public API files.

the red"2", I don't know what the meaning of right hand side. 

Can you explain it to me, and tell me how to use "make or CCS to make a library"?

2. I am also referencing this thread (http://e2e.ti.com/support/dsp/c6000_multi-core_dsps/f/639/t/187260.aspx?pi70912=1). I want the EVM6678L to run a single channel with an AVS encoder and output the bitstream via RTP/RTCP. My final goal is AVS encoding of 1080p at 25 fps, and I intend to use 8 cores for the encoder. First of all, I want to encode CIF in real time. For now, I want to build a project that does not use MinGW, so I downloaded 1881.sv04.zip from the thread above, but after adding the build variables I get compile errors, like below:

example, the "std.h" i have add the path in the "include option",but it's still error. shown as:

I use CCS 5.2, and I have modified the paths as you described in the readme.txt. I have also modified the source code, for example changing #include "ti/csl/csl_cacheAux.h" to #include "C:/ti/pdk_C6678_1_1_2_6/packages/ti/csl/csl_cacheAux.h".

So I added the variables under Linked Resources:

3. If I want to recreate this project, what should I do?

Please help me.

thank you,

lei

  • Hi Lei,

What version of MCSDK Video are you using as the baseline? Can you also let us know more details of your application? From your post, it looks like you are trying to do 1 channel of 1080p25 AVS encoding using the 8 cores of the C6678, with output via RTP. How about the input? As for your AVS codec, I guess you have built the AVS codec lib and it has a few public API header files. Can you please confirm this? Also, is AVS an XDM-compliant codec? If so, what is the XDM version?

Are you able to make sv04 following the instructions at http://processors.wiki.ti.com/index.php/MCSDK_VIDEO_2.1_Windows_Getting_Started_Guide? Is it fine to use MinGW to build your application, or is a CCS project the only option for you?

    To answer your questions:

    The red "1" "Codec algorithm source code can be compiled either via make or CCS to make a library.":  I don't know how add my codec code to the existing code, and get the Codec library with public API files.

[Hongmei]: This just indicates that your codec lib should be available, and that the codec has public API header files for integration into sv04.

    The red"2", I don't know what the meaning of right hand side. 

[Hongmei]: This is the first step of integrating a new codec into sv04. It sets the environment variables for your codec so that it can be integrated into sv04. For example, if your AVS codec lib is located at C:/codec/avs_01_00_00_01_ELF/lib/avs.lib and its header file is at C:/codec/avs_01_00_00_01_ELF/iavs.h, you can add the environment variables in setupenvMsys.sh as below:

    MYVIDEOBASE="/c/codec"                                      # base folder holding your codec package

    VIDEO_AVS_ENC_VERSION="avs_01_00_00_01_ELF"
    VIDEO_AVS_ENC_RUNPATH="$MYVIDEOBASE/$VIDEO_AVS_ENC_VERSION"
    make_shortname "VIDEO_AVS_ENC_RUNPATH"
    VIDEO_AVS_ENC_SRCPATH="$VIDEO_AVS_ENC_RUNPATH"
    check_exist "VIDEO_AVS_ENC_SRCPATH" "/iavs.h"               # verify the codec is present at the location
    COPY_TOOLS_LIST="$COPY_TOOLS_LIST VIDEO_AVS_ENC"
    ...
    export VIDEO_AVS_ENC_DOS="`echo $VIDEO_AVS_ENC_RUNPATH | $make_dos_sed_cmd`/packages"   # DOS-style path used by make/linker

    The above lines set the environment variables for the AVS codec, check whether the codec is available at the specified location, and export "VIDEO_AVS_ENC_DOS", which is later used by the makefiles and the linker command file to locate the lib and public header file(s) of the AVS codec.

    I want to build a project without using MinGW, so I downloaded 1881.sv04.zip from the above thread, but after adding the "build variable" I get compile errors.

    [Hongmei]: 1881.sv04.zip was created based on MCSDK Video 2.0. If you are using MCSDK Video 2.1.0.8, compilation errors are expected. This is because the CCS project links source/header files from the video package, and some of the files have been updated from 2.0 to 2.1. Please let us know if you have to use CCS project for sv04.

    I want the EVM6678L to do a single channel with an AVS encoder and output the bitstream via RTP/RTCP

    [Hongmei]: Yes, RTP support can be added to sv04 as discussed in http://e2e.ti.com/support/dsp/c6000_multi-core_dsps/f/639/t/187260.aspx?pi74271=2. I would suggest making this the next step, after the AVS codec is integrated into sv04.

    Overall, we recommend the following steps for the AVS integration and RTP enhancement:

    Step 1: make sv04 from MinGW and sanity check one codec (such as H264HP encoder) via TFTP data IO

    Step 2: Add AVS in sv04 and make sv04 in MinGW, then check encoded output via TFTP data IO

    Step 3: Add RTP support to send output via RTP

    Step 4: If CCS project is a must do, create CCS project for sv04

    Thanks,

    Hongmei

  • Hi Hongmei,

    First of all, I am grateful for your patient reply. Thank you very much.

    I use CCS5.2/MCSDK_2_01_02_06/MCSDK_Video_2_1_0_8; this is my entire software platform.

    1. My project's information:

    I have noticed the problem you mentioned. I modified the RTSC settings as shown below:

    The configuration of the project's properties is shown as follows:

    Build include options:

    Linker File search Path:

    2. The eventual target:

    My application is to achieve real-time AVS encoding of 1 channel of 1080p25. The input data (.yuv) comes in via TFTP, and the output data goes out the same way. I hope to distribute the C6678's 8 cores as follows: 1 core as the main controller, and the other 7 cores to run the codec.

    3. Is AVS an XDM-compliant codec?

    For this question, I don't know how to confirm whether it is compliant. I also still don't know how to generate the ".lib" file.

    4. I hope to build a TFTP framework to test the AVS codec based on sv04.out.

    thank you,

    lei



  • Hi Lei,

    Thanks for all the details. 

    As for making a CCS project for sv04, your changes look good. However, more changes are needed, since some source/header files of sv04 itself have been modified in MCSDK Video. Also, please note that sv04 builds against C6678 PDK 1.1.2.5 (instead of 1.1.2.6 as shown in your project). We still recommend using MinGW to build the sv04 baseline as your first step. Please let us know if there are any issues in building the sv04 baseline with MinGW. We can create an sv04 CCS project after adding AVS and RTP, if you have to use a CCS project for your application.

    You mentioned that your goal is real-time 1080p25 AVS encoding with TFTP dataIO. However, using TFTP to bring in 1080p25 YUV data cannot be done in real time: 4:2:0 YUV at 1080p25 is 1920 x 1080 x 1.5 bytes x 25 fps, roughly 78 MB/s (over 600 Mbit/s), far more than TFTP can sustain. For the encoded output, RTP should be fine to achieve real-time 1080p25 AVS.

    Your AVS codec does not have to be XDM compliant. As described in http://processors.wiki.ti.com/index.php/MCSDK_VIDEO_2.1_CODEC_TEST_FW_User_Guide#Integrating_new_Codec_into_the_build, if it is compliant with one of the XDM versions supported in MCSDK Video sv04, you can reuse the existing XDM wrapper code and speed up the integration. If it is not, you can write your own wrapper for integrating AVS.
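
    For reference, an xDM-compliant video encoder is one the framework can drive purely through the generic xDM entry points. A minimal sketch, assuming xDM 1.x and a hypothetical AVS function table (types from ti/xdais/dm/ividenc1.h):

    #include <ti/xdais/dm/ividenc1.h>

    /* A hypothetical xDM-compliant AVS encoder exports a function table: */
    extern IVIDENC1_Fxns AVSENC_TI_IVIDENC1;

    /* Compliance means the framework never needs AVS internals; it only calls: */
    /*   fxns->process(handle, &inBufs, &outBufs, &inArgs, &outArgs);           */
    /*   fxns->control(handle, XDM_SETPARAMS, &dynParams, &status);             */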

    [Your Question] I still don't know how to generate the ".lib" file.

    [Hongmei]: This is just the library built from your AVS codec: you build your AVS codec source code to get the lib. It can be *.lib or some other name you prefer, such as *.le66, *.ae66, etc. Then it can be linked into sv04 during the integration.
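
    If the codec is not XDM compliant, the public API header (iavs.h in the earlier example) just needs to declare whatever entry points your wrapper will call. A minimal hypothetical sketch (none of these names come from an actual release):

    /* iavs.h -- hypothetical public API of the AVS encoder library (avs.lib) */
    #ifndef IAVS_H
    #define IAVS_H

    typedef void *AVS_Handle;                    /* opaque encoder instance */

    typedef struct {
        int width;                               /* frame width in pixels   */
        int height;                              /* frame height in pixels  */
        int frameRate;                           /* frames per second       */
        int bitRate;                             /* target bit rate in bps  */
    } AVS_Params;

    AVS_Handle AVS_create(const AVS_Params *params);
    int        AVS_encodeFrame(AVS_Handle h, const unsigned char *yuvIn,
                               unsigned char *bitsOut);   /* returns bytes written */
    void       AVS_delete(AVS_Handle h);

    #endif /* IAVS_H */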

    Thanks,

    Hongmei

  • Hi Hongmei,

    1. If I want to create an RTP framework, how can I do it? Do you have an example? The final platform is not the 6678 EVM, which is just a test platform for now, but the chip, TMS320C6678, is the final hardware. If I can create a framework, it will be convenient for me to migrate to the other platform. For now, I can test the AVS algorithm on the 6678 EVM. By the way, I hope you can help me to complete the framework.

    2. Can sv04.out bring in CIF YUV data at 25 fps in real time? If not, I could load the YUV data into DDR3 first and then read it into SRAM. How would I do that?

    thank you,

    lei 

  • Hi Lei,

    Sure we can help with creating the framework for your application.  You mentioned that you can test the AVS algorithm on the 6678EVM. How did you do the test? What data IO was used?

    With your verified AVS algorithm, we can then plug it into the sv04 baseline from MCSDK Video, following the steps we discussed earlier:

    Step 1: make sv04 from MinGW and sanity check one codec (such as H264HP encoder) via TFTP data IO

    Step 2: Add AVS in sv04 and make sv04 in MinGW, then check encoded output via TFTP data IO

    Step 3: Add RTP support to send output via RTP

    Step 4: If CCS project is a must do, create CCS project for sv04

    Please let us know if there are problems with any of the above steps.

    As for real-time performance, it depends on the speed of the dataIO and the cycle performance of the codec algorithm. We do not recommend using TFTP as dataIO for real-time applications. In MCSDK Video sv04, there is another dataIO path via PCIe which can achieve real-time dataIO. Is that a possible option for your application? In one E2E post (http://e2e.ti.com/support/dsp/c6000_multi-core_dsps/f/639/t/187260.aspx?pi74271=2), we once provided an example of using RTP for real-time encoded data output. When you worked on your last platform, how was dataIO implemented? Was it real-time? If so, we may be able to re-use what you had earlier.

    Thanks,

    Hongmei

  • Hi Hongmei,

    mcsdk_video_2_1_0_8 contains many codec examples, such as h264bpmpdec, h264hpdec, h264hpenc, and so on.

    I have already completed the h264bpmpdec and h264hpdec examples successfully. But when I run h264hpenc, a problem comes up,

    as shown in the following:

    There is no error, but the application does not proceed either.

    Another question:

    If I want to achieve real-time output, must I add RTP? Hasn't sv04 already used RTP? Why should I add RTP again?

    Is TFTP suitable as data IO when I do CIF at 25 fps? Or, if I download the test YUV file to DDR or flash first, can it achieve real time?

    I only have the 6678 EVM, which is not equipped with the TMDXEVMPCI card, so for now I can't use PCIe.

    thank you,

    lei

  • Hi Lei,

    The H264HP encoder packaged in MCSDK Video sv04 (TFTP) is configured for 2-core encoding, as specified in dsp\siu\vct\testVecs\h264hpenc\config\codecParams.cfg. So, please load the .out on cores 0 and 1, and then run the test.

    It's good that you can verify codecs with the packaged sv04.out. In order to add your own codec, please try rebuilding sv04 first without any changes. Then, verify the packaged codecs with your rebuilt sv04. If that works fine, you can go on to add your codec.

    RTP is one way to achieve real-time output. If you cannot use PCIe, we recommend RTP for the data output. Currently sv04 uses TFTP for data input and output, but not RTP. We once provided an RTP patch to address one E2E query (http://e2e.ti.com/support/dsp/c6000_multi-core_dsps/f/639/t/187260.aspx?pi74271=2). This patch is not part of the MCSDK Video 2.1 release.
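
    For reference, RTP output essentially means prefixing each encoded chunk with a 12-byte RTP header (RFC 3550) before sending it over UDP. A minimal sketch of building that header (illustrative only, not the actual patch code; payload type 96 and the field values are arbitrary):

    void rtpWrapExample(unsigned int timestamp, unsigned int ssrc, int marker)
    {
        static unsigned short seqNum = 0;  /* incremented once per packet     */
        unsigned char hdr[12];

        hdr[0]  = 0x80;                    /* V=2, P=0, X=0, CC=0             */
        hdr[1]  = (marker << 7) | 96;      /* M bit + dynamic payload type 96 */
        hdr[2]  = seqNum >> 8;             /* 16-bit sequence number          */
        hdr[3]  = seqNum & 0xFF;
        hdr[4]  = timestamp >> 24;         /* 32-bit timestamp                */
        hdr[5]  = timestamp >> 16;
        hdr[6]  = timestamp >> 8;
        hdr[7]  = timestamp & 0xFF;
        hdr[8]  = ssrc >> 24;              /* 32-bit SSRC stream identifier   */
        hdr[9]  = ssrc >> 16;
        hdr[10] = ssrc >> 8;
        hdr[11] = ssrc & 0xFF;
        seqNum++;

        /* hdr[] is then sent in front of the encoded payload over UDP. */
    }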

    As for the data input, it is fine to use TFTP during the integration and testing stage. However, in order to achieve real-time performance, we do not recommend TFTP for your final product. As for loading the input YUV to DDR, usually this is done in integration/testing to save the time of getting input. Sure, this would be real-time, but DDR has limited space. In your final product, are you always going to use a short clip, or do you have some way to feed the input into DDR?

    In summary, it would be good to differentiate between integration/testing and the final product for the dataIO. For integrating and testing your own codec, TFTP is fine, and you can also pre-load the input YUV to DDR. On the other hand, for your final product, which needs to be real-time, the dataIO must be real-time also. RTP can be used for the output. For the data input, TFTP is not recommended; depending on how your system is designed, you can consider other ways to achieve real-time data input, such as SRIO.

    Thanks,

    Hongmei

  • Hi Hongmei,

    Thank you for the answer to the h264hpenc problem. I will try it later.

    1. So loading the YUV data by TFTP cannot achieve real-time performance? Can you tell me how to use SRIO?

    2. Which version of MCSDK Video is the RTP patch you posted based on? As above, I have modified the PDK, SYS/BIOS, and so on, but there are still some terrible problems. I need help.

    thanks,

    lei

  • Hi Lei,

    Yes, loading YUV via TFTP is not suitable for real-time applications, especially since your final goal is 1080p. For SRIO, please refer to the BIOS MCSDK (http://software-dl.ti.com/sdoemb/sdoemb_public_sw/bios_mcsdk/02_01_02_05/index_FDS.html): PDK-C6678 v1.1.2.5 has the SRIO LLD with examples.

    The RTP patch we posted earlier is based on MCSDK Video 2.0.0.10. As we discussed earlier, in order to add RTP to MCSDK Video 2.1.0.8, code changes are needed besides updating the tool versions. We can help with this after you add your own codec and verify it with TFTP.

    Thanks,

    Hongmei

  • Hi Hongmei,

    I want to know where the YUV data loaded via TFTP is placed: is it in SRAM or some other memory?

    To test the AVS algorithm, I just need to download test.yuv to the EVM6678 first, and then run the codec code to check whether it can achieve real time. In this situation, is TFTP suitable for my needs?

    How can I learn the meaning of the parameters in "..\h264hpenc\config\codecParams.cfg"? Although the front of the .cfg file mentions "See siuVctParse.h for a list of supported ParameterNames", I still don't understand the meanings after checking siuVctParse.h. I can work out most of the parameter meanings from the comments, but I am worried about how to write a new .cfg for my AVS. For example, why distinguish between "static_parameter" and "dynamic_parameter"?

  • Hi Lei,

    TFTP loads input data to DDR.

    If you would just like to check whether your codec can achieve real-time performance, TFTP is suitable for that purpose. For example, you can read TSCL around your encoding call to get the number of cycles (TSCL_End - TSCL_Begin) spent on the encoding:

    #include <c6x.h>        /* exposes the TSCL time-stamp counter register */

    /* ... inside your test function: */
    unsigned int TSCL_Begin, TSCL_End;

    TSCL = 0;               /* any write to TSCL starts the free-running counter */
    TSCL_Begin = TSCL;
    /* AVS encoding call */
    TSCL_End = TSCL;        /* cycles spent = TSCL_End - TSCL_Begin */

    codecParams.cfg contains the name of the codec, the number of cores and the core IDs for the encoding/decoding, as well as the static/dynamic codec parameters. For TI C6678 video codecs, each codec has a set of static parameters which are used at codec create time, and a set of dynamic parameters which are used to (re)configure the codec before the process call. The definition of the static/dynamic parameters is codec specific, and it can be found in the corresponding codec header file, such as ti\Codecs\C66x_h264hpvenc_01_00_00_01_ELF\packages\ti\sdo\codecs\h264hpvenc\ih264hpvenc.h for the H264HP encoder.

    sv04 has integrated a number of TI C6678 codecs, and therefore it is preferable to use common variable names (static_param*, dynamic_param*) in codecParams.cfg for all the codecs. At the same time, comments are added to explain the codec parameters for individual codecs. Exposing the codec parameters in codecParams.cfg helps with integrating and testing codecs. Using the H264HP encoder as an example: when the target bit rate needs to be changed, you can simply modify targetBitRate in codecParams.cfg and re-run the test.
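
    As an illustration of the static/dynamic split, here is a minimal sketch in the style of an xDM 1.x encoder (types from ti/xdais/dm/ividenc1.h; the values are arbitrary examples):

    #include <ti/xdais/dm/ividenc1.h>

    IVIDENC1_Params        createParams;
    IVIDENC1_DynamicParams dynParams;

    void configureEncoderExample(void)
    {
        /* Static parameters: fixed when the codec instance is created. */
        createParams.size      = sizeof(createParams);
        createParams.maxWidth  = 1920;
        createParams.maxHeight = 1088;

        /* Dynamic parameters: may be changed between process calls. */
        dynParams.size            = sizeof(dynParams);
        dynParams.targetBitRate   = 4000000;   /* 4 Mbps                */
        dynParams.targetFrameRate = 25000;     /* 25 fps, in fps x 1000 */

        /* Apply the dynamic parameters to a created instance:           */
        /*   fxns->control(handle, XDM_SETPARAMS, &dynParams, &status);  */
    }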

    For your AVS encoder, if there are similar codec configuration parameters you would like to expose, you can do it the same way. It is also fine to hard-code them in the source, but that requires recompiling the build whenever you want to change the codec configuration parameters.

    Thanks,

    Hongmei

  • Hi Hongmei,

    Thank you for your reply.

    Can you tell me which file implements "TFTP loads input data to DDR"? I want to look at the source code. Does TFTP load the input data to DDR, and then EDMA transports the data to SRAM for the DSP to process? I need to optimize AVS on the 6678, including how to distribute the work over the two cores. Can TI provide the H264HP flow to me?

    Since I need to run the codec on 7 cores with 1 core controlling the other 7, I want to take it as a reference. Do you have any suggestions for how to distribute the 8 cores?

    thanks,

    lei

  • Hi Lei,

    For loading data via TFTP, please look at the function siuVct_ReadInputData() in mcsdk_video_2_1_0_8\dsp\siu\vct\src\siuVctFileIO.c. In the MCSDK Video framework, the input data is loaded to DDR, and then the base pointer is passed to the codec. Inside the codec, EDMA is used to copy data from DDR to local L2 when needed. For the H264HP encoder, please refer to its user guide for details.
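
    As a simplified illustration of the DDR-to-L2 double buffering a codec typically does (dmaCopy()/dmaWait()/processBlock() are hypothetical stand-ins for the codec's actual EDMA calls and processing; the block size is arbitrary):

    #define BLK_BYTES (64 * 1024)
    static unsigned char l2Ping[BLK_BYTES], l2Pong[BLK_BYTES];  /* placed in local L2 */

    void consumeInput(const unsigned char *ddrIn, int numBlocks)
    {
        int n;
        dmaCopy(l2Ping, ddrIn, BLK_BYTES);                      /* prime block 0      */
        dmaWait();
        for (n = 0; n < numBlocks; n++) {
            unsigned char *cur  = (n & 1) ? l2Pong : l2Ping;    /* block being used   */
            unsigned char *next = (n & 1) ? l2Ping : l2Pong;    /* block being filled */
            if (n + 1 < numBlocks)                              /* prefetch block n+1 */
                dmaCopy(next, ddrIn + (n + 1) * BLK_BYTES, BLK_BYTES);
            processBlock(cur);                                  /* work on block n    */
            if (n + 1 < numBlocks)
                dmaWait();                                      /* prefetch complete  */
        }
    }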

    For TI's C6678 multi-core encoders available on the web, such as the H264HP encoder, the task partition between the cores is based on slice partition. Each frame is divided into N sub-pictures, with each of the N cores exclusively processing its own sub-picture. There is a single master core which does a few additional light tasks, such as padding at the end of slice processing, stitching the bit streams from the multiple cores, and SPS/PPS generation. Which cores participate in the encoding and which core is the master core are determined by the framework; a simplified sketch follows below.
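
    A simplified sketch of the slice-partition idea (illustrative only, not the actual H264HP code; encodeSlice(), stitchBitstreams(), and generateSpsPps() are hypothetical):

    void encodeFrameOnCore(int coreId, int numCores, int masterCore, int frameHeight)
    {
        int mbRows      = frameHeight / 16;   /* macroblock rows, e.g. 1088/16 = 68 */
        int rowsPerCore = mbRows / numCores;
        int firstRow    = coreId * rowsPerCore;
        int lastRow     = (coreId == numCores - 1) ? mbRows - 1
                                                   : firstRow + rowsPerCore - 1;

        encodeSlice(firstRow, lastRow);       /* each core encodes its sub-picture  */

        if (coreId == masterCore) {           /* light master-only tasks            */
            stitchBitstreams();               /* merge the per-core bit streams     */
            generateSpsPps();                 /* sequence/picture parameter sets    */
        }
    }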

    If possible, please share with us the status of your AVS encoding algorithm. Are you able to do AVS encoding on multiple cores now?

    Thanks,

    Hongmei

  • Hi Hongmei,

    1. Can you give me some suggestions on how to move the data via TFTP? The data is downloaded to DDR, then I should read the data via EDMA into on-chip memory, and after encoding, output the data via TFTP. My tutor requires me to do it like this, so I have to build a project. Or is there a similar project that I can modify to meet our needs?

    2. Several schools are doing this project together, and we are only responsible for the codec. However, sv04's codec interface is not the same as what our project needs, so I have to build a flow like the one I described above. Can sv04 be modified easily? sv04 has too many code files and I only know some of them; can you tell me which files I can use?

    3. Since the transcode example is a complete project, can you teach me how to modify it? Because my mcsdk_video/MCSDK/CCS versions are different, it reports many errors when I recompile it.

    thank you,

    lei

  • Hi Hongmei,

    1. is the "F:\ti\mcsdk_video_2_1_0_8\examples\transcode" example can use for my project? 

    I compiled this project, but it reports an error, like this:

    How can I eliminate this error?

    2. I rebuilt the sv04 which you uploaded to http://e2e.ti.com/support/dsp/c6000_multi-core_dsps/f/639/t/187260.aspx?pi70912=1. The errors are shown below:

    In summary, there are two projects, and they both report errors.

    The first one is the demo TI provided, but it still reports errors.

    For the second one, I can't find "ggCodecSwap.h" on my PC. As for the error "gmake: *** no rule ....", I don't even know what it means.

    thanks ,

    lei

  • Hi Lei,

    As for data transfer via TFTP, please look into the files mcsdk_video_2_1_0_8\dsp\siu\vct\src\siuVctTftp.c and mcsdk_video_2_1_0_8\dsp\siu\vct\src\siuVctFileIO.c. Using the data input as the example, there are two memcpys involved. The first one is in siuVctTftpGetReceiveIn(), which copies input from TFTP to siuVctFileReadBuffer. The second one is in siuVct_ReadInputData(), which copies data from siuVctFileReadBuffer to the codec input data buffer. Both siuVctFileReadBuffer and the codec input data buffer are in DDR. After that, inside the codec algorithm, EDMA is used to copy data from the codec input data buffer (DDR) to local L2. You mentioned that "I should read the data via EDMA to memory". Can you please clarify what you mean by "memory"? Is this something you would like to do inside your codec algorithm? If so, you should be able to use the sv04 framework as-is.
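
    Summarized as a comment, the input path is:

    /* TFTP --memcpy #1: siuVctTftpGetReceiveIn()--> siuVctFileReadBuffer (DDR)
     *      --memcpy #2: siuVct_ReadInputData()----> codec input buffer   (DDR)
     *      --EDMA (issued inside the codec)-------> local L2
     */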

    Your question 2: "due to Several schools do this project together, we are just responsible for codec. but the sv04's codec interface isn't same with our project needs. so i have to build a flow like what i said above.  whether we can modify sv04 easily. but sv04 is too much code file, i just know some of these, can you tell me which files i can use?"

    [Hongmei]: It is easy to modify sv04. Please look into mcsdk_video_2_1_0_8\dsp\siu\vct\siuVctRun.c, in which we have provided codec examples and also a memcpy example. siuVctRunMediaConfigReq() is for network setup and codec creation, while siuVctRunMediaProcessReq() is for processing frames (the memcpy example just copies the data from the input buffer to the output buffer). You can plug your own codec create and process code into these two functions, as sketched below.
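
    A rough sketch of the plug-in points (avsHandle/avsParams and the AVS_* calls are the hypothetical API from the iavs.h sketch earlier, not a real release):

    /* In siuVctRunMediaConfigReq(), create the codec instance once:  */
    /*     avsHandle = AVS_create(&avsParams);                        */

    /* In siuVctRunMediaProcessReq(), replace the memcpy example      */
    /*     memcpy(outBuf, inBuf, numBytes);                           */
    /* with the per-frame encode call:                                */
    /*     bytesOut = AVS_encodeFrame(avsHandle, inBuf, outBuf);      */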

    As for the transcode example project, it is for a single core only. You mentioned that your final goal is multi-core encoding, so the transcode example project may not be a good baseline for you. As for the compilation error, it looks like it cannot find qmss_type.h. This file should come from C:\ti\pdk_C6678_1_1_2_5\packages\ti\drv\qmss. Please check whether you have qmss_type.h under this folder. I remember you once had pdk_C6678_1_1_2_6; this version mismatch could be the reason for the compilation error.

    Thanks,

    Hongmei

  • Hi Hongmei,

    Maybe what I said was not clear, so there was a misunderstanding. I mentioned that "I should read the data via EDMA to memory"; the "memory" is L2.

    1. I have now downloaded the source code (sv04) from the other thread (http://e2e.ti.com/support/dsp/c6000_multi-core_dsps/f/639/t/187260.aspx?pi70912=1), which you provided, but it cannot be compiled successfully. I have added the Include Options following the readme.txt. I think the errors may be caused by the different versions. I hope you can tell me which .c files and other code are needed if I build a new project.

    2. You mentioned that the transcode example project is only for a single core. I know sv04 can run multi-core projects, like h264hpenc. Where can I find the source code that distributes the codec tasks to the specific cores? If it is integrated in the H.264 codec lib, then what is the difference between sv04 and the transcode project? They would both require me to add the distribution code myself.

    3. I created a project in the same directory as transcode and then imported the transcode project, but it reports an error like this:

    I have already added the path in the Include Options, but it still reports this error. Can you help me?

    if i modify "ti/csl/csl_cacheAux.h" as "c:ti/pdk_6678_1_1_2_6/packages/ti/csl/csl_cacheAux.h", the error wil disappear. but it will show another wired error. 

    What's wrong?

    4. Can you give me your e-mail address? It would be convenient for me to contact you; otherwise, we can only communicate once a day.

    thank you,

    lei