
H.264 Codec Development on DM6467 1 GHz

Hi,

I originally optimized the H.264 codec on the DM6467 (594 MHz). Then I put the same DSP server image (.x64P) on a DM6467 (1 GHz) EVM, but the codec performance isn't improved. Which library should I modify? (CSL? HDVICPLIB? DVSDK?)

  • I don't think you need to modify any library; it should show a performance improvement as-is. I suspect you are missing something in how you are probing the performance.

  • Hi Veeru,

    Thanks for your reply. Is there a way to check the current DSP frequency/speed?

  • Is it safe to assume you are using a DM6467T EVM (different from the DM6467 EVM) to evaluate the 1 GHz DM6467? I am not up to speed on all of the hardware changes, but I believe the DDR2 change may be relevant. If you are not using a DM6467T EVM, you need to check whether any of the hardware changes may affect your performance results.

    If you are using a DM6467T EVM, then I would look at the PLL and DDR2 settings. If you are using the same DSP image on both boards, the image may be setting (overwriting) the same PLL values on both parts, which would explain why the performance has not improved. The GEL files for the corresponding boards can give you quick insight into the proper settings for each of these EVMs, and you can also read the PLL registers directly at run time (see the sketch at the end of this post).

    That said, you can find the schematics (for comparing hardware changes) as well as the GEL files at http://support.spectrumdigital.com/
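
    Here is a minimal sketch of reading the PLL1 multiplier from Linux user space to see what the silicon is actually running at. The register addresses (PLL1 controller at 0x01C40800, PLLM at offset 0x110), the fixed 27 MHz reference clock, and the /1 DSP divider are assumptions based on my reading of the DM646x documentation -- verify them against the datasheet and your board before trusting the result:

        /* Sketch: print the DM646x PLL1 multiplier via /dev/mem.
         * All addresses and the 27 MHz reference are assumptions;
         * check the DM646x datasheet for your silicon revision. */
        #include <stdio.h>
        #include <stdint.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/mman.h>

        #define PLL1_BASE  0x01C40800UL  /* assumed PLL1 controller base */
        #define PLLM_OFF   0x110         /* assumed PLLM register offset */
        #define REF_MHZ    27            /* assumed 27 MHz oscillator input */

        int main(void)
        {
            int fd = open("/dev/mem", O_RDONLY | O_SYNC);
            if (fd < 0) { perror("open /dev/mem"); return 1; }

            /* Map the page containing the PLL1 controller registers. */
            off_t page = PLL1_BASE & ~0xFFFUL;
            volatile uint32_t *regs = mmap(NULL, 0x1000, PROT_READ,
                                           MAP_SHARED, fd, page);
            if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

            /* PLLM field width varies; check the datasheet before masking. */
            uint32_t pllm = regs[(PLL1_BASE - page + PLLM_OFF) / 4] & 0xFF;
            printf("PLLM = %u -> PLL1 output ~ %u MHz (x%u on a %u MHz ref)\n",
                   pllm, REF_MHZ * (pllm + 1), pllm + 1, REF_MHZ);

            munmap((void *)regs, 0x1000);
            close(fd);
            return 0;
        }

    On a 594 MHz part you would expect PLLM = 21 (27 x 22 = 594); on a 999 MHz DM6467T, PLLM = 36 (27 x 37 = 999).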

  • I am also playing around with the dm6467t, and have the same problem: I don't see any performance improvement and I don't know what I should take into account.

    I developed an H.264 encoding application based on DMAI's "video_encode_io1" and the TI 1080p codec. I use the "--benchmark" option to measure the encoding time for each frame, and the results are quite strange: according to the measurement, the "old" DM6467 takes around 40 ms to encode each frame, while the same server and application on the DM6467T take almost 70 ms! I tried changing the "clockRate" in "server.tcf" from 594 to 1000, and it seems to have no effect on the measurement. (By the way, which is the correct "clockRate" value for the DM6467T? See the .tcf sketch at the end of this post.)

    I installed the new DVSDK 3.10 (with Arago instead of MontaVista, etc.) and ran the same test; the DM6467T now takes "only" around 45 ms to encode each frame.

    Is this measurement correct? That is, does the way time is measured in "video_encode_io1" depend on a clock-speed setting somewhere?

    In summary: is the move from the 6467 to the 6467T as smooth as just installing the new DVSDK and expecting everything to run faster, or is there something in the servers or applications that one must take into account?

    Any guidance would be very much appreciated...

    Thanks
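
    For reference: in a DSP/BIOS codec server .tcf, the clock rate usually ends up in GBL.CLKOUT, which only tells DSP/BIOS what CPU frequency to assume for its CLK/timing services -- it does not program the actual PLL (that is done by the UBL/boot loader), which would be consistent with the change having no effect on the ARM-side numbers. A sketch of what the fragment typically looks like (the variable name here is illustrative, not necessarily what this server.tcf uses; assuming the usual 27 MHz reference, the "1 GHz" DM6467T nominally runs at 999 MHz, i.e. 27 x 37):

        /* Illustrative server.tcf fragment (tconf scripts use JavaScript syntax). */
        var clockRate = 999;                    /* DM6467T DSP clock in MHz (nominal 1 GHz) */
        prog.module("GBL").CLKOUT = clockRate;  /* BIOS timing services only, not the PLL */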

  • The DMAI apps use the gettimeofday() API to measure the time spent on the ARM side. This is what the --benchmark option uses.

    In DVSDK 2.00 with LSP 2.00 on the DM6467, the ARM clock rate (SYSCLK3) was hardcoded into the kernel in the file include/asm-arm/arch-davinci/timex.h:

    #define DM646X_CLOCK_TICK_RATE  148500000     /* For 594 MHz */

    So when you change the PLL settings in the UBL, you also need to rebuild the LSP 2.00 kernel with a matching tick rate to get correct timing values (a sketch of the matching edit follows below).

    In DVSDK 3.xx the kernel has removed this hardcoding, so the 45 ms value you are getting is correct for the current PLL settings.
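
    For anyone staying on LSP 2.00 who reprograms the PLL for a 999 MHz DM6467T, the matching kernel edit would look something like the following. This assumes SYSCLK3 stays at CPUCLK/4, as it does on the 594 MHz part (594 / 4 = 148.5 MHz); verify the divider for your configuration:

        /* include/asm-arm/arch-davinci/timex.h, LSP 2.00 kernels only */
        #define DM646X_CLOCK_TICK_RATE  249750000     /* For 999 MHz (999 MHz / 4) */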