As best I can tell, my DSP clock is running much slower than expected. Based on other posts such as [http://e2e.ti.com/support/dsp/omap_applications_processors/f/447/p/62658/225772.aspx#225772], setting the clock rate on the ARM/Linux side should also ramp up the DSP. If I execute
echo 1000000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed
and check with cpufreq-info, I am told the ARM is running at 1 GHz. When I then run a test codec that I created in codec_engine, it reports 742570663 DSP cycles for a task. This is measured as:
unsigned long long start_tick, stop_tick;

TSCL = 0;                 /* first write starts the free-running C64x+ time-stamp counter */
start_tick = TSCL;
/* .. Perform task .. */
stop_tick = TSCL;
stop_tick -= start_tick;
/* cast: the count fits in 32 bits, and passing a 64-bit value for %i is undefined */
GT_1trace(gtMask, GT_1CLASS, "Required %d DSP ticks\n", (int)stop_tick);
The CE_DEBUG=2 output gives timestamps that I assume are referenced to the ARM clock. I would expect a little overhead from the IPC and function startup, but:
[DSP] @0,125,888tk: ... Enter
[DSP] @19,803,855tk: [+1 T:0x8782a97c] blah.codecs.test_copy - Required 742569591 DSP ticks
I also timed the delay at 19.82 s with a stopwatch. Taken together, these numbers put the DSP at about 37 MHz.
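For reference, the 37 MHz figure is just the measured tick count divided by the stopwatch time; a quick check of that arithmetic:

```shell
# Implied DSP clock = measured ticks / wall-clock seconds
awk 'BEGIN { printf "%.1f MHz\n", 742569591 / 19.82 / 1e6 }'
# prints "37.5 MHz"
```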
I am running on a Gumstix Overo (DM3730 chip) with a Yocto kernel (linux-omap-3.5), built (if I recall correctly) with the TI toolchain via:
CSTOOL_DIR=/home/wiley/ti-dvsdk_dm3730-evm_04_03_00_06/linux-devkit/
CSTOOL_PREFIX=$CSTOOL_DIR/bin/arm-arago-linux-gnueabi-
Other symptoms include some messages at bootup:
[ 0.000000] Clocking rate (Crystal/Core/MPU): 26.0/400/600 MHz
[ 0.065093] Reprogramming SDRC clock to 400000000 Hz
[ 0.065093] dpll3_m2_clk rate change failed: -22
[ 0.077850] Switched to new clocking rate (Crystal/Core/MPU): 26.0/400/500 MHz
I am using the userspace governor.
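For completeness, here is the sanity check I use on the governor and frequency, reading the standard cpufreq sysfs files directly (the helper name and the optional base-directory argument are mine, for illustration):

```shell
# Print what cpufreq currently reports; pass an alternate base dir for testing.
check_cpufreq() {
    local base="${1:-/sys/devices/system/cpu/cpu0/cpufreq}"
    echo "governor: $(cat "$base/scaling_governor")"
    echo "cur_freq: $(cat "$base/scaling_cur_freq") kHz"
}
```

Running `check_cpufreq` on the board prints "governor: userspace" and "cur_freq: 1000000 kHz" after the echo above.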
Q1. Am I correct in understanding that the DSP should be running at 800 MHz since the ARM is at 1 GHz?
Q2. Are there any flaws in my validation method?
Q3. How do I troubleshoot this further?
Thanks!