This thread has been locked.

Increase DSP/BIOS clock

I'm using the F2808 with a few threads that are time-slicing. The default clock driving DSP/BIOS (100 microseconds) is currently a limiting factor in my application's performance. I've tried decreasing the tick period, but DSP/BIOS becomes unstable below about 40 microseconds (I've tested this by removing all of my code and putting simple skeleton code in its place). Are there any configuration changes I can make to get the clock down to around 1 microsecond, or is this infeasible on the F2808?

  • Hi Bryan

    You've got to ask yourself: how much time (how many clock cycles) does a task switch take? How many instructions can the DSP execute in 1 microsecond at a 100 MHz clock? From the sound of it, you are pushing the CPU to the limit. If you really need this kind of temporal resolution, I would recommend that you change from threads to HW interrupts, possibly without using the DSP/BIOS features.

     

    Regards, Mitja

  • Mitja is right. If your control system requires a 1 microsecond timing loop, we are talking about a time slot of only about 100 machine-code instructions on a 100 MHz device. So forget about DSP/BIOS, and also forget about C! You will have to implement it in assembly language, sometimes even as assembly macros.

    What is possible with this technique is shown by the control examples TI provides as software for the Digital Power Supply Kits. The CLA library of the F28035 also shows a way to run a control loop at a frequency of 500 kHz and more.

    Regards

       

  • The issue I'm having isn't that I need a large section of code to execute in that amount of time; I'm fine with it taking much longer. The issue is that when I have a single task running and call TSK_yield(), it is a full 100-200 microseconds before control returns to that thread, which is ridiculous. (As a side note, if DSP/BIOS were truly pre-emptive, this would be a non-issue.) Basically, the TSK_yield() calls I have inserted each eat up a huge chunk of computation time. I have increased the speed of my application significantly (an order of magnitude) by removing TSK_yield() calls, at the cost of granularity. Because of the number of TSK_yield() (and similar) calls needed to implement time-slicing, my application spends a very large amount of time in DSP/BIOS rather than in my own code. The 1 microsecond figure is just a ballpark; the specific number doesn't matter, I just don't want my application to spend so much time in DSP/BIOS.

    Perhaps an example would help. I have a delay routine written to let other threads execute while this thread is waiting. I couldn't use TSK_sleep() because (once again) the timing was not granular enough (the length of the delay varies from 20 microseconds to 300 microseconds). It uses the hardware timers for accuracy, but I ran into the issue of TSK_yield() imposing a minimum delay of 100 microseconds, in increments of 100 microseconds. It's okay if the delay runs longer because other threads take longer (the application can accept very loose timings), but when the delay could finish sooner it should, and currently it can't because of the coarse DSP/BIOS clock tick.