RTOS/CC3235MODSF: MonoThread application with 1ms tick and microsec wait

Part Number: CC3235MODSF

Tool/software: TI-RTOS

Hi,

I have a project with a CC3235MODSF and TI-RTOS. It's based on the HTTP GET example, where I do everything in a single task.

I'm trying to drive a few GPIO to transmit data as fast as possible. 

It basically goes like: 

loop 1
   loop 2
      drive data output
      wait 2usec to have stable data
      drive clock output

At first it was super slow. I noticed that decreasing the TI-RTOS tick period from 1 ms to 100 us increased the performance a lot. I suppose the call to usleep() blocks the thread, which then has to wait for the next tick to continue. Is my assumption correct?

The issue is that it's still not fast enough even with a 100 usec tick, and I suppose I cannot lower the tick period to what would be needed for maximum speed.

What would be the correct way to have all done as quickly as possible without having to wait on the tick?

Thanks,

Cédric

  • Hi Cédric,

    Your assumption about usleep should be correct. I can verify with the TI-RTOS team.

    You can use a hardware timer to get a smaller delay, but I don't think you'd be able to enter an interrupt routine and complete your data output within 2 usec. Your timing constraint would not be guaranteed by the RTOS. Even if your application is only a single thread, the host driver (sl_Task) is a separate thread that interfaces with the NWP. There is another discussion on TI Drivers latency vs. driverlib that you might find useful, but it doesn't approach your 2 usec period: e2e.ti.com/.../674855

    What is the use case for bit-banging GPIOs at this speed?

    Best regards,
    Sarah
  • Hi Sarah, 

    Thank you for the feedback and link!

    The use case is to drive an e-ink display with a kind of parallel communication.

    I just tried using MAP_UtilsDelay for all the few-microsecond waits. It fixes the slow running time. I haven't yet checked that the delay is accurate, but I suppose it should be.

    Can this cause an issue with the RTOS or the host driver (as long as I only use it for <= ~10 usec waits)?

    Best regards,

    Cédric

  • Hi Cédric,

    The host driver priority and context switching is handled by the RTOS. I'll loop in someone from the TI-RTOS team to comment.

    If you are changing the value of Clock.tickPeriod, please note that the host driver timeouts are based on this tick period. The define SL_TIMESTAMP_TICKS_IN_10_MILLISECONDS in user.h is currently hard-coded to the default tick period of 1000 usec. You'll have to update this value and rebuild the simplelink library to avoid sync errors. (This value will be based on the configured tick period in a future release.)
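
    For example, assuming the define simply counts RTOS ticks in 10 ms, lowering the tick period from the default 1000 usec to 100 usec would change its value from 10 to 100:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t tick_period_us = 100;  /* example: Clock.tickPeriod lowered to 100 us */

        /* Number of RTOS ticks that make up 10 ms at this tick period */
        uint32_t ticks_in_10ms = 10000u / tick_period_us;
        printf("SL_TIMESTAMP_TICKS_IN_10_MILLISECONDS -> %u\n", ticks_in_10ms);
        return 0;
    }
    ```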

    Best regards,
    Sarah

  • Here's the spec of usleep:

    "The usleep() function suspends execution of the calling thread for (at least) usec microseconds"

    For TI-RTOS, "at least" means the Clock tick period (which defaults to 1 ms). We generally don't recommend decreasing this period too much, due to the overhead it introduces and because, as Sarah mentioned, other components assume a 1 ms tick.

    You might want to just use the Timestamp module and spin for the 2 us (e.g. read the timestamp, then loop until the current timestamp is 2 us past the first one). Where are you doing this: a Hwi, Swi, or Task? I'm assuming a Task, since you are calling usleep, which is a blocking call. Spinning is non-blocking, so it will starve out Tasks of lower (or the same) priority. If that's ok, go for it. If not, adjust the priorities as needed.
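
    The spin can be sketched like this (a portable illustration using a monotonic clock; on TI-RTOS you'd read Timestamp_get32() instead and convert microseconds to timestamp ticks via Timestamp_getFreq()):

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Portable stand-in for Timestamp_get32(): monotonic nanoseconds. */
    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000u + (uint64_t)ts.tv_nsec;
    }

    /* Busy-wait for at least `us` microseconds without blocking.  Note:
     * this never yields, so it starves equal/lower priority tasks. */
    static void spin_delay_us(uint32_t us)
    {
        uint64_t start = now_ns();
        while (now_ns() - start < (uint64_t)us * 1000u) {
            /* spin */
        }
    }

    int main(void)
    {
        uint64_t t0 = now_ns();
        spin_delay_us(2);
        printf("elapsed >= 2 us: %s\n", (now_ns() - t0 >= 2000) ? "yes" : "no");
        return 0;
    }
    ```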

    Can you instead spin on "have stable data" if it's known via a register or something? That would be a cleaner approach.

    Todd

  • I did not know that the tick had to be changed elsewhere as well. I'll keep it at 1 ms then.

    All the work is done in a task. Its priority is set to 1, while the sl_Task priority is set to 9.
    MAP_UtilsDelay seems to be ok for what I want: it does not block the thread, but sl_Task can still preempt it if need be. I've checked the timing with a scope, and it seems correct with the following defines.

    /* CPU cycles consumed per MAP_UtilsDelay loop iteration (measured with a scope) */
    #define TICK_PER_LOOP 6
    /* Busy-wait for roughly x microseconds; SYS_CLK is the CPU clock in Hz */
    #define WAIT_US_BUSY(x) (MAP_UtilsDelay((SYS_CLK / TICK_PER_LOOP / 1000000) * (x)))
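
    As a sanity check on the macro arithmetic (assuming SYS_CLK is the 80 MHz CPU clock, which is my assumption for the CC3235): the integer truncation makes the delay come out slightly short of the requested 2 us:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t sys_clk       = 80000000u;  /* assumed 80 MHz CPU clock   */
        uint32_t tick_per_loop = 6;          /* cycles per UtilsDelay loop */
        uint32_t x             = 2;          /* requested delay in us      */

        /* Same integer arithmetic as WAIT_US_BUSY(x) */
        uint32_t loops = sys_clk / tick_per_loop / 1000000u * x;
        double actual_us = (double)loops * tick_per_loop * 1e6 / sys_clk;

        printf("loops=%u actual=%.2f us\n", loops, actual_us);
        return 0;
    }
    ```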

    I don't think the "have stable data" approach is needed, since all the data is already available when the process starts.

    Thanks for your help Sarah and Todd!