Code execution time on RM48 HDK

Other Parts Discussed in Thread: HALCOGEN

Hi

I want to measure code execution time on RM48 HDK. Is there any sample code available?

I am doing the following using the RTI, but I am not sure if it's correct:

rtiInit();
rtiStartCounter(0);

time1 = rtiGetCurrentTick(rtiCompare0);   // Compare0 is default as set in HalCoGen

// execute code to be benchmarked

time2 = rtiGetCurrentTick(rtiCompare0);   // Compare0 is default as set in HalCoGen

time_elapsed = time2 - time1;

If it's correct, then by what tick period should time_elapsed be multiplied to get the elapsed time? And which rtiCompare (0, 1, 2, 3) should be used?

Thanks

Regards
Anila

  • Anila,
    This doc is a bit old now - and some of the APIs seem to be out of date - www.ti.com/.../spna138
    but the PMU section is relevant. I would use the PMU, not the RTI, to count CPU cycles. First, the PMU's cycle counter counts in CPU cycles, not in divided-down ticks of a different clock. Second, the overhead of accessing the PMU counter is only about 6 clock cycles, so it can be used to measure smaller segments of code without introducing a large percentage of measurement error.
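
    A minimal sketch of what I mean, assuming the PMU helper functions HALCoGen generates in sys_pmu.h (check the exact names in your generated drivers):

    #include "sys_pmu.h"

    uint32 cycles;

    _pmuInit_();                           /* reset and configure the PMU      */
    _pmuEnableCountersGlobal_();           /* enable the counters globally     */
    _pmuResetCycleCounter_();              /* clear the cycle counter          */
    _pmuStartCounters_(pmuCYCLE_COUNTER);  /* start counting CPU (GCLK) cycles */

    /* execute code to be benchmarked */

    _pmuStopCounters_(pmuCYCLE_COUNTER);
    cycles = _pmuGetCycleCount_();         /* elapsed CPU cycles               */

    /* time = cycles / CPU clock frequency (the GCLK frequency configured in HALCoGen) */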
  • Thanks Anthony.
    I'll check the doc.
    Besides measuring code execution time, I also need to insert a delay in the program; what would be the best way to do that? I thought of measuring the time for a loop and using that (rough sketch below).
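
    Something like this rough sketch is what I had in mind, reusing the PMU cycle counter you suggested (function names assumed from HALCoGen's sys_pmu.h):

    #include "sys_pmu.h"

    /* crude busy-wait delay; assumes the PMU cycle counter has already been
       initialized and started as in the measurement snippet above */
    void delay_cycles(uint32 cycles)
    {
        uint32 start = _pmuGetCycleCount_();
        while ((_pmuGetCycleCount_() - start) < cycles)
        {
            /* spin until the requested number of CPU cycles has elapsed */
        }
    }

    /* e.g. at a 220 MHz CPU clock, delay_cycles(220U) is roughly 1 us */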

    Secondly, this delay is needed because I am using the CDC class (USB device) to send data to a PC. I want to bring the raw data bit rate down from 12 MHz to 1 MHz, so I decided to insert some delay between bytes. What would you advise?

    Many Thanks
    Anila
  • Hi Anila,

    I'm not sure I understand the question exactly.

    USB has fixed bit rates for its various modes. For USB 1.1 it's either 12 MHz for full speed or 1 MHz for low speed. If you want to use low speed you need to actually change the hardware a bit, and to be honest I need to check to see if our device controller even supports low speed.

    Is this what you want to do (implement a low speed device?), or do you want to communicate at full speed but throttle the data transfer rate down?
  • Anthony,

    What's the bit rate when sending data with the CDC code? I assumed it was 12 Mbit/sec.

    I want a bit rate of 1 Mbit/sec, so how can I achieve it using the RM48 USB device? As you mentioned in your reply, can the hardware be configured for that? And if not, how can 12 Mbit/sec be throttled down to 1 Mbit/sec?

    Regards
    Anila
  • Anila,

    Where do you want 1 Mbit/s? On the wire? Or is it simply the amount of data that you want to transfer (and allocate buffering and processing power for)?

    In this case it makes a pretty big difference.
  • I need it on the wire Anthony. 1 Mbit/sec is the maximum data transfer rate that my application should support.

    Regards
    Anila
    OK, well, if you set the wire to 1 MHz you will not get 1 Mbit/s of data; that's why I'm asking.
    You may have a hard time getting 1 Mbit/s of data even out of 12 MHz. It really depends on the USB host and how it schedules.

    If you need 1 MHz on the wire because you're going to use cheaper wiring, then you need to set the hardware up as a low speed device (and again I need to check if our controller supports low speed ..)
  • Anthony,

    Why does 1 MHz not correspond to 1 Mbit/sec? Is my understanding wrong?

    Secondly, yes, please check whether the hardware supports low speed USB. And if not, then please tell me how to throttle the 12 Mbit/sec down; I thought of inserting delays after each byte.

    Thanks
    Anila
  • Also, I am going to use a normal USB cable, no cheaper wiring.
  • Anila,

    So when you talk about 12 MHz for USB 1.1 full speed, or even 480 MHz for USB 2.0 high speed, these are line rates for the symbols assuming 100% use, which is never the case. It's useful to talk about these numbers for wiring purposes because the faster the maximum edge rates, the more expensive the wiring usually gets (think shielding and impedance control).

    But if you are talking about data bandwidth it's a different story. You'll normally get far less bandwidth than the headline wire rate, for several reasons:
    1) USB shares the host port's bandwidth among all the devices connected to the same host port.
    2) The host port initiates all transactions and so has the task of scheduling transfers. It may only wind up scheduling a single transfer to your endpoint per frame (which can be 1 ms on full speed).
    3) Each transfer can have multiple packets - setup, data, ack... - so there is overhead at the packet level. The smaller your individual data packets are, the more this overhead hurts. (For example, a single 64-byte bulk packet per 1 ms frame works out to only about 512 kbit/s of payload.)

    You can't really control the data rate directly the way you would on a UART, by simply putting a time delay between each data byte you write to the transmit buffer. And you cannot change the wire bit rate like you could on a UART.

    You should read up on the CDC class to see what provisions there are for controlling bandwidth. I'm not up to speed enough on this class to tell you, but there may be something in a descriptor you can set to limit how often the host polls your device or how much data it takes, and you may be able to use that to limit the bandwidth used by your device.

    If anything, I'm a bit worried you may not always be able to get 1 Mbit/s of data payload through the 12 MHz USB link; it depends a lot on the host port, the hub, and whatever else is plugged into the same host port. So if it's really important I would make sure you plug directly into the root hub.

    -Anthony
  • Anthony,

    Thanks for the detailed reply. It's a great explanation.

    I tried the CDC example provided by TI to send data with and without delays (70-byte packets). On the host side (Windows XP) there is a simple application that listens on the COM port. Since the RM48 USB device maps as a virtual COM port, I also increased the baud rate to almost 1 Mbit/sec (981000). With delays in the CDC example it works fine, with hardly any packet loss. Can you please tell me what the data bit rate of the CDC example is? Is it 12 Mbit/sec?

    As you say, I might not get a 1 Mbit/sec payload. For my application that would rarely matter anyway, i.e. it would usually be sending much less data than 1 Mbit/sec. To achieve this slower data rate with the CDC application, what should be done? The delays work, but what would you suggest?

    Regards
    Anila
  • Hi Anila,

    Thanks. I don't know enough about the implementation of our CDC example to answer the question; I can only speculate. I think it is unlikely we have anything to throttle the bandwidth down based on the baud rate.
    You can check to see what happens when the PC sets the baud rate, though; I'm guessing we totally ignore that parameter.

    If we were building a USB->UART bridge where the UART were a hardware module like the SCI, then I would say we would need to initialize the SCI baud rate to whatever the PC tells the virtual COM port it should be, because in that case the idea is to talk over a real UART to some other device, and that real UART needs to be running at a particular baud rate (a rough sketch of what I mean is below).
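
    For illustration only, such a bridge would do something along these lines when the host sets the line coding (the callback name and struct below are made up for the sketch; sciSetBaudrate() is the HALCoGen SCI driver call, and sciREG stands for whichever SCI module the bridge uses):

    #include "sci.h"

    /* standard CDC line coding fields, as sent by the host with SET_LINE_CODING */
    typedef struct
    {
        uint32 dwDTERate;    /* requested baud rate */
        uint8  bCharFormat;  /* stop bits           */
        uint8  bParityType;  /* parity              */
        uint8  bDataBits;    /* data bits           */
    } cdcLineCoding_t;

    /* hypothetical callback invoked when the host issues SET_LINE_CODING */
    void onSetLineCoding(const cdcLineCoding_t *coding)
    {
        sciSetBaudrate(sciREG, coding->dwDTERate);  /* apply the host's baud rate to the real SCI */
    }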

    But the CDC example, unless I'm wrong, is just USB<>memory, so it doesn't necessarily need to know or do anything with the baud rate to 'work'.

    If you are getting some packet losses (I guess these would be overflows...), are these on the host side (data transmitted by the RM48 being dropped by the host PC) or on the device side (data from the PC being dropped by the RM48)?

    If it's RM48 -> PC then maybe the delay you implemented is fine. But I would think about it as something you are doing to manage the memory usage so that the buffers do not overflow. It doesn't *actually* result in changing any frequency on any physical signal.

    Best Regards,
    Anthony