
Problem using C6748 RTFS

Hi,

I am using RTFS in my application, which is a time-critical algorithm writing 1 KB of data to the SD card every second. The algorithm has to filter data every 20 milliseconds and consumes 5 milliseconds (25%) of the time slot. Every 64 seconds, which corresponds to 65536 bytes of data, something interferes with the data acquisition and filtering routines.

I have tried both flushdrv() and fsync(), but they have no effect. I suspect that something happens when a buffer fills up, but normally executing fsync() should flush the buffer.
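
For reference, the logging path is shaped roughly like this (a minimal sketch; the file name, the task structure, and how the filter fills the buffer are placeholders, and it assumes the POSIX-style file API used elsewhere in this thread):

    /* Rough shape of the logging path: the filter produces one 1 KB
     * block per second, which is written to the SD card through RTFS. */
    #include <fcntl.h>
    #include <unistd.h>

    #define BLOCK_BYTES 1024

    static char block[BLOCK_BYTES];   /* filled by the filtering routine */

    void log_task(void)
    {
        int fd = open("LOG.DAT", O_CREAT | O_WRONLY, 0666);

        for (;;) {
            /* ... wait until the filter has produced 1 KB ... */
            write(fd, block, BLOCK_BYTES);
            fsync(fd);                /* flush after each block */
        }
    }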

Your help is greatly appreciated.

 

  • Can you change the rate at which you are writing? And confirm that it always happens at 65K?

    Also, please reply with the versions you are using for:

        1. RTFS

        2. BIOS

        3. XDC tools

        4. Code Composer Studio

    Steve

  • Thanks Steve. Yes, checking with different data rates, it happens exactly at 65k.

    RTFS: 1_10_02_32

    BIOS: 5_41_02_14

    XDC: 3_15_04_70

    CCS: 3.3.82.13

    Regards,

    Alireza

  • I wonder if this might be a priority/scheduling issue.  What are the priorities of the tasks running in your system?  In particular, the tasks that are performing your data acquisition and filtering?

  • Data acquisition is continuous and is done by sending and receiving on SPI1 through DMA. The filtering task has higher priority than the task which initializes the MMCSD.

    I think that if the problem were priority, it would happen every second. I changed the SD card from Class 4 to Class 10, which makes the time required to write 64 KB less than the available time slot. Now it happens rarely, and not at equal time intervals.

    Thanks.

  • Hi Alireza Taghizadeh,

    I spoke with an RTFS expert and he has some ideas for you to try.

    He thinks it could possibly be something to do with new cluster allocation or flush behavior. Can you try a few things to help trace where this could be? Here's the email he sent me:

    " [Regarding the 65k problem ...] It depends on what boundary he's hitting, could be some erase block oriented thing in the driver or it could be something to do with new cluster allocation (I don't think this should be an issue because it shouldn't be hitting the flash to allocate clusters, it could be flush behavior, but I don't think that's it either.. To rule out the allocation and flush can try to pre-extend the file to like 1 meg or so by writing to it passing a zero pointer for the data. Then do their collection routine and see if the stalls go away.. If they do then it's in the latter camp, if not it in the driver.. conversely they can do their standard run as usual but pass a null pointer when they do their 1024 byte writes.. this will do everything with the metadata that they do now except writing the data. A third thing to do is instrument the time spent waiting for device driver writes to complete to identify if the problem is in the SD side.."

    Thanks,

    Steve

  • Hi Steve,

    I am on vacation right now. I will try the suggestions as soon as I am back with my board.

    Thank you very much for following my case.

    Alireza

     

  • Hi Steve,

    I have returned to the problem! For a while I was happy because I changed my SD card from Class 4 to Class 10 and the error became very rare. But since it has become clear that even a few errors can result in an undesired situation, I have to solve it completely.

    Though with the Class 4 SD card the error was periodic and showed up after writing a total of 65k, with the Class 10 card it has become sporadic, with no relation to the total volume of data written (sometimes it happens once an hour).

    I tried all of the suggestions, but they did not solve my problem. I also measured the time required to write data to the SD card (a sketch of the measurement follows the list below).

    Interesting facts about my situation are:

    1. The time it takes to write a block of 1 KB of data should be about 250 µs, but it is 2.5 ms, 10 times more! (considering the SD write speed to be 10 MB/s).

    2. When the problem appears, the write time increases 10 times and sometimes even 100 times.

    3. I continuously use EDMA for transferring data from the ADC. But if that is the source of the problem, why doesn't it happen all the time!?

    4. Writing null data and pre-extending the file did not change the time figures.
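
    For reference, I timed each write roughly like this (a sketch using the DSP/BIOS CLK and LOG modules; the LOG object trace is assumed to be configured in the .tcf):

        #include <std.h>
        #include <clk.h>
        #include <log.h>
        #include <unistd.h>

        extern far LOG_Obj trace;      /* LOG object from the .tcf */

        void timed_write(int fd, const void *buf, unsigned len)
        {
            LgUns t0, dt;

            t0 = CLK_gethtime();       /* high-resolution timestamp */
            write(fd, buf, len);
            dt = CLK_gethtime() - t0;

            /* CLK_countspms() = timer counts per millisecond; the
             * multiply can overflow for very long intervals. */
            LOG_printf(&trace, "write took %d us",
                       (Arg)(dt * 1000 / CLK_countspms()));
        }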

    Regards

     

  • I put my filtering routine in an ISR, and now the suggestions start to have an effect.

    Pre-extending the file with ftruncate() improved performance, but writes still take much longer than expected.

    Only passing NULL instead of the data array, together with pre-extending, makes the time exactly what it is expected to be.

    I don't know what I should conclude from this!?
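
    Roughly what I tested (a sketch; the file name is a placeholder):

        #include <fcntl.h>
        #include <unistd.h>

        #define PREEXTEND_BYTES (1024 * 1024)
        #define BLOCK_BYTES     1024

        void run_test(void)
        {
            int fd = open("LOG.DAT", O_CREAT | O_RDWR, 0666);

            /* pre-extend so clusters are allocated up front */
            ftruncate(fd, PREEXTEND_BYTES);
            lseek(fd, 0, SEEK_SET);

            for (;;) {
                /* one block per second; the NULL pointer skips the
                 * data transfer, and only this combination gives the
                 * expected write times */
                write(fd, NULL, BLOCK_BYTES);
            }
        }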

  • Alireza Taghizadeh,

    Alireza Taghizadeh said:
    3. I continuously use EDMA for transferring data from the ADC. But if that is the source of the problem, why doesn't it happen all the time!?

    One idea I have is to break this problem down to just the data transfer to the SD card. Can you eliminate the other components of your application (such as your EDMA transfer, or other algorithms running)? Or maybe try updating the standard MMCSD example to do a transfer similar to the one your app does, then see if you still run into the same problem?

    Alireza Taghizadeh said:
    1. The time it takes to write a block of 1 KB of data should be about 250 µs, but it is 2.5 ms, 10 times more! (considering the SD write speed to be 10 MB/s).

    Which card did you run this on? The Class 10? I'm thinking of the sporadic behavior you saw with that card.

    Also, I've been reviewing our RTFS benchmark numbers (C:\Program Files\Texas Instruments\rtfs_1_10_03_33_eng\packages\ti\rtfs\benchmarks\doc-files\Benchmarks.html) that we have for MMC/SD.  They were run using the following card, although I can't tell which class it is:

           PNY Optima Pro SD Card 133x 2GB (SD02GDG0612)

    What was found in our benchmarks is that the block size used (the size of the data array passed to the write() API) greatly affected the write speed performance.

    For example, the benchmark programs would time how long it took to write a total of 10 MB to the MMC/SD card, using different block sizes.

    So, initially, the block size was 1024 bytes, and the total transfer was done by writing 1024 bytes n times, where n = 10 MB / 1024.

    The best block size was 4194304 bytes (4 MB), as it resulted in a huge performance increase over the small 1024-byte block size.

    The idea here is that you want to write data in bigger chunks as opposed to small chunks, as you will have better performance this way.

    Can you try writing a large amount of data, such as 10 MB, using a large block size, such as 4 MB or 8 MB? Do you see better performance? I'm wondering if you are making all of your write() calls with a size of 1024 ...
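
    For example, here is a minimal sketch of staging the 1 KB blocks in RAM and flushing them to the card in one large write() (the staging size is a placeholder; the benchmark sweet spot was 4 MB, if memory allows):

        #include <string.h>
        #include <unistd.h>

        #define BLOCK_BYTES 1024
        #define STAGE_BYTES (256 * 1024)    /* placeholder staging size */

        static char stage[STAGE_BYTES];
        static unsigned fill = 0;

        /* Accumulate small blocks; issue one large write when full. */
        void buffered_write(int fd, const char *block)
        {
            memcpy(&stage[fill], block, BLOCK_BYTES);
            fill += BLOCK_BYTES;

            if (fill == STAGE_BYTES) {
                write(fd, stage, STAGE_BYTES);
                fill = 0;
            }
        }

    The trade-off is a larger window of unflushed data if power is lost, and a longer stall when the large write finally happens, so the staging buffer may need double-buffering in a lower-priority task.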