TMS570LS1224: TI FEE Driver slow down after adding blocks on configuration

Part Number: TMS570LS1224

Hello Support Team,

We are working on a project where we updated the TI FEE configuration by adding 2 blocks to a previous configuration of 4 blocks.

We did not change the block size or datasets of the previously allocated blocks.

We are experiencing a TI FEE driver slow-down (TI_Fee_MainFunction needs more time/calls to complete a write and exit from the BUSY state).

Q1: Is there any information regarding the time needed to complete a BUSY operation in relation to the number of blocks (i.e. is the time related to the number of blocks)?

Q2: I have another question about how to use the TI FEE driver. The TI documentation suggests scheduling tasks and calling TI_Fee_MainFunction until IDLE after an operation is requested. Is the schedule/context switch mandatory, or can we call TI_Fee_MainFunction multiple times after the operation request, in the same context?

Thank you for the support

Ilario

  • Hi,

    Q1. The time depends on the flash operation, and is also related to the number of blocks. When you write data to one block, the driver needs to find the block array index by checking the block number in the configuration data of all the data blocks.

    Q2. The TI_Fee_WriteSync function completes writing of the data synchronously, i.e. the data is written to EEPROM before the function returns. The TI_Fee_WriteAsync function accepts the write job, but the actual writing of the data is done in the TI_Fee_MainFunction API. TI_Fee_MainFunction() should be called to complete asynchronous write/read jobs, move the data to a new sector (once the sector is full), and erase the old sector so it is ready to be used again.

    Yes, you can call TI_Fee_MainFunction multiple times after an async operation request.
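
    A minimal sketch of that pattern (assuming the HALCoGen-generated API names TI_Fee_WriteAsync, TI_Fee_MainFunction and TI_Fee_GetStatus, EEP index 0, and the module status enum from the generated ti_fee headers; please verify the exact names against your project):

    #include "ti_fee.h"   /* HALCoGen-generated FEE driver header */

    void example_async_write(void)
    {
        uint8  buffer[25];
        uint16 i;

        for (i = 0U; i < 25U; i++) { buffer[i] = 0xAAU; }   /* example payload */

        TI_Fee_WriteAsync(0x564U, buffer);   /* queue the job; returns immediately */

        /* Drive the FEE state machine until the write (and any internal
           copy/erase work) has finished. Calling it back-to-back in the same
           context is fine; calling it from a periodic task also works. */
        while (TI_Fee_GetStatus(0U) != IDLE)
        {
            TI_Fee_MainFunction();
        }
    }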

  • Hi,

    thanks for the support, but the difference in time is huge; maybe I'm doing something wrong.

    We had a fee_cfg like this:

    - 4 blocks: 1 & 2 with 250 datasets of 25 bytes; 3 & 4 with a different setup (total amount of datasets: 584)

    Writing 0xAA to all (250) datasets of block 1 takes ~500 ms.

    We now have a new cfg like this:
    - 6 blocks: 1, 2, 3, 4 the same as before; 5 & 6 with 100 datasets of 25 bytes (total amount of datasets: 784)

    Writing 0xAA to all (250) datasets of block 1 now takes more than 30 seconds.

    Could you please clarify whether our access to a single dataset is correct?
    We use the Write like this:
    Dataset 0x64 of block 5: (0x500 | 0x64) = 0x564

    TI_Fee_WriteSync(0x564, buffer);
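
    For context, the measured case is roughly the loop below (illustrative only; it assumes the dataset index really goes in the low bits of the block number as above, so block 1 datasets would be 0x100 | ds):

    #include <string.h>
    #include "ti_fee.h"

    void write_all_block1_datasets(void)
    {
        uint8  data[25];
        uint16 ds;

        for (ds = 0U; ds < 250U; ds++)
        {
            (void)memset(data, 0xAA, sizeof(data));          /* fill with 0xAA      */
            TI_Fee_WriteSync((uint16)(0x100U | ds), data);   /* block 1, dataset ds */
        }
    }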


    Why is performance so badly affected by adding 2 more blocks (200 datasets)?

    Best Regards
    Ilario

  • The difference is too large: 500ms vs 30s.

    Is the new block programmed toward the end of the virtual sector? Does a sector erase operation happen when you write the new block to the virtual sector? If there is insufficient space in the current Virtual Sector to program the block, the driver switches over to the next Virtual Sector and copies all the valid data from the other Data Blocks in the current Virtual Sector to the new one. After copying all the valid data, the new data is written into the new Active Virtual Sector, and the old Virtual Sector, now marked as ready for erase, is erased in the background.
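
    One way to see whether that switch-over/background erase is happening is to watch the module status while servicing the driver. A sketch, assuming the generated TI_Fee_GetStatus API, EEP index 0, and the BUSY_INTERNAL/IDLE enum values from the generated ti_fee headers (check the exact names in your project):

    #include "ti_fee.h"

    void write_and_watch(uint16 blockAndDataset, uint8 *data)
    {
        TI_Fee_WriteAsync(blockAndDataset, data);

        do
        {
            TI_Fee_MainFunction();

            if (TI_Fee_GetStatus(0U) == BUSY_INTERNAL)
            {
                /* The driver is doing internal work (virtual-sector copy /
                   background erase); this is where the extra time per write goes. */
            }
        } while (TI_Fee_GetStatus(0U) != IDLE);
    }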

  • Q: Is the new block programmed toward the end of the virtual sector?

    A: I dumped the memory; datasets of block 1 and datasets of block 5 are written to the same VS consecutively.

    Q: Does a sector erase operation happen when you write the new block to the virtual sector?

    A: How can I verify this?

    Q: If there is insufficient space in the current Virtual Sector to program the block, ...

    A: We are emulating 2 EEPs, each EEP with 2 Virtual Sectors, and each Virtual Sector mapped 1:1 to a Physical Sector.

    We are unable to determine whether the data fits into one Virtual Sector, since we do not know the overhead.

    From reverse engineering the dump, it seems that for a 25-byte dataset the FEE driver uses 56 bytes of physical memory.

    It also seems that when the data no longer fits in the Virtual Sector, the driver starts writing at the beginning of the Virtual Sector again, like a circular buffer.

  • Hi QJ,

    could you please help me understand how to calculate the overhead of this configuration and the total amount of Virtual Sector size and physical sector space used?

    I'm worried the data will not fit into the sector.

    Thank you

  • Hi Ilario,

    Would you like to share your code with us? I'd like to do a test on my bench using your code.

  • Due to NDA I can't, unfortunately. Can I help in another way?

  • Hi Ilario,

    I just did a test with your FEE configuration, but using 1 EEP. I noticed that the block size is too big for the virtual sector. The size of 1 virtual sector is 0x4000 bytes. The block #1 size is 255*(32 + 24) = 0x37C8, where 255 is the number of datasets in block #1, 24 is the size of the block header, and 32 is the block data size (rounded up to n*8 bytes). The virtual sector header size is 32 bytes. The total size of the virtual sector header plus block #1 is 0x37E8.
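
    The same sizing rule can be written out as a small host-side calculation (a sketch using the assumptions above: 24-byte block header, data rounded up to a multiple of 8 bytes, 32-byte virtual sector header, 0x4000-byte virtual sector; plug in your own dataset counts and block sizes):

    #include <stdio.h>

    /* Footprint of one block: datasets * (rounded data size + block header). */
    static unsigned block_footprint(unsigned data_bytes, unsigned datasets)
    {
        unsigned rounded = ((data_bytes + 7U) / 8U) * 8U;   /* 25 -> 32 */
        return datasets * (rounded + 24U);
    }

    int main(void)
    {
        unsigned vs_size   = 0x4000U;                     /* one virtual sector    */
        unsigned vs_header = 32U;                         /* virtual sector header */
        unsigned block1    = block_footprint(25U, 255U);  /* 255 * 56 = 0x37C8     */

        printf("block 1 footprint: 0x%X bytes\n", block1);
        printf("with VS header   : 0x%X of 0x%X bytes\n", block1 + vs_header, vs_size);
        return 0;
    }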

    When writing dataset #44 of block #3 to EEPROM, since virtual sector 0 is full, dataset #44 is written to virtual sector 1, and then all the valid data in virtual sector 0 is copied to virtual sector 1. Unfortunately, virtual sector 1 does not have enough space for the data from virtual sector 0. The code will keep copying data and erasing sectors, and datasets #45/46/.../80 will not be written to EEPROM.