HeapBuf

I have a project that uses HeapBuf for data in/out. My device receives a lot of data from one device and has to forward it to another device. I use Memory_alloc/Memory_free (on the HeapBuf) to handle the input/output buffering.

I have defined a HeapBuf with 16 blocks. If the input data flow is faster than the output, I add blocks of data to the list; otherwise I take the head, free its memory, and re-allocate a block when I receive new data.

Everything works fine when input_speed <= output_speed, which is the normal condition. Now I have to manage the situation where input_speed > output_speed. My list is a FIFO, so I take the head of the list and the next block becomes the new head.

I am trying to figure out whether there is a way to shift the blocks down by one (buffer_size) after I take the head, so that I don't have to shift all the blocks one by one.


OK, let me generalize the question a bit more: if I Memory_free a block inside the HeapBuf (let's say the head, for instance), is there an "easy" way to shift all the blocks that come after the block I took, so that I can keep the HeapBuf blocks contiguous?
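
For context, here is a minimal sketch of the FIFO-of-blocks scheme described above, assuming the SYS/BIOS HeapBuf and xdc.runtime Memory APIs; the heap handle, block size, and function names are illustrative placeholders, not the project's actual code:

```c
/* Sketch only: a circular FIFO of block pointers backed by a HeapBuf.
 * "dataHeap" is assumed to be a HeapBuf instance (16 blocks of BLOCK_SIZE
 * bytes) created elsewhere and upcast to IHeap_Handle. Interrupt/task
 * protection of the indices is omitted for brevity. */
#include <xdc/std.h>
#include <xdc/runtime/Error.h>
#include <xdc/runtime/IHeap.h>
#include <xdc/runtime/Memory.h>

#define BLOCK_SIZE  64u                 /* assumed size of one data block   */
#define FIFO_DEPTH  16u                 /* one FIFO slot per HeapBuf block  */

extern IHeap_Handle dataHeap;           /* placeholder for the real heap    */

static Ptr  fifo[FIFO_DEPTH];           /* pointers to allocated blocks     */
static UInt head = 0, tail = 0, count = 0;

/* Producer side: allocate a block and queue its pointer. */
Ptr fifoPut(Void)
{
    Error_Block eb;
    Ptr blk;

    if (count == FIFO_DEPTH) {
        return NULL;                    /* all blocks in use */
    }
    Error_init(&eb);
    blk = Memory_alloc(dataHeap, BLOCK_SIZE, 0, &eb);
    if (blk != NULL) {
        fifo[tail] = blk;
        tail = (tail + 1) % FIFO_DEPTH;
        count++;
    }
    return blk;                         /* caller fills it with received data */
}

/* Consumer side: take the head; only the head index advances. */
Ptr fifoGet(Void)
{
    Ptr blk;

    if (count == 0) {
        return NULL;                    /* nothing pending */
    }
    blk = fifo[head];
    head = (head + 1) % FIFO_DEPTH;
    count--;
    return blk;                         /* after sending, Memory_free(dataHeap, blk, BLOCK_SIZE) */
}
```

With this structure, dequeuing the head only moves an index; the data sitting in the other blocks never has to be shifted.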

  • Hello Gianluca,

    malloc works by changing the address pointers in its linked list, so the memory is not required to be contiguous and can be allocated at run time. The only other way would be static allocation of the memory buffers to keep them contiguous from the start.

    Regards
    Amit
  • Thanks Amit,
    so let's say I have a HeapBuf with 16 blocks and I do 16 Memory_alloc calls, then I free the head and move the head to the second block. Now I have 15 blocks in use. What happens when I do a new Memory_alloc? Will the memory be allocated in the area that is now available? If so, this is perfect! (A small sketch of this sequence follows below.)


    Gianluca
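
    A minimal sketch of that free-then-allocate sequence, using the same assumed SYS/BIOS Memory calls (dataHeap and the function name are placeholders); whether the freed area is handed out again right away is what the next reply discusses:

    ```c
    /* Sketch only: release the current FIFO head, then allocate a fresh block
     * for new input. With a fixed-size-block heap such as HeapBuf, the freed
     * block goes back on the free list, so the new allocation can come from
     * the slot that was just released. */
    #include <xdc/std.h>
    #include <xdc/runtime/Error.h>
    #include <xdc/runtime/IHeap.h>
    #include <xdc/runtime/Memory.h>

    extern IHeap_Handle dataHeap;          /* placeholder for the project's HeapBuf */

    Ptr recycleHead(Ptr headBlk, SizeT blockSize)
    {
        Error_Block eb;

        Memory_free(dataHeap, headBlk, blockSize);          /* 16 -> 15 blocks in use */

        Error_init(&eb);
        return Memory_alloc(dataHeap, blockSize, 0, &eb);   /* may reuse the freed slot */
    }
    ```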
  • Hello Gian,

    It will depend on when the OS receives the freed memory buffer. If the OS does not see the freed buffer before the malloc is called, then a new buffer will be allocated. Also, since the input rate > output rate, there is a chance that, if the input rate is not managed, a buffer overflow may corrupt the application (a guard for this case is sketched below). How do you plan to control the rate? A FIFO is only a temporary solution.

    Regards
    Amit
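
    One simple guard for that overflow case, sketched with assumed names (fifoPut and BLOCK_SIZE come from the earlier sketch; whether a rejected packet is dropped or the sender is asked to back off is an application decision):

    ```c
    /* Sketch only: refuse new input when no HeapBuf block is available,
     * instead of letting an unmanaged burst corrupt the queue. */
    #include <string.h>
    #include <xdc/std.h>

    #define BLOCK_SIZE  64u              /* must match the HeapBuf block size */

    extern Ptr fifoPut(Void);            /* from the earlier sketch */

    static UInt32 droppedPackets = 0;    /* shows how often the worst case is hit */

    Bool acceptRxData(const UChar *src, SizeT len)
    {
        Ptr blk;

        if (len > BLOCK_SIZE) {
            return FALSE;                /* oversized input, reject it */
        }
        blk = fifoPut();                 /* NULL when all 16 blocks are in use */
        if (blk == NULL) {
            droppedPackets++;            /* or apply back-pressure to the sender */
            return FALSE;
        }
        memcpy(blk, src, len);
        return TRUE;
    }
    ```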
  • Thanks. OK, my description of the input/output speed was just a simplification to explain the point.
    There is no defined rate for the input data, but what can happen is that I receive two data inputs very close together (they come from devices that are not synchronized). This is an exception, not the normal behavior. I have already tested the flow for a long time and this event is quite rare, but it must be managed. My plan is to have these 16 HeapBuf blocks, test the worst-case scenario, and see how many blocks I actually use in the HeapBuf. If in that scenario I am far from using all the blocks, then I think I can be confident. It is a bit like when you dimension a stack.

    I think the problem is now better defined. Even though I have a plan, I appreciate any suggestions that can help make the software more efficient and robust.

    Thanks again for your help.
  • Hello Gianluca,

    If you are planning for the worst case, then you would need to calculate the ratio of the worst-case input rate to the worst-case output rate. Factoring it by a "sane" number (I follow the 2x principle from electronic design) should give the right number of blocks to use for testing; see the sizing sketch after this reply.

    Regards
    Amit
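
    As a rough illustration of that sizing rule, here is one way to turn the worst-case rates and the 2x margin into a block count, using the backlog that builds up during the longest input burst. All figures are made-up placeholders; the real worst-case rates have to be measured on the target, and the calculation can be run on a PC:

    ```c
    /* Sketch only: estimate the number of HeapBuf blocks from the backlog that
     * builds up during the worst-case input burst, doubled per the 2x principle.
     * All figures below are placeholders, not measured values. */
    #include <stdio.h>

    int main(void)
    {
        const double worstInRate  = 2000.0;  /* blocks/s during the worst burst      */
        const double worstOutRate = 1200.0;  /* blocks/s the output path can sustain */
        const double burstTime    = 0.005;   /* s, longest back-to-back input burst  */
        const double margin       = 2.0;     /* "2x principle" safety factor         */

        double backlog   = (worstInRate - worstOutRate) * burstTime;
        int    numBlocks = (int)(backlog * margin + 0.5) + 1;  /* +1 for the block in flight */

        printf("suggested HeapBuf blocks: %d\n", numBlocks);
        return 0;
    }
    ```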