linking warning: section spans page boundary: not allowed before CPU revision 3.0

Hi,

For C5510 CPU revision 2.1, I got a linking warning: section ".SARAM_C$heap" (0x2c000) spans page boundary: not allowed before CPU revision 3.0.

This type of warning does not appear on CCS2.20 (CGT v2.56); it shows up on CCSv4 (CGT v4.3.7). Do you have any ideas on how to tackle this warning?

The map file:

.SARAM_C$heap
*            0   [ 0002c000 ]  00016000          *   00011f80   UNINITIALIZED
                 [ 0002c000 ]  00016000          *   00011f80   --HOLE--

Thanks,

Yuhua

  • Only code sections may cross hardware page boundaries.  Hardware pages are 64K words aligned on 64K word boundaries.

    You show a section starting at byte address 0x2C000.  Apparently that section is larger than the amount of space (0x12000 bytes, 0x9000 words) remaining in the hardware page.

    The very old (January 2003) CGT 2.56 tools did not check for the boundary-crossing restriction (even though the restriction existed).  The newer tools (e.g. CGT 4.3.7) do check for the restriction and issue a warning when it is violated.


  • Hi, Paul,

    Thanks for your reply.

    Could you elaborate a little bit on "You show a section starting at byte address 0x2C000.  Apparently that section is larger than the amount of space (0x12000 bytes, 0x9000 words) remaining in the hardware page." ? Where is this "0x12000" coming from ?

    Yuhua


  • Hardware pages are 64K words (0x20000 bytes) starting on 64K word boundaries (byte addresses 0, 0x20000, 0x40000, ...).

    The section located at 0x2C000 is on the page starting at 0x20000.  The next page boundary above the byte address 0x2C000 is at byte address 0x40000.  So the number of bytes remaining on the page starting at 0x20000 is computed as 0x40000 - 0x2C000 = 0x12000; a quick sketch of that arithmetic is below.
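
    (A quick plain-C sketch of that bookkeeping, in case it helps; nothing C55x-specific is assumed, and the start address is just the one from your map file.)

    #include <stdio.h>

    #define PAGE_BYTES 0x20000UL   /* one hardware page: 64K words = 128K bytes */

    int main(void)
    {
        unsigned long start = 0x2C000UL;   /* byte address where the section begins */

        /* Round down for the page the section sits on, add one page for the next
         * boundary, and the difference is the room left on the current page. */
        unsigned long page_start    = (start / PAGE_BYTES) * PAGE_BYTES;
        unsigned long next_boundary = page_start + PAGE_BYTES;
        unsigned long bytes_left    = next_boundary - start;

        printf("page starts at      0x%lX\n", page_start);
        printf("next boundary at    0x%lX\n", next_boundary);
        printf("bytes left on page  0x%lX\n", bytes_left);
        return 0;
    }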


  • Hi, Paul,

    Thanks for the explanation.  I re-calculated 0x40000 - 0x2C000; it is actually 0x14000, so there are 0x14000 bytes left in this page.  The length of the heap is 0x11F80 words (0x23F00 bytes), so it doesn't fit in this page.  Do you have a suggestion for how to handle this kind of problem?

    Yuhua

  • Ah, yes.  Apparently I need to brush up on my math skills.  No wonder my numbers confused you.

    Unfortunately if you need/want a heap greater than 64K words there is no obvious easy way to do it.  The heap allocator in the runtime support cannot use a memory chunk that crosses a page boundary, so 64K is the max.

    You could (with a good deal of effort) re-work the existing heap allocator (or start from scratch) to implement an allocator that can deal with a larger heap.  malloc and friends are in the runtime support file memory.c.  The source is distributed in the file rtssrc.zip.  A rough sketch of the idea is below.
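
    (For illustration only; this is not the TI allocator, just a minimal sketch of the "start from scratch" idea: manage two page-confined pools with a trivial bump allocator so that no single chunk ever crosses a hardware page.  The pool base addresses and sizes are hypothetical placeholders that would really come from your linker command file.)

    #include <stddef.h>

    /* Hypothetical page-confined pools.  The base addresses and sizes would
     * come from the linker command file, one pool per 64K-word hardware page. */
    #define NPOOLS 2

    typedef struct {
        char   *base;   /* start of the pool (must not cross a page boundary) */
        size_t  size;   /* pool size in bytes */
        size_t  used;   /* bytes handed out so far */
    } pool_t;

    static pool_t pools[NPOOLS];

    void big_heap_init(char *b0, size_t s0, char *b1, size_t s1)
    {
        pools[0].base = b0;  pools[0].size = s0;  pools[0].used = 0;
        pools[1].base = b1;  pools[1].size = s1;  pools[1].used = 0;
    }

    /* Trivial bump allocator: hand out memory from the first pool with enough
     * room.  A chunk never spans pools, so it never spans a hardware page.
     * (There is no free list, so this only suits allocate-once use; a real
     * malloc replacement would need per-pool bookkeeping like memory.c keeps.) */
    void *big_heap_alloc(size_t nbytes)
    {
        int i;
        for (i = 0; i < NPOOLS; i++) {
            if (pools[i].size - pools[i].used >= nbytes) {
                void *p = pools[i].base + pools[i].used;
                pools[i].used += nbytes;
                return p;
            }
        }
        return NULL;   /* no single pool has enough contiguous room left */
    }

    Each pool would be a separate section, smaller than 64K words and placed within its own page in the linker command file, and big_heap_init would be called once at startup.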


  • Hi, Paul,

    Two more questions pop up:

    1. Is CPU revision 3.0 available for the C5510?  Does the 64K-word maximum still apply for revision 3.0?

    2. If I leave this warning in my project, what will normally happen when the program runs?

    Yuhua

  • Hi Yuhua,

    1. The C5510 always uses a revision 2 CPU (2.2 or 2.21, I believe).  Only the more recent C55x devices (5505, 5504, 5515, ...) use a CPU 3.x.  Unfortunately the 64K limit on the size of the heap still exists on rev 3 CPUs.  The intention was to update the allocator to support the more flexible rules of the rev 3 CPU, but as yet that has not been done, I think largely because typical users have not been big users of the heap.

    2. A number of bad things are possible.  Generally the heap allocator will behave in erratic and unpredictable ways.  Some examples: if the allocator manages to return a memory chunk that straddles a page boundary, code trying to index into that chunk will access the wrong addresses for the index values that would be expected to be on the "next" page.  Once the allocator starts utilizing the memory on a second page, it is likely to break its own internal data structures that maintain the free list.  It would be unusual for this warning to be safely ignored.  (I'm kind of wondering why we did not make it an error.)  I suppose if malloc never actually allocated any of the memory on the "other" page, things would work OK, but of course then there would have been no reason to make the heap that big in the first place.  A contrived sketch of the straddling-chunk case is below.
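
    (A contrived sketch of that first failure mode.  The chunk size, and the assumption that the returned chunk happens to straddle a page boundary, are made up purely for illustration.)

    #include <stdlib.h>

    int main(void)
    {
        /* Suppose the heap straddles a 64K-word page boundary and malloc hands
         * back a chunk whose tail lies on the far side of that boundary. */
        unsigned int *buf = (unsigned int *)malloc(0x8000u * sizeof(unsigned int));
        if (buf == NULL) {
            return 1;
        }

        /* On a pre-rev-3 CPU a data access cannot cross the page boundary, so an
         * index that "should" land on the next page instead resolves to the wrong
         * physical address; this store silently corrupts whatever lives there
         * (possibly the allocator's own free-list bookkeeping). */
        buf[0x7FFF] = 0xDEADu;

        free(buf);
        return 0;
    }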