This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

EMAC evaluation

Other Parts Discussed in Thread: OMAP-L137, SYSBIOS, TMS320C6472

I'm trying to evaluate the EMAC of C6472, using a gigabit connection.

So far, I'm just trying to see how fast it can send data out. The best results I got were when I placed all my data in L2 and enabled all the caches; I got up to 300 Mb/s.

The problem is that when I do this, I sometimes seem to 'miss' packets; that is, packets I send out never reach the computer I am connected to. This happens even though I still get a TX interrupt for every packet I send. For example, I send out 10000 packets and get 10000 TX interrupts, but only around 7000 packets arrive.

I'm using the EMAC CSL on the TI C6472 EVM board, through EMAC0, connected directly to a computer, and I'm checking the received packets with Wireshark.

Has this happened to anyone? Any idea why this can happen?

  • Hi.

    Is the figure of 7000 received packets exact and consistent across runs, or does it vary widely around 7000 with each run?

  • It's not exact and not consistent from run to run. I've seen it go as low as around 5500 and as high as 9000.

    Also, the losses usually come in bursts. I send the packets with an ID, and it usually looks like 5 to 20 consecutive packets are missing at a time. This number is not constant either.

  • To me it seems that the packets are getting dropped by the receiving device (the PC), probably because the Ethernet frame CRC (FCS) is found to be bad on reception. The reason for the bad CRC could be that your PHY connection (i.e. your Ethernet cable) is of poor grade. You need at least CAT-5 for Gigabit Ethernet to work properly, and even better if you can use STP CAT-5 or higher. It could also be that your cable is sitting next to some electromagnetic noise source (e.g. a big electric motor), which is corrupting data over the PHY link. If there is such a thing nearby, move away from it.

    My suggestion for now is to bypass the cable by doing a loopback test to see if packets are still lost. If not, then you know it is the cable. If yes, then the packets are already being generated with errors before they ever reach the cable, so the problem must be on your side (i.e. the board or the DSP or something).

    Not sure how much you know about networking and protocols, but keep in mind that the Ethernet standard only covers layers 1 and 2 (in the TCP/IP or OSI models), so it has no error correction or retransmission capability to improve reliability when packets are corrupted during transmission. For that you will need TCP (layer 4), or you could write your own layer on top of Ethernet to address reliability, as sketched below.
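
    If you do roll your own layer on top of raw Ethernet, a minimal sketch of the idea in plain C (the header layout and names below are my own, not from any TI library) is a sequence number in the payload plus gap detection on the receiver:

    #include <stdint.h>

    /* Hypothetical header carried at the start of each Ethernet payload. */
    typedef struct
    {
        uint32_t seq;        /* monotonically increasing sequence number  */
        uint32_t payloadLen; /* number of valid payload bytes that follow */
    } RawEthHdr;

    /* Receiver side: count packets that never arrived by watching for
       jumps in the sequence number (assumes in-order delivery). */
    static uint32_t expectedSeq = 0;
    static uint32_t lostCount   = 0;

    void onPacketReceived(const RawEthHdr *hdr)
    {
        if (hdr->seq != expectedSeq)
        {
            lostCount += hdr->seq - expectedSeq;  /* size of the gap */
        }
        expectedSeq = hdr->seq + 1;
    }

    Adding an ACK/retransmit scheme on top of this quickly starts to look like reinventing TCP, which is why the NDK route may be less work in the long run.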

    If you choose to go the TCP route, you might want to consider using the NDK and DSP/BIOS, since they are well established and free of charge. The NDK gives you full layer 3 (IP) and layer 4 (TCP/UDP) support and is (apparently) easy to use. It also has layer 5 application support (HTTP, FTP etc). I have not tried it myself, because I don't like the idea of an RTOS that takes away control over peripherals, and possibly performance.

    Where did you get the CSL for the C6472? I am asking because I am having huge trouble understanding the CSL EMAC example project for the C6474 DSP that came with my EVMC6474 dev board. It seems that you are much further along in your understanding of the CSL EMAC module's APIs than I am, because you can already send and receive EMAC packets successfully. So, if the two DSPs' (C6474 and C6472) EMAC modules are the same, you might be able to help me.

  • Thank you. I'll try upgrading my cable.

    Since I do not need TCP/IP, I will just use raw Ethernet. Hopefully it really is the cable, although in that case I don't see why I don't sometimes lose packets at slower rates, too.

    I'll also try the loopback. It may shed some light on things.

     

    As for the CSL, I use the CSL for the 6486, which I got with an EVM board for that DSP.

    I used to work with the C6474, and also had problems with the EMAC there. It is not exactly the same as the C6472 one (the C6472 has two EMACs).

    IIRC, the problems were mostly PHY related. I think I looked through the forums here and found a function that initialized the PHY, and then changed some of the csl_mdio code so that it would only try to connect to PHY1 (which is where my PHY was connected).

    Does the code as it is work for you? That is, does it pass the internal loopback test?

  • Yes, the code works, but ONLY for internal (i.e. local) loopback, not for PHY (external) loopback. I have raised this issue in another post. Have a look and see:

    http://e2e.ti.com/support/dsp/c6000_multi-core_dsps/f/439/t/51351.aspx

    I am still struggling and still getting no feedback. It has been more than a week now.

    I am glad to hear that I am not the only one struggling with the C6474 CSL EMAC module and APIs. I think it's ridiculous how complicated they make the example projects while expecting us to understand them in order to work the peripherals.

    Anyway. Have a look and let me know what you think.

  • Hi All,

    I did take a look at this and consulted with the developer. It seems that PHY loopback is not tested anywhere, and it has already been removed from the next CSL release.

     

    Thanks,

    Arun.

  • Hi Arun.

    Thanx for the feedback.

    What does this mean? What action will be taken to replace/resolve this?

    I have been struggling for three weeks now to adapt the code of the complicated EMAC CSL example project to simply send and receive packets to/from a PC, but because of the complexity of this example, the problem with the PHY loopback, and slow support, I have not really made any progress.

    Could you perhaps assist me with this?

    Estian.

  • Hi Estian,

    I too am trying to get a working example of the NDK on the C6472 evaluation board.

    To answer one of your questions I believe the chip support library lives in "C:\Program Files\Texas Instruments\ccsv4\emulation\boards\evmc6472\cslr_inc".

    But this is not part of the NDK src (though it may have been built from it).

    I have found the NDK on the C6472 unstable, with it consistently locking up after reporting which core the NDK is running on.

    My problem comes from the NDK examples being built using only SL2RAM and LL2RAM, whereas I need a custom platform using the DDR2 RAM.

    Cheers.

  • Hello

    I have an OMAP-L137 EVM and want to use its Ethernet.

    I ran an example project downloaded from spectrumdigital.com (named emac_loopback), but I get 4 or 5 errors after running it.

    Is there any document or article to help me?

  • Fred,

    I am stuck at exactly the same position as you are, and haven't been able to figure out much. In case you have found some pointers, please share them with me. Could it be something to do with initialization of the DDR2 memory controller? In the evminit.c file I see that only the PLLs are being initialized, but not the DDR2 memory controller. I tried calling DDR2_533_32_Setup() via the GEL file supplied with the installation, but with no success.

    What I could guess was that since the EVM_pllc_init() function gets called through the sample app (while bringing up the NDK), some init code must be placed in SL2RAM / LL2RAM. Otherwise a GEL file might have to be used.

  • Try searching "C:\Program Files\Texas Instruments" and its subdirectories for PDFs as a start.

    The NDK has a user guide (spru523_ug.pdf) and reference guide (spru524_pg.pdf).

    I have found most things do not compile out of the box and you need to configure paths and library locations etc.

    Post the errors and we'll see what's failing.

    Cheers.

  • I believe the NDK does require the use of some LL2RAM ... I do not know if this is just for cache or whether it uses special custom blocks.

    I have found the most stable memory map so far is:

    // Define a variable to set the MAR mode for DDR2 as all cacheable
    var Cache        = xdc.useModule('ti.sysbios.family.c64p.Cache');
    Cache.MAR224_255 = 0x0000000f;

    // Create a heap in internal shared memory
    // This heap is used for the NDK
    var heapMemParams          = new HeapMem.Params();
    heapMemParams.size         = 0x3000000;
    heapMemParams.sectionName  = "systemHeap";
    Program.global.heap0       = HeapMem.create(heapMemParams);
    Memory.defaultHeapInstance = Program.global.heap0;
    Program.heap = 0x9000000;

    // Apply memory section map
    Program.sectMap["systemHeap"]         = "DDR2";
    Program.sectMap[".far"]               = "DDR2";
    Program.sectMap[".cinit"]             = "DDR2";
    Program.sectMap[".bss"]               = "DDR2";
    Program.sectMap[".const"]             = "DDR2";
    Program.sectMap[".text"]              = "DDR2";
    Program.sectMap[".code"]              = "DDR2";
    Program.sectMap[".data"]              = "DDR2";
    Program.sectMap[".heap"]              = "DDR2";
    Program.sectMap[".taskStackSection"]  = "SL2RAM";
    Program.sectMap[".stack"]             = "SL2RAM";
    Program.sectMap[".far:NDK_OBJMEM"]    = {loadSegment: "LL2RAM", loadAlign: 8};
    Program.sectMap[".far:NDK_PACKETMEM"] = {loadSegment: "SL2RAM", loadAlign: 128};

    This is with a custom RTSC platform file turning on all caches to the max and correcting the DDR2 memory default bug, which put the origin at 0xDFFFFFFF instead of 0xE0000000.

    But even with this I still get the NDK to hang regularly when it displays which core it is running on. All I do is reset and then run again, and it runs in 3 out of 10 attempts (which is not acceptable) ... If it were the DDR2 controller setup I would have thought the failure would be permanent rather than random.

     

    If anyone has any more knowledge or can see problems with the above memory map, feel free to comment.

    Cheers.

  • When you say it runs 3 out of 10 times, are you sure you are able to transact over Ethernet during those 3 runs? What I have seen is that when running out of DDR2, even if the link status says it's up, the board does not respond to ping.

  • Once I see the NDK has booted (got past the core 0 message) I can reliably ping the IP address.

    Are you using a static IP address for the DSP?

    In the output from the debug window below, I find the same code hangs at the "Core Number on which NDK is running = 0" point.

    The only thing I do is reset the system (through target menu) and restart.

    Once connected it does appear to transfer the TCP stream fine ... it is just getting the NDK to boot ... it feels like a support chip comms failure or a threading issue in the NDK.

    Cheers,

     

    MAC address read from EFUSE

    MAC Address read: 00-27-BC-7F-49-CC

    Core Number on which NDK is running = 0 

     EMAC should be up and running 

    EMAC has been started successfully

    Registeration of the EMAC Successful

    Network Added: [1]->192.168.0.21

    STATUS - TCP Connection - Port 56789

    TCPConnectionThread() stack size is 5000000, Stack used is 828.

  • OK ... I have tracked the NDK lockup to a function called EVM_pllc_init().

    Once this is called, it is hit and miss whether things continue.

     

    This is where I need a TI support member to answer a question.

    If I move the .text segment out into DDR2 RAM (i.e. running code from DDR2), should the EVM_pllc_init() function still set up the PLL correctly?

    There are a lot of hard-coded "for loops" in the function ... I would expect these to run slower, but running the function from DDR2 appears to have an effect.
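
    One generic C caveat (not something I have confirmed against the evminit.c build settings, just an assumption worth checking): empty delay loops on a non-volatile counter can be shortened or removed by the optimizer, and their duration changes with where the code runs and whether it is cached. A sketch of a delay the compiler cannot drop:

    /* Sketch of a delay loop the optimizer cannot remove; the timing is
       still only approximate and still depends on where the code executes. */
    static void delay_loops(volatile unsigned int count)
    {
        while (count--)
        {
            ;   /* each decrement of the volatile counter really happens */
        }
    }

    /* e.g. roughly in place of "for (i = 0; i < 4000; i++);" */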

     

    Is anyone else working in this area ... using the NDK from DDR2 memory on the C6472?

    Or is a proper library coming in the chip support package due on the 15th?

    Cheers.

     

  • UDP or TCP? ... from "packets" I guess you're doing UDP ... UDP does not guarantee that all packets are received ...

    It is better to use TCP and count/time bytes as performance drops off ... have you tried using DDR2 with the NDK at all?
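
    For the counting/timing I mean something like the rough PC-side sketch below (plain sockets, nothing TI-specific; it assumes the board is the TCP server and reuses the IP address and port from the log earlier in this thread):

    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int                s     = socket(AF_INET, SOCK_STREAM, 0);
        long long          total = 0;
        char               buf[64 * 1024];
        struct sockaddr_in addr;
        time_t             start;
        ssize_t            n;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(56789);                      /* port from the log above */
        inet_pton(AF_INET, "192.168.0.21", &addr.sin_addr);  /* board IP from the log   */

        if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            return 1;

        start = time(NULL);
        while ((n = recv(s, buf, sizeof(buf), 0)) > 0)
        {
            total += n;
            /* "+ 1" avoids divide-by-zero during the first second (coarse timing) */
            printf("%.1f Mbit/s\r",
                   (double)total * 8.0 / 1e6 / (double)(time(NULL) - start + 1));
        }
        close(s);
        return 0;
    }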

    Cheers.


  • Finally I have an answer! There are two issues that we are discussing when program code is put in DDR2:

    1) Erratic behaviour (lockup after reporting which core the NDK is running from).

    2) NDK not coming up (here the EMAC comes up, showing link status, but the board cannot be pinged).

    The first problem lies with the initialization of the PLL controllers. EVM_pllc_init(), which Fred pointed out, is buggy. Reserved registers, as well as reserved register bits, are being modified in the code. For PLL1 the only modifiable divider register is PLLDIV10, none else. For PLL3 no register is modifiable, except for writing 1 to bit 0 of PLLCTL. Further details about configuring the PLL controllers can be found in the C6472 datasheet, tms320c6472[1].pdf. I have corrected these, moved the PLL init routine into the GEL file, and disabled this function completely. I do the EMIF init there as well; the init_emif() function in the installation-supplied GEL file works correctly. Note that the init_PLL() function in the supplied GEL file also has inconsistencies, like EVM_pllc_init().

    The second problem lies with one of the NDK libraries. The memory section .far:NDK_PACKETMEM needs to be placed in SL2RAM / LL2RAM; all other sections can go to DDR2. This is either a bug or an undocumented limitation of NDK 2.1.0.

    With the above two corrections everything works fine for me. BTW, I am using the 700 MHz eInfochips EVM and EVMC6472_SDK_Setup_v_1_1. It would be good if somebody could verify and corroborate the above.

  • Congrats Viswanath !!!

    I would love to verify this ...

    The easiest way would be if you could upload a simple sample project (just zip the project directory).

    TI has a file upload facility ... i.e. when I click on your name there is an "uploaded files" section which I can view ... (I have never used it, but the theory is there!).

    Cheers.

  • Viswanath,

    Verifying your findings ...

    Pg 147 of SPRS612C
    SPRU806 documents a superset of features, not all of which are supported by the C6472, and says to use the datasheet (SPRS612C) as the guide.
     
    Pg148 of SPRS612C
    States only PLLDIV10 has a programmable divider.
     
    Pg150 of SPRS612C
    Only registers documented in table 7-20 for PLL1 should be modified.
     
    Pg171 of SPRS612C
    Table 7-45 states only PID and PLLCTL are valid for PLL3

    Also, I am using EVMC6472_SDK_Setup_v_3_2, which still has these problems ...
    eInfochips have just released a new one, EVMC6472_SDK_Setup_v_3_3 ... I wonder if these problems will magically disappear?
    Cheers.

  • Viswanath,

    I have just checked the EVMC6472_SDK_Setup_v_3_2 against the new 3.3 version.

    There are changes to the NDK support files, but not in the EVM_pllc_init() function.

    Could you please post the changes that you have made to this function?

     

    I am thinking that as soon as the ".sysmem" section is set to "DDR2" memory (required to get the default heap into DDR2 memory), then EVM_pllc_init() is placed in and run from DDR2 .... but this is the code that sets up the PLL for DDR2 memory, and it fails intermittently. What are your thoughts on this?

    When you moved this init code to the GEL file, it may mean that it is not run from DDR2 and therefore succeeds.

    I need to see your changes to EVM_pllc_init() and investigate the GEL files.

    Cheers.

  • Fred,

    SDK 3.3 still says NDK 2.1.0, so I don't think the example code would have changed.

    This is what I feel, correct me if I am wrong:

    GEL file functions run before the CPU actually starts executing the program, e.g. on connecting to the target or when loading the .out. So putting the PLL, DDR2 etc. init code there makes more sense to me. Otherwise all resources pertaining to the init function should be placed in L2, so that everything is reliably up before you start accessing peripherals (a sketch of that alternative is below).
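
    For the "keep the init code in L2" alternative, a possible sketch (the section name is my own; the CODE_SECTION pragma is standard in the TI C6000 compiler, and the extra linker command file then pins that section to internal RAM):

    /* In the C source that defines the init routine: place it in a
       dedicated code section so the linker can keep it out of DDR2. */
    #pragma CODE_SECTION(EVM_pllc_init, ".initcode")
    void EVM_pllc_init(void);

    /* And in a supplementary linker command file:

       SECTIONS {
           .initcode > LL2RAM
       }
    */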

    I have placed the entire code in EVM_pllc_init() under #if 0, and replaced the init_PLL portion in evm6472.gel with the following:

    /* Board Options */
    #define CLKIN1FREQ  25          // CLKIN1 frequency in MHz
    #define CLKIN2FREQ  25          // CLKIN2 frequency in MHz
    #define CLKIN3FREQ  26.667      // CLKIN3 frequency in MHz

    /*--------------------------------------------------------------*/
    /* init_PLL()                                                   */
    /* PLL initialization                                           */
    /*--------------------------------------------------------------*/
    #define PLLCTL_1        0x029A0100      // PLL1 control register
    #define PLLM_1          0x029A0110      // PLL1 multiplier control register
    #define PLLCMD_1        0x029A0138      // PLL1 controller command register
    #define PLLSTAT_1       0x029A013C      // PLL1 controller status register
    #define DCHANGE_1       0x029A0144      // PLL1 PLLDIV ratio change status register
    #define SYSTAT_1        0x029A0150      // PLL1 SYSCLK status register
    #define PLLDIV10_1      0x029A0178      // PLL1 controller divider 10 register

    #define PLLCTL_2        0x029C0100      // PLL2 control register
    #define PLLCMD_2        0x029C0138      // PLL2 controller command register
    #define PLLSTAT_2       0x029C013C      // PLL2 controller status register
    #define DCHANGE_2       0x029C0144      // PLL2 PLLDIV ratio change status register
    #define SYSTAT_2        0x029C0150      // PLL2 SYSCLK status register

    #define PLLCTL_3        0x029C0500      // PLL3 control register

    init_PLL()
    {
        int i;

        /** PLL 1 configuration *****************************************/
        {

            //int PLLM_val =    25; // 625 MHz
            int PLLM_val =    28; // 700 MHz
            /* GEM trace logic */
            int PLLDIV10_val = 3;
           
            /* In PLLCTL, write PLLEN = 0 (bypass mode).*/
            *(int *)PLLCTL_1 &= ~(0x00000001);
            /* Wait 4 cycles of the slowest of PLLOUT or reference clock source (CLKIN).*/
            for (i=0 ; i<100 ; i++);
            /*In PLLCTL, write PLLRST = 1 (PLL is reset).*/
            *(int *)PLLCTL_1 |= 0x00000008;
            /*If necessary, program PREDIV and PLLM.*/
            *(int *)PLLM_1 = PLLM_val - 1;
           
            /*If necessary, program PLLDIV1n. Note that you must apply the GO operation
            to change these dividers to new ratios.*/

            /* Check that the GOSTAT bit in PLLSTAT is cleared to show that no GO
                    operation is currently in progress.*/
            while( (*(int *)PLLSTAT_1) & 0x00000001);


            /* Program the RATIO field in PLLDIVn to the desired new divide-down rate.
                    If the RATIO field changed, the PLL controller will flag the change
                    in the corresponding bit of DCHANGE.*/
            *(int *)PLLDIV10_1 = (PLLDIV10_val - 1) | 0x8000;

            /* Set the GOSET bit in PLLCMD to initiate the GO operation to change
                    the divide values and align the SYSCLKs as programmed.*/
            *(int *)PLLCMD_1 |= 0x00000001;

            /* Read the GOSTAT bit in PLLSTAT to make sure the bit returns to 0
                    to indicate that the GO operation has completed.*/
            while( (*(int *)PLLSTAT_1) & 0x00000001);

            /* Wait for PLL to properly reset.(128 CLKIN1 cycles).*/
            for (i=0 ; i<1000 ; i++);

            /* In PLLCTL, write PLLRST = 0 to bring PLL out of reset.*/
            *(int *)PLLCTL_1 &= ~(0x00000008);

            /* Wait for PLL to lock (2000 CLKIN1 cycles). */
            for (i=0 ; i<4000 ; i++);

            /* In PLLCTL, write PLLEN = 1 to enable PLL mode. */
            *(int *)PLLCTL_1 |= (0x00000001);

            GEL_TextOut("PLL1 has been configured. CPU is running at %dMHz.\n",
                        "init_PLL",1, 1, 1, CLKIN1FREQ * PLLM_val);
        }
       
        /* PLL2 configuration (EMAC) */
        {

            /* In PLLCTL, write PLLEN = 0 (bypass mode).*/
            *(int *)PLLCTL_2 &= ~(0x00000001);
            /* Wait 4 cycles of the slowest of PLLOUT or reference clock source (CLKIN).*/
            for (i=0 ; i<100 ; i++);
            /*In PLLCTL, write PLLRST = 1 (PLL is reset).*/
            *(int *)PLLCTL_2 |= 0x00000008;
       
            /* Wait for PLL to properly reset.*/
            for (i=0 ; i<4000 ; i++);

            /* In PLLCTL, write PLLRST = 0 to bring PLL out of reset.*/
            *(int *)PLLCTL_2 &= ~(0x00000008);

            /* Wait for PLL to lock */
            for (i=0 ; i<4000 ; i++);

            /* In PLLCTL, write PLLEN = 1 to enable PLL mode. */
            *(int *)PLLCTL_2 |= (0x00000001);
           
            GEL_TextOut("PLL2 has been configured. Output clock is %dMHz.\n",
                        "init_PLL", 1, 1, 1, CLKIN2FREQ * 20);
        }

        /** PLL 3 configuration (DDR2) *****************************************/
        {

            /*In PLLCTL, write PLLRST = 1 (PLL is reset).*/
            *(int *)PLLCTL_3 |= 0x00000008;

            /* Wait for PLL to properly reset.(128 CLKIN1 cycles).*/
            for (i=0 ; i<1000 ; i++);

            /* In PLLCTL, write PLLRST = 0 to bring PLL out of reset.*/
            *(int *)PLLCTL_3 &= ~(0x00000008);

            /* Wait for PLL to lock (2000 CLKIN1 cycles). */
            for (i=0 ; i<4000 ; i++);

            /* In PLLCTL, write PLLEN = 1 to enable PLL mode. */
            *(int *)PLLCTL_3 |= (0x00000001);
           
            GEL_TextOut("PLL3 has been configured. DDR2 clock is %fMHz.\n",
                        "init_PLL", 1, 1, 1, CLKIN3FREQ * 20);
        }
    }

    I guess you are familiar with how the GEL file runs: OnTargetConnect() does the needful. Like I said earlier, init_emif() seems to work OK.

    As regards to NDK issues, I have put everything in DDR2 using the following in TCF file:

    bios.setMemDataHeapSections(prog, bios.DDR2);
    bios.setMemDataNoHeapSections(prog, bios.DDR2);
    bios.setMemCodeSections(prog, bios.DDR2);

    and overridden for the necessary section using the following in a .cmd file:

            .far:NDK_PACKETMEM                   > SL2RAM

    Try out the above. Hope you get the same results!

  • Viswanath,

    I can guarantee that from version 3.2 to 3.3, files have changed in the NDK directory.

    "C:\Program Files\Texas Instruments\ccsv4\emulation\boards\evmc6472\ndk_2_1_0\packages\ti\ndk\src\hal\evm6472\eth_c6472\ethdriver.c"

    is one that has. I made a backup specifically to windiff against. I would guess anything under the hal\evm6472 directory could change without the NDK version number incrementing.

     

    Yes, the GEL files should work as per the "runs before the program executes" theory ... although today I have not had much success ... I find that the system does boot/run, but when any allocation from the heap occurs things start to fall apart ... i.e. the stack ... I will try your example init function ... are you able to post your entire BIOS config file?

    Can you confirm whether you are using BIOS 5 or BIOS 6? ... The TCF file suggests BIOS 5?

    Cheers.

  • In that case I will definitely be interested in having a look at v3.3.

    I am using BIOS 5, and I am doing mallocs in my app, so I believe the heap is working. There is not much else in the BIOS config, except that I am configuring the L1 and L2 caches.

    Try the GEL file approach and see.

  • Also, the GEL files have changed in v3.3, but they still write to invalid registers :(

    I am using BIOS 6 ... also using the new and delete operators in C++ code ... I am wondering if these require more setup?

    I will try your init function in the GEL file.

    Cheers.

  • Hey Fred,

    I just installed SDK 3.3, and I saw that they have got the GEL file correct this time. In fact init_PLL() is almost the same as what I copy-pasted. You could jolly well use that, since they have corrected a few other functions as well.

    However EVM_pllc_init() is still the same. I'll have to see if this NDK resolves the DDR2 issue.

  • Reporting my findings:

    C6472 SDK 3.3 does not resolve the DDR2 issues with the NDK. .far:NDK_PACKETMEM still needs to be placed in SL2RAM / LL2RAM.

  • First off, I'd like to apologize for the bug-related issues you have come across while using the NDK examples. We are in the process of cleaning these up and hope to release them soon. We had already identified and fixed two of the issues that you raised. First, our latest CSL v3.0.6.3 supports NDK_PACKETMEM in DDR memory. Though this won't formally be released for another week or so, you can download it with the following link: http://software-dl.ti.com/sdoemb/sdoemb_public_sw/csl/CSL_C6472/latest/index_FDS.html  Secondly, the EVMC6472.gel file (attached) has been updated, including the PLL register bit fields. It will be included with the next eInfochips release package.


     

    evmc6472.gel
  • tscheck,

    Thanks for the reply. I would definitely be interested in the new NDK.

    SDK 3.3 for the C6472 has the corrected GEL files, no problem with them. When you update the NDK, please note that the same PLL register corrections need to be made in EVM_pllc_init(), in packages\ti\ndk\example\tools\evm6472\evminit.c.

    Thanks.

  • Yes, we agree. The plan is to incorporate the PLL register corrections in EVM_pllc_init, to match the fixes in the gel file. Also, we will move the EVM_pllc_init function from HwPktOpen() in ethdriver.c to EVM_init() in evminit.c. That way it is set in the proper location, and it won't require the gel.

    Regards,

    Travis

  • Hello Viswanath,

    I too am facing this same problem with the NDK not working (cannot ping the board) when the memory sections are placed in DDR2.

    As suggested by you, how can I place .far:NDK_PACKETMEM in L2RAM? Kindly help me out with it. I tried to manually edit the linker.cmd file, but when I run Build, it overwrites the linker.cmd file and the .far section is once again mapped to DDR2.

    How can we change the memory section for .far in the linker.cmd file? Kindly suggest.

  • Hi ya,

    A quick word of advice would be to make sure you are using the latest versions of everything (bugs get fixed) ... do not just use the installed files from the CD-ROM.

    Also, using BIOS 5 rather than BIOS 6 will give you a lot fewer headaches.

    Cheers.

  • Are you running the BIOS 6 CCSv4 version of the project? The problem is that the NDK assumes all data is in internal memory by default. Since in this case all the data is in external memory, you should define EXTERNAL_MEMORY and compile the project. This will make the NDK aware that the data is in external memory.
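
    For example (the exact place where the NDK example tests this macro may differ in your version, so treat this as a sketch and check your project's build options): either add the predefined symbol in the project settings, or put a global define at the top of the file that contains main().

    /* Option 1: project build options
       C6000 Compiler -> Predefined Symbols -> add EXTERNAL_MEMORY        */

    /* Option 2: global define in the file that contains main(),
       before any NDK headers are included                                */
    #define EXTERNAL_MEMORY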

     

    Thanks,

    Arun.

  • Yes, I am using the BIOS 6 CCSv4 project. How do I define this EXTERNAL_MEMORY? Please provide an example.

    I would also need to change the placement of the memory sections through the linker.cmd file. How can that be done manually, since by default CCSv4 creates the linker.cmd file with all the memory sections under DDR2? How can I proceed with it?

  • Santosh,

    You are probably trying to edit the linker file generated as a result of TCF file compilation. Add another, separate linker file to your project with the following:

    SECTIONS {
            .far:NDK_PACKETMEM                   > LL2RAM
    }

    This will override the global settings in the TCF. In fact, the generated linker file will now show .far:NDK_PACKETMEM in LL2RAM.

  • You can do a #define in the global section of the file that has main(). I am not sure how to manually edit the link.cmd file, because the file is regenerated by BIOS every time during compilation, so whatever changes you make will be cleared. Let me talk to someone on the BIOS team and I will update you on this.

     

    Thanks,

    Arun.

  • Please ignore my previous post. Here is the project with the memory map changed to have the code in L2RAM. If you want to know how to do it, please take a look at the platform wizard in the CCS help. Look specifically for the RTSC platform wizard.

     

    Thanks,

    Arun

    Hello_world_ccsv4.zip
  •  Hi Vishwanath,

    I tried your suggestion, but I could not override the default CCS4 linker mapping. I confirmed it by looking into the ".map" file and ".xdl" file generated by CCS4 after compiling and linking the code. The ".far" section is still getting mapped to DDR2 memory.

    Regards

    Rajesh S Bagul

     

  • Hi Arun,

    The RTSC Platform Wizard allows you to create memory segments (L2RAM, RXRAM, DDR2_1, etc.), but there is no freedom to map a particular section (.far, .external_data, etc.) to a given memory block (L2RAM, DDR2_1, DDR2_2, etc.).

    In a current project that we are working on, we wish to map a particular section (".aifdata") to DDR2 RAM, but the default mapping assigns this section to L2RAM.

    We are really stuck and are indeed cursing CCS4 for snatching away this simple freedom from the user.

    Regards

    Rajesh S Bagul

     

  • You can open the .cfg file in the configuration project and add your section under "Place user sections". An example is given below:

    /* Place user sections */
    Program.sectMap["systemHeap"] = Program.platform.stackMemory;
    Program.sectMap[".text"]      = "L1RAM";

     

    Let me know if you need additional data

  • tscheck,

    Has the NDK issue of placing ".far:NDK_PACKETMEM" in SL2RAM / LL2RAM or MSMC RAM been resolved in the latest version of the NDK -> ndk_2_20_04_26, available with MCSDK 2.0.0.11?

    Please let me know, since my application is facing a shortage of memory in L2 and MSMC RAM on the C6670.

     

    Thanks

    Santosh