Linking C and Assembler File



I wrote the following code:

 

main.c

#include<msp430.h>

extern void param(int);
extern int ret1(int);
extern void mult1(void);

void main()
{
 
  WDTCTL = WDTPW + WDTHOLD;   //holding watchdog timer
 
  P1DIR = 0X03;
  P1OUT = 0X00;
 
  int a = 0x01;               //variables
  int b;
 
  while(1)
  {
    P1OUT = 0X00;
    __delay_cycles(600000);
    param(a);                   //only one parameter is passed, via R12
    __delay_cycles(600000);
    b = ret1(0x02);             //here value is returned from asm program
    P1OUT = b;
    __delay_cycles(600000);
    mult1();
    __delay_cycles(600000);
  }
}
 
//function to be called from the asm program
unsigned long mult(unsigned int x, unsigned int y)
{
  return 1;
}

 

sample1.s43

#include "msp430.h"

          RSEG CODE
;...............................................................................      
          PUBLIC param
          EXTERN mult           ;mult is defined in the C file
param     mov.b  R12,&P1OUT
          RET
         
          PUBLIC ret1
ret1      mov.b R12,R13
          mov.b R13,R12
          ret

          PUBLIC mult1
mult1     mov.b #01h,R12
          mov.b #03h,R14
         
          call  #mult           ;calling mult function of C program
          mov.b R12,&P1OUT
          ret
         
         END

 

In this program, when I call the mult function of the .c file from mult1 in the assembler file, execution halts at that point. Please help me debug this problem.

  • Hi Aayush,

    Are you using CPUX and IAR? I think I remember that from some of your other posts. If so, you need to use CALLA instead of CALL and RETA instead of RET.

    Jeff

  • Thanks Jeff.

     

    RET is working properly, and I am using IAR. The problem arises when it returns from the C file to the assembler file...

  • Hey, thanks Jeff... changing that to CALLA and RETA has worked... Can you please tell me the difference between CALLA and a simple CALL? It would be very helpful...

  • You should read the User's Guide section about CPUX and the instruction set for complete information.

    CALLA pushes a 20-bit return address on the stack, using 2 words of stack.  CALL pushes a 16-bit return address on the stack, using 1 word of stack.

    RETA pops a 20-bit return address from the stack.  RET pops a 16-bit return address from the stack.

    IAR assumes you want to use the large code model on any processor that can use it (CPUX).  So when it compiles function calls and returns, it uses CALLA and RETA.  When mixing C and ASM, you must use a compatible model.  The library you link with must also use a matching model; again, IAR takes care of this for you.

    Also don't get confused by the "data model" in IAR -- that's a different setting from "code model".  IAR gives you easy access to the data-model choice, but you must tweak advanced settings to change the code model.  (Typically nobody uses the small code model on CPUX.)

    Jeff

  • Hi Jeff,

    I've never seen any options to change the core model, as IAR makes up its mind based on which device you select. Could you describe these advanced settings that need tweaking in a bit more detail? I would actually like to learn how to do this.

    Thanks

    darkwzrd

  • Normally, tweaking the model isn't necessary. The MSP430X large code model only affects call and return instructions. CALLA is two bytes larger and requires 2 more CPU cycles, while RETA requires one additional CPU cycle. Not much difference for function calls.

    However, if your code fits into the lower 64k, or you use assembly code or precompiled libraries which are not written for MSP430X, then the old non-X model needs to be selected.
    The typical user won't have such a setup, and for the rest, I think IAR assumes that these special customers are smart enough to find the required option or its description. A wrong assumption, as being advanced in programming doesn't necessarily mean being advanced in tool-using :)

  • Hi darkwzrd,

    I've never actually done it before, but all the command-line controls are in place both for the compiler and the linker.

    For the compiler, the switch you need is --core=430 (instead of --core=430X).  For the linker you would just specify the non-X version of the library you're using, for example dl430dn.r43 instead of dl430xsdn.r43.

    So can you do it from within the IAR Embedded Workbench, or would you have to switch over to a make utility (or similar scripts) to build the project?  I think trying to accomplish this from inside the EW would be a pain and confusing to any developers new to the project.  You would have to select the "wrong" target processor and tweak the linker command file in unexpected ways.  If I were doing it, I would use a make utility and have straightforward control over all the command-line switches, library selections, etc.  (But why would anyone do this in the first place?)

    By the way, "small code model" is not an IAR term at all.  It's a term I heard from users of other tools (like CCS or mspgcc).  But I still end up at the same conclusion: who would use the small code model, and why?  Just the extra stack usage maybe?  It wouldn't be for execution speed.

    Jeff

     

  • I did actually try the --core option, but I got a complaint when I tried it earlier today (perhaps something about the core already being defined, I don't remember). I was hoping for some easy way to do it through the IDE. Good point about the library, BTW - I didn't think of that!

    I could imagine either editing the definitions in the existing processor files (changing all files associated with the 5529, for example), or perhaps somehow creating a new entry. I have poked around in the internals before, and they mostly appear to be text files, but I've not needed to do much in that dark region of my hard drive just yet. The only thing I've had to do so far was change an erroneous address range definition related to one of the processors.

    The real challenge would be to guarantee that you actually edit all the necessary files. Many of the settings, from the core option to which registers (and associated bits) are visible, are based on the device setting.

    This approach is a bit too adventurous (dangerous!) for my tastes though. If there was an easy way to get the IDE to do it, then fine, but otherwise it isn't worth the risk.

     

    As for the argument of WHY someone would ever need it? Well, it is a tough sell.

    You presented stack usage as a reason. Of any reason possible, this is likely the best contender. However, I cannot imagine ever writing an application with a call tree that was really so deep. I'm also not going to use recursion, or the heap, so those wouldn't be a problem either. There are some 430X cores with pretty small RAM (the 5131, for example). You might be down to the hairy edge if you had some large buffers, but it is an unlikely scenario considering the small RAM size in the first place. Maybe if you didn't want to pay that extra 15 cents to move up to the 5151 for an extra 1K of RAM.

    A more likely case is that you are on the hairy edge with a part that has larger RAM. Instead of the edge of RAM (as in running out), you may be just on the edge of the RAM sector boundary. If you trimmed a few more bytes, it might be possible to turn off power to an extra sector of RAM (on a 5xxx) and reap the power savings.

    It's also possible to do some nice things with function pointer tables. On the 430 core, I could define all the interrupt vectors, in C, in one nice table where you can see all vectors at once, without using any pragmas. If I try to do this on a 430X core, it doesn't work well because I cannot put the now 20-bit function addresses into 16-bit locations. However, it's not critical, as it can be done in other ways.
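    Roughly, the kind of table I mean looks like this on a plain (non-X) 430 core, where function pointers are 16 bits and fit the vector slots directly (a sketch only - the names are made up, and actually placing the table into the vector segment is left to the linker configuration):

    typedef void (*isr_fn)(void);     /* a plain 16-bit code pointer on a 430 core */

    void unused_isr(void);
    void port1_isr(void);
    void timer_a0_isr(void);

    /* one entry per interrupt vector, reset vector last */
    const isr_fn vector_table[] =
    {
        port1_isr,        /* P1 interrupt   */
        unused_isr,       /* unused vectors */
        timer_a0_isr,     /* Timer_A0 CCR0  */
        /* ...remaining vectors... */
    };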

     

    I don't really use the 430X core parts for the extra code space; I use them for the added peripherals and other features that they tend to have over the 430 core parts. If I used the area outside of the 64K boundary, it would be for data storage, or perhaps a duplicate code image. Since I don't need the upper section for code, why can't I tell the compiler not to push 4 bytes? Really, it just bothers me more that IAR won't let me do it! =p

     

     

  • darkwzrd said:
    Maybe if you didn't want to pay that extra 15 cents to move up to the 5151 for an extra 1K of RAM

    You'd be surprised - in mass production, it is cheaper to torture a group of programmers for a week than to spend these additional 15 cents per manufactured device. (Luckily, I don't have this problem with my projects; they are low-volume, high-price projects.)

    But it's not only the increased stack usage (which increases greatly if you use the large data model), it is also two additional clock cycles on each call and one on the return. Sometimes it is just one cycle too much. Luckily it doesn't apply to ISR calls.

    darkwzrd said:
    I cannot put the now 20-bit function addresses into 16-bit locations.

    Sure you can - by typecasting them into INTs. :)
    Since the functions must reside in the lower 64k, there are no bits lost (well, the compiler doesn't know this, but who cares what the compiler knows? It has to do what I say!)

    darkwzrd said:
    I don't really use the 430X core parts for the extra code space; I use them for the added peripherals and other features that they tend to have over the 430 core parts

    Exactly my reason. Well, I do use the additional flash, but only as storage space, with handwritten assembly access functions.

    I use an older mspgcc compiler (the last 2.3.2 release). The trick is that the compiler simply does not know of the MSP430X and generates MSP430 code. The assembler and linker, however, do know the MSP430X opcodes, so I can use them in inline assembly.

    As long as my code fits into the lower 64k, this is perfect.
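    A hand-written access function of that kind looks roughly like this (just a sketch - the address, register constraint and function name are made up, and it assumes the assembler accepts the MSP430X mnemonics as described above):

    #include <stdint.h>

    /* read one word from flash above the 64k boundary with MOVX.W,
       while the compiler itself still only emits plain MSP430 code */
    uint16_t read_upper_flash_word(void)
    {
        uint16_t value;
        __asm__ __volatile__ ("movx.w &0x10000, %0" : "=r"(value));
        return value;
    }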

  • Jens-Michael Gross said:

    darkwzrd said:
    I cannot put the now 20-bit function addresses into 16-bit locations.

    Sure you can - by typecasting them into INTs. :)

    Since the functions must reside in the lower 64k, there are no bits lost (well, the compiler doesn't know this, but who cares what the compiler knows? It has to do what I say!)

     

    When you said this, I thought, "Gee, this seems like such a simple solution! I must have tried this! Didn't that work?". Shouldn't I be able to convert a pointer type to an integer type using the standard C conversion rules?

    Well, I had a bit of time to spare a few days ago and I gave it a shot. What I found was rather interesting (and I'm sure you will find it amusing since you use GCC!).

     

    I tried the 16-bit code pointer typecast on IAR. It failed to compile and gave an error. The compiler outright refused to do it. What nerve! So I played around with it. Here's what I found.

    1. The compiler would refuse to do the code pointer truncation in global scope.

    2. The compiler had no qualms doing the code pointer truncation in function scope.

    3. The compiler had no qualms doing data pointer truncation of any kind.

    4. The uintptr type was failing to take my code pointer! The manual stated that the uintptr (and related C99 types) were linked to the Data Model. On the default small model, it failed to work, but when I increased the Data Model to large, it worked just fine.

     

    This all comes down to the lack of Code Model support. How aggravating. I doubt that it would be fixed any time soon since it would likely require gutting their entire engine. Even if it came down to C99 compliance, it would likely be easier to just drop support for uintptr and the like, since it is not required in the implementation.
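    Condensed into code, the first three findings look roughly like this (a sketch with made-up names; the exact spelling of the cast is picked apart further down in the thread):

    #include <stdint.h>

    void Function(void);        /* any ordinary function used as a code pointer */
    static int data_object;

    /* 1. Global scope: the constant-expression truncation of the 20-bit code
          pointer is rejected (the Pa044 error quoted below):
       uint16_t bad_global = (uint16_t)(void *)Function;                       */

    void scope_test(void)
    {
        /* 2. Function scope: the same code pointer truncation compiles
              (the compiler has no qualms)                                      */
        uint16_t code_addr = (uint16_t)Function;

        /* 3. Data pointer truncation is accepted as well                       */
        uint16_t data_addr = (uint16_t)&data_object;

        (void)code_addr;
        (void)data_addr;
    }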

  • darkwzrd said:
    1. The compiler would refuse to do the code pointer truncation in global scope.
    2. The compiler had no qualms doing the code pointer truncation in function scope.


    Maybe there's a logical explanation, but I can't find one.

    darkwzrd said:
    3. The compiler had no qualms doing data pointer truncation of any kind.

    On the large data model, the compiler should at least issue a warning.

    darkwzrd said:
    4. The uintptr type was failing to take my code pointer! The manual stated that the uintptr (and related C99 types) were linked to the Data Model. On the default small model, it failed to work, but when I increased the Data Model to large, it worked just fine.

    With the large data model, both code and data pointers are 20-bit, so there is no problem.

    However, did you try doing casts to void*? It should always work and also remove all constraints (code/data whatever) from the original data type.

    If the compiler ever complains that one pointer cannot be cast into another one, I do an intermediate typecast to void* and all is well.

     

  • Jens-Michael Gross said:

    darkwzrd said:
    1. The compiler would refuse to do the code pointer truncation in global scope.
    2. The compiler had no qualms doing the code pointer truncation in function scope.

    Maybe there's a logical explanation, but I can't find one.

    darkwzrd said:
    3. The compiler had no qualms doing data pointer truncation of any kind.

    On the large data model, the compiler should at least issue a warning.

    It does. Unless I use the proper typecast, which silences the warning. No complaints here. The compiler does the right thing.

     

    Jens-Michael Gross said:

    darkwzrd said:
    4. The uintptr type was failing to take my code pointer! The manual stated that the uintptr (and related C99 types) were linked to the Data Model. On the default small model, it failed to work, but when I increased the Data Model to large, it worked just fine.

    With the large data model, both code and data pointers are 20-bit, so there is no problem.

    However, did you try doing casts to void*? It should always work and also remove all constraints (code/data whatever) from the original data type.

    If the compiler ever complains that one pointer cannot be cast into another one, I do an intermediate typecast to void* and all is well.

    I agree that you should be able to represent any pointer with (void *), but it doesn't seem to work. Here's the code I tried (with the goal of getting the 20-bit pointer to truncate into a 16-bit representation).

    uint16_t BCD = (uint16_t) (void *)Function;

    The error message is interesting.

    Error[Pa044]: no more than one representation changing cast allowed in constant expression C:\Path\main.c 23

     

  • The compiler does not do a right-before-left conversion automatically - it assumes that there are two casts at the same time on the same function pointer.

    Try

    uint16_t BCD = (uint16_t)((void*)Function);

    Now it's clear that you cast "Function" to a void* and then the void* to uint16_t.

  • Ha! I actually meant to put the parentheses in. That was a mistake.

    I don't think I've ever actually tried two typecasts in a row before without the parentheses. I'll have to look that one up.

    Regardless, I added the parentheses, and now I get the same error message.

     

    Error[Pa044]: no more than one representation changing cast allowed in constant expression C:\Path\main.c 23

     

    Nice try but no cigar! ;)

     

    JMG, there is going to be a limit to what you can suggest unless you actually whip out IAR and start to play with it, and I understand that is a lot to ask from someone who uses a completely different setup. However, if you've got more ideas, keep them coming. I'll gladly take anything I can get.

    Jeff, I know that you're an IAR power user. Do you have any ideas?

  • darkwzrd said:
    Jeff, I know that you're an IAR power user. Do you have any ideas?

    It'll be give and take, and in the end, I think it's not worthwhile to convince IAR to use the small code model.

    1. Under General Options, select "Generic MSP430 Device" for the target processor (not "Generic MSP430X device").
    2. Under Linker, override the default linker configuration file and use $TOOLKIT_DIR$\CONFIG\lnk430****.xcl for your actual processor instead.
    3. In code, #include the actual target msp430****.h file, not the generic msp430.h.

    The problem with this approach is that you won't get the benefit of several machine instructions that are new to CPUX, and you'll probably utilize the HW multiplier as only a 16-bit multiplier.  (There may be other similar drawbacks, too.)

    So my real advice is to just accept the large code model.  CALLA and RETA on CPUX are just as fast as CALL and RET on CPU.  It's just a little extra stack usage.

    For a "table" of 16-bit function pointers, consider using assembly language with DS16 directives.  Your C code can grab them and cast them as function pointers if you like.

    Jeff

  • darkwzrd said:
    I added the parentheses, and now I get the same error message.

    Looks like the compiler is smarter than necessary. If it would just do the typecasting as written, according to the normal rules of operator precedence, there would be no reason for an error message. For some unknown reason it is handling expressions which are entirely constant in a different way.
    Without knowing further details on the 'why', I'd consider this a compiler bug. However, the error message seems to indicate that this behavior is intentional.

    If you put Function in a proper pointer variable first and then typecast the variable, does this still give the same error message? If it does, I'd be very surprised, as the C/C++ language does not forbid this. In fact, you can do as many typecasts in a row as you want. And if it doesn't, there is no reason why it fails for a fully constant expression.
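    In other words, something along these lines (a sketch with made-up names):

    #include <stdint.h>

    void Function(void);

    void variable_first(void)
    {
        void (*fp)(void) = Function;              /* store the code pointer first */
        uint16_t addr = (uint16_t)((void *)fp);   /* then truncate it via void*   */
        (void)addr;
    }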

  • Jens-Michael Gross said:

    If you put Function in a proper pointer variable first and then typecast the variable, does this still give the same error message? If it does, I'd be very surprised, as the C/C++ language does not forbid this. In fact, you can do as many typecasts in a row as you want. And if it doesn't, there is no reason why it fails for a fully constant expression.

    I get a warning, but that's it. It compiles fine. This was expected, since you cannot actually do this at build time (the expression must have a constant value).

     

    Jeff Tenney said:

    So my real advice is to just accept the large code model.  CALLA and RETA on CPUX are just as fast as CALL and RET on CPU.  It's just a little extra stack usage.

    For a "table" of 16-bit function pointers, consider using assembly language with DS16 directives.  Your C code can grab them and cast them as function pointers if you like.

    Indeed. That's the simplest solution. It also doesn't require me to do anything different since I'm already using the table in assembly. =p

  • darkwzrd said:
    I get a warning, but that's it. It compiles fine. This was expected, since you cannot actually do this at build time (the expression must have a constant value).

    Yep. But it shows that the compiler has no problems analyzing the cast and generating the proper code. So I wonder why it refuses to do so when it is a constant?
    As I said: overly smart.
    Also, if optimization is on and you don't use the original function pointer variable anywhere again, the compiler should optimize it away, leaving you with a constant expression again - but without the error. I wonder if IAR is that smart :)
    In any case, it's an annoying error. And an unnecessary one.
