mspgcc to official gcc conversion ... where did the builtins go?

I have built a cross-compiler for the MSP430 using a gcc 4.9 snapshot
(gcc-4.9-20140112). The compiler seems OK and builds a simple
"blinky" LED-flashing example.

But my slightly larger example, originally built using Peter Bigot's
mspgcc backend, no longer compiles ... 

mspgcc had a number of builtin functions, such as __nop(), __eint()
and __dint(). Calling these would execute a NOP, enable interrupts,
and disable interrupts respectively.

Others, such as __bis_status_register() and __bic_status_register(), would
manipulate system status, low-power modes, etc.

Now in the MSP430 port for gcc 4.9, these builtin functions are gone.

Reading the config/msp430 source files, e.g. config/msp430/msp430.md, I
can see evidence that the _functionality_ is still there, e.g.:

(define_insn "enable_interrupts"
  [(unspec_volatile [(const_int 0)] UNS_EINT)]
  ""
  "EINT"
  )
...
(define_insn "bis_SR"
  [(unspec_volatile [(match_operand 0 "nonmemory_operand" "ir")]
UNS_BIS_SR)]
  ""
  "BIS.W\t%0, %O0(SP)"
  )

... but how do I access it? In other words, what C code fragment would
cause the "enable_interrupts" pattern to be emitted, generating
"EINT" in the assembler or object output?

I thought about seeing how the testsuite examples did it, e.g. the equivalent of
gcc/testsuite/gcc.target/msp430/builtins_bic_sr.c
(from mspgcc) ... but there is no testsuite folder for the msp430 yet ...
gcc/testsuite/gcc.target/msp430 doesn't exist in the gcc4.9 tree.

Any ideas how to access these functions from C code?

- Brian
  • Answered elsewhere (by one of the Red Hat maintainers): the builtins (other than two essential ones) are gone; what I saw in the gcc sources is for compiler-internal use only.

    Official policy: Use inline assembler instead...

    Details:

    http://gcc.gnu.org/ml/gcc/2014-02/msg00214.html
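    Until headers provide them, the missing one-instruction intrinsics can be recreated as inline-assembly macros. A minimal sketch (the macro names mirror mspgcc's and are not an official gcc 4.9 API; nop/eint/dint are MSP430 mnemonics, so these only assemble when targeting the MSP430):

    ```c
    /* Hypothetical stand-ins for the removed mspgcc builtins.
     * volatile stops gcc from deleting or reordering the instruction,
     * since it has no operands the compiler could track. */
    #define __nop()   __asm__ __volatile__ ("nop")
    #define __eint()  __asm__ __volatile__ ("eint")
    #define __dint()  __asm__ __volatile__ ("dint")
    ```
    
    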

  • In reply to Brian Drummond:

    Brian Drummond
    Official policy: Use inline assembler instead...

    And for a reason. There are only three situations where you can't succeed with inline assembly:

    The LPM exit (BIC/BIS_SR_on_exit) inside an ISR (because it requires information about the current stack frame organization, which is only available to the compiler at the moment the ISR is compiled)

    __delay_cycles for CPU-cycle-exact delays (it could be done in assembly too, but it is very inconvenient)

    __even_in_range, which can likewise be simulated with inline assembly, but only in an inconvenient way (I've done it before on mspgcc 3.2.3)

    Most other intrinsics simply set or clear bits in the status register, which can easily be done with inline assembly, or wrapped in a simple macro (that's how mspgcc has done it all these years).

    And the 20-bit 'address' read or write intrinsic can also be done with a simple macro using inline assembly.

    The GCC inline assembly syntax allows way more complex things than the inline assembly on most other compilers. It allows defining parameters and required or clobbered registers, so the compiler can effectively optimize the code.
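    For instance, a status-register intrinsic wrapped as a macro might look like this. A sketch under the assumption that the assembler accepts r2 as the status register; the names mirror the old mspgcc/TI intrinsics and are illustrative, not gcc 4.9's official API:

    ```c
    /* Hypothetical __bis_SR_register / __bic_SR_register replacements.
     * The "ir" constraint lets gcc pass the mask either as an immediate
     * or in a register; r2 is the MSP430 status register (SR). */
    #define __bis_SR_register(mask) \
        __asm__ __volatile__ ("bis.w %0, r2" : : "ir"((unsigned int)(mask)))
    #define __bic_SR_register(mask) \
        __asm__ __volatile__ ("bic.w %0, r2" : : "ir"((unsigned int)(mask)))
    ```

    Because the mask goes in as an operand rather than being hardcoded, the compiler can still choose the cheapest addressing mode and knows exactly what the statement consumes.
    
    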

    However, if the header files are properly designed, you will have a macro defined for each 'missing' intrinsic and won't notice the difference.

    _____________________________________

    Time to say goodbye - I don't have the time anymore to read and answer forum posts. See my bio for details.

    Before posting bug reports or asking for help, do at least a quick scan of this article. It applies to any kind of problem reporting, on any forum. And/or look here.
    I'm sorry that I can no longer provide help in the forum or by private conversation.

  • In reply to Jens-Michael Gross:

    Jens-Michael Gross

    __delay_cycles for CPU-cycle-exact delays (it could be done in assembly too, but it is very inconvenient)

    __even_in_range, which can likewise be simulated with inline assembly, but only in an inconvenient way (I've done it before on mspgcc 3.2.3)

    Thanks. These two I haven't been able to find: where are they? Is it possible they are in the current TI gcc release but didn't make it into upstream gcc 4.9?

    Jens-Michael Gross

    However, if the header files are properly designed, you will have a macro defined for each 'missing' intrinsic and won't notice the difference.

    Indeed so. In my toolset the intrinsics are supplied by an Ada package, which I would like to keep compatible with the older mspgcc version of the Ada compiler.

    - Brian

  • In reply to Brian Drummond:

    I have no idea. I'm still using GCC 3.2.3 (all of our firmware has been verified and tested with this compiler version) and I don't think it had these two.

    __even_in_range was added later, in mspgcc4 I think. I implemented it in assembly.

    And __delay_cycles: well, letting the CPU spin surely isn't the best idea, even if it's a tad better than an empty for loop. I use a timer for µs or ms delays. It survives a change of CPU speed without changing the delay (or needing to alter all code that uses a constant delay with __delay_cycles).

    So I never missed them.

    But __even_in_range would be good to have, as the assembly solution is a bit inconvenient to handle.
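    If cycle-based delays are kept at all, the CPU-speed dependency can at least be confined to one conversion helper, so a clock change touches a single constant rather than every call site. A plain-C sketch (us_to_cycles and MCLK_HZ are invented names for illustration):

    ```c
    #include <stdint.h>

    /* Convert a microsecond delay into a cycle count for the current MCLK.
     * Call sites then say us_to_cycles(100, MCLK_HZ) instead of hardcoding
     * a count that silently breaks when the clock setup changes. */
    static inline uint32_t us_to_cycles(uint32_t us, uint32_t mclk_hz)
    {
        return us * (mclk_hz / 1000000UL);  /* cycles per microsecond */
    }
    ```
    
    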


  • In reply to Jens-Michael Gross:

    Jens-Michael Gross

    Brian Drummond
    Official policy: Use inline assembler instead...

    And for a reason. There are only three situations where you can't succeed with inline assembly:

    Agreed. However, gcc does not know anything about what goes on within inline assembly, except that certain referenced values are modified. Using intrinsics, the compiler can be told exactly what operations are being performed, in terms of the register transfer language it uses, thus allowing constant propagation and other optimizations.

    Though support for mspgcc evaporated before I got that far, there are interesting things that could be done like flow analyses to detect whether interrupts are enabled/disabled, something that's pretty useful for things like using compiler-generated invocation of the multiplier peripheral.  That can't be done with inline assembly defined in the headers.

    In short, though you can do most things with inline assembly it's not the best approach from the perspective of having a robust toolchain.  But it's an adequate workaround until the toolchain matures.

  • In reply to Peter Bigot:

    Hi Peter, nice to hear from you again.

    When learning the GCC inline assembly syntax, I was surprised how much you can do beyond just inserting opcodes.

    If you do it right, GCC will know a lot about your inline assembly code: not only which variables you reference, but also whether you read from or write to a reference, and whether the code requires them in a specific position (memory, register); you can also tell it which registers you clobber during the inline assembly. No other compiler I ever used offered that much interaction between the high-level language compiler and inline assembly.

    I agree that for long-distance analysis, telling the compiler what you do (e.g. enabling or disabling interrupts), rather than just doing it, might give some benefits, like not needing to clear GIE before an MPY operation when interrupts are already disabled. However, it won't work if you call an external/library function that implicitly enables interrupts: the compiler thinks they are still disabled and doesn't protect the MPY.

    The more nice things are done automatically, the more things can go automatically wrong, and the more discipline is required when designing the software.


  • In reply to Jens-Michael Gross:

    Yes, you can do a lot; but you also have to know a lot.  Having "examples" in headers deludes novice programmers into thinking that they should use those techniques, or that some supposedly benign variation won't introduce a new problem.

    Example mistakes I frequently see: you should never reference specific registers in the instruction pattern; always pass inputs and outputs through operands, otherwise the compiler doesn't know they've been touched. Even if you need a temporary, you should define it in C and pass it as an operand (you can use a specific register and declare it clobbered, but that inhibits the compiler's ability to generate optimal code).

    You probably never want an asm instruction that isn't also marked volatile, though in a lot of cases you can appear to get away with it. Take into account that the compiler can and will move a non-volatile asm if it doesn't see a dependency relationship with something else: this can break inline assembly that's intended to bracket some sequence of C code to, for example, time its execution.

    And fundamentally the compiler doesn't know the semantic effect of what you've done (e.g., that the output is the input rotated 3 bits left as a 16-bit value), so it can't reason about the value in subsequent calculations that weren't in assembly. For that level of detail you need RTL.
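    To make the operand point concrete, here is a sketch of the recommended shape: the value passes through a "+r" constraint instead of a named register, so the compiler retains full knowledge of what was touched. (swap_bytes is an invented name; the #else branch is a plain-C stand-in so the code also builds off-target.)

    ```c
    #include <stdint.h>

    /* Byte-swap a 16-bit value.  On the MSP430 this becomes a single SWPB
     * on whichever register gcc assigned to the "+r" operand; nothing is
     * hardcoded, so register allocation stays fully in gcc's hands. */
    static inline uint16_t swap_bytes(uint16_t v)
    {
    #ifdef __MSP430__
        __asm__ ("swpb %0" : "+r"(v));
    #else
        v = (uint16_t)((uint16_t)(v << 8) | (v >> 8));  /* host fallback */
    #endif
        return v;
    }
    ```

    A pure computation like this is one of the rare cases where omitting volatile is safe: the operand dependency orders the asm, and it may only be dropped when the result is genuinely unused.
    
    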

    Basically, if you have to use inline assembly, the toolchain vendor didn't do their job. Sometimes things will go wrong, but a responsible vendor will fix them promptly (and open-source toolchains improve the turnaround time dramatically; for both mspgcc and PyXB I'd normally have a fix implemented and a patch available in one or two days). If you use a local hack like inline assembly instead, then everybody else has to re-discover and re-implement your fix, and you (or your successors) are stuck maintaining it, or perhaps sticking with an eleven-year-old compiler because its quirks are "understood" and you can't afford to update because your code is riddled with workarounds.

    I think there's more than enough evidence that the more things are done automatically, the more likely they are to be successful.  This fantasy that people can do a better job than software for the things that software can do is simply delusional.

    Peter

  • In reply to Peter Bigot:

    I completely agree with what you said about people not really knowing what they are doing, and about the possible drawbacks for compiler optimization. However, if the header file (provided by the toolchain manufacturer) contains a macro using inline assembly, instead of putting this as an intrinsic into the compiler itself, I wouldn't say the IDE manufacturer didn't do his job.

    It has pros and cons: if the implementation (not the result, of course) of an intrinsic changes, you don't (and can't) even notice. Normally it doesn't make a difference; sometimes it does, and you don't have a way to fix it then.

    Also, some things cannot be done efficiently by the compiler or by C language. Sometimes, hand-optimized assembly code is the only way to max the system out. Especially if timing is critical. Usually, the compiler does a good job, but not always. Sure, one really should know what one is doing then.

    But your comment about using an old compiler version doesn't fit at all:
    even if you use 100% plain C code and did not need any workarounds, you cannot simply switch to a new compiler version. You cannot be sure whether the new compiler has new bugs that break your code, or does a different optimization that breaks functionality. After all, the compiler compiles according to the C language scope, which doesn't include any real-world effects. Sure, at least the second may be due to sloppy coding, but even so: the code was working and has been tested. Changing the compiler requires a complete and full new certification of already tested and certified code.

    In theory, all this is not a problem. In reality, it is. In a production environment, not everything that should be done can be done. Due to budget or resource limitations. Or whatever. In some companies, even installing a new compiler is a major issue (involving the IT department for compatibility testing and clearance, the management for budget, including time budget of the IT department, and much more) and cannot be justified without a really good reason that makes the management agree. Usually, the ones who have to use an IDE are not the ones who decide about its usage.


  • In reply to Jens-Michael Gross:

    Jens-Michael Gross
    However, if the header file (provided by the toolchain manufacturer) contains a macro using inline assembly, instead of putting this as an intrinsic into the compiler itself, I wouldn't say the IDE manufacturer didn't do his job.

    OK, you wouldn't.  I do.  I expect more of the developer/manufacturer.  (Toolchain developer, btw; IDE has little to do with the compiler, assembler, and linker except that they come bundled with it.)

    Jens-Michael Gross

    It has pros and cons: If the implementation (not the result, of course) of an intrinsic changes, you don't (and can't) even notice. Normally, it doesn't make a difference, sometimes it does - and you don't have a way to fix it then.

    Also, some things cannot be done efficiently by the compiler or by C language. Sometimes, hand-optimized assembly code is the only way to max the system out. Especially if timing is critical. Usually, the compiler does a good job, but not always. Sure, one really should know what one is doing then.

    Absolutely.  And a responsive and proactive toolchain vendor is aware of things that most users are not.

    As an example, I believe you were one of the first to explain the need for EINT to be followed by NOP on some MCUs in some circumstances, a fix that subsequently showed up in CCS 5.2.1. __eint() (whether an intrinsic or a header-defined inline-assembly macro) can deal with this, and developers should use that and be able to assume that it was implemented with the best state-of-the-practice knowledge available at the time the tool was released. People who use earlier toolchain versions on MCUs that are subject to CPU42 have a latent bug in their systems, and most won't realize it. (E.g., this mspgcc bug for which there is no public patch, since it was discovered after funding for support was eliminated.)

    As for the rest: you can stick with an existing, "well understood" system and assume that you're safe because it passes what you think is important to test. Or you can keep up to date with what's provided by a vendor (who sees a lot more use cases and variations than you do). This is a management choice.

    All I can say is that, in my own multi-decade experience, the biggest long-term source of destabilization comes not from regular updates to the current toolchain, but from staying with old tools until something happens that forces you to make a multi-version jump to a new compiler. (And I agree that a new version is a new compiler and cannot just be assumed to work; this is why one should develop complete regression suites with test harnesses to check the "can't happen but actually did once" situations.) I can't see what happens in proprietary systems, but it's been many years since an update to GCC has resulted in my discovery of an undesirable behavioral change that wasn't ultimately a bug in my own code, with the fix improving quality for that code and all code I've worked on since.

    If you're operating in a regulated environment where the cost of updating/certifying is prohibitive, so be it. The best approach in that case is to stick with the toolchain used for the original release of the product, and release new products with the most recent toolchain, so you're always taking advantage of the best available solution at the time.

    I'm not saying there's a universally ideal policy, e.g. that you should always use the current toolchain.  I am saying that a shop that develops and releases new products using old toolchains without a strong reason behind that decision is not using best practices and is likely to produce an inferior product.  If management thinks they're saving money and reducing risk by not updating, there's a good chance they're being short-sighted.