Any documentation of the floating-point performance shift in C6000 CGT 6.0.8 -> 6.1.20?

We have a very speed-sensitive library; in some cases its performance is dominated by floating-point performance.  We are currently using fixed-point DSPs (C64x+).  We have existing products built with CGT 6.0.8 and earlier, so while we have migrated to the most recent toolchain (first to 6.1.20 and now to 7.3.9), we have been forced to keep the older floating-point support.  (We are seeing other performance issues with code built with 7.3.9, but those are covered in the previous post.)
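
For reference, the kind of cycle-count harness we use to compare toolchains looks roughly like this (a minimal sketch: the kernel and sizes are placeholders, and it assumes the TSCL time-stamp counter that the CGT's c6x.h exposes on C64x+):

```c
#include <c6x.h>    /* TI CGT header exposing the C64x+ TSCL time-stamp counter */
#include <stdio.h>

#define N 1024

volatile float sink;   /* keeps the compiler from optimizing the work away */

/* Placeholder division-heavy kernel: on a fixed-point C64x+, each '/'
   compiles to a call into the RTS single-precision divide routine. */
static void kernel(const float *a, const float *b, float *q, int n)
{
    int i;
    for (i = 0; i < n; i++)
        q[i] = a[i] / b[i];
}

int main(void)
{
    static float a[N], b[N], q[N];
    unsigned int t0, t1;
    int i;

    for (i = 0; i < N; i++) { a[i] = i + 1.0f; b[i] = i + 3.0f; }

    TSCL = 0;           /* any write starts the counter; it free-runs after that */
    t0 = TSCL;
    kernel(a, b, q, N);
    t1 = TSCL;
    sink = q[0];

    printf("%u cycles for %d divides (~%u cycles/divide)\n",
           t1 - t0, N, (t1 - t0) / N);
    return 0;
}
```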

Looking at the floating-point source, I see a substantial shift in the code between 6.0.8 and 6.1.20 (I haven't narrowed down exactly where, but I would guess 6.1.0?).  Since that point, the source code is essentially unchanged all the way to 7.3.9.

It looks like there might be improvements in the handling of cases like NaN, and our testing also shows what look like tiny precision improvements in the slower 6.1.20 floating point.  But besides one floating-point fix involving conversion, I can't find any record of this change in the CGT release notes.
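
By "testing precision" I mean comparing each single-precision quotient against a correctly rounded reference and logging the error in ULPs.  A host-side sketch of the idea (divsp_under_test is a hypothetical stand-in for whichever RTS divide routine is being checked):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Hypothetical stand-in for the RTS divide under test; on target this
   would be the routine linked in from the CGT in question. */
static float divsp_under_test(float a, float b)
{
    return a * (1.0f / b);   /* deliberately inexact divide, for demonstration */
}

/* ULP distance between two finite floats of the same sign. */
static int32_t ulp_diff(float x, float y)
{
    int32_t ix, iy;
    memcpy(&ix, &x, sizeof ix);
    memcpy(&iy, &y, sizeof iy);
    return ix > iy ? ix - iy : iy - ix;
}

int main(void)
{
    int i, off = 0, worst = 0;
    srand(1);
    for (i = 0; i < 1000000; i++) {
        float a = 0.5f + (float)rand() / (float)RAND_MAX;
        float b = 0.5f + (float)rand() / (float)RAND_MAX;
        float q = divsp_under_test(a, b);
        /* Dividing in double then rounding to float yields the correctly
           rounded single-precision quotient, so it serves as the reference. */
        float ref = (float)((double)a / (double)b);
        int d = (int)ulp_diff(q, ref);
        if (d) off++;
        if (d > worst) worst = d;
    }
    printf("%d of 1000000 quotients off, worst error %d ULP\n", off, worst);
    return 0;
}
```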

Does this ring a bell for anyone?  Does anyone know of documentation of the changes that were made, especially the expected performance impact? 

As an aside, we have evaluated the fastRTS version of the floating-point routines, and it sacrifices too much precision for our purposes.

-Mike

  • The performance issue is SDSCM00030177.

    Rounding in the floating-point division routine was corrected in 6.1.x, but rounding correctly came with a significant performance cost.
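
    Roughly speaking, the extra work looks like the sketch below (a host-side illustration only, not the actual RTS source; special operands and round-to-even ties are glossed over, and it assumes the fast quotient lands within one ULP of the true result).  The remainder computations and compares after the fast reciprocal-multiply are where the additional cycles go.

    ```c
    #include <math.h>    /* fmaf, nextafterf, fabsf, INFINITY (C99) */

    /* Fast path: reciprocal then multiply -- two roundings, so the last
       bit of the quotient may be wrong. */
    static float div_fast(float a, float b)
    {
        return a * (1.0f / b);
    }

    /* Correctly rounded path (sketch).  |a - q*b| is proportional to the
       distance of q from the true quotient, so choosing the candidate
       with the smallest remainder selects the nearest representable
       quotient.  These fma/compare steps are the price of rounding
       correctly. */
    static float div_rounded(float a, float b)
    {
        float q  = div_fast(a, b);
        float up = nextafterf(q,  INFINITY);
        float dn = nextafterf(q, -INFINITY);
        float rq = fabsf(fmaf(-q,  b, a));   /* remainder with a single rounding */
        float ru = fabsf(fmaf(-up, b, a));
        float rd = fabsf(fmaf(-dn, b, a));
        if (ru < rq) { q = up; rq = ru; }
        if (rd < rq) { q = dn; }
        return q;
    }
    ```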

    TI does not officially track the performance of floating-point operations on fixed-point devices.  Certainly we'd prefer to make them faster, but our efforts are focused elsewhere for now.