This should be an easy question: Why not use CLR instead of ABSSP?
Say you want to get the absolute value of a single-precision (SP) floating point number. You can do it two ways:
ABSSP A10, A10 ; This instruction imposes a Functional Unit Latency of 1.
CLR A10, 31, 31, A10 ; This instruction effectively does the same thing, but without imposing any functional unit latency.
The CLR instruction works here by zeroing the highest bit of the register (bit 31), which is the sign bit of any single-precision floating point number stored there, thereby producing its absolute value.
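Just to illustrate the idea outside of assembly, here is a minimal C sketch of the same sign-bit trick. It is not the DSP code itself, and the function name abs_by_bit_clear is only something I made up for the example; it simply shows that clearing bit 31 of the IEEE 754 single-precision encoding gives the absolute value of an ordinary float:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative sketch: the same "clear the sign bit" trick in C.
     * Clearing bit 31 of the IEEE 754 single-precision encoding
     * produces the absolute value of an ordinary float. */
    static float abs_by_bit_clear(float x)
    {
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);   /* reinterpret the float's bits */
        bits &= 0x7FFFFFFFu;              /* clear bit 31, the sign bit   */
        memcpy(&x, &bits, sizeof x);
        return x;
    }

    int main(void)
    {
        printf("%f\n", abs_by_bit_clear(-3.5f));   /* prints 3.500000 */
        return 0;
    }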
There is a minor difference between the two instructions, which may be negligible in many or most circumstances: the ABSSP instruction does some special handling for abnormal floating point values, such as setting flag bits as warnings (for example, when the source register holds a Not-a-Number (NaN), a denormalized number, or an infinity). But those values occur only under special circumstances, such as dividing by zero.
If you are confident your source register contains a legitimate floating point number, then the CLR instruction will produce the absolute value, and without the latency. Am I correct in this?