In 'DSP280x_CpuTimers.c', the function below multiplies two 'float's, casts the result to 'long', and then lets the assignment do the final conversion to 'unsigned long' ... and I am baffled by it. Is this a sign-extension idiom? (I almost never do floating-point calculations; hence the question.)
    void ConfigCpuTimer(struct CPUTIMER_VARS *Timer, float Freq, float Period)
    {
        Uint32 temp;
        ...
        temp = (long) (Freq * Period);
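
For context, here is a minimal standalone sketch of the conversion chain I am asking about, with made-up values and assuming 'Uint32' is a typedef for 'unsigned long' (which is how I believe the TI headers define it):

    #include <stdio.h>

    typedef unsigned long Uint32;   /* assumption: how the TI header defines Uint32 */

    int main(void)
    {
        float Freq   = 100.0f;      /* hypothetical values, not from the real code */
        float Period = 1000.0f;

        /* float * float -> floating-point product, cast to signed long       */
        /* (truncating toward zero), then the assignment converts that signed */
        /* long to the unsigned Uint32.                                       */
        Uint32 temp = (long) (Freq * Period);

        printf("temp = %lu\n", temp);
        return 0;
    }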