fpu/softfloat: re-factor minmax

Let's give the minmax functions the same re-factoring treatment. I still
use the macro trick to expand the individual variants, but now all of the
checking code is common.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Author: Alex Bennée <alex.bennee@linaro.org>
Date:   2017-12-05 12:36:01 +0000
Commit: 8936006707
Parent: 0bfc9f1952
2 changed files with 126 additions and 107 deletions
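
The "macro trick with common checking code" the commit message describes can be sketched with a small standalone program. This is only an illustration of the shape of the refactor, not QEMU's actual implementation: plain float stands in for the soft-float types, the names my_minmax and MINMAX_VARIANT are invented for this sketch, and NaN handling is reduced to the IEEE 754-2008 "ignore a single quiet NaN" rule (signaling NaNs and the -0/+0 ordering are left out).

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Common checking code: deal with NaNs once, then pick min or max. */
static float my_minmax(float a, float b, bool ismin, bool ieee)
{
    if (isnan(a) || isnan(b)) {
        if (ieee) {
            /* minNum/maxNum: a single quiet NaN operand is ignored. */
            if (isnan(a) && !isnan(b)) {
                return b;
            } else if (isnan(b) && !isnan(a)) {
                return a;
            }
        }
        return NAN;
    }
    if (ismin) {
        return a < b ? a : b;
    } else {
        return a > b ? a : b;
    }
}

/* The macro trick: stamp out one thin wrapper per public variant. */
#define MINMAX_VARIANT(name, ismin, ieee)             \
    static float float_ ## name(float a, float b)     \
    {                                                 \
        return my_minmax(a, b, ismin, ieee);          \
    }

MINMAX_VARIANT(min, true, false)
MINMAX_VARIANT(max, false, false)
MINMAX_VARIANT(minnum, true, true)
MINMAX_VARIANT(maxnum, false, true)

int main(void)
{
    printf("min(1, 2)      = %g\n", float_min(1.0f, 2.0f));
    printf("max(1, 2)      = %g\n", float_max(1.0f, 2.0f));
    printf("minnum(NaN, 2) = %g\n", float_minnum(NAN, 2.0f));
    printf("maxnum(NaN, 2) = %g\n", float_maxnum(NAN, 2.0f));
    return 0;
}

All the wrappers are generated from one macro, so any fix to the checking logic lands in a single place; that is the point of making the checking code common.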


@@ -245,6 +245,12 @@ float16 float16_mul(float16, float16, float_status *status);
 float16 float16_muladd(float16, float16, float16, int, float_status *status);
 float16 float16_div(float16, float16, float_status *status);
 float16 float16_scalbn(float16, int, float_status *status);
+float16 float16_min(float16, float16, float_status *status);
+float16 float16_max(float16, float16, float_status *status);
+float16 float16_minnum(float16, float16, float_status *status);
+float16 float16_maxnum(float16, float16, float_status *status);
+float16 float16_minnummag(float16, float16, float_status *status);
+float16 float16_maxnummag(float16, float16, float_status *status);
 int float16_is_quiet_nan(float16, float_status *status);
 int float16_is_signaling_nan(float16, float_status *status);
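
The last two prototypes are the IEEE 754-2008 minNumMag/maxNumMag operations, which compare operands by magnitude and only fall back to ordinary minNum/maxNum ordering when the magnitudes are equal. Below is a standalone sketch of just that selection rule; plain float stands in for float16, the name mag_minmax is invented for the sketch, and the NaN handling is the simplified "ignore a single quiet NaN" rule rather than QEMU's propagation logic.

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* minNumMag/maxNumMag: the magnitude decides, values break ties. */
static float mag_minmax(float a, float b, bool ismin)
{
    if (isnan(a) || isnan(b)) {
        if (isnan(a) && isnan(b)) {
            return NAN;
        }
        return isnan(a) ? b : a;   /* ignore the single quiet NaN */
    }
    if (fabsf(a) != fabsf(b)) {
        /* Magnitudes differ: pick by magnitude alone. */
        return (fabsf(a) < fabsf(b)) == ismin ? a : b;
    }
    /* Equal magnitudes: fall back to the ordinary ordering. */
    return (a < b) == ismin ? a : b;
}

int main(void)
{
    /* minnummag picks -1.0 over 2.0 because |-1| < |2|. */
    printf("minnummag(-1, 2) = %g\n", mag_minmax(-1.0f, 2.0f, true));
    /* maxnummag picks -3.0 over 2.0 because |-3| > |2|. */
    printf("maxnummag(-3, 2) = %g\n", mag_minmax(-3.0f, 2.0f, false));
    return 0;
}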