ok at cs.otago.ac.nz
Tue Jun 2 02:08:52 UTC 2015

The Haskell 2010 report defines, in chapter 9,

```haskell
round :: (Real a, Fractional a, Integral b) => a -> b

round x =
  let (n, r) = properFraction x
      -- n = truncate x, r = x - n (same sign as x)
      m      = if r < 0 then n - 1 else n + 1
  in case signum (abs r - 0.5) of
       -1 -> n  -- round in  if |r| < 0.5
       1  -> m  -- round out if |r| > 0.5
       0  -> if even n then n else m
```
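Exercising that definition (a sketch, with the function renamed `round'` so it does not shadow the Prelude's `round`) shows halves going to the even neighbour:

```haskell
-- Sketch of the report's definition, renamed to avoid the Prelude clash.
round' :: (RealFrac a, Integral b) => a -> b
round' x =
  let (n, r) = properFraction x
      m = if r < 0 then n - 1 else n + 1
  in case signum (abs r - 0.5) of
       -1 -> n                       -- |r| < 0.5: round in
       1  -> m                       -- |r| > 0.5: round out
       _  -> if even n then n else m -- exact half: pick the even neighbour

main :: IO ()
main = print [round' 0.5, round' 1.5, round' 2.5, round' (-2.5) :: Int]
-- halves round to even: [0,2,2,-2]
```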

(commented and slightly rearranged).  The traditional
definition of rounding to integer, so traditional that it
is actually given in the OED, is basically

```haskell
round x = truncate (x + signum x * 0.5)
```
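As a quick check (a sketch, with the definition renamed `roundOut` so it does not clash with the Prelude), this really does send halves away from zero:

```haskell
-- The traditional "round half away from zero" definition quoted above.
roundOut :: (RealFrac a, Integral b) => a -> b
roundOut x = truncate (x + signum x * 0.5)

main :: IO ()
main = print [roundOut 0.5, roundOut 2.5, roundOut (-0.5), roundOut (-2.5) :: Int]
-- halves round away from zero: [1,3,-1,-3]
```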

There was a discussion of rounding recently in another mailing
list and I put together this table:

* Round x.5 OUT
Ada, Algol W, C, COBOL, Fortran, Matlab, Pascal, PL/I,
Python, Quintus Prolog, Smalltalk.  The pre-computing tradition.

* Round x.5 to EVEN
Common Lisp, R, Haskell, SML, F#, Wolfram Language.

* Round x.5 UP to positive infinity
Java, JavaScript, ISO Prolog, Algol 60

* Rounding of x.5 UNSPECIFIED
Algol 68, IMP 77
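The first three behaviours in the table can be contrasted side by side (a sketch in Haskell; `roundOut` and `roundUp` are hypothetical helper names, while `roundEven` is just the Prelude's `round`):

```haskell
-- Three rounding conventions for halves, on Double for concreteness.
roundOut, roundEven, roundUp :: Double -> Integer
roundOut x = truncate (x + signum x * 0.5)  -- OUT:  C, Fortran, Pascal, ...
roundEven  = round                          -- EVEN: Haskell, Common Lisp, R, ...
roundUp x  = floor (x + 0.5)                -- UP:   Java, JavaScript, ISO Prolog

main :: IO ()
main = mapM_ (\f -> print (map f [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]))
             [roundOut, roundEven, roundUp]
-- [-3,-2,-1,1,2,3]  (out)
-- [-2,-2,0,0,2,2]   (even)
-- [-2,-1,0,1,2,3]   (up)
```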

What I was wondering was whether anyone on this list knew why