# [Haskell-cafe] Why 'round' does not just round numbers ?

Janis Voigtlaender voigt at tcs.inf.tu-dresden.de
Mon Oct 27 10:37:02 EDT 2008

```Ketil Malde wrote:
> Janis Voigtlaender <voigt at tcs.inf.tu-dresden.de> writes:
>
>>>Since just about every floating point operation involves some sort of
>>>loss of precision, repeated rounding is a fact of life.
>
>>Of course. But that was not the point of the discussion...
>
> Well, allow me to contribute to keeping the discussion on topic by
> stating that when I was in school, I was taught to round up.  Now if
> you will excuse a momentary digression:
>
> The point *I* wanted to make is that we have two qualitatively different
> rounding modes: round up on 0.5, or round to even (or randomly, or
> alternating), and they make sense in different contexts.
>
> Doing computations with fixed precision, you keep losing precision,
> and rounding bias accumulates - thus the need to use some non-biased
> rounding.
>
> Doing (small-scale) calculations on paper, you can avoid repeated
> rounding, and only round the result.  In that case rounding up is
> fine: you don't introduce as much bias as with repeated rounding.
> And if your input happens to be truncated, rounding up becomes the
> right thing to do.
>
> [Or the program could use exact arithmetic] and do its calculations
> on infinite streams of digits.  Then, rounding upwards after
> 'take'ing a sufficient number of decimals will be the right thing
> to do.

Yes, that all makes sense.

And I did not intend to cut you short.

--
Dr. Janis Voigtlaender
http://wwwtcs.inf.tu-dresden.de/~voigt/
mailto:voigt at tcs.inf.tu-dresden.de
```
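The bias Ketil describes is easy to demonstrate. Haskell's Prelude `round` implements round-half-to-even ("banker's rounding"), as the Haskell Report specifies, which also answers the thread's subject line. The `roundHalfUp` below is my own illustrative definition, not a Prelude function:

```haskell
-- Prelude 'round' rounds ties to the even neighbour (Haskell Report).
-- roundHalfUp is a hand-rolled round-half-up for comparison; the
-- floor (x + 0.5) definition misrounds a few doubles very near ties,
-- but it is fine for demonstrating bias.
roundHalfUp :: Double -> Integer
roundHalfUp x = floor (x + 0.5)

main :: IO ()
main = do
  print (map round       [0.5, 1.5, 2.5, 3.5 :: Double])  -- [0,2,2,4]
  print (map roundHalfUp [0.5, 1.5, 2.5, 3.5])            -- [1,2,3,4]
  -- Accumulated error over 100 exact ties: to-even cancels out,
  -- half-up drifts upward by 0.5 per tie.
  let ties = [0.5, 1.5 .. 99.5] :: [Double]
  print (sum (map (fromInteger . round)       ties) - sum ties)  -- 0.0
  print (sum (map (fromInteger . roundHalfUp) ties) - sum ties)  -- 50.0
```

The halves in `ties` are all exactly representable as `Double`s, so the drift shown is purely the rounding rule's bias, not floating-point noise.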
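The "infinite streams of digits" idea can be sketched as a toy. All names below are mine (no library is assumed), and the carry handling is deliberately naive: a number in [0,1) is an infinite list of decimal digits, most significant first.

```haskell
type Digits = [Int]      -- infinite stream of decimal digits

oneThird, oneSixth :: Digits
oneThird = repeat 3      -- 0.3333...
oneSixth = 1 : repeat 6  -- 0.1666...

-- Keep n digits; if the first dropped digit is >= 5, round the kept
-- prefix upward, propagating the carry leftward.  (Overflow past the
-- first digit is silently dropped in this sketch.)
roundAfterTake :: Int -> Digits -> [Int]
roundAfterTake n ds
  | ds !! n >= 5 = reverse (bump (reverse (take n ds)))
  | otherwise    = take n ds
  where
    bump []       = []
    bump (9 : xs) = 0 : bump xs
    bump (d : xs) = (d + 1) : xs

main :: IO ()
main = do
  print (roundAfterTake 4 oneThird)  -- [3,3,3,3]  (0.3333)
  print (roundAfterTake 4 oneSixth)  -- [1,6,6,7]  (0.1667)
```

This matches Ketil's point: after `take`ing enough digits of an exact stream, the discarded tail is known and nonnegative, so rounding in one direction at the very end is the right thing to do, and no bias accumulates along the way.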