wnoise at ofb.net
Tue Feb 1 05:45:21 EST 2005
On 2005-01-31, Seth Kurtzberg <seth at cql.com> wrote:
> I'm not, I hope, being pedantic here and talking about an irrelevant
> issue. But if you look in any first year engineering textbook (for an
> engineering discipline other than CS), you will find that the display of
> a value with a precision greater than the known precision of the inputs
> is a cardinal sin. It's the wrong answer. The roundoff error behavior
> is clearly going to be different.
I'm afraid you are. I think we're all aware of the problem of ascribing
too much accuracy to a result quoted to high precision. But truncating
the precision to the known level of accuracy is merely a rule of thumb
that usually works well enough. If you truly need to know error bounds,
there's no escape but to manually propagate that error information
through. Setting a maximum precision because some sources for that
information aren't that accurate is throwing the baby out with the
bathwater, and if you have imprecise sources from outside the system,
you have the same problem in managing their precision, if it's less
than what we demanded for the ClockTime precision.
> I can certainly work around the problem at the application code level,
> but that's a clear violation of the encapsulation principle, because I'm
> relying on the implementation of the calculations and not just the
In principle this is true. In practice it doesn't matter, as the
time library will not be adding up many small increments or
similar on its own -- these will be passed through at the user level.
Even if it did, truncating the precision to which something
is represented wouldn't fix the misbehaviour; it would just hide it
by making it consistently wrong.
> If everyone disagrees with me, I'll shut up, but I truly believe that
> this is a serious error.
Ignoring it would be a serious error, but this solution doesn't work
well. The best we can do really is expose the clock resolution, either
in a separate call, or by bundling it up with every call to get the
current time. But we don't really have a good way to get that.
sysconf(_SC_CLK_TCK) might be a reasonable heuristic, but Martin gave an
example where it isn't. If we were to use the POSIX timer CLOCK_REALTIME
with clock_gettime(), then clock_getres() would give us the
precision, but not, of course, the accuracy.
On the topic of resolution, I'd recommend nanosecond precision,
just because the POSIX timers use that as the base (a struct combining
time_t seconds and long nanoseconds).