seth at cql.com
Mon Jan 31 07:26:25 EST 2005
Ashley Yakeley wrote:
>In article <41FE033F.1080507 at cql.com>, Seth Kurtzberg <seth at cql.com> wrote:
>>If, say, I make two calls to read the current time, and both return the
>>same value, what does that mean?
>That's an interesting question. For "Unix time" that's not supposed to
>happen: successive calls must return strictly later times unless the
>clock has been set back by more than a certain amount. This only really
>becomes an issue for leap-seconds, when one is trying to set the clock
>back one second. In such a case, successive clock calls increment one
>tick until time has caught up.
>It might be helpful to have the platform-dependent clock resolution
>available as a value.
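The "strictly later" behaviour described above can be sketched in a few lines of Haskell. This is only an illustration of the idea, not any actual library API; the names here (`mkMonotonic`, `Ticks`, the raw-clock argument) are hypothetical.

```haskell
import Data.IORef (newIORef, readIORef, writeIORef)

-- One tick of the clock, e.g. microseconds since some epoch.
type Ticks = Integer

-- Wrap a raw (possibly non-monotonic) clock action so that successive
-- calls always return strictly increasing values: if the raw clock has
-- not advanced (or has been set back), hand out the previous value
-- plus one tick until real time catches up again.
mkMonotonic :: IO Ticks -> IO (IO Ticks)
mkMonotonic rawClock = do
  prevRef <- newIORef 0
  return $ do
    now  <- rawClock
    prev <- readIORef prevRef
    let next = if now > prev then now else prev + 1
    writeIORef prevRef next
    return next
```

Calling the wrapped clock three times while the raw clock reports 100, 100, 99 yields 100, 101, 102: the second and third calls each advance one tick rather than repeating a value or going backwards.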
Doesn't automatically forcing a system clock tick throw off the time?
Also, what happens when you are using NTP? NTP might just correct it,
but forced ticks would interfere with the calculations NTP uses to
discipline the clock.
>>Clearly these are two different things.
>Well, the system clock is just one of many sources for time values. The
>user might be dealing with times from some other clock, or measured
>times from a scientific experiment, or appointment times from a
>datebook, or phenomenon times from astronomical calculations. How should
>these all be represented?
The way you suggested. I'm not saying that there shouldn't be a
computation library with better resolution. That's necessary for two
reasons: one, the types of applications you just mentioned, and two,
because you _do_ want a clock library independent of the system clock.
I'm saying that you _also_ need to handle the system clock case. I'm
also saying that I don't like the idea of using the same resolution for
the system clock, because it suggests that the time is known to a
greater precision than is actually the case.
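One way to avoid overstating precision, sketched here as a purely hypothetical type (nothing like this is being proposed verbatim), is to carry the clock's resolution alongside the tick count, so a system-clock value cannot silently masquerade as a high-resolution one:

```haskell
-- Hypothetical sketch: a time value tagged with the resolution at
-- which it was measured.
data Measured = Measured
  { ticks      :: Integer   -- count of resolution units
  , resolution :: Rational  -- seconds per tick, e.g. 1/100 for a 10 ms clock
  } deriving (Show)

-- Convert to seconds, exactly, without inventing extra digits.
toSeconds :: Measured -> Rational
toSeconds m = fromInteger (ticks m) * resolution m
```

A 10 ms clock reading of 250 ticks converts to exactly 5/2 seconds, and the `resolution` field records that nothing finer than 1/100 s is actually known.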
I don't think we disagree in general; it's more a question of whether
or not system-clock-related computations should match the precision of
the system clock. 123.45000 implies that the value is known to be
accurate to five decimal places (just picking an arbitrary number of
digits beyond the decimal point, because I don't recall the actual
precision of the high-resolution library). Truncating at the end is
also not "correct," because the final result in general might be
different if you compute with five digits and truncate, rather than
computing with two digits throughout. (Again, whatever the number is; I
pulled two digits out of the air, just to use a number.)
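The point about truncation order can be made concrete with a tiny, entirely made-up example: truncating intermediate values to the coarser precision gives a different final answer than computing at full precision and truncating once at the end.

```haskell
-- Truncate to two decimal places (an arbitrary stand-in for the
-- system clock's coarser precision).
trunc2 :: Double -> Double
trunc2 x = fromIntegral (truncate (x * 100) :: Integer) / 100

-- Compute at full precision, truncate once at the end.
fullThenTrunc :: Double
fullThenTrunc = trunc2 (0.006 + 0.006)          -- 0.01

-- Truncate every intermediate value to the coarse precision.
truncThroughout :: Double
truncThroughout = trunc2 0.006 + trunc2 0.006   -- 0.0
```

Both expressions add the same two numbers, yet the results differ, which is exactly why silently mixing the two precisions is troublesome.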
To me all this shows that the system clock needs to be handled as a
special case, not just converted into the high-resolution representation.
>>The core of the time
>>calculation can be shared by these two different types of time, but at
>>the user level it needs to be clear whether a value is derived from the
>>system clock, or is not. I don't see any way around the need for a
>>different interface for each. The alternatives are unacceptable.
>Wouldn't the user already know whether a value is derived from the
>system clock or not, from the program they write?
I see you haven't met some of the programmers who work for me. :)
Seriously, yes, they would know, but there are portability concerns.
Which, of course, is what you have been saying; I just have a slightly
different take on it.