Time Resolution

Seth Kurtzberg seth at cql.com
Mon Jan 31 15:32:31 EST 2005


Marcin 'Qrczak' Kowalczyk wrote:

>Seth Kurtzberg <seth at cql.com> writes:
>
>>Also, what happens when you are using NTP? NTP might just correct
>>it, but it would screw up the calculations NTP uses and it could
>>start oscillating.
>
>The NTP client (at least on Linux) adjusts the time by making a jump
>only if the time is very inaccurate. If the time only drifted a bit,
>it temporarily adjusts the speed of the system clock instead.
>
>>I'm also saying that I don't like the idea of using the same
>>resolution w.r.t. the system clock, because it suggests that the
>>time is known to a greater precision than is actually the case.
>
>I don't think this would be a practical problem.
>
>>To me all this shows that the system clock needs to be handled as
>>a special case, not just converted into the high resolution
>>representation
>
>Using a different representation of time just because somebody might
>not be aware that the resolution of the system clock is not as good as
>the representation suggests? No, this is an unnecessary complication.

I'm not, I hope, being pedantic here or raising an irrelevant
issue.  But any first-year engineering textbook (for an engineering
discipline other than CS) will tell you that displaying a value with
greater precision than the known precision of its inputs is a
cardinal sin.  It's the wrong answer.  The round-off error behavior
is clearly going to be different.

I can certainly work around the problem at the application code level, 
but that's a clear violation of the encapsulation principle, because I'm 
relying on the implementation of the calculations and not just the 
interface.
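
To make concrete what I mean by an application-level workaround,
here is a minimal C sketch that truncates a timestamp to the
clock's actual granularity before displaying it.  The 10 ms tick
is an assumption; the real value is platform dependent and would
have to be discovered at run time.

    #include <stdio.h>
    #include <sys/time.h>

    /* Assumed 10 ms timer tick; platform dependent in reality. */
    #define TICK_USEC 10000L

    /* Truncate a timestamp to the granularity the clock can
       actually deliver, so we never display more precision than
       we actually have. */
    static void print_truncated(struct timeval tv)
    {
        long usec = (tv.tv_usec / TICK_USEC) * TICK_USEC;
        printf("%ld.%06ld\n", (long)tv.tv_sec, usec);
    }

    int main(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        print_truncated(tv);
        return 0;
    }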

If everyone disagrees with me, I'll shut up, but I truly believe that 
this is a serious error.

>Anyway, what does the "resolution of the system clock" mean? On Linux

It means, in general, a clock tick.  poll, select, and the rest
cannot provide a timeout with greater precision than the system
clock, and in practice, since executing an assembly-language
instruction takes multiple processor cycles, poll and its family
can't actually even reach the precision of a single clock tick.
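
This is easy to observe.  The following sketch (untested, but the
idea is standard) asks poll() for a 1 ms timeout and measures what
is actually delivered; on a kernel with a 10 ms timer interrupt the
measured delay is typically an order of magnitude larger than what
was requested.

    #include <poll.h>
    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval start, end;
        long elapsed;

        gettimeofday(&start, NULL);
        poll(NULL, 0, 1);          /* no fds: a pure 1 ms sleep */
        gettimeofday(&end, NULL);

        elapsed = (end.tv_sec - start.tv_sec) * 1000000L
                + (end.tv_usec - start.tv_usec);
        printf("requested 1000 us, got %ld us\n", elapsed);
        return 0;
    }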

It's important to distinguish between the fact that an interface
lets you specify a value and the fact that, in a given environment,
all values below some minimum (something on the order of 10 clock
ticks, and that's optimistic) are simply treated as zero.  Not zero
in the sense that zero means blocking, but in the sense that the
interval, from poll()'s perspective, is effectively zero: by the
time the next context switch (or interrupt, if poll is implemented
using interrupts) occurs, the timeout will already have been
exceeded.  The poll documentation even warns that you cannot rely
on any given implementation to provide the granularity that the
interface allows you to specify.

Down at the processor level you have a synchronous machine: an
instruction cannot be executed before the previous instruction has
completed.  (There are pipelined processors for which this isn't
precisely true, but it is still true that there is _some_ amount of
time that is the maximum achievable granularity for any machine.)

>the timeout of select(), poll() and epoll() is accurate only to the
>timer interrupt (10ms on older kernels and 1ms on newer ones), yet the
>gettimeofday is more precise (it returns microseconds, quite accurately;
>the call itself takes 2us on my system). So even if gettimeofday is
>accurate, sleeping for some time might be much less accurate.

It definitely would be less accurate, but I don't see why that is
relevant to the discussion.  Sleep generally causes a context
switch, so the granularity is much coarser.  Whether poll causes a
context switch is implementation dependent, and also depends on the
specified timeout; a smart implementation might use a spin lock for
a very low poll timeout value.  At the end of the day, there is a
minimum granularity for any machine, and that minimum is always
going to be coarser than a single period of some clock, most
frequently the system clock (or, if you will, the virtual system
clock, since divider circuits slow the clock presented to different
components, e.g., the bus vs. the memory vs. the processor).
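
For what it's worth, the 2 us per call that Marcin quotes is easy
to estimate with a sketch along these lines (the iteration count is
arbitrary, and the result will of course vary from system to
system):

    #include <stdio.h>
    #include <sys/time.h>

    #define N 1000000L

    int main(void)
    {
        struct timeval start, end, scratch;
        long i, total;

        gettimeofday(&start, NULL);
        for (i = 0; i < N; i++)
            gettimeofday(&scratch, NULL);
        gettimeofday(&end, NULL);

        total = (end.tv_sec - start.tv_sec) * 1000000L
              + (end.tv_usec - start.tv_usec);
        printf("~%.2f us per gettimeofday call\n",
               (double)total / N);
        return 0;
    }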

As I said, I'll drop it if everyone disagrees with me, but I think
it is worth some thought, and the decision should be based on the
actual behavior of a machine, not on the fact that a method may
allow you to specify a lower timeout value.  That interface design
makes sense, because you wouldn't want to code poll in such a way
that it can't take advantage of higher clock speeds.  But the speed
is not infinite, and thus the granularity is not unlimited.

>BTW, this means that the most accurate way to make a delay is to use
>select/poll/epoll to sleep for some time below the given time (i.e.
>shorter by the typical largest interval by which the system makes the
>delay longer than requested), then to call gettimeofday in a loop
>until the given time arrives. The implementation of my language does
>that under the hood.

Agreed, but again, the question isn't which is the best way to do
it; the question is what granularity even the best way to do it can
achieve.
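
For concreteness, here is a C sketch of that hybrid approach.  The
delay_until function and the 20 ms SLACK_USEC value are my own
inventions for illustration; the slack stands in for the typical
largest overshoot and would have to be tuned per system.

    #include <poll.h>
    #include <sys/time.h>

    /* Assumed slack before the deadline at which to stop
       sleeping and start spinning; tune per system. */
    #define SLACK_USEC 20000L

    static long usec_until(const struct timeval *deadline)
    {
        struct timeval now;
        gettimeofday(&now, NULL);
        return (deadline->tv_sec - now.tv_sec) * 1000000L
             + (deadline->tv_usec - now.tv_usec);
    }

    void delay_until(const struct timeval *deadline)
    {
        long remaining = usec_until(deadline);

        /* Coarse sleep, stopping short of the deadline. */
        if (remaining > SLACK_USEC)
            poll(NULL, 0, (int)((remaining - SLACK_USEC) / 1000));

        /* Fine busy-wait until the deadline actually passes. */
        while (usec_until(deadline) > 0)
            ;
    }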
