Time Resolution
Marcin 'Qrczak' Kowalczyk
qrczak at knm.org.pl
Tue Feb 1 06:25:13 EST 2005
Seth Kurtzberg <seth at cql.com> writes:
> But if you look in any first year engineering textbook (for an
> engineering discipline other than CS), you will find that the
> display of a value with a precision greater than the known precision
> of the inputs is a cardinal sin.
The time expressed as an absolute number of ticks is primarily used
for computation, not for display.
For display you usually choose the format explicitly; it rarely
includes more precision than seconds, and when it does, you know
how many digits you want to show.
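For example, here is a sketch with the Data.Time API from the time
package (assuming a version where defaultTimeLocale is exported from
Data.Time.Format); the number of displayed digits is entirely the
caller's choice, whatever the internal resolution is:

import Data.Time.Clock (getCurrentTime)
import Data.Time.Format (formatTime, defaultTimeLocale)

main :: IO ()
main = do
  t <- getCurrentTime
  -- Whole seconds only; the internal resolution never leaks into the output.
  putStrLn (formatTime defaultTimeLocale "%H:%M:%S" t)
  -- If sub-second digits are wanted, the caller picks how many to show:
  -- %q is the fixed 12-digit picosecond field, so 'take 3' keeps milliseconds.
  putStrLn (formatTime defaultTimeLocale "%H:%M:%S." t
            ++ take 3 (formatTime defaultTimeLocale "%q" t))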
> I can certainly work around the problem at the application code
> level, but that's a clear violation of the encapsulation principle,
> because I'm relying on the implementation of the calculations and
> not just the interface.
When I program timers, I usually don't even care what the exact
resolution of the timer is, as long as it's good enough for users not
to notice that delays are inaccurate by a few milliseconds. The OS can
preempt my process anyway, the GC may kick in, etc.
If someone makes a real-time version of Haskell which runs on a
real-time OS, it's still not necessary to change the resolution of the
interface - it's enough for the program to know what accuracy it
should expect.
>>Anyway, what does the "resolution of the system clock" mean? On Linux
>
> It means, in general, a clock tick. poll, select, etc. cannot
> provide a timeout of greater precision than the system clock, and in
> general, since execution of an assembly language instruction takes
> multiple clock ticks, poll and its family actually can't even reach
> the precision of a single clock tick.
Ah, so you mean the processor clock, not the timer interrupt. What does
the processor clock have to do with getting the current time and setting
up delays? Anyway, I don't propose picoseconds or attoseconds.
Some numbers from my PC:
- my processor's tick is 0.8ns
- the gettimeofday interface has a resolution of 1us
- the clock_gettime interface uses 1ns, but the actual time is always a
  multiple of 1us
- a gettimeofday call takes 2us to complete
- the select interface uses 1us, but the actual delay is accurate to 1ms
- poll allows sleeping for delays accurate to 1ms, but the delay must be
  at least 1ms-2ms (two timer interrupts)
- epoll allows sleeping for delays accurate to 1ms
- if the same compiled program is run on an older kernel, select/poll
  precision is 10 times worse
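Numbers like these are easy to reproduce from Haskell. A rough sketch,
using getCurrentTime from the time package as a stand-in for
gettimeofday (the iteration count is arbitrary and the figures are
only approximate):

import Control.Monad (replicateM)
import Data.Time.Clock (getCurrentTime, diffUTCTime)

main :: IO ()
main = do
  ts <- replicateM 100000 getCurrentTime
  -- The smallest observed nonzero step approximates the clock's effective
  -- granularity; the total elapsed time divided by the number of calls
  -- approximates the cost of one call.
  case filter (> 0) (zipWith diffUTCTime (tail ts) ts) of
    []     -> putStrLn "the clock never advanced during the run"
    deltas -> putStrLn ("smallest nonzero step: " ++ show (minimum deltas))
  putStrLn ("average call cost: "
            ++ show (diffUTCTime (last ts) (head ts)
                     / fromIntegral (length ts)))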
So I have two proposals for the resolution of Haskell's representation
of absolute time:
1. Use nanoseconds.
2. Use an implementation-dependent unit (it will probably be nanoseconds
or microseconds with current implementations, but the interface
will not have to be changed if more accurate delays become practical
in 10 years).
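A minimal sketch of what proposal 2 could look like (every name here is
made up for illustration, and getPOSIXTime merely stands in for whatever
clock an implementation would really use):

module AbsTime (AbsTime, ticksPerSecond, getAbsTime, diffSeconds) where

import Data.Ratio ((%))
import Data.Time.Clock.POSIX (getPOSIXTime)

-- Absolute time as an opaque count of implementation-defined ticks
-- since some fixed epoch.
newtype AbsTime = AbsTime Integer
  deriving (Eq, Ord, Show)

-- Exposed so programs can convert to and from real units; an
-- implementation on another OS could report a different value.
ticksPerSecond :: Integer
ticksPerSecond = 1000000000   -- nanoseconds in this sketch

getAbsTime :: IO AbsTime
getAbsTime = do
  t <- getPOSIXTime
  return (AbsTime (round (t * fromIntegral ticksPerSecond)))

diffSeconds :: AbsTime -> AbsTime -> Rational
diffSeconds (AbsTime a) (AbsTime b) = (a - b) % ticksPerSecond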
> It's important to distinguish between the fact that a method allows
> you to use a value, and the fact that, in a given environment, all
> values lower than some minimum (something of the order of 10 clock
> ticks, which is optimistic) are simply treated as zero. Not in the
> sense that zero means blocking, but in the sense that the interval from
> the perspective of poll() is actually zero. The next time a context
> switch (or an interrupt, if poll is implemented using interrupts)
> occurs, the timeout will be exceeded. Checking the documentation of
> poll, there
> is even a warning that you cannot rely on any given implementation to
> provide the granularity that poll allows you to specify.
Note that the behavior of poll and epoll on Linux differs, even
though they use the same interface for expressing the delay (number
of milliseconds as a C int).
poll rounds the time up to timer interrupts (usually 1ms or 10ms),
and sleeps for the resulting time or up to one tick *longer*.
epoll rounds the time up to timer interrupts, and sleeps for the
resulting time or up to one tick *shorter* (or sometimes longer if
the process is preempted).
The behavior of poll is consistent with POSIX, which says that the
specified time is the minimum delay. The behavior of epoll allows
sleeping until the next timer interrupt by specifying 1ms (poll always
sleeps at least one full timer interrupt - I mean when it returns
because the delay has expired).
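To make the difference concrete, here is the arithmetic of the two
behaviors as pure functions (an illustration of the rounding described
above, not a binding to poll or epoll); delays and ticks are in
microseconds:

pollBounds, epollBounds :: Integer -> Integer -> (Integer, Integer)
-- poll: rounds the request up to a whole tick, then may run one tick long.
pollBounds tick req = (roundedUp, roundedUp + tick)
  where roundedUp = ((req + tick - 1) `div` tick) * tick
-- epoll: rounds the request up to a whole tick, then may run one tick short.
epollBounds tick req = (roundedUp - tick, roundedUp)
  where roundedUp = ((req + tick - 1) `div` tick) * tick

-- With a 1ms (1000us) tick and a requested delay of 1ms:
--   pollBounds  1000 1000 == (1000, 2000)  -- at least one full tick
--   epollBounds 1000 1000 == (0, 1000)     -- may wake at the next interrupt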
I've heard that, by accident, select behaves like epoll, not like poll
(except that its interface specifies microseconds; it's not more
accurate though), but I haven't checked.
So the compiler of my language measures (at ./configure time) the time
poll/epoll will usually sleep when asked to sleep for 1ms just after a
timer interrupt. This is used to calculate the delay to ask poll/epoll
for. The remaining time is waited out in a loop which calls gettimeofday,
unless another thread wants to run. This gives a practical accuracy of
about 20us here, but it degrades to 1ms when other threads or processes
interfere.
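In Haskell terms the same trick looks roughly like this; threadDelay
stands in for the poll/epoll call, and coarseSlop is a hard-coded guess
standing in for the value measured at configure time:

import Control.Concurrent (threadDelay)
import Control.Monad (when)
import Data.Time.Clock (getCurrentTime, addUTCTime)

-- How much the coarse sleep may overshoot, in seconds.  In the scheme
-- described above this is measured at ./configure time; here it is a guess.
coarseSlop :: Double
coarseSlop = 0.002

-- Sleep for the given number of seconds: let threadDelay cover everything
-- except the last coarseSlop, then spin on the clock for the remainder.
-- (The real scheme also gives up the spin when another thread wants to run.)
preciseSleep :: Double -> IO ()
preciseSleep secs = do
  start <- getCurrentTime
  let deadline = addUTCTime (realToFrac secs) start
      coarse   = secs - coarseSlop
  when (coarse > 0) $ threadDelay (round (coarse * 1000000))
  let spin = do
        now <- getCurrentTime
        when (now < deadline) spin
  spin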
This means that the "resolution of a delay" is not a well-defined
concept. It depends on too many variables, for example on the time it
takes for a gettimeofday call to return and on the activity of other
threads and processes.
--
__("< Marcin Kowalczyk
\__/ qrczak at knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/