Tue, 17 Jun 2003 16:43:02 -0700
On Tue, Jun 17, 2003 at 06:20:32PM -0400, Matthew Donadio wrote:
> John Meacham wrote:
> > ClockTimes and TimeDiffs are ALWAYS TAI with an epoch of 1970-01-01
> > 00:00:10 TAI (to correspond to other libraries). a second is always a
> > second.
> The UTC second and the TAI second are precisely the same interval, and
> "tick" at the same time; TAI and UTC always differ by an integral number
> of seconds. TAI and UT0/UT1/UT2 are different.
yeah, this is what I meant by the difference only being made when
translating to a CalendarTime. when represented as an offset from a
specific epoch, they SHOULD be the same (but aren't in practice).
when a system which works via an offset from an epoch works with UTC
(and even if the internal representation doesn't use offset from epoch,
the same problems apply to any system which wishes to find the
difference between two times), ONE of the following MUST be true:
1) a UTC second is interpreted as a generally unpredictable different duration than a TAI second.
2) past timestamps (and possibly the current one) are incorrectly interpreted, off by a generally unpredictable number of seconds.
3) you have a table of every leap second and all is well.
Unixes tend to do 1 when synchronized externally (like via NTP).
free-running boxen (without external synchronization) do 2.
there are libraries which do 3, which is good.
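as a minimal sketch of option 3 (the function names and table entries below are illustrative, not a real leap second list or any actual library's API): with a table of the TAI instants at which leap seconds were inserted, going from a TAI offset to a POSIX-style UTC count is just a lookup and a subtraction.

```python
# Sketch of option 3: convert a TAI offset-from-epoch to a POSIX-style
# UTC count by subtracting accumulated leap seconds.
# NOTE: the table entries are hypothetical, not real leap second data.
import bisect

# TAI offsets (seconds since the 1970-01-01 00:00:10 TAI epoch) at
# which a leap second had just been inserted.
LEAP_TABLE = [78796810, 94694411, 126230412]  # hypothetical values

def leaps_before(tai: int) -> int:
    """Number of leap seconds at or before the given TAI offset."""
    return bisect.bisect_right(LEAP_TABLE, tai)

def tai_to_posix(tai: int) -> int:
    """POSIX-style UTC seconds for a TAI offset: leap seconds removed.

    The epoch above already absorbs the initial 10-second TAI-UTC
    delta, so only the tabled leap seconds need subtracting.
    """
    return tai - leaps_before(tai)
```

the arithmetic stays plain integer subtraction; all the UTC messiness is confined to the table lookup.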
> > this greatly simplified the internals as simple arithmetic on
> > integers is always correct.
> > the only time UTC and leap seconds should come into play is when
> > converting to or from a CalendarTime, since UTC is meant to be a
> > human-consumable notion of time, not a precise one.
> I'm not sure if this is really a correct notion of UTC.
> TAI is atomic time, and ticks at a precisely defined rate. UT1 is
> corrected solar time (actually sidereal time converted to solar time and
> corrected), and due to quirks in the earth's rotation, is not constant.
> UTC is a compromise between the two. UTC ticks at TAI's rate, but is
> corrected with leap seconds to keep it within +- 0.9 seconds of UT1.
simple arithmetic only works when option 1 above isn't chosen. however,
most people who say 'just use UTC and forget about leap seconds' are
implicitly choosing option 1 above without realizing it.
> > We will have to
> > assume that an oracle exists to tell us which years had leap seconds in
> > them, but such information is required by any scheme which works with
> > UTC, many systems provide them and it is easy enough to embed a table.
> I have to dig out my files on this (they are currently MIA due to job
> changes), but I believe the problem with this approach has to do with
> updating the leap second table in deployed systems. Also, all time
> broadcasts are by international agreement UTC (GPS may be different, but
> I can't remember), so anything a computer receives is going to be UTC.
> TAI may be the best thing to do in an ideal world, but the world is
> pretty much stuck with UTC.
* But you need those tables anyway. *
There is no correct solution which involves UTC and does not require
tables of leap seconds. I recognize that such tables will not always be
available or up to date, in which case the time might be a little off,
but there is no way around that; such systems are just slightly
non-conformant, which is okay for many people. but we should not
standardize on a vaguely defined, incorrect semantics; rather, we should
choose the correct solution and let implementations do their best to
conform to it on a given system.
for an example of why you can't do UTC without a table:
convert 1000 seconds after the epoch into a UTC CalendarTime. you can't
without knowing how many leap seconds occurred in those 1000 seconds.
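to make that ambiguity concrete (a tiny sketch; `utc_label` and the leap counts are illustrative, and it ignores the possibility of the instant landing on a :60 second): the calendar label for offset 1000 changes with the assumed leap count, which you cannot know without a table.

```python
# Illustrative: the calendar label for "1000 seconds after the epoch"
# depends on how many leap seconds (an unknown without a table) fell
# inside that interval.
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def utc_label(offset: int, leaps_in_interval: int) -> str:
    """Calendar time-of-day for a TAI offset under an assumed leap count."""
    return (EPOCH + timedelta(seconds=offset - leaps_in_interval)
            ).strftime("%H:%M:%S")
```

here `utc_label(1000, 0)` gives "00:16:40" while `utc_label(1000, 1)` gives "00:16:39": same instant, two labels, and only a leap second table can tell you which is right.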
alternatively, assume you have UTC time natively, and convert 2000
seconds ago into a CalendarTime; you cannot without leap second tables,
because you don't know how many leap seconds occurred in the last 2000.
UNIX hacks around this by changing the length of a second around a leap
second, so every timestamp, when interpreted as an offset from the epoch
without any leap seconds (i.e. every minute is 60 seconds), is correct;
but the tradeoff is that the length of a second is no longer defined and
you can't do time arithmetic or time offsets correctly.
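the cost of that hack can be shown in a few lines (illustrative: a single hypothetical leap second at TAI offset 1000, and the function names are mine): POSIX-style labels squeeze the leap second out, so naive subtraction of two labels straddling it undercounts the real interval.

```python
# Illustrative: one hypothetical leap second at TAI offset 1000.
# POSIX-style timestamps pretend it never happened, so subtracting
# two of them across the leap second loses a real elapsed second.
LEAP_AT = 1000  # hypothetical leap second instant (TAI offset)

def posix_label(tai: int) -> int:
    """POSIX-style timestamp for a TAI offset (leap second removed)."""
    return tai - 1 if tai >= LEAP_AT else tai

def elapsed_tai(a: int, b: int) -> int:
    """True elapsed seconds between two instants, from TAI offsets."""
    return b - a

def elapsed_posix(a: int, b: int) -> int:
    """What naive subtraction of POSIX timestamps reports."""
    return posix_label(b) - posix_label(a)
```

with these definitions, `elapsed_tai(999, 1060)` is 61 (one of those seconds was the leap second) while `elapsed_posix(999, 1060)` is only 60: the arithmetic is simple, but wrong by the number of straddled leap seconds.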
the moral: saying 'use UTC' doesn't mean anything precise unless you
specify the hacky way to interpret UTC, which is going to be just as
complicated as, and less functional than, saying 'use TAI' in the first
place. at least then we would have the ability to actually represent
precise times when the system provides enough resources to do so.
John Meacham - California Institute of Technology, Alum. - firstname.lastname@example.org