Concerning Time.TimeDiff
Graham Klyne
GK@ninebynine.org
Thu, 19 Jun 2003 12:33:30 +0100
I'm going to offer what is probably a contrary view in this community. I
find myself concerned about the direction of this discussion, but am having
some trouble figuring out whether my concerns are justified or
over-conservatism.
My concern is that in pursuit of perfection we may sacrifice
utility. (Or: "the best is the enemy of the good".) On any hardware that
I'm familiar with, processing rationals, or unlimited-precision numbers, is
significantly more expensive than using the native machine
capabilities. So, do the benefits sufficiently justify the cost of using
rational (or indefinite-precision) values for time calculations,
especially when every program that uses the common library function must
pay those costs? I think the answer partly depends on what kinds of
applications Haskell will be used to implement.
If the view is that Haskell is primarily for writing programs that are
provably correct in all conceivable circumstances, then the case for using
rational time values is clear. But (partly inspired by Backus back in
1978, and the very practically useful work of the Haskell community in
developing the language and tools) I see Haskell as something far closer
to a "mainstream" programming option. I think the evolving work
on type-safety and generics gives Haskell real potential value in an
"industrial" setting, where the errors of concern are usually not about
losing leap-seconds, or software that will still be operationally correct
millennia from now, but rather about whether it will help us deal with the
increasing complexity of application design without leaving stupid
trapdoors for accidental or malicious subversion of the code.
I guess that reasonably efficient 64-bit support (with a little software
assist) is pretty much universal on any machine I can imagine running
Haskell, and I note that 64 bits (about 10^19 values) comfortably holds a
second's worth of picoseconds (10^12).
A rough calculation gives 2^64 picoseconds = about 5000 hours, so 64 bits
clearly isn't enough to hold all useful dates in picoseconds. 2^64
seconds, on the other hand, is enough to represent many more years than I
could shake a stick at (something like 5*10^11 years). Dealing with
sub-picosecond intervals is something I find hard to imagine being a
common requirement (I may often talk about nanoseconds in the context of
computers, but I've never really had to compute with them: milliseconds
have been about the smallest I've had to deal with).
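For concreteness, the back-of-envelope figures above can be checked with a
few lines of Haskell (a throwaway sketch of my own arithmetic, not anything
from a proposed library):

```haskell
-- Sanity-checking the ranges quoted above.  These are my own
-- back-of-envelope definitions, not part of any time library.
picosPerSecond :: Integer
picosPerSecond = 10 ^ 12

-- 2^64 picoseconds, expressed in hours: roughly 5000.
hoursIn2to64Picos :: Integer
hoursIn2to64Picos = (2 ^ 64) `div` (picosPerSecond * 3600)

-- 2^64 seconds, expressed in (365-day) years: roughly 5 * 10^11.
yearsIn2to64Seconds :: Integer
yearsIn2to64Seconds = (2 ^ 64) `div` (365 * 24 * 3600)
```

Loading this into GHCi gives about 5.1*10^3 hours and 5.8*10^11 years
respectively, which is where the figures above come from.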
My point is that seconds and picoseconds, each represented as a 64-bit
binary value, are a pretty efficient engineering choice that I think will
satisfy the vast majority of the requirements of actual applications that
use a common time library, and one which doesn't hold any potential
performance pitfalls.
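To make the suggestion concrete, here's a rough sketch (the names and
details are mine, not a proposed interface) of the kind of
seconds-plus-picoseconds pair I have in mind, with arithmetic that keeps
the picosecond field normalised:

```haskell
import Data.Int (Int64)

-- Hypothetical sketch of a seconds-plus-picoseconds time value:
-- two 64-bit fields, picoseconds kept in the range [0, 10^12).
data ClockTime64 = ClockTime64
  { ctSeconds :: !Int64  -- whole seconds since some epoch
  , ctPicos   :: !Int64  -- sub-second part, 0 <= ctPicos < 10^12
  } deriving (Eq, Ord, Show)

picosPerSecond :: Int64
picosPerSecond = 10 ^ 12

-- Bring the picosecond field back into range after arithmetic,
-- carrying any excess into the seconds field.
normalise :: Int64 -> Int64 -> ClockTime64
normalise s p = ClockTime64 (s + q) r
  where (q, r) = p `divMod` picosPerSecond

addTimes :: ClockTime64 -> ClockTime64 -> ClockTime64
addTimes (ClockTime64 s1 p1) (ClockTime64 s2 p2) =
  normalise (s1 + s2) (p1 + p2)
```

The `divMod` in `normalise` also handles negative picosecond values
sensibly, which matters if the same representation is used for time
differences.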
I could, of course, be wrong and short-sighted in this view, but I find it
hard to lose sleep over missing leap-seconds and dates beyond the lifetime
of the Universe(?) for the majority of applications built using a
general-purpose programming system. And the cost of supporting all this
may not be trivial in practical terms -- I don't have a good handle on
that, but I'll comment that time calculations might be a significant
computational burden for a real-time system dealing with high event rates
(and I think we'll see lots of these applications).
#g
--
At 13:09 18/06/03 -0400, Dean Herington wrote:
>"Ketil Z. Malde" wrote:
>
> > "Simon Marlow" <simonmar@microsoft.com> writes:
> >
> > > - ClockTime and TimeDiff are now represented as
> > > Integer picoseconds only. Hence, they also now derive
> > > Num, Enum, and Integral.
> >
> > I think this is the most aesthetically pleasing. From a practical
> > point of view, we should perhaps consider the possible need to
> > represent times of higher resolution, and the practical need to use
> > much lower resolution. Division by 10^12, or the need to push really
> > large integers around isn't going to end up being costly, is it?
>
>Representing times as `Rational` seems more elegant:
> * It handles widely varying needs for resolution nicely.
> * It avoids choosing picoseconds as the finest possible resolution.
>What are the downsides to `Rational`? And if those downsides are serious
>enough, it would seem that the next best approach would be to represent times
>abstractly.
>
>Dean
>
>_______________________________________________
>Libraries mailing list
>Libraries@haskell.org
>http://www.haskell.org/mailman/listinfo/libraries
-------------------
Graham Klyne
<GK@NineByNine.org>
PGP: 0FAA 69FF C083 000B A2E9 A131 01B9 1C7A DBCA CB5E