Hi,

I discovered that in base, logBase, the function representing arbitrary logarithms, is defined in terms of two applications of log. log, the function representing the natural logarithm, just passes responsibility on to some primitives. That is fine, mathematically speaking, but from an implementation standpoint I wonder why one would do it that way.

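For reference, the definition I am talking about is (roughly) the default method of the Floating class:

    logBase :: Floating a => a -> a -> a
    logBase x y = log y / log x   -- two applications of log plus a division
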
The logarithm that should be fastest and most precise for a CPU to approximate is the one to base two. In fact, if one already has a floating-point representation, its integer part should be almost ridiculously easy to read off from the exponent.

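To illustrate, here is a minimal sketch (the name floorLog2 is mine) that reads the integer part of the base-two logarithm straight out of the exponent field, using the standard RealFloat machinery and no logarithm at all:

    -- floor (logBase 2 x) for positive x, without computing any logarithm.
    -- 'exponent' gives e with x = m * 2^e and 0.5 <= m < 1,
    -- so the floor of the base-two logarithm of x is simply e - 1.
    floorLog2 :: Double -> Int
    floorLog2 x = exponent x - 1

For example, floorLog2 8 == 3 and floorLog2 7 == 2.
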
I suppose it's probably a good idea to use primitive functions for the harder cases to get some speed-up. What I don't see is why the choice seems to have been an xor instead of an and: there is a primitive for the natural logarithm, but no fast path for base two alongside it.

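To make the "and" concrete: one could imagine keeping the primitive-backed log for arbitrary bases and adding a special case for base two that exploits the exponent, something like this hypothetical sketch (precision caveats aside):

    -- Hypothetical fast path for base two: split x into m * 2^e and
    -- take a real logarithm only of the mantissa m, which lies in [0.5, 1).
    log2 :: Double -> Double
    log2 x = fromIntegral (exponent x) + log (significand x) / log 2
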
I am by absolutely no means an expert on these things, so I am probably missing something here. Is there actually a good reason for this choice? For example, will GHC optimize such logarithms anyway?

Cheers,

MarLinn