[Haskell-cafe] Tor project
auke at tulcod.com
Fri Aug 1 08:03:01 UTC 2014
On 1 August 2014 08:59, Friedrich Wiemer <friedrichwiemer at gmail.com> wrote:
> A comparison would be interesting though. I'm aware of constant time
> implementations of cryptographic functions to reduce timing issues, but
> these are often coded in C or ASM - I have no clue if this would be
> possible in a functional language, as the compiler has to know not to
> optimize for short cuts or anything in the code?
A while ago I thought about how to make "fixed time"
implementations of some standard functions.
What I *think* would give a solution is the following.
Why do C implementations hold up against, say, timing attacks?
Well, a very typical scenario is password checking: you have two input
[Char]s, and need to see if they match, which you do by zipping them
and seeing if all pairs are of equal elements. Now of course, once you
find the first element that doesn't match, you /can/ stop the
computation because you know the result, but you don't /want/ to, because... well,
because you don't want to expose the fact that you're done. In other
words, you want to look like you're still processing!
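To make the scenario concrete, here is a sketch of the leaky check
described above (the name leakyEq is mine, not anything standard):

```haskell
-- Naive, leaky equality check: 'all' short-circuits at the first
-- mismatching pair, so the running time reveals how long the
-- matching prefix of the two strings is.
leakyEq :: String -> String -> Bool
leakyEq xs ys = length xs == length ys
             && all (uncurry (==)) (zip xs ys)

main :: IO ()
main = do
  print (leakyEq "secret" "secret")  -- True
  print (leakyEq "secret" "sXcret")  -- False
```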
So instead, let's make an implementation where even if halfway through
checking equality of two passwords we discover that they're unequal,
there is still some more information we need to compute! Let's
/actually/ make it so that we still need to do more computations.
So what if, for every standard library Haskell function, we made a
variant that accumulates some small piece of data that takes the
full input to compute, but that in the end we are not interested in?
For example, we could make an implementation of
all :: (a -> Bool) -> [a] -> Bool (with a different signature) which
doesn't just compute the output we need, but also computes a
"cumulative hash function" (for lack of a better word) over all the
elements.
So let us give a name, "slack data", to the trash data we attach to
any input data, whose hash (or whatever constant time function of it)
gets accumulated throughout the algorithm that processes the input....
Now, exactly what slack data you attach to input data depends on the
thing that you want to be constant in: if you want your computation to
always take as long as the length of the input data, attach that much
slack data. But if, for example, passwords can be up to 128 chars
long, and you don't want to expose /any/ information about the given
password, you should attach 128 items of slack data to your input,
possibly making it larger than the password itself.
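A minimal sketch of that padding idea, assuming a 128-char maximum
(padTo and fixedEq are hypothetical names, and NUL is an arbitrary
choice of slack element):

```haskell
import Data.Bits (xor, (.|.))
import Data.Char (ord)
import Data.List (foldl')

-- Pad (or truncate) every input to a fixed 128 characters before
-- comparing, so the amount of work depends only on the fixed width,
-- never on the real password's length.
padTo :: Int -> String -> String
padTo n s = take n (s ++ replicate n '\0')

fixedEq :: String -> String -> Bool
fixedEq xs ys =
  foldl' (\acc (x, y) -> acc .|. (ord x `xor` ord y)) 0
         (zip (padTo 128 xs) (padTo 128 ys)) == 0

main :: IO ()
main = do
  print (fixedEq "secret" "secret")   -- True
  print (fixedEq "secret" "secrets")  -- False
```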
Let me know if the above didn't make sense. I don't know how to
implement this, but I think it might give a way to fix *many*
side-channel leaks using functional techniques.
PS: obviously this is not how you want to be checking passwords, but
this is quite a typical side channel leak scenario.