[Haskell-cafe] Tor project

Tobias Dammers tdammers at gmail.com
Fri Aug 1 09:56:35 UTC 2014


On Fri, Aug 01, 2014 at 09:35:37AM +0200, Wojtek Narczyński wrote:
> 
> Why the gimmick, for each and every piece of crypto code? Why not just use
> the clock for timing? The crypto lib would probably have to tune itself on
> startup.

Sorry, I didn't mean using the crypto function to cause the delay; I
meant using a cryptographic hash of the input to determine how long the
delay should be. Once we have that number, the system clock should of
course be used to actually perform the delay.

The point of this is as follows.

The attacker sends a large number of messages, each of them multiple
times, and averages the execution times per message. The reasonable
expectation is that the averages will fall into distinct groups, each
group representing one execution path. The averaging is needed because
there will be fluctuations anyway, if only because other things are
happening on the network and on the target machine; but those noise
sources are typically uniformly or at least normally distributed, so
averaging filters them out, and what remains after filtering is the
part that interests the attacker. Then if you have sampled, say, 10,000
random usernames, and 10 of them end up in one group while the other
9,990 end up in the other, it's a pretty good guess that those 10 are
usernames that actually exist on the target system and the others
don't.

Now, if we add some random noise, we just add one more uniformly
distributed noise source to our signal, but we already have a bunch of
those, so we're not really winning anything here - the extra noise will
just get filtered out along with the rest.
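A toy simulation of that claim (the numbers are illustrative only):
both code paths get shifted by the same mean noise, so the difference
between them survives the averaging:

    import Control.Monad (replicateM)
    import System.Random (randomRIO)

    -- A fixed "true" cost in ms plus uniform random padding in
    -- [0, 10] ms, standing in for the added random delay.
    noisySample :: Double -> IO Double
    noisySample trueCost = do
      noise <- randomRIO (0, 10)
      pure (trueCost + noise)

    meanOf :: Int -> Double -> IO Double
    meanOf n trueCost = do
      xs <- replicateM n (noisySample trueCost)
      pure (sum xs / fromIntegral n)

    -- meanOf 10000 1.0 comes out near 6.0 and meanOf 10000 2.0 near
    -- 7.0: both are shifted by the same ~5 ms mean noise, so the 1 ms
    -- gap between the two code paths is still plainly visible.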

However, if we add noise that depends on the input, the filtering won't
remove it, because for each given input it is constant. Instead of
removing the noise component from `code(input) + system_noise()` and
being left with `code(input)`, the attacker now deals with
`code(input) + hash(input) + system_noise()`, and averaging leaves them
with `code(input) + hash(input)`. Without cracking the hash, I don't
know how you'd remove the hash-based delay from the sample. Possible
implementation mistakes include:

- insufficient granularity in the hash-based delay (e.g. if the
  difference between code paths is 1ms and our delay comes in 100ms
  steps, the hash noise is still trivial to filter out);
- insufficient range (if the difference between code paths is 100ms
  and our delay is between 0 and 25ms, we still end up with two
  distinct groups);
- a weak hash function;
- and probably a bunch more.
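As a concrete (untested) sketch of the idea, using cryptonite's
Crypto.Hash - the per-deployment secret, the microsecond bound, and all
the names here are my own assumptions, not a worked-out design:

    import Control.Concurrent (threadDelay)
    import Crypto.Hash (SHA256 (..), hashWith)
    import qualified Data.ByteArray as BA
    import qualified Data.ByteString as BS

    -- Deterministic delay in [0, maxDelayUs) derived from the input.
    -- Same input => same delay, so averaging cannot remove it.  A
    -- per-deployment secret (assumed) is mixed in so the attacker
    -- cannot just compute hash(input) themselves and subtract it.
    inputDelayUs :: BS.ByteString -> BS.ByteString -> Int -> Int
    inputDelayUs secret input maxDelayUs =
        fromIntegral (w `mod` fromIntegral maxDelayUs)
      where
        digest = hashWith SHA256 (secret <> input)
        -- fold the first 8 digest bytes into one big number
        w = BS.foldl' (\acc b -> acc * 256 + fromIntegral b)
                      (0 :: Integer)
                      (BS.take 8 (BA.convert digest))

    -- Run a handler, then pad its latency with the hash-derived delay.
    withInputNoise :: BS.ByteString -> BS.ByteString -> Int -> IO a
                   -> IO a
    withInputNoise secret input maxDelayUs action = do
      r <- action
      threadDelay (inputDelayUs secret input maxDelayUs)
      pure r

Note that maxDelayUs has to comfortably exceed the largest timing
difference between code paths (the range pitfall above), and that
threadDelay only guarantees a lower bound on the delay, so it adds a
bit of system noise of its own - which is fine, since that part
averages out anyway.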

> 
> -- 
> Wojtek N.

