[Haskell-cafe] Tor project
wojtek at power.com.pl
Fri Aug 1 07:35:37 UTC 2014
On 01.08.2014 09:10, Tobias Dammers wrote:
> Timing attacks generally work such that the attacker samples a large
> number of runs and maps their execution times; "humps" in the graph show
> different code paths being taken.
> Because they're statistical attacks, adding uniform noise (random
> delays) does not work - it's uniform noise after all, you'll just raise
> the floor, but the humps will still be there; the random delays will
> just average out. Ten million times (10 + rand()) is still going to be
> smaller than ten million times (1000 + rand()).
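The averaging-out effect is easy to see numerically. Here is a minimal Haskell sketch (a toy LCG stands in for rand(); the path costs 10 and 1000 are made-up numbers from the quoted example):

```haskell
-- Toy linear congruential generator standing in for rand()
-- (Numerical Recipes parameters; purely illustrative).
lcg :: Int -> Int
lcg s = (1664525 * s + 1013904223) `mod` (2 ^ 32)

-- A stream of uniform-ish noise in [0, 100), seeded deterministically.
noise :: Int -> [Double]
noise seed = map (\s -> fromIntegral (s `mod` 100)) (iterate lcg (lcg seed))

mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

main :: IO ()
main = do
  let n = 100000
      fastPath = map (10 +) (take n (noise 1))    -- cheap code path + noise
      slowPath = map (1000 +) (take n (noise 2))  -- expensive code path + noise
  -- The uniform noise averages out, so the ~990-unit gap between the
  -- two code paths is still plainly visible in the sample means.
  print (mean slowPath - mean fastPath)
```

With enough samples, the difference of means converges to the true cost difference regardless of how large the uniform noise is.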
Okay, so basically we would have to add input-dependent noise. For
example, draw the delays from a Normal distribution, or a hump-shaped
one (Beta, Weibull).
> Instead, a way I've seen recommended is run the entire input through a
> cryptographic hash function, and derive a delay from that, in a range
> that dwarfs the actual timing difference. This way, things will still
> average out, but the averaging will not compensate for the delay we
> added, because that delay is always the same for a given input.
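A sketch of that idea in Haskell. FNV-1a is used here only as a compact stand-in for a real (ideally keyed) cryptographic hash such as HMAC-SHA256; the delay range of 1–51 ms is an arbitrary illustration:

```haskell
import Control.Concurrent (threadDelay)
import Data.Bits (xor)
import Data.Char (ord)
import Data.List (foldl')

-- FNV-1a over the input; a stand-in for a keyed cryptographic hash.
fnv1a :: String -> Word
fnv1a = foldl' step 0xcbf29ce484222325
  where step h c = (h `xor` fromIntegral (ord c)) * 0x100000001b3

-- Derive a delay (microseconds) from the input itself, in a range
-- chosen to dwarf the data-dependent timing difference being masked.
inputDelay :: String -> Int
inputDelay input = 1000 + fromIntegral (fnv1a input `mod` 50000)

-- Pad a request with its input-derived delay before responding.
padTiming :: String -> IO ()
padTiming input = threadDelay (inputDelay input)
```

The key property: the same input always yields the same delay, so averaging repeated queries for that input cannot strip the padding out.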
> Of course designing functions to minimize execution time differences is
> still a good idea; it's just not very reliable on its own, and for many
> real-world cases, it's not even practical (or outright impossible) -
> even if your code doesn't branch based on whether a username was found
> in the database or not, your database does.
Why the gimmick for each and every piece of crypto code? Why not just
use the clock for timing, and pad every operation out to a fixed
duration? The crypto lib would probably have to tune itself (measure
its worst-case timing) on startup.
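One way to read "use the clock" is to pad every operation out to a fixed time budget. A minimal sketch, assuming the budget exceeds the worst-case runtime (which is what the startup tuning would have to establish):

```haskell
import Control.Concurrent (threadDelay)
import GHC.Clock (getMonotonicTime)  -- base >= 4.11, returns seconds

-- Run an action, then sleep until a fixed time budget has elapsed,
-- so callers always observe (at least) the same total duration.
withFixedBudget :: Double -> IO a -> IO a
withFixedBudget budget action = do
  start <- getMonotonicTime
  r <- action
  end <- getMonotonicTime
  let remaining = budget - (end - start)
  threadDelay (max 0 (round (remaining * 1e6)))
  pure r
```

The caveat is the assumption itself: if the action ever overruns the budget, the timing difference leaks again, which is why the budget would have to be tuned against the worst case.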