[Haskell-cafe] Improvements to package hosting and security

Michael Snoyman michael at snoyman.com
Fri Apr 17 03:34:06 UTC 2015

On Thu, Apr 16, 2015 at 4:36 PM Bardur Arantsson <spam at scientician.net> wrote:

> On 16-04-2015 14:18, Michael Snoyman wrote:
> [--snip--]
> > I never claimed nor intended to imply that range requests are
> > non-standard. In fact, I'm quite familiar with them, given that I
> > implemented that feature of Warp myself! What I *am* claiming as
> > non-standard is using range requests to implement an incremental update
> > protocol of a tar file. Is there any prior art to this working correctly?
> > Do you know that web servers will do what you need and serve the byte
> > offsets from the uncompressed tar file instead of the compressed tar.gz?
> Why would HTTP servers serve anything other than the raw contents of the
> file? You usually need special configuration for that sort of thing,
> e.g. mapping based on requested content type. (Which the client should
> always supply correctly, regardless.)
> "Dumb" HTTP servers certainly don't do anything weird here.
There actually is a weird point about browsers and servers around
pre-gzipped content, which is what I was trying to get at (but didn't do a
clear enough job of explaining). There's some ambiguity, when sending
compressed tarballs, as to whether the client should decompress the
payload. http-client had to implement a workaround for this specific case.


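As a hedged sketch of the kind of special-casing involved (the content type
and policy below are illustrative, not http-client's exact code): a client
can key its decompression decision on the response's Content-Type, leaving
pre-gzipped tarballs untouched.

```haskell
-- Illustrative decompression policy for an HTTP client, sketching the
-- browser-like special case: transparently decompress gzip-encoded
-- responses for most content types, but hand back the raw bytes for
-- tarballs, where the .tar.gz payload itself is what the caller wants.
type ContentType = String

shouldDecompress :: ContentType -> Bool
shouldDecompress "application/x-tar" = False  -- keep the raw .tar.gz bytes
shouldDecompress _                   = True
```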
> [--snip--]
> > On the security front: it seems that we have two options here:
> >
> > 1. Use a widely used piece of software (Git), likely already in use by the
> > vast majority of people reading this mailing list, relied on by countless
> > companies and individuals, holding source code for the kernel of likely
> > every mail server between my fingertips and the people reading this email,
> > to distribute incremental updates. And as an aside: that software has
> > built-in support for securely signing commits and verifying those signatures.
> >
> I think the point that was being made was that it might not have been
> hardened sufficiently against malicious servers (being much more
> complicated than an HTTP client, for good reasons). I honestly don't know
> how much such hardening it has received, but I doubt that it's anywhere
> close to HTTP clients in general. (As to the HTTP client Cabal uses, I
> wouldn't know.)
AFAIK, neither of these proposals as they stand has anything to do with
security against a malicious server. In both cases, we simply need to trust
the server to be sending the right data. Some kind of signing mechanism is
a mitigation against that, such as the GPG signatures I added to
all-cabal-files. HTTPS from Hackage would help prevent MITM attacks, and
having the 00-index file be cryptographically signed would be another
mitigation (though I don't know what Duncan has planned here).
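As a rough sketch of what client-side verification of such detached GPG
signatures could look like (the function name and file paths are mine, not
from any proposal; this shells out to an external gpg binary and treats any
failure as an invalid signature):

```haskell
-- Hypothetical sketch: verify a detached GPG signature over an index
-- file by invoking the external gpg binary. A non-zero exit code (bad
-- signature, missing key, missing file) is treated as failure.
import System.Exit (ExitCode (..))
import System.Process (readProcessWithExitCode)

verifyDetachedSig :: FilePath -> FilePath -> IO Bool
verifyDetachedSig sigFile dataFile = do
  (code, _out, _err) <- readProcessWithExitCode
                          "gpg" ["--verify", sigFile, dataFile] ""
  return (code == ExitSuccess)
```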

> [--snip--]
> > I get that you've been working on this TUF-based system in private for a
> > while, and are probably heavily invested already in the solutions you
> > came up with in private. But I'm finding it very difficult to see the
> > reasoning behind reinventing wheels that don't need reinventing.
> >
> That's pretty... uncharitable. Especially given that you also have a
> horse in this race.
> (Especially, also considering that your proposal *doesn't* address some
> of the vulnerabilities mitigated by the TUF work.)
I actually really don't have a horse in this race. It seems like a lot of
people missed this from the first email I sent, so to repeat myself:

> I wrote up a strawman proposal last week[5] which clearly needs work to
be a realistic option. My question is: are people interested in moving
forward on this? If there's no interest, and everyone is satisfied with
continuing with the current Hackage-central-authority, then we can proceed
with having reliable and secure services built around Hackage. But if
others- like me- would like to see a more secure system built from the
ground up, please say so and let's continue that conversation.

My "horse in the race" is a security model that isn't based on putting all
trust in a single entity. Other than that, I'm not invested in any specific
direction. Using TUF sounds like a promising idea, but- as I raised in the
other thread- I have my concerns.

All of that said: the discussion here is about efficient incremental
downloads, not package signing. For some reason those two points are
getting conflated here.
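
To make the incremental-download idea concrete, here is a small sketch
under the assumption that the index tarball is append-only (the names are
mine, not from either proposal): a client that already holds the first n
bytes requests only the suffix via an HTTP Range header.

```haskell
-- Sketch of resuming an append-only tar download. Tar entries are
-- padded to 512-byte blocks, so a safe resume offset is block-aligned,
-- discarding any trailing partial block from an interrupted download.
tarBlockSize :: Integer
tarBlockSize = 512

-- Round a local file size down to a block boundary.
alignDown :: Integer -> Integer
alignDown n = (n `div` tarBlockSize) * tarBlockSize

-- HTTP Range header value requesting everything from the resume
-- offset onward, e.g. "bytes=1024-".
rangeFrom :: Integer -> String
rangeFrom offset = "bytes=" ++ show offset ++ "-"
```

Note that this only makes sense against the uncompressed .tar: byte
offsets into a .tar.gz shift whenever earlier content changes, which is
exactly the concern raised earlier in this thread.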
