<div dir="ltr"><br><br><div class="gmail_quote">On Thu, Apr 16, 2015 at 4:36 PM Bardur Arantsson <<a href="mailto:spam@scientician.net">spam@scientician.net</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 16-04-2015 14:18, Michael Snoyman wrote:<br>
[--snip--]<br>
> I never claimed nor intended to imply that range requests are non-standard.<br>
> In fact, I'm quite familiar with them, given that I implemented that<br>
> feature of Warp myself! What I *am* claiming as non-standard is using range<br>
> requests to implement an incremental update protocol of a tar file. Is<br>
> there any prior art to this working correctly? Do you know that web servers<br>
> will do what you need and serve the byte offsets from the uncompressed tar<br>
> file instead of the compressed tar.gz?<br>
<br>
Why would HTTP servers serve anything other than the raw contents of the<br>
file? You usually need special configuration for that sort of thing,<br>
e.g. mapping based on requested content type. (Which the client should<br>
always supply correctly, regardless.)<br>
<br>
"Dumb" HTTP servers certainly don't do anything weird here.<br>
<br></blockquote><div><br></div><div>There actually is a quirk in how browsers and servers handle pre-gzipped content, which is what I was trying to get at (but didn't explain clearly enough). When a compressed tarball is served, there is some ambiguity about whether the client should transparently decompress it. http-client had to implement a workaround for exactly this:</div><div><br></div><div><a href="https://www.stackage.org/haddock/nightly-2015-04-16/http-client-0.4.11.1/Network-HTTP-Client-Internal.html#v:browserDecompress">https://www.stackage.org/haddock/nightly-2015-04-16/http-client-0.4.11.1/Network-HTTP-Client-Internal.html#v:browserDecompress</a><br></div>
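<div><br></div><div>To make that concrete, here's a rough sketch of the kind of range request being discussed. It's purely illustrative and not from either proposal: the URL, the byte offsets, and the use of parseRequest (older http-client versions spell it parseUrl) are my own assumptions. The Accept-Encoding: identity header is a defensive measure so the requested byte range refers to the tar file as stored, not to some on-the-fly gzip-encoded representation:</div><div><br></div>
<pre>
{-# LANGUAGE OverloadedStrings #-}
-- Illustrative sketch only: fetch the first 1 KiB of a (hypothetical)
-- uncompressed 00-index.tar via an HTTP Range request.
import Network.HTTP.Client
import qualified Data.ByteString.Lazy as L

main :: IO ()
main = do
  mgr  <- newManager defaultManagerSettings
  req0 <- parseRequest "http://example.org/packages/00-index.tar"  -- made-up URL
  let req = req0
        { requestHeaders =
            [ ("Range", "bytes=0-1023")        -- first 1 KiB of the tar
            , ("Accept-Encoding", "identity")  -- ask for the raw bytes, no gzip
            ]
        }
  resp <- httpLbs req mgr
  -- A server that honours the range answers 206 Partial Content; one that
  -- ignores it answers 200 with the whole file, so the status must be checked.
  print (responseStatus resp)
  print (L.length (responseBody resp))
</pre>
<div><br></div><div>Whether every server and proxy between the client and Hackage behaves sensibly for such requests against a .tar (let alone a .tar.gz) is exactly the open question I was raising.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">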
[--snip--]<br>
> On the security front: it seems that we have two options here:<br>
><br>
> 1. Use a widely used piece of software (Git), likely already in use by the<br>
> vast majority of people reading this mailing list, relied on by countless<br>
> companies and individuals, holding source code for the kernel of likely<br>
> every mail server between my fingertips and the people reading this email,<br>
> to distribute incremental updates. And as an aside: that software has built<br>
> in support for securely signing commits and verifying those signatures.<br>
><br>
<br>
I think the point that was being made was that it might not have been<br>
hardened sufficiently against malicious servers (being much more<br>
complicated than an HTTP client, for good reasons). I honestly don't know<br>
how much such hardening it has received, but I doubt that it's anywhere<br>
close to HTTP clients in general. (As to the HTTP client Cabal uses, I<br>
wouldn't know.)<br>
<br></blockquote><div><br></div><div>AFAIK, neither of these proposals as they stand has anything to do with security against a malicious server. In both cases, we simply need to trust the server to be sending the right data. Using some kind of signing mechanism, such as the GPG signatures I added to all-cabal-files, is a mitigation for that. HTTPS from Hackage would help prevent MITM attacks, and having the 00-index file cryptographically signed would be another mitigation (though I don't know what Duncan has planned here).</div>
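<div><br></div><div>For illustration, here's a minimal sketch of the client-side check that such a signing mechanism enables. The file names and the detached-signature layout are assumptions of mine for the example, not a description of how all-cabal-files or Hackage actually publish signatures:</div><div><br></div>
<pre>
-- Illustrative sketch only: refuse to use a downloaded index unless a
-- detached GPG signature over it verifies against a trusted keyring.
import System.Exit    (ExitCode (..), exitFailure)
import System.Process (readProcessWithExitCode)

main :: IO ()
main = do
  (code, _out, err) <-
    readProcessWithExitCode "gpg"
      ["--verify", "00-index.tar.sig", "00-index.tar"]  -- made-up file names
      ""
  case code of
    ExitSuccess -> putStrLn "index signature OK"
    _           -> do
      putStrLn ("index signature FAILED: " ++ err)
      exitFailure
</pre>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">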
[--snip--]<br>
> I get that you've been working on this TUF-based system in private for a<br>
> while, and are probably heavily invested already in the solutions you came<br>
> up with in private. But I'm finding it very difficult to see the reasoning<br>
> behind reinventing wheels that don't need reinventing.<br>
><br>
<br>
That's pretty... uncharitable. Especially given that you also have a<br>
horse in this race.<br>
<br>
(Especially considering that your proposal *doesn't* address some<br>
of the vulnerabilities mitigated by the TUF work.)<br>
<br><br></blockquote><div><br></div><div>I actually really don't have a horse in this race. It seems like a lot of people missed this in the first email I sent, so to repeat myself: </div><div><br></div><div><div style="font-size:13.1999998092651px;line-height:19.7999992370605px">> I wrote up a strawman proposal last week[5] which clearly needs work to be a realistic option. My question is: are people interested in moving forward on this? If there's no interest, and everyone is satisfied with continuing with the current Hackage-central-authority, then we can proceed with having reliable and secure services built around Hackage. But if others- like me- would like to see a more secure system built from the ground up, please say so and let's continue that conversation.</div></div><div><br></div><div>My "horse in the race" is a security model that doesn't put all trust in a single entity. Other than that, I'm not invested in any specific direction. Using TUF sounds like a promising idea, but, as I raised in the other thread, I have my concerns.</div><div><br></div><div>All of that said: the discussion here is about efficient incremental downloads, not package signing. For some reason those two points are getting conflated here.</div><div><br></div><div>Michael</div></div></div>