<div dir="ltr"><br><br><div class="gmail_quote">On Thu, Apr 16, 2015 at 1:12 PM Duncan Coutts <<a href="mailto:duncan@well-typed.com">duncan@well-typed.com</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Thu, 2015-04-16 at 09:52 +0000, Michael Snoyman wrote:<br>
> Thanks for responding, I intend to go read up on TUF and your blog post<br>
> now. One question:<br>
><br>
> * We're incorporating an existing design for incremental updates<br>
> of the package index to significantly improve "cabal update"<br>
> times.<br>
><br>
> Can you give any details about what you're planning here?<br>
<br>
Sure, it's partially explained in the blog post.<br>
<br>
> I put together a<br>
> Git repo already that has all of the cabal files from Hackage and which<br>
> updates every 30 minutes, and it seems that, instead of reinventing<br>
> anything, simply using `git pull` would be the right solution here:<br>
><br>
> <a href="https://github.com/commercialhaskell/all-cabal-files" target="_blank">https://github.com/commercialhaskell/all-cabal-files</a><br>
<br>
It's great that we can mirror to lots of different formats so<br>
easily :-).<br>
<br>
I see that we now have two hackage mirror tools, one for mirroring to a<br>
hackage-server instance and one for S3. The bit I think is missing is<br>
mirroring to a simple directory-based archive, e.g. to be served by a<br>
normal http server.<br>
<br>
From the blog post:<br>
<br>
The trick is that the tar format was originally designed to be<br>
append-only (for tape drives), and so if the server simply<br>
updates the index in an append-only way then the clients only<br>
need to download the tail (with appropriate checks and fallback<br>
to a full update). Effectively the index becomes an append-only<br>
transaction log of all the package metadata changes. This is<br>
also fully backwards compatible.<br>
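<br>
(As a concrete illustration of the append-only idea, a minimal sketch using<br>
the tar package; this is not hackage-server code, and the "index.tar" path is<br>
a placeholder. Compression of the index is ignored here.)<br>
<pre>
-- Sketch only: append one newly accepted .cabal file to an
-- uncompressed on-disk index tarball, using the tar package.
import qualified Codec.Archive.Tar as Tar

-- Tar.append writes new entries over the old end-of-archive marker,
-- so everything before that trailing padding stays byte-for-byte
-- identical and clients can fetch just the newly added tail.
appendToIndex :: FilePath  -- ^ index tarball, e.g. "index.tar"
              -> FilePath  -- ^ base directory holding package metadata
              -> FilePath  -- ^ new entry, e.g. "foo/1.2/foo.cabal"
              -> IO ()
appendToIndex indexTar baseDir cabalFile =
    Tar.append indexTar baseDir [cabalFile]
</pre>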
<br>
The extra detail is that we can use HTTP range requests. These are<br>
supported on pretty much all dumb/passive http servers, so it's still<br>
possible to host a hackage archive on a filesystem or ordinary web<br>
server (this has always been a design goal of the repository format).<br>
<br>
We use an HTTP range request to get the tail of the tarball, so we only<br>
have to download the data that has been added since the client last<br>
fetched the index. This is obviously much, much smaller than the whole<br>
index. For safety (and indeed security) the final tarball content is<br>
checked to make sure it matches up with what is expected. Resetting and<br>
changing files earlier in the tarball is still possible: if the content<br>
check fails then we have to revert to downloading the whole index from<br>
scratch. In practice we would not expect this to happen except when<br>
completely blowing away a repository and starting again.<br>
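<br>
(A minimal sketch of the client side, using http-client purely for<br>
illustration; the check against the expected content and the full-download<br>
fallback are assumed to exist elsewhere, and computing the exact resume<br>
offset, i.e. the local size less the tar end-of-archive padding, is left to<br>
the caller.)<br>
<pre>
{-# LANGUAGE OverloadedStrings #-}
-- Sketch only, not cabal-install code.
import qualified Data.ByteString.Char8 as BS
import qualified Data.ByteString.Lazy  as BL
import           Network.HTTP.Client
import           Network.HTTP.Types.Status (statusCode)

-- Ask the server for everything after the bytes we already have.
-- A Manager comes from newManager defaultManagerSettings.
fetchIndexTail :: Manager -> String -> Integer -> IO (Maybe BL.ByteString)
fetchIndexTail mgr indexUrl resumeFrom = do
    req0 <- parseRequest indexUrl
    let range = BS.pack ("bytes=" ++ show resumeFrom ++ "-")
        req   = req0 { requestHeaders = ("Range", range) : requestHeaders req0 }
    resp <- httpLbs req mgr
    return $ case statusCode (responseStatus resp) of
               206 -> Just (responseBody resp)  -- partial content: just the tail
               _   -> Nothing                   -- Range ignored: fall back to a full download
</pre>
The client splices that tail onto its local copy, checks the resulting file<br>
against the expected content, and reverts to the full download on any<br>
mismatch, exactly as described above.<br>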
<br>
The advantage of this approach compared to others like rsync or git is<br>
that it's fully compatible with the existing format and existing<br>
clients. In the typical case it's also a smaller download than rsync, and<br>
probably similar to or smaller than git. It also doesn't need much new from<br>
the clients: they just need the same tar, zlib and HTTP features as they<br>
have now (e.g. in cabal-install) and don't have to distribute<br>
rsync/git/etc. binaries on other platforms (e.g. Windows).<br>
<br>
That said, I have no problem whatsoever with there being git-based or<br>
rsync-based mirrors. Indeed, the central hackage server could provide an<br>
rsync point for easy setup for public mirrors (including the package files).<br>
<br><br></blockquote><div><br></div><div>I don't like this approach at all. There are many tools out there that do a good job of dealing with incremental updates. Instead of using any of those, the idea is to create a brand new approach, implement it in both Hackage Server and cabal-install (two projects that already have a massive bug backlog), and roll it out hoping for the best. There's no explanation here of how you'll deal with things like cabal file revisions, which are very common these days and seem to necessitate redownloading the entire database under your proposal.</div><div><br></div><div>Here's my proposal: use Git. If Git isn't available on the host, then revert to the current codepath and download the index (see the sketch in the P.S. below). We can roll that out with an hour of work, and everyone gets the benefits without the detriments of creating a new incremental update framework.</div><div><br></div><div>Also: it seems like your biggest complaint about Git is "distributing Git." Making Git an optional upgrade is one way of solving that. Another approach is to skip the official Git command-line tool and use one of the many other implementations out there that provide the necessary subset of functionality. I'd guess writing that functionality from scratch in Cabal would be a comparable amount of code to what you're proposing.</div><div><br></div><div>Comments on package signing to be continued later; I haven't finished reading it yet.</div><div><br></div><div>Michael</div></div></div>
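<div><br></div><div>P.S. To make the git fallback concrete, here's a rough sketch in Haskell. It is illustration only, not a proposed cabal-install patch: downloadFullIndex is a hypothetical stand-in for the existing full-download codepath, and the clone URL is the all-cabal-files repository linked above.</div>
<pre>
-- Sketch only: use git when it is available, otherwise fall back to
-- the existing full-index download. downloadFullIndex is hypothetical.
import System.Directory (doesDirectoryExist, findExecutable)
import System.Process   (callProcess)

updateIndex :: FilePath  -- ^ local clone of all-cabal-files
            -> IO ()     -- ^ existing codepath: download the whole index
            -> IO ()
updateIndex repoDir downloadFullIndex = do
    mgit <- findExecutable "git"
    case mgit of
      Nothing  -> downloadFullIndex                  -- no git on this host
      Just git -> do
        haveClone <- doesDirectoryExist repoDir
        if haveClone
          then callProcess git ["-C", repoDir, "pull", "--ff-only"]
          else callProcess git
                 [ "clone"
                 , "https://github.com/commercialhaskell/all-cabal-files"
                 , repoDir ]
</pre>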