<div dir="ltr"><br><br><div class="gmail_quote">On Wed, Apr 15, 2015 at 4:13 PM Gershom B <<a href="mailto:gershomb@gmail.com">gershomb@gmail.com</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On April 15, 2015 at 8:34:07 AM, Michael Snoyman (<a href="mailto:michael@snoyman.com" target="_blank">michael@snoyman.com</a>) wrote:<br>
> I've given plenty of concrete attack vectors in this thread. I'm not going<br>
> to repeat all of them here. But addressing your "simpler idea": how do we<br>
> know that the claimed person actually performed that action? If Hackage is<br>
> hacked, there's no way to verify *any* such log. With a crypto-based<br>
> system, we know specifically which key is tied to which action, and can<br>
> invalidate those actions in the case of a key becoming compromised.<br>
<br>
So amend Carter’s proposal with the requirement that admin/trustee actions be signed as well. Now we can audit the verification trail. Done.<br>
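<br>(For concreteness, here is a rough sketch of what such a signed admin/trustee log entry and its audit check could look like. The types are made up for illustration, and the actual signature check is left as a parameter since it would come from whatever GPG tooling is chosen; nothing below is an existing Hackage or Cabal API.)<br>
<pre>
module SignedAdminLog where

import Data.ByteString (ByteString)

-- Placeholder identity/crypto types; in practice these would come from
-- whatever GPG or crypto backend the proposal settles on.
newtype PublicKey = PublicKey ByteString
newtype Signature = Signature ByteString

-- The administrative actions we care about auditing.
data AdminAction
  = AddMaintainer    String String   -- package name, username
  | RemoveMaintainer String String
  | GrantTrustee     String          -- username
  deriving (Show, Eq)

-- One entry in the (append-only) admin log: the action, who claims to
-- have performed it, and a detached signature over the serialized action.
data LogEntry = LogEntry
  { entryAction    :: AdminAction
  , entryActor     :: String
  , entrySignature :: Signature
  }

-- Auditing a single entry needs only two externally supplied pieces:
-- a way to map an actor to a key we already trust, and a signature check.
auditEntry
  :: (String -> Maybe PublicKey)                     -- actor -> trusted key
  -> (PublicKey -> ByteString -> Signature -> Bool)  -- detached-signature check
  -> (AdminAction -> ByteString)                     -- canonical serialization
  -> LogEntry
  -> Bool
auditEntry lookupKey verify serialize (LogEntry action actor sig) =
  case lookupKey actor of
    Nothing  -> False                      -- unknown actor: cannot trust
    Just key -> verify key (serialize action) sig
</pre>
The point is just that, with signed entries, auditing the trail reduces to a key lookup plus a signature check per entry.<br>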
<br>
But let me pose a more basic question: Assume somebody falsified the log, but could _not_ falsify any package contents (because the latter were verified at the use site). And further, assume we had a signing trail for revisions as well. Now what is the worst that this bad actor could accomplish? <br>
<br>
This is why it helps to have a “threat model”. I think there is a misunderstanding here about what Carter is asking for. A “threat model” is not a list of potential vulnerabilities. Rather, it is a statement of what types of things are important to mitigate against, and from whom. There is no such thing as a completely secure system, except, perhaps, an unplugged one. So when you say you want something “safe” and then tell us ways the current system is “unsafe”, that’s not enough. We need a criterion by which we _could_ judge a future system at least “reasonably safe enough”.<br>
<br>
My sense of a threat model prioritizes package signing (and I guess revision signing now too) but e.g. doesn’t consider a signed verifiable audit trail a big deal, because falsifying those logs doesn’t easily translate into an attack vector.<br>
<br>
You are proposing large, drastic changes. Such changes are likely to get bogged down and fail, especially to the degree they involve designing systems in ways that are not in widespread use already. And even if such changes were feasible, and even if they were a sound approach, it would take a long time to put the pieces together to carry them out smoothly across the ecosystem.<br>
<br>
Meanwhile, if we can say “in fact this problem decomposes into six nearly unrelated problems” and then prioritize those problems, it is likely that all can be addressed incrementally, which means less development work, greater chance of success, and easier rollout. I remain convinced that you raise some genuine issues, but they decompose into nearly unrelated problems that can and should be tackled individually.<br>
<br><br></blockquote><div><br></div>
<div>I think you've missed what I've said, so I'll try to say it more clearly: we have no insight right now into how Hackage makes decisions about who's allowed to upload and revise packages. We have no way to establish a correspondence between a Hackage username and some externally verifiable identity (like a GPG public key). In that world, how can we externally verify signatures of packages on Hackage?</div>
<div><br></div>
<div>I'm pretty familiar with Chris's package signing work. It's a huge step forward. But because of the weaknesses in what Hackage currently exposes, we have no way of fully verifying all signatures.</div>
<div><br></div>
<div>If you see the world differently, please explain. Both you and Carter seem to assume I'm talking about some other problem that hasn't yet been described. I'm just trying to solve the problem already identified, and I think you've missed a few steps that are necessary to have a proper package signing system in place.</div>
<div><br></div>
<div>You may think that the proposal I've put together is large and a massive shift. It's honestly the minimal set of changes I can see that would give us a way to fully verify all signatures of packages that Hackage publishes. If you see a better way to do it, I'd rather do that, so tell me what it is.</div>
<div><br></div>
<div>Michael</div>
<div><br></div>
<div>* * *</div>
<div><br></div>
<div>I think the above was clear enough, but in case it's not, here's an example. Take the yesod-core package, for which MichaelSnoyman and GregWeber are listed as maintainers. Suppose that we have information from Hackage saying:</div>
<div><br></div>
<div>yesod-core-1.4.0 released by MichaelSnoyman</div>
<div>yesod-core-1.4.1 released by FelipeLessa</div>
<div>yesod-core-1.4.2 released by GregWeber</div>
<div>yesod-core-1.4.2 cabal file revision by HerbertValerioRiedel</div></div>
<div class="gmail_quote"><br></div>
<div class="gmail_quote">How do I know:</div>
<div class="gmail_quote"><br></div>
<div class="gmail_quote">* Which signatures on yesod-core-1.4.0 to trust? Should I trust MichaelSnoyman's and GregWeber's only? What if GregWeber wasn't a maintainer when 1.4.0 was released?</div>
<div class="gmail_quote">* How can 1.4.1 be trusted? It was released by a non-maintainer. In reality, we can guess that FelipeLessa used to be a maintainer but was later removed, but how do we know this?</div>
<div class="gmail_quote">* Similarly, we can guess that HerbertValerioRiedel is granted, as a trustee, the right to revise a cabal file, but again, how do we know this?</div>
<div class="gmail_quote">* But in any event: how do we get the GPG keys for any of these users?</div>
<div class="gmail_quote">* And since Hackage isn't enforcing any GPG signatures, what should we do when the signatures for a package don't exist?</div>
<div class="gmail_quote"><br></div>
<div class="gmail_quote">This is just one example of the impediments to adding package signing to the current Hackage system. The sketch below tries to make the missing pieces concrete.</div></div>
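<div class="gmail_quote"><br></div>
<div class="gmail_quote">To make the gap concrete, here is a rough sketch, in Haskell, of the kind of information a client-side verifier would need before it could answer the questions above. None of these types or data feeds exist today; every name below is made up for illustration.</div>
<pre>
module VerifyExample where

import Data.Time (UTCTime)

type UserName    = String            -- e.g. "MichaelSnoyman"
type PackageName = String            -- e.g. "yesod-core"
newtype GpgKeyId = GpgKeyId String   -- fingerprint of a user's public key

-- Roughly what Hackage tells us today: who performed each upload/revision.
data Event
  = Released PackageName String UserName     -- package, version, uploader
  | Revised  PackageName String UserName     -- cabal-file revision
  deriving (Show)

-- What we would additionally need, and currently cannot get from Hackage:
data TrustData = TrustData
  { maintainerHistory :: PackageName -> UTCTime -> [UserName]
      -- who the maintainers were *at the time of the event*
      -- (answers: was FelipeLessa still a maintainer for 1.4.1?)
  , trusteeAt         :: UserName -> UTCTime -> Bool
      -- was this user a trustee when they made a revision?
      -- (answers: was HerbertValerioRiedel allowed to revise 1.4.2?)
  , keyFor            :: UserName -> Maybe GpgKeyId
      -- the externally verifiable identity behind a Hackage username
  }

-- Whether an event was performed by someone entitled to perform it.
-- Even this much cannot be checked without the data above, and it still
-- says nothing about what to do when no GPG signature exists at all.
authorized :: TrustData -> UTCTime -> Event -> Bool
authorized td at (Released pkg _ver user) =
  user `elem` maintainerHistory td pkg at
authorized td at (Revised pkg _ver user) =
  user `elem` maintainerHistory td pkg at || trusteeAt td user at
</pre>
<div class="gmail_quote">Until Hackage exposes something equivalent to maintainerHistory, trusteeAt, and keyFor, and takes a position on packages with no signatures at all, clients have no sound basis for deciding which signatures to accept.</div>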