[Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security
Dennis J. McWherter, Jr.
dennis at deathbytape.com
Wed Apr 15 13:24:47 UTC 2015
As far as the threat model is concerned, I believe the major concern is
using "untrusted" code (for the definition of untrusted such that the
source is not the author you expected). Supposing this group succeeds in
facilitating greater commercial adoption of Haskell, one of the easiest
vectors (at this moment) to break someone's Haskell-based system is
simply to swap in a modified version of a library containing an exploit.
That said, we should also recognize this as a general problem. Some ideas
on package manager attacks have been written up elsewhere.
Further, I see what Gershom is saying about gaining adoption within the
current community. However, I wonder (going off of his thought about
decomposing the problem) if the system for trust could be generic enough to
integrate into an existing solution to help mitigate this risk.
On Wednesday, April 15, 2015 at 8:13:28 AM UTC-5, Gershom B wrote:
> On April 15, 2015 at 8:34:07 AM, Michael Snoyman (mic... at snoyman.com) wrote:
> > I've given plenty of concrete attack vectors in this thread. I'm not going
> > to repeat all of them here. But addressing your "simpler idea": how do you
> > know that the claimed person actually performed that action? If Hackage is
> > hacked, there's no way to verify *any* such log. With a crypto-based
> > system, we know specifically which key is tied to which action, and can
> > invalidate those actions in the case of a key becoming compromised.
> So amend Carter’s proposal with the requirement that admin/trustee actions
> be signed as well. Now we can audit the verification trail. Done.
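The amended proposal (every admin/trustee action carries a signature that an auditor can re-check against a known key) can be sketched in a few lines of Haskell. This is purely illustrative: the keyed hash below is a toy stand-in for a real signature scheme such as Ed25519, and all names (`Entry`, `auditLog`, the key material) are invented for the example.

```haskell
import Data.List (foldl')
import Data.Char (ord)

type Key = String     -- stands in for a private signing key
type Action = String  -- e.g. "add-maintainer foo bar"

-- Toy "signature": a keyed polynomial hash over the action.
-- NOT cryptography; a real system would use e.g. Ed25519.
toySign :: Key -> Action -> Int
toySign key act = foldl' (\h c -> h * 31 + ord c) 7 (key ++ act)

data Entry = Entry
  { entryKeyId  :: String
  , entryAction :: Action
  , entrySig    :: Int
  }

-- An auditor holding the mapping from key ids to keys can re-check
-- every entry; a falsified log entry fails verification.
auditLog :: [(String, Key)] -> [Entry] -> Bool
auditLog keys = all ok
  where
    ok (Entry kid act sig) =
      case lookup kid keys of
        Just k  -> toySign k act == sig
        Nothing -> False

main :: IO ()
main = do
  let keys   = [("trustee-1", "secret-key-material")]
      good   = Entry "trustee-1" "revise foo-1.0"
                     (toySign "secret-key-material" "revise foo-1.0")
      forged = good { entryAction = "revise foo-1.0 with backdoor" }
  print (auditLog keys [good])          -- True
  print (auditLog keys [good, forged])  -- False: stale signature
```

The point of the sketch is the audit step: once actions are signed, "somebody falsified the log" becomes detectable by anyone who replays the verification.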
> But let me pose a more basic question: Assume somebody falsified the log,
> but could _not_ falsify any package contents (because the latter were
> verified at the use site). And further, assume we had a signing trail for
> revisions as well. Now what is the worst that this bad actor could do?
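The "verified at the use site" assumption above can likewise be sketched: the client pins a trusted digest per (package, version) obtained from the signing trail, and rejects any downloaded tarball that does not match, regardless of what the index log claims. Again a toy: `digest` is a djb2-style stand-in for SHA-256, and `trustedPins`/`verifyFetch` are invented names.

```haskell
import Data.List (foldl')
import Data.Char (ord)

type Digest = Int

-- Toy content digest; a real client would use SHA-256 or similar.
digest :: String -> Digest
digest = foldl' (\h c -> h * 33 + ord c) 5381

-- Trusted pins, e.g. obtained via signed package/revision metadata.
trustedPins :: [((String, String), Digest)]
trustedPins = [(("foo", "1.0"), digest "foo-1.0 source tarball bytes")]

-- Accept a downloaded tarball only if its digest matches the pin;
-- a swapped or modified tarball fails even if the index was forged.
verifyFetch :: String -> String -> String -> Bool
verifyFetch name ver bytes =
  lookup (name, ver) trustedPins == Just (digest bytes)

main :: IO ()
main = do
  print (verifyFetch "foo" "1.0" "foo-1.0 source tarball bytes")  -- True
  print (verifyFetch "foo" "1.0" "tampered bytes")                -- False
```

Under this assumption, falsifying the log alone buys the attacker little: the forged entry cannot make a tampered tarball match its pinned digest.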
> This is why it helps to have a “threat model”. I think there is a
> misunderstanding here on what Carter is asking for. A “threat model” is not
> a list of potential vulnerabilities. Rather, it is a statement of what
> types of things are important to mitigate against, and from whom. There is
> no such thing as a completely secure system, except, perhaps, an unplugged
> one. So when you say you want something “safe” and then tell us ways the
> current system is “unsafe” then that’s not enough. We need to have a
> criterion by which we _could_ judge a future system at least “reasonably
> safe enough”.
> My sense of a threat model prioritizes package signing (and I guess
> revision signing now too) but e.g. doesn’t consider a signed verifiable
> audit trail a big deal, because falsifying those logs doesn’t easily
> translate into an attack vector.
> You are proposing large, drastic changes. Such changes are likely to get
> bogged down and fail, especially to the degree they involve designing
> systems in ways that are not in widespread use already. And even if such
> changes were feasible, and even if they were a sound approach, it would
> take a long time to put the pieces together to carry them out smoothly
> across the ecosystem.
> Meanwhile, if we can say “in fact this problem decomposes into six nearly
> unrelated problems” and then prioritize those problems, it is likely that
> all can be addressed incrementally, which means less development work,
> greater chance of success, and easier rollout. I remain convinced that you
> raise some genuine issues, but they decompose into nearly unrelated
> problems that can and should be tackled individually.