<html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"></head><body dir="auto"><div dir="auto"><br></div><div dir="auto">I have implemented something like that actually: https://github.com/cgohla/pledge</div><div dir="auto"><br></div><div dir="auto">Working out a portable API could be difficult.</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto"><br></div><div><br></div><div align="left" dir="auto" style="font-size:100%;color:#000000"><div>-------- Original message --------</div><div>From: Hécate &lt;hecate@glitchbra.in&gt; </div><div>Date: 27/12/22 20:39 (GMT+00:00) </div><div>To: ghc-devs@haskell.org </div><div>Subject: Re: Deprecating Safe Haskell, or heavily investing in it? </div><div><br></div></div>Thanks for your input, Viktor!<br><br>I came across the nsjail system from Google a little while after posting <br>this thread: https://github.com/google/nsjail/#overview<br><br>Perhaps we could get the most bang for our buck if we externalised the <br>solution to work with OS-level mechanisms?<br>What do you think of that? Would something based upon eBPF require <br>fewer modifications to the RTS?<br><br>On 27/12/2022 at 21:12, Viktor Dukhovni wrote:<br>> On Tue, Dec 27, 2022 at 06:09:59PM +0100, Hécate wrote:<br>><br>>> Now, there are two options (convenient!) that are left to us:<br>>><br>>> 1. Deprecate Safe Haskell: We remove the Safe mechanism as it exists<br>>> today, and keep the IO restriction under another name. This will<br>>> certainly cause much joy amongst maintainers and GHC developers alike.<br>>> The downside is that we don't have a mechanism to enforce "Strict<br>>> type-safety" anymore.<br>>><br>>> 2. 
We heavily invest in Safe Haskell: This is the option where we amend<br>>> the PVP to take changes to Safety annotations into account, and invest in<br>>> the workforce to fix the bugs on the GHC side. This means we also invest in<br>>> the tools that check for PVP compatibility, so that they also check Safety. This is<br>>> not a matter for a GSoC or a 2-day hackathon, and I would certainly<br>>> have remorse sending students to the salt mines like that.<br>>><br>>> I do not list the Status Quo as an option because it is terrible and has<br>>> led us to regularly receive complaints from both GHC & ecosystem library<br>>> maintainers. There can be no half-measures, as they usually tend to<br>>> make us slide back into the status quo.<br>>><br>>> So, what do you think?<br>> I think that "Restricted IO" would in principle be the more sensible<br>> approach. HOWEVER, for robust "sandboxing" of untrusted code, what's<br>> required is more than just hiding the raw IO Monad from the sandboxed<br>> code. Doing that securely is much too difficult to get right, as<br>> evidenced by the ultimate failure (a long history of bypass issues) of<br>> similar efforts to enable restricted execution of untrusted code in<br>> Java (anyone still using Java "applets", or running Flash in their<br>> browser???).<br>><br>> The only way to do this correctly is to provide strong memory separation<br>> between the untrusted code and the TCB. The only mainstream working<br>> examples of this that I know of are:<br>><br>> * Kernel vs. 
user space memory separation.<br>><br>> * Tcl's multiple interpreters, where untrusted code runs in<br>> slave interpreters stripped of most verbs, with aliases<br>> added to wrappers that call back into the parent interpreter<br>> for argument validation and restricted execution.<br>><br>> Both systems provide strong memory isolation of untrusted code; only<br>> data passes between the untrusted code and the TCB, through a limited<br>> set of callbacks (system calls, if you like).<br>><br>> For "Safe Haskell" to really be *safe*, memory access from untrusted<br>> code would need to be "virtualised", with a separate heap and foreign<br>> memory allocator for evaluation of untrusted code, and the RTS rewriting<br>> and restricting all direct memory access. This means that "peek" and<br>> "poke" et al. would not directly read memory, but rather be restricted<br>> to specific address ranges allocated to the untrusted task.<br>><br>> Essentially, the RTS would have to become a user-space microkernel.<br>><br>> This is in principle possible, but it is not clear whether it is worth<br>> doing, given limited resources.<br>><br>> To achieve "safe" execution, restricted code needs to give up some<br>> runtime performance; compile-time safety checks alone are not<br>> sufficiently robust in practice. For example, the underlying byte<br>> arrays (pinned or not) behind ByteString and Text, when used from<br>> untrusted code, would not allow access to data beyond the array bounds<br>> (range-checked on every access), ... which again speaks to some<br>> "virtualisation" of memory access by the RTS, at least to the extent of<br>> always performing range checks when running untrusted code.<br>><br>> Bottom line: I don't trust systems like Safe Haskell, or Java's<br>> type-system-based sandboxing of untrusted code, ... that try to perform<br>> sandboxing in a shared address space by essentially static analysis<br>> alone. 
We long ago left shared-address-space security systems like DOS and<br>> MacOS 9 behind... good riddance.<br>><br>-- <br>Hécate ✨<br>🐦: @TechnoEmpress<br>IRC: Hecate<br>WWW: https://glitchbra.in<br>RUN: BSD<br><br>_______________________________________________<br>ghc-devs mailing list<br>ghc-devs@haskell.org<br>http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs<br></body></html>
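[Editor's sketch] The "Restricted IO" idea raised in the thread — hiding the raw IO monad and exposing only vetted, argument-checked wrappers, much like Tcl's aliased verbs calling back into the parent interpreter — can be sketched in a few lines. This is a minimal illustration only; all names here (`RIO`, `runRIO`, `rioReadFile`) are hypothetical, not an existing GHC or library API, and as Viktor notes, such a scheme alone does not give the memory isolation a real sandbox needs.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Data.List (isPrefixOf)

-- In a real module, the RIO constructor would NOT be exported, so
-- untrusted code could only combine the vetted operations below and
-- never lift arbitrary IO actions into RIO.
newtype RIO a = RIO { runRIO :: IO a }
  deriving (Functor, Applicative, Monad)

-- A wrapper that validates its argument before calling back into real
-- IO: only paths under the (hypothetical) "sandbox/" prefix are allowed.
rioReadFile :: FilePath -> RIO (Either String String)
rioReadFile path
  | "sandbox/" `isPrefixOf` path = RIO (Right <$> readFile path)
  | otherwise                    = RIO (pure (Left ("denied: " ++ path)))

main :: IO ()
main = do
  -- The trusted host runs the untrusted computation and decides what
  -- to do with denials.
  result <- runRIO (rioReadFile "/etc/passwd")
  print result  -- prints: Left "denied: /etc/passwd"
```

The point of the sketch is also its weakness: the type system stops untrusted code from *naming* IO, but it does nothing about out-of-bounds `peek`/`poke` through unsafe primitives or FFI, which is exactly why the thread argues that robust sandboxing needs RTS-level memory isolation, not type discipline alone.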