> But what if you're an environmentally conscious mother who needs to drive the 5 minute walk to your kids' school? Surely, a modern car must be less polluting?
> CO2 emissions/km:
No, you have already compared fuel consumption. This is equivalent.
> Older alternatives like sandbox-2 exist, but they provide isolation near the OS level, not the language level. At that point we might as well use Docker or VMs.
No, no. Docker is not a sandbox for untrusted code.
What if I told you that, back in the day, we were letting thousands of untrusted, unruly, mischievous people execute arbitrary code on the same machine, and somehow, the world didn't end?
We live in a bizarre world where somehow "you need a hypervisor to be secure" and "to install this random piece of software, run curl | sudo bash" can live next to each other and both be treated seriously.
I don't think it is generally possible to escape from a Docker container in the default configuration (e.g. `docker run --rm -it alpine:3 sh`) if you have a reasonably up-to-date kernel from your distro. AFAIK a lot of kernel LPEs use features like unprivileged user namespaces and io_uring, which are not available in a container by default, and truly unprivileged kernel LPEs seem to be sufficiently rare.
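If you want to check how much of that attack surface is reachable on a given host, you can query the relevant kernel knobs directly. A sketch; the sysctl names vary by distro and kernel version, and not all of them exist everywhere:

```shell
# Debian/Ubuntu-specific switch for unprivileged user namespaces
# (upstream kernels don't have this knob)
sysctl kernel.unprivileged_userns_clone 2>/dev/null

# Upstream limit on user namespaces; 0 disables them entirely
sysctl user.max_user_namespaces

# io_uring restriction, Linux >= 6.6: 0 = allowed, 2 = disabled
sysctl kernel.io_uring_disabled 2>/dev/null
```

Inside a default Docker container these features are additionally blocked by the default seccomp profile, regardless of the host settings, which is the point the comment above is making.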
The kernel policy is that any distro that isn't using a rolling release kernel is unpatched and vulnerable, so "reasonably up-to-date" is going to lean heavily on what you consider "reasonable".
LPEs abound - unprivileged user ns was a whole gateway that was closed, io_uring was hot for a while, eBPF is another great target, and I'm sure more will be found every year, as has been the case. Seccomp and unprivileged containers etc. make a huge difference by stomping out a lot of the attack surface; you can decide how comfortable you are with that, though.
>The kernel policy is that any distro that isn't using a rolling release kernel is unpatched and vulnerable, so "reasonably up-to-date" is going to lean heavily on what you consider "reasonable".
I would expect major distributions to have embargoed CVE access specifically to prevent this issue.
Nope, that is not the case. For one thing, upstream doesn't issue CVEs and doesn't really care about them or consider them valid. For another, they forbid or severely limit embargoes.
You're right, Docker isn't a sandbox for untrusted code. I mentioned it because I've seen teams default to using it for isolating their agents on larger servers.
So I made sure to clarify in the article that it's not secure for that purpose.
It depends on the task, and the risk of isolation failure. Docker can be sufficient if inputs are from trusted sources and network egress is reasonably limited.
This kind of response isn't helpful. He's right to ask about the motivations for the claim that containers in general are "not a sandbox" when the design of containers/namespaces/etc. looks like it should support using these things to make a sandbox. He's right to be confused!
If you look at the interface contract, both containers and VMs ought to be about equally secure! Nobody is an idiot for reading about the two concepts and arriving at this conclusion.
What you should have written is something about your belief that the inter-container, intra-kernel attack surface is larger than the intra-hypervisor, inter-kernel attack surface, and so it's less likely that someone will screw up implementing a hypervisor so as to open a security hole. I wouldn't agree with this position, but it would at least be defensible.
Instead, you pulled out the tired old "educate yourself" trope. You compounded the error with the weaselly "are considered" passive-voice construction that lets you present the superior security of VMs as a law of nature instead of as your personal opinion.
In general, there's a lot of alpha in questioning supposedly established "facts" presented this way.
> This is not a weakness in the design of containers.
Partially correct.
Many container escapes are also because the security of the underlying host, container runtime, or container itself was poorly or inconsistently implemented. This creates gaps that allow escapes from the container. There is a much larger potential for mistakes, creating a much larger attack surface. This is in addition to kernel vulnerabilities.
While you can implement effective hardening across all the layers, the potential for misconfiguration is still there, therefore there is still a large attack surface.
While a virtual host can be escaped from, the attack surface is much smaller, leaving less room for potential escapes.
This is why containers are considered riskier for a sandbox than a virtual host. Which one you use, and why, really should depend on your use case and threat model.
Sad to say, a disappointing number of people don't put much hardening into their container environments, including production k8s clusters. So it's much easier to say that a virtual host is better for sandboxing than containers, because many people are less likely to get it wrong.
> Many container escapes are also because the security of the underlying host, container runtime, or container itself was poorly or inconsistently implemented.
Sure, so running `npm install` inside the container is no worse than `npm install` on my machine. And in most cases, it is much better.
Escaping a properly set up container is a kernel 0day. Due to how large the kernel attack surface is, such 0days are generally believed to exist. Unless you are a high value target, a container sandbox will likely be sufficient for your needs. If cloud service providers discounted this possibility then a 0day could be burned to attack them at scale.
Also, you can use the runsc (gvisor) runtime for docker, if you are careful not to expose vulnerable protocols to the container there will be nothing escaping it with that runtime.
The last two are included for completeness, but since the original article is about running untrusted Python code, they're irrelevant in this circumstance.
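For reference, wiring runsc into Docker looks roughly like this. A sketch; it assumes gVisor is already installed at the path shown, and you'd want to merge the runtime entry into any existing daemon.json rather than overwrite it:

```shell
# Register the gVisor runtime with the Docker daemon
# (merge into your existing /etc/docker/daemon.json if you have one)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "runtimes": {
    "runsc": { "path": "/usr/local/bin/runsc" }
  }
}
EOF
sudo systemctl restart docker

# Run a container under gVisor's user-space kernel: syscalls are now
# intercepted by runsc instead of hitting the host kernel directly
docker run --rm --runtime=runsc alpine:3 uname -a
```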
My point is that you must consider the system as a whole to assess its overall attack surface and risk of compromise. There is a lot more that can go wrong to enable a container escape than you implied.
There are some people who are knowledgeable enough to ensure their containers are hardened at every level of the attack surface. Even then, how many are diligent enough to maintain that attention to detail every time? How many automate their configurations?
Most default configurations are not hardened as a compromise to enable usability. Most people who build containers do not consider hardening every possible attack surface. Many don't even know the basics. Most companies don't do a good job hardening their shared container environments - often as a compromise to be "faster".
So yeah, a properly set up container is hard to escape.
Not all containers are set up properly - I'd argue most are not.
> Escaping a properly set up container is a kernel 0day.
No, it is not. In fact, many of the container escapes we see are due to bugs in the container runtimes themselves, which can differ quite a bit across implementations. CVE-2025-31133 was published a couple of months ago and had nothing at all to do with the kernel - just like many container escapes don't.
If a runtime is vulnerable then it didn't "set up a container properly".
Containers are a kernel technology for isolating and restricting resources for a process and its descendants. Once set up correctly, any escape is a kernel 0day.
For anyone who wants to understand what a container is I would recommend bubblewrap: https://github.com/containers/bubblewrap
This is also what flatpak happens to use.
It should not take long to realize that you can set it up in ways that are secure and ways which allow the process inside to reach out in undesired ways. As runtimes go, it's as simple as it gets.
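To illustrate, here's roughly what a tight vs. leaky bwrap invocation looks like (a sketch; `untrusted.py` is a stand-in name, the flags are from the bubblewrap man page, and merged-/usr distros may need extra `--symlink`/`--ro-bind` lines):

```shell
# Fairly tight: read-only /usr, fresh /proc, /dev and /tmp, all
# namespaces unshared (pid, net, user, ...), only one script bound in
bwrap --ro-bind /usr /usr \
      --symlink usr/bin /bin --symlink usr/lib /lib --symlink usr/lib64 /lib64 \
      --proc /proc --dev /dev --tmpfs /tmp \
      --ro-bind ./untrusted.py /untrusted.py \
      --unshare-all --die-with-parent \
      /usr/bin/python3 /untrusted.py

# Leaky: same tool, but the whole host filesystem is bound in and the
# network namespace is retained, so "sandboxed" code can read $HOME
# and phone home
bwrap --dev-bind / / \
      --unshare-all --share-net \
      /usr/bin/python3 ./untrusted.py
```

The difference between the two is entirely in the flags, which is the point: the runtime is only as secure as its setup.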
Note CVE-2025-31133 requires one of: (1) persistent container (2) attacker-controlled image. That means that as long as you always use "docker run" on known images (as opposed to "docker start"), you cannot be exploited via that bug even if the service itself is compromised.
I am not saying that you should never update the OS, but a lot of those container escapes have severe restrictions and may not apply to your specific config.
Note this lists 3 vulnerabilities as examples: CVE-2016-5195 (Dirty COW), CVE-2019-5736 (host runc binary overwrite) and CVE-2022-0185 (filesystem-context heap overflow).
Out of those, only the first one is actually exploitable in common setups.
CVE-2019-5736 requires either attacker-controlled image or "docker exec". This is not likely to be the case in the "untrusted python" use case, nor in many docker setups.
CVE-2022-0185 is blocked by the seccomp filter in default installs, so as long as you don't give your containers the --privileged flag, you are OK. (And if you do give that flag, escape is trivial without any vulnerabilities.)
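You can watch the default profile doing its job (a sketch; needs Docker on the host, and `/dev/sda1` is a stand-in for whatever the host's root device actually is):

```shell
# The default seccomp profile blocks creating new user namespaces,
# which the CVE-2022-0185 exploit needs in order to gain CAP_SYS_ADMIN.
# This should fail with "Operation not permitted":
docker run --rm alpine:3 sh -c 'unshare -U true || echo blocked by seccomp'

# --privileged disables seccomp and grants all capabilities; at that
# point escape needs no vulnerability at all, e.g. mount the host disk:
docker run --rm --privileged alpine:3 \
  sh -c 'mkdir /host && mount /dev/sda1 /host && ls /host'
```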
Exploit the Linux kernel underneath it (not the only way, just the obvious one). Docker is a security boundary but it is not suitable for "I'm running arbitrary code".
That is to say, Docker is typically a security win because you get things like seccomp and user/DAC isolation "for free". That's great. That's a win. Typically exploitation requires a way to get execution in the environment plus a privilege escalation. The combination of those two things may be considered sufficient.
It is not sufficient for "I'm explicitly giving an attacker execution rights in this environment" because you remove the cost of "get execution in the environment" and the full burden is on the kernel, which is not very expensive to exploit.
> Exploit the Linux kernel underneath it (not the only way, just the obvious one). Docker is a security boundary but it is not suitable for "I'm running arbitrary code".
Docker is better for running arbitrary code than the direct `npm install <random-package>` on the host that's common these days.
I moved to a Dockerized sandbox[1], and I feel much better now against such malicious packages.
It's better than nothing, obviously. But I don't consider `npm install <random-package>` to be equivalent to "RCE as a service", although it's somewhat close. I definitely wouldn't recommend `npm install <actually a random package>`, even in Docker.
I also implemented `insanitybit/cargo-sandbox` using Docker but that doesn't mean I think `insanitybit/cargo-sandbox` is a sufficient barrier to arbitrary code execution, which is why I also had a hardened `cargo add` that looked for typosquatting of package names, and why I think package manager security in general needs to be improved.
You can and should feel better about running commands like that in a container, as I said - seccomp and DAC are security boundaries. I wouldn't say "you should feel good enough to run an open SSH server and publish it for anyone to use".
> `npm install <random-package>` to be equivalent to "RCE as a service"
It is literally that. When you write "npm install foo", npm will proceed to install the package called "foo" and then run its installation scripts. It's as if you'd run curl | bash. That npm install script can do literally anything your shell in your terminal can do.
It's not "somewhat close" to RCE. It is literally, exactly, fully, completely RCE delivered as a god damn service to which you connect over the internet.
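Concretely: any package can declare install-time lifecycle scripts, and npm runs them with your user's full privileges. A sketch; `evil-pkg` is a made-up name, and the postinstall here just echoes instead of doing damage:

```shell
# A minimal "malicious" package
mkdir evil-pkg && cd evil-pkg
cat > package.json <<'EOF'
{
  "name": "evil-pkg",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "echo pwned: arbitrary code ran at install time"
  }
}
EOF
npm pack && cd ..

# Installing it anywhere runs the postinstall script immediately,
# before you've looked at a single line of the package's code:
npm install ./evil-pkg/evil-pkg-1.0.0.tgz

# Partial mitigation: refuse to run lifecycle scripts at all
npm install --ignore-scripts ./evil-pkg/evil-pkg-1.0.0.tgz
```

Note that `--ignore-scripts` only closes the install-time hole; the package's code still runs with full privileges the moment you require it.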
I'm familiar with how build scripts work. As mentioned, I build insanitybit/cargo-sandbox exactly to deal with malicious build scripts.
The reason I consider it different from "I'm opening SSH to the public, anyone can run a shell" is that the attack typically has to come either through a random package, which significantly reduces exposure, or through a compromised package, which requires an additional attack. Basically, somewhere along the way something else had to go wrong for `npm install <x>` to give an attacker code execution, whereas "I'm giving a shell to the public" involves nothing else going wrong.
Running a command yourself that may include code you don't expect is not, to me, the same as arbitrary code execution. It often implies it but I don't consider those to be identical.
You can disagree with whether or not this meaningfully changes things (I don't feel strongly about it), but then I'd just point to "I don't think it's a sufficient barrier for either threat model but it's still an improvement".
That isn't to downplay the situation at all. Once again,
> that doesn't mean I think `insanitybit/cargo-sandbox` is a sufficient barrier to arbitrary code execution, which is why I also had a hardened `cargo add` that looked for typosquatting of package names, and why I think package manager security in general needs to be improved.
> definitely wouldn't recommend `npm install <actually a random package>`, even in Docker.
That's not the main attack vector.
The attack vector is some random dependency that is used by a lot of popular packages, which you `npm install` indirectly.
Docker provides some host isolation which can be used effectively as a sandbox. It's not designed primarily for security (though it does have some reasonable defaults), but it gives you options to layer on security modules like AppArmor and seccomp very easily.
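That layering is mostly a matter of passing flags to `docker run`. A sketch; `strict.json` and `my-profile` are hypothetical names for a custom seccomp allowlist and a pre-loaded AppArmor profile:

```shell
# Drop all capabilities, forbid setuid privilege gains, apply custom
# seccomp and AppArmor profiles, make the root filesystem read-only,
# cut off network egress, and cap pids/memory:
docker run --rm \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --security-opt seccomp=./strict.json \
  --security-opt apparmor=my-profile \
  --read-only --tmpfs /tmp \
  --network none \
  --pids-limit 256 --memory 512m \
  alpine:3 sh
```

None of this makes the kernel attack surface disappear, but each flag removes a class of escape or exfiltration path for very little effort.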
Yeah I helped out a bit with Freenet before I saw what was being posted. Basically 4chan. Lots of edge lords.
But I helped because a friend dragged me to Amnesty International meetings in college and so I knew there were people who legitimately needed this shit.
Tor is the big example for me: created to let people speak freely without being tracked, and often criticized because it allows the same for our criminals (keep in mind that the spies and dissidents who are/were using Tor are considered criminals in their own countries).
When a law is unjust it will be broken by those on the right side of history. Software can’t tell if a law is just or not.
So if you want to support suffragists or underground railroads you’re making software that breaks the law.
Really, we are all breaking some law all the time. Which is how oppression works: selective enforcement. "Give me six lines from the most innocent man and I will find in them something to damn his soul."
There is no such thing as "good" or "bad" - actions are meaningless - it's the context that makes the difference.
Example: Sex
Good when the context is consenting adult humans.
Bad when the context is not.
Further, "One man's 'freedom fighter' is another man's 'terrorist'" - meaning context is very much in the eye of the beholder.
Couple this with the Taoist (?) fable "What luck you lost a horse", where the outcome of an event cannot really be judged immediately; it may take days, months, or years to show.
And you are left with: do we really have any idea of what is right/wrong?
So, my philosophical take is - if it leads toward healthy outcomes (ooo dripping with subjective context there...) then it's /likely/ the right thing to do.
When I spoke with an AI on this recently the AI was quick to respond that "Recreational drug use 'feels good' at first, but can lead to a very dark outcome" - which is partly true, but also demonstrates the first point. Recreational drug use is fine (as far as I am concerned, after my 4th cup of tea) as long as the context isn't "masking" or "crutch" (although in some cases, eg. PTSD, drug use to help people forget is a vital tool)
I have not only used linear programming in industry, I have also had to write my own solver because the existing ones (even commercial ones) were too slow. (This was possible only because I only cared about a very approximate solution.)
The triangulations you mention are important in the current group I'm working in.
I'm curious to hear what you specifically use these algorithms for!
PS: My point is not that these things are never used - they clearly are. I'm saying that the majority of CPU cycles globally go towards "idle", then pushing pixels around with simple bitblt-like algorithms for 2D graphics, then whatever it is that browsers do on the inside, then operating system internals; specialised and more interesting algorithms like linear programming are a vanishingly small slice of whatever is left of that pie chart.