1. I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.
2. The write-to-RO-page-cache primitive STILL WORKED! It’s just that the particular exploit used had no meaningful effect in the already-root-in-a-container context. If you think you are safe, you’re probably wrong. All you need to make a new exploit is an fd representing something that you aren’t supposed to be able to write. This likely includes CoW things where you are supposed to be able to write after CoW but you aren’t supposed to be able to write to the source.
So:
- Are you using these containers with a common image, or even a common layer in an image, to isolate dangerous workloads from each other? Oops: they can modify the image layers and corrupt each other. There goes any sort of cross-tenant isolation.
- What if you get an fd backed by the zero page and write to it? This can’t result in anything that the administrator would approve of.
- What if you ro-bind-mount something in? It’s not ro any more.
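The CoW boundary mentioned above can be illustrated in a few lines (a hedged sketch, not the exploit itself): a MAP_PRIVATE mapping of a read-only fd is writable, but writes are supposed to land only in a private copy-on-write page, never in the shared page cache backing the source. The primitive described here is exactly writes crossing that line.

```python
import mmap
import os
import tempfile

# Create a small file, then reopen it strictly read-only.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"original")
    path = f.name
fd = os.open(path, os.O_RDONLY)

# MAP_PRIVATE + PROT_WRITE on a read-only fd is allowed: writes are
# supposed to hit a private CoW copy of the page, never the file or
# the shared page cache behind it.
m = mmap.mmap(fd, 0, flags=mmap.MAP_PRIVATE,
              prot=mmap.PROT_READ | mmap.PROT_WRITE)
m[:1] = b"X"

assert m[:8] == b"Xriginal"                # private copy changed
on_disk = open(path, "rb").read()
assert on_disk == b"original"              # source stayed intact
```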
I see a lot of projects blocking those sockets in containers as a response to this exploit, but it seems rather strange to me. We're disabling a cryptographic performance enhancement feature entirely because there was a security bug in it that one time? It's a rather weird default. It's not like we mass-disable kernel modules everywhere every time someone discovers an EoP bug, is it? Did we blacklist OpenSSL's binaries after Heartbleed?
I suppose it makes sense as a default on vulnerable kernels (though people running vulnerable kernels should put effort into patching rather than workarounds in my opinion), but these defaults are going to be around ten years from now when copy.fail is a distant memory.
The need for this feature/functionality in the first place is questioned by some:
> As someone who works on the Linux kernel's cryptography code, the regularly occurring AF_ALG exploits are really frustrating. AF_ALG, which was added to the kernel many years ago without sufficient review, should not exist. It's very complex, and it exposes a massive attack surface to unprivileged userspace programs. And it's almost completely unnecessary, as userspace already has its own cryptography code to use. The kernel's cryptography code is just for in-kernel users (for example, dm-crypt).
> The algorithm being used in this [specific] exploit, "authencesn", is even an IPsec implementation detail, which never should have been exposed to userspace as a general-purpose en/decryption API. […]
* https://news.ycombinator.com/item?id=47952181#unv_47956312
More than one time.
> a cryptographic performance enhancement feature
It's very rarely used.
> Did we blacklist OpenSSL's binaries after Heartbleed?
No, but lots of companies have since migrated away. OpenSSL was harder to move away from because there weren't obvious drop-in replacements. Blocking a syscall that you never actually used is simple and effective.
But I am disappointed that we still don't have a clear OpenSSL successor; there is nothing to be salvaged from this mess of a project.
Yes, the syscall API is (famously) stable, but the drivers, for example, are such a mess that many non-Linux projects prefer to take BSD drivers for e.g. WiFi despite them supporting far fewer devices (even if the Linux ones would be license compatible).
> but the drivers, for example, are such a mess that many non-Linux projects prefer to take BSD drivers for e.g. WiFi despite them supporting far fewer devices (even if the Linux ones would be license compatible).
Or vote with your wallet and get a device with a well-supported card.
To my knowledge, not many things were using the in-kernel code anyway; the recommended way is to use userland tools...
It's optional for openssl, systemd apparently needs it, but deleting the module from one of my systems didn't cause any issues. /shrug
Yeah, exactly - that's a pretty big "if", and not how a lot of container automation does things. In particular you'd need to hit the base system; it's no help at all to the attacker to hit application files that the host itself does nothing with.
Now, there are some "funky" no-fs things that could be opened and are mmap'able/spliceable (some stuff in /proc/*, no idea what exactly though), but it's not immediately obvious to me how this is a generic container escape.
Oh, and this [2] just happened
[1] https://github.com/containers/oci-seccomp-bpf-hook/pull/209 [2] https://github.com/moby/moby/pull/52501
https://salsa.debian.org/glibc-team/glibc/-/blob/sid/debian/...
https://src.fedoraproject.org/rpms/glibc/blob/rawhide/f/glib...
Although using this to justify their migration to micro-VMs is very strange to me. Sure for this CVE it would have been better, but surely for a future attack it could hit a component shared across VMs but not containers? Are people really choosing technology based on CVE-of-the-week?
VMs are not different by 'magic'; the difference comes from hardware assists such as Intel VT-x and AMD-V:
* https://en.wikipedia.org/wiki/X86_virtualization#Hardware-as...
* https://blog.lyc8503.net/en/post/hypervisor-explore/
* https://binarydebt.wordpress.com/2018/10/14/intel-virtualisa...
Hardware virtualization has a strong effect on (b), but it's not at all a foregone conclusion that the effect is strictly in the direction of being more straightforward and thus more secure. And hardware features like fancy device passthrough encourage applications with a very, very large attack surface that has historically been full of holes.
VMs are considered vastly better because the surface area where exploits can happen is smaller and/or better isolated within the kernel.
If you are arguing the latter is not true — and we are all collectively hand-waving away a big chunk of the surface area, so that may be the case — it would help to be explicit about why you believe an exploit in that area is similarly likely.
Security is obviously a continuum (eg. you can even have a bug in your IPMI FW, and a network packet could break in without any interaction with the OS; or there could be a HW bug too), but there is a discrete "jump" between containers and VMs to the extent that it is useful to call one a security boundary and the other not. Just like a firewall is a security boundary even if it can have security bugs.
Whether the jump in exploitable surface area warrants this distinction is the point in question: many believe it does.
Containers are mostly used as a deployment/packaging model where typically VMs are used where stronger security is needed. This has been the established industry standard for a while. Look at major cloud providers for example.
AWS:
> Unless explicitly stated, AWS does not consider a container or primitives such as an ECS task or a Kubernetes pod to be a security boundary. A notable exception to this is ECS tasks running AWS Fargate, where the isolation boundary is a task. To account for this, we recommend that you use Fargate with ECS if your applications have strict isolation requirements.
> When you’re using the Fargate launch type, each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.
They also further recommend that for even higher security requirements you use different EC2 instances, which you can also run on dedicated hardware. But the fact that you can further increase isolation beyond VMs does not make containers the same as VMs.
https://aws.amazon.com/blogs/security/security-consideration...
GCP:
> There’s one myth worth clearing up: containers do not provide an impermeable security boundary, nor do they aim to. They provide some restrictions on access to shared resources on a host, but they don’t necessarily prevent a malicious attacker from circumventing these restrictions. Although both containers and VMs encapsulate an application, the container is a boundary for the application, but the VM is a boundary for the application and its resources, including resource allocation.
> If you're running an untrusted workload on Kubernetes Engine and need a strong security boundary, you should fall back on the isolation provided by the Google Cloud Platform project. For workloads sharing the same level of trust, you may get by with multi-tenancy, where a container is run on the same node as other containers or another node in the same cluster.
https://cloud.google.com/blog/products/gcp/exploring-contain...
> Applications that run in traditional Linux containers access system resources in the same way that regular (non-containerized) applications do: by making system calls directly to the host kernel.
> One approach to improve container isolation is to run each container in its own virtual machine (VM). This gives each container its own "machine," including kernel and virtualized devices, completely separate from the host. Even if there is a vulnerability in the guest, the hypervisor still isolates the host, as well as other applications/containers running on the host.
> gVisor is more lightweight than a VM while maintaining a similar level of isolation. The core of gVisor is a kernel that runs as a normal, unprivileged process that supports most Linux system calls. This kernel is written in Go, which was chosen for its memory- and type-safety. Just like within a VM, an application running in a gVisor sandbox gets its own kernel and set of virtualized devices, distinct from the host and other sandboxes.
https://cloud.google.com/blog/products/identity-security/ope...
These guys are experts when it comes to securing workloads on shared infra and while there are different levels of isolation using various techniques, the current industry practice is to not consider regular Linux containers a security boundary.
> A CVE next week that allows corruption of host state that affects eg every VM under a particular hypervisor will be no less damaging than this CVE is to containers
Yeah this almost never happens though whereas Linux privesc is 10x a day.
I would have thought they provide better isolation than using multiple users which is the traditional security boundary.
It might depends on what you mean by a container? Are sandboxes such as Bubblewrap and Firejail containers?
The article was about Podman and Linux namespaces
Namespaces are used as a security mechanism.
It is easy for security scanners to scan a Linux system, but will they inspect your containers, and snaps, and flatpaks, and VMs? It is easy for DevOps to ssh into your Linux server, but can they also get logged in to each container, and do useful things? Your patches and all dependencies are up-to-date on your server, but those containers are still dragging around legacy dependencies, by design. Is your backup system aware of containers and capable of creating backup images or files, that are suitable for restoring back to service?
Does this increase complexity? Yes, it does. Is it worth the cost? Depends on each individual case IMO.
E.g.,
> Container Security stores and scans container images as the images are built, before production. It provides vulnerability and malware detection, along with continuous monitoring of container images. By integrating with the continuous integration and continuous deployment (CI/CD) systems that build container images, Container Security ensures every container reaching production is secure and compliant with enterprise policy.
* https://docs.tenable.com/enclave-security/container-security...
MicroVMs have much lower attack surface and you can even toss a container into one if you'd like.
Or use gvisor, which mitigates this vulnerability.
There is no reason it would be the default policy. Otherwise you might as well block every socket and just multiplex everything over stdin/stdout.
You may be on to something…
share and enjoy!
A compounding issue is that using AF_ALG doesn't require a separate syscall: it's just using SYS_socket with the first argument set to 38. Your container behaviour specification needs to be specific enough to not only enumerate allowed syscalls, but the allowed values for each syscall parameter.
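As an illustration, here is roughly what that argument-level rule looks like in an OCI/Docker-style seccomp profile (a hedged sketch; field names follow the seccomp profile format, and 38 is AF_ALG in the Linux ABI):

```json
{
  "names": ["socket"],
  "action": "SCMP_ACT_ERRNO",
  "args": [
    {
      "index": 0,
      "value": 38,
      "op": "SCMP_CMP_EQ"
    }
  ]
}
```

The rule makes socket(2) fail only when the first argument equals AF_ALG, leaving ordinary AF_INET/AF_UNIX sockets alone.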
In the vast majority of the world, you set permissions to what's reasonable and trust that most of the time things will work out pretty well and have a plan for if you need to fix things on the fly.
I personally am not terribly paranoid, but I've worked places where we had to be pretty paranoid (shared hosting).
Couldn't you then simply re-run the exploit again as unprivileged podman user and gain root on the host?
If you can orchestrate a container escape from the container's "root", then you're on to something.
And its a great idea in general, it just doesn't stop this exploit.
The proof of concept becomes root as a quick way to prove it has control of your computer. The system in the article isn't blocking the exploit, it's just blocking the mechanism used to prove it worked. The exploit still worked; the verification test is now giving a false negative.
Good defense in depth disables necessary steps that by themselves aren't sufficient but are a necessary condition. In the context of this exploit (but not in general) this mitigation is more like renaming the su command to mysu and hoping nobody notices.
The dedicated website: https://copy.fail
However, Copy Fail can be used in many other ways that are not contained by containers or the above settings. For example, it can modify /etc/ssl/certs to prepare for MitM attacks. If you have multiple containers based on the same image, then one compromised CA set affects the others.
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
NoNewPrivileges=yes
to my .service. Is it good enough?

I could be wrong, but I’m not sure those settings are enough to mitigate Copy Fail.
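For what it's worth, systemd has a directive aimed at exactly this attack surface. A hedged sketch of extra hardening (these are real unit options, but the address-family allowlist must be adjusted to what your service actually uses):

```ini
[Service]
# socket(2) is restricted to the listed families; AF_ALG and every
# other exotic family is rejected outright.
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Broad syscall allowlist suitable for ordinary services.
SystemCallFilter=@system-service
# Deny explicit kernel module loading from this service.
ProtectKernelModules=yes
```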
If your distro offers a patched kernel, it’s best to upgrade to that one and reboot.
You can also disable the vulnerable module (how to do it depends on what distro you’re using). But if you stay on an old unpatched kernel you might be exposed to other vulnerabilities.
So the question is, before I learned about copy fail, what could I have done that would have limited the possible damage this vulnerability could do to me? CapabilityBoundingSet is one answer and rootless podman as mentioned in this article is another. They don’t prevent all but at least `su` is useless.
Other hardening options are to run the workloads inside a micro-VM such as Firecracker, or under gVisor. But that might be more work to implement compared to seccomp.
But… does this escape the container? If not (the author seems to indicate it does not), does it matter whether you are in Docker or rootless Podman, since the end result is the same either way: you have elevated to root within the container? If the rest of the container's filesystem isolation does its job, you end up in the same place. Though I guess a chained exploit to escape the container would be worse in Docker? Do I have that right?
“ While rootless containers prevent the attacker from escalating to host root, the page cache is still shared across the host. Containers that re-use the same base image layers share the same cached pages for those layers — if a malicious CI job corrupts a binary in the page cache, other containers launched from that same image could end up executing the poisoned version.”
I don't believe the kernel maintains separate page caches for each container; a malicious CI job could corrupt a binary from any container, or the host.
On top of this there's another thing worth mentioning: in OpenShift it's common (for non-rootless Podman) to allow CAP_SETGID/CAP_SETUID so that a container can be created within a container (this is allowPrivilegeEscalation in SCCs). That effectively grants the ability to become uid=0 in the container, and in that scenario the container's uid=0 matches the host's uid=0. The important difference is that this particular instance of the root user doesn't have CAP_SYS_ADMIN (or most of the other privileged kernel capabilities), meaning the actions that user can perform are very limited.
As befits a history of perl, it is full of random quotes and rambling discourses about history, but it has a lot of info in it.
>This is not raw shellcode — it is a fully formed ELF executable
echo -e 'install algif_aead /bin/false\n' > /etc/modprobe.d/disable-algif.conf
that just prevents the faulty module from loading, so you have time to fix it properly (kernel upgrade).

Technically there should be zero impact (the very, very few tools that use it will fall back to userspace). I haven't even found that module loaded anywhere in our infrastructure.
Then check if it is loaded, and if it is, unload/reboot
If algif_aead was a builtin module, it needs to be disabled by adding initcall_blacklist=algif_aead_init to the boot cmdline.
However initcall_blacklist requires the kernel to be built with CONFIG_KALLSYMS.
It could fail though. But I have not yet heard of anyone noticing big problems due to disabling the problematic modules. And I have not noticed any such issues on our systems at ${DAYJOB}.
IMHO, since these parts of the Linux kernel are so crappy I personally would say disabling them is a good default choice. YMMV. But if you encounter problems, then you can always re-enable the modules. (Preferably after upgrading your kernel, obviously.)
My concern is to try to understand the mechanisms of the exploit.
Copy Fail is not simply ”hey, kernel, give me root”. I would say it’s more general than that. It’s rather: ”Hey, kernel, when you present file /foo to a process, make the contents of that file appear according to my wishes”. Which can be used (in various ways) to advance the attacker’s position.
That’s why I think it’s interesting to ponder if that power allows the attacker to simply sneak past security policies such as allowPrivilegeEscalation=false.
Wouldn’t the exploit then just use IP addresses directly?
I don’t oppose reasonable crypto in the kernel, like WireGuard.
Except, you know, many things
We need on disk encryption, and we need to be able boot from an encrypted disk. So we need encryption for that.
We need network filesystems, and we need the traffic over the network to be encrypted. So we need encryption.
IPsec, for better or for worse, is authenticated and partially encrypted at the transport layer, so if we want a linux machine to speak IPsec, we need encryption.
Fixing/changing this would require a huge restructuring of the kernel; it would basically require switching to a microkernel. Given the fact that nobody's ever written a microkernel that doesn't completely suck ass, I don't know that it would be worth the effort.