Back in 2010 I built and operated MySpace's analytics system on 14 EC2 instances. Handled 30 billion writes per day. Later I was involved in ESPN's streaming service, which handled several million concurrent connections with VMs but no containers. More recently I ran an Alexa top 2k website (45 million visitors per month) off of a single container-free EC2 instance. Then I spent two years working for a streaming company that used k8s + containers and would fall over if it had more than about 60 concurrent connections per EC2 instance. K8s + docker is much heavier than advertised.
Docker is far heavier - the overhead is the price of the flexibility and process isolation you get. I imagine that's really useful for certain types of workloads (e.g. an ETL pipeline), but is crazy inefficient for something single-purpose like a web app.
Docker is heavier (and more dangerous) because of dockerd, the management and API daemon that runs as root. Actual process isolation is handled by cgroup controls, which have been built into the kernel for years. You can apply them to any process, not just docker ones.
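To make that concrete, here's a minimal sketch. The first command is read-only and works on any Linux box; the commented-out part is a hypothetical example (the `demo` cgroup name, the 512M limit, and `$SOME_PID` are made up) that assumes cgroup v2 and root:

```shell
# Every process on a Linux box is already tracked by cgroups -- no Docker
# required. This prints the cgroup(s) the current shell belongs to.
cat /proc/self/cgroup

# Hypothetical example (cgroup v2, requires root): cap an arbitrary
# process's memory by creating a cgroup and moving its PID into it.
# mkdir /sys/fs/cgroup/demo
# echo 512M > /sys/fs/cgroup/demo/memory.max
# echo "$SOME_PID" > /sys/fs/cgroup/demo/cgroup.procs
```

Once the PID is in `cgroup.procs`, the kernel enforces the limit directly - no daemon in the loop.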
However, Docker is essentially dead; the future is CRI-O or something similar which has no daemon and runs as an unprivileged user. And you still get the flexibility and process isolation, but with more security.
All the so-called "docker killers" are essentially unfinished products. They don't compare 1:1 to docker in feature set, and even if they run rootless, they are still vulnerable to namespace exploits in the Linux kernel. Though docker runs as root, it's still well protected out-of-the-box for the average user and is a very mature technology.
Are you from 2018? Everyone running OpenShift is using CRI-O, and that footprint is not small. We made the switch in our EKS and vanilla k8s clusters in 2021. Docker has now even made their API OCI-compliant in order to not be left behind. And the point is that most people don't want a feature-for-feature docker running in prod. The attack surface is simply too large. I don't need an API server running as root on all my container hosts.
Use docker on your laptop, sure. Its time in prod is over.
Agreed. Tons of obsolete assumptions in this thread. We have been using Podman / OpenShift in production and never ran into a use case where Docker was needed.
One of the biggest benefits of k8s for me, back in 2016 when I first used it in prod, was that it threw away all the extra features of Docker and implemented them directly by itself - better. The writing was already on the wall that docker would face stern competition that doesn't have all of its accidental complexity (rktnetes and hypernetes were a thing already).
Not everything - for a bunch of things, the actual setup increasingly happened outside docker, and docker was just informed how to access it, bypassing all the higher-level logic in Docker.
1.20 is when docker support got deprecated, IIRC, but many of us were already happily running on containerd for some time.
Are you sure? This isn't my subject area, but CRI-O looks like an alternative to containerd and implements an OCI-compliant runtime like containerd does. And then there's a third, Docker Engine, which is the one being dropped.
Sorry, I mixed up CRI and CRI-O. The roadmap is to remove dockershim (the interface to the docker-compatible container runtime) and use only the CRI - of which containerd and CRI-O are two compatible implementations.
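For anyone following along, the swap between those two implementations is just a config fragment on the kubelet (the flag is real; the socket paths are common distro defaults and may differ on your hosts):

```shell
# Point the kubelet straight at a CRI implementation; no dockershim involved.
kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock

# ...or, for CRI-O:
kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```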
Yep. Mirantis has announced intent to continue maintaining the dockershim (which allows k8s to talk to docker programmatically), but I can't imagine many people switching the default to docker unless they are manually installing k8s on their nodes, which used to be common but no longer is.