The Curse of Docker (computer.rip)
204 points by sklargh on Nov 26, 2023 | 161 comments


Nah docker is excellent and far far preferable to the standard way of doing things for most people.

All the traditional distribution methods have barely even figured out how to uninstall a piece of software. `apt remove` will often leave other things lying around.

He complains about configuration being more complex because it's not a file? Except it is, and it's so much simpler to just have a compose file that tells you EXACTLY which files are used for configuration and where they are. This is still not easy with normal Linux packages: you have to google or dig through a list of 10-15 places that may be used for config.

The other brilliant opinion is that Docker makes the barrier to entry for packaging too low... but the alternative is not those things being packaged well in apt or whatever, it's them not being packaged at all. That is not a win.

What's the alternative to running a big project like, say, Sourcegraph through a docker compose instance? You have to set up ~10 services yourself, including Redis, Postgres, some logging thing, blah blah. I do not believe this is ever easier than `docker compose up -d`.

And if you want to run it without docker, the docker images are basically a self-documenting system for how to do that. This is strictly a win over previous systems, where there generally just is no documentation for that.

Personally, Docker lowers the activation energy to deploy things so much that I can now try/run complex pieces of software easily. I run Sourcegraph, nginx, Postgres, and Redis on my machine easily. A week ago I wanted to learn more about data engineering - got a whole Apache Airflow cluster set up with a single compose file, took a few minutes. That would have been at least an hour of following some half-outdated deploy guide before, so I just wouldn't have done it.

---

The beginning of the post is most revealing though:

> The fact that I have multiple times had to unpack flatpaks and modify them to fix dependencies reveals ... Still, these systems work reasonably well, well enough that they continue to proliferate...

Basically you are okay with opening up flatpaks to modify them but not opening up Docker images... it just comes down to familiarity.


The thing I'm always interested in finding out is how people using lots of containers deal with security updates.

Do they go into every image and run the updates for the base image it's based on? Does that just mean you're now subject to multiple OSes' security patching and best practices?

Do they think it doesn't matter and just get new images when they're published, which from what I can see is just when the overlaid software has an update and not the system and libraries it relies on?

When glibc or zlib have a security update, what do you do? For RHEL and derivatives it looks like quay.io tries to help with that, but what's the best practice for the rest of the community?

We haven't really adopted containers at all at work yet, and to be honest this is at least part of the reason. It feels like we'd be doing all the patching we currently are, and then a bunch more. Sure, there are some gains, but it also feels like there's a lot more work involved there too?


So I primarily use containers on my local machine, walled off from the internet, so it's not a big concern for me. Watchtower [1], which automatically updates containers to the latest image, is popular among home server users too.

For production uses I think companies generally build their own containers. They would have a common base Linux container and build the other containers based off that with a typical CI/CD pipeline. So if glibc is patched, it's probably patched in the base container and the others are then rebuilt. You don't have to patch each container individually, just the base. Once the whole build pipeline is automated it's not too hard to add checks for security updates and rebuild when needed. Production also minimizes the scope of containers, with nothing installed except what's necessary, so they have few dependencies.
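
Roughly, the pattern looks like this (registry and image names made up); CI rebuilds the base on a schedule or when a CVE lands, then rebuilds everything that's FROM it:

    # base/Dockerfile - rebuilt nightly or when a CVE lands in the distro
    FROM debian:bookworm-slim
    RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*

    # app/Dockerfile - rebuilt by CI whenever the base image changes
    FROM registry.internal.example/base:latest
    COPY ./app /usr/local/bin/app
    ENTRYPOINT ["/usr/local/bin/app"]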

[1] https://github.com/containrrr/watchtower


The gist of what I'm reading here and in other comments is that you either have a full build pipeline and rebuild nightly or on demand, and then roll out container updates too, along with OS updates that are needed? If so, that seems like the "a lot more work" I mentioned, because you still have to update all the systems, but now you have a separate process to follow for all the apps/containers on the systems too, and that's assuming you have a build pipeline.

Or is the assumption that it's all k8s so the container updates being applied part is automated as well? Because assuming k8s in a discussion about containers does seem to be a bad habit from both detractors and proponents of containers from what I've seen, so I wouldn't be that surprised...

(for home user stuff I'm not too worried about finding out if there's a newer image, but I am worried that the image is updated once a year because the publishers of it don't care about base image security)


On a single system with `docker compose`, redeploying containers is just `docker compose down && docker compose up`. On clusters you could use k8s, or you could use the same orchestration tools you would use to manage a cluster without Docker, like Chef/Puppet, to run these same commands. Whatever orchestration system you have for restarting applications without Docker could be used with Docker as well.

Personally I do think `k8s` is poorly documented and hard to use so I stay away from it. I don't know if there are good alternatives for production uses though.


As one of those people: we don’t do much, if anything at all. I use containers to try things out behind a VPN like Tailscale. I use containers at work to deploy software but there it is put behind some kind of SSO proxy and lots of different rules apply. Even if I deploy something externally facing, 3rd party containers are going to be on a private docker network - much easier setup than dealing with allow/blocklists and/or firewall rules. Of course I need to take care of my code’s security but that is true with or without containers. For the situations where you deploy 3rd party services facing externally there are solutions (like Watchtower) that I can run and be more confident that this update would not ripple through the system with unknown (and sometimes hidden) consequences.


You can use tools like Snyk to scan images for vulnerabilities (even the Docker(tm) CLI tool has one now), and you can do things like failing your CI pipeline if there are critical vulnerabilities.

You can also use Dependabot (and others) to update your images on a cronjob-like schedule.

This is a solved problem
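
For example, a minimal Dependabot config (assuming the repo is on GitHub) that checks the base images referenced in your Dockerfile on a daily schedule looks roughly like this:

    # .github/dependabot.yml
    version: 2
    updates:
      - package-ecosystem: "docker"
        directory: "/"        # where the Dockerfile lives
        schedule:
          interval: "daily"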


This is a solution for the app developer, not for the users that might have to deal with older and unsupported apps.


I highly advise going with Trivy over Snyk or, as it's still in beta at the moment, Docker Scout.


Hadn't heard of Trivy, but glad to see another great product from AquaSec :^)


Don’t use glibc…

I am of course partially joking.

But seriously, use musl libc, build static binaries, build the images from scratch, and have a CI server handle it for you to keep it updated.

Alternatively use a small image like Alpine as base if you want some tools in the image for debugging.
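
A rough sketch of that (using Go here, where disabling cgo sidesteps libc entirely; a musl-static C or Rust build ends up with the same final stage):

    # build stage
    FROM golang:1.22-alpine AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app .

    # runtime stage: empty image, nothing but the binary
    FROM scratch
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]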


We use What's Up Docker [1] to monitor for new versions of docker containers that are created by others (eg. self hosted apps).

For containers we create ourselves, we automatically rebuild them each night which pulls the latest security updates.
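
The nightly rebuild itself can be as dumb as a cron/CI job that refuses cached layers, so the base image and its security updates are pulled fresh every time (registry name made up):

    docker build --pull --no-cache -t registry.example.com/myapp:nightly .
    docker push registry.example.com/myapp:nightly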

[1] https://github.com/fmartinou/whats-up-docker


I build docker images with Nix, so I just update the library by updating my nix flake.


> ...how people using lots of containers deal with security updates.

TL/DR: they don't. #yolo, don't be a square.


> What's the alternative to running a big project like, say, Sourcegraph through a docker compose instance? You have to set up ~10 services yourself, including Redis, Postgres, some logging thing, blah blah. I do not believe this is ever easier than `docker compose up -d`.

Even with Docker, it's not as easy as `docker compose up -d`.

You need to set up backups too. How do you back up all these containers? Do they even support it? Meanwhile I already have Postgres and Redis tuned, running, and backed up. I'd even have a warm Postgres replica if I could figure out how to make that work.

Then you need to monitor them for security updates. How are you notified your Sourcegraph container and its dependencies' containers need to be updated and restarted? If they used system packages, I'd already have this solved with Munin's APT plugin and/or Debian's unattended-upgrades. So you need to install Watchtower or equivalent. Which can't tell the difference between a security update and other updates, so you might have your software updated with breaking changes at any point.

Alternatively, you can locate and subscribe to the RSS feed for the repository of every Dockerfile involved (or use one of these third-party services providing RSS feeds for Dockerhub) and hope they contain a changelog. If I installed from Debian I wouldn't need that because I'm already subscribed to debian-security-announce@lists.debian.org


Long-lived data is stored in a volume, which is just a directory on the host. You just back up the directory like you would any other.
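
For example (paths made up), a bind mount in the compose file keeps the data in a plain host directory, and the backup is whatever you already use for directories:

    # docker-compose.yml (fragment)
    services:
      db:
        image: postgres:16
        volumes:
          - ./data/postgres:/var/lib/postgresql/data

    # back it up like any other directory; stop the container (or use pg_dump)
    # so the copy is consistent
    docker compose stop db
    tar czf pg-backup-$(date +%F).tar.gz ./data/postgres
    docker compose start db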

I feel like people are misunderstanding... containers are a wrapper. You can run whatever APT plugin or unattended-upgrades in the container as well; it's just Linux. You can then even snapshot this new state of the container into a new image if you want to persist it. You can fully simulate the usual workflow of a regular server if you really want. Containers don't take away any functionality.

Another thing is docker is not necessarily the be-all end-all way to deploy things especially in production. If I was running Sourcegraph seriously in production, I might not use it. But it does make it so much easier to just try things out or run them as a hobbyist.


> You can run whatever APT plugin or unattended-upgrades in the container as well; it's just Linux.

Only if the software inside was installed with APT. When you just do `docker compose up -d` you have no idea how the software inside is installed, so you need to poke around as if you had installed it without Docker.

> Another thing is docker is not necessarily the be-all end-all way to deploy things especially in production. If I was running Sourcegraph seriously in production, I might not use it.

But containers are increasingly the only supported method, because developers assume every production environment's constraints/preferences look like their own. For example, https://docs.sourcegraph.com/admin/deploy only lists the following options:

* VMs

* "install script" unsuitable for use outside a dedicated VM: (re)installs k3s (as a one-off without updates as far as I can tell), overwrites existing config files, disables firewall

* k8s

* Docker-Compose

> But it does make it so much easier to just try things out or run them as a hobbyist.

Sure, I definitely agree with that


I really don't know what other production methods you would expect to find before Docker... install scripts and READMEs are pretty much all there is, or ever was, for complex software with many interlocking components. I don't really think there's ever been a way to install an entire stack like that from a single apt command. It's always been bespoke scripts and procedures with a lot of manual steps.


What I expect is a list of dependencies I can install myself, instructions to compile the code, and a description of config files to write/edit, eg. to point the main software to its dependencies' URLs. Like this, for example: https://docs.joinmastodon.org/admin/install/


Yes, this is called a Dockerfile. Better yet, it's not just a list, it's working code, so you don't have to run through the list manually.


While a readable Dockerfile can work as documentation, there are a few caveats:

* the application needs to be designed to work outside containers (so, no hardcoded URLs, ports, or paths). Also, not directly related to containers, but it's nice if it can be easily compiled in most environments and not just on the base image.

* I still need a way to be notified of updates; if the Dockerfile just wgets a binary, this doesn't help me.

* The Dockerfiles need to be easy to find. Sourcegraph's don't seem to be referenced from the documentation, I had to look through their Github repos to find https://github.com/sourcegraph/sourcegraph/tree/main/docker-... (though most are bazel scripts instead of Dockerfiles, but serve the same purpose)


So basically, you want a Dockerfile.


I don't back up containers, why would you? I back up the attached volumes, or, more often, their state is fully in a database, which isn't in a container (it doesn't need to be), so I just back the database up.


I meant backing up volumes.


Number one piece of advice for using Docker volumes: don't use Docker volumes. Just mount directories or files from the host file system. I have had plenty of situations where I regretted Docker volumes, and none where I regretted host FS mounts.

How do you back up a directory? You tell me, because I have to use a multi-gigabyte software package that, mind you, is a pain to install since it consists of like two dozen packages.


UID mapping?


Wait wait wait. Docker has two use cases and you're conflating them. The original use case is:

Project Foo uses libqxt version 7. Project Bar uses libqxt version 8. They are incompatible, so I'd need two development workstations (or, later, two LXC containers). This is slow and heavy on disk space; Docker solves that problem. This is a great use of Docker.

The second use case that it has morphed into is:

I've decided stack management is hard and I can't tell my downstream which libraries they need because even I'm no longer sure. So I'll just bundle them all as this opaque container and distribute it that way and now literally nobody knows what versions of what software they are running in production. This is a very harmful use case of docker that is unfortunately nearly universal at this point.


I don't understand how stack management without Docker is any better. As far as I can tell, the alternative without Docker is the same: a list of `apt install`s with no versions listed or no mention of dependencies at all. If you used a lockfile in a language package manager, you can use the same lockfile in the Docker image too.


So, as an example, 20 years ago when we started a project the devs and I (the sysadmin) would sit down and have a meeting where they would talk about what libraries they wanted to use (think CPAN or PEAR) and I would tell them what versions were compatible with our servers, and those are the versions they would use for the lifetime of the project. Docker came about because devs got tired of hearing "no" to upgrades and admins got tired of saying it. But back then, if there was a zero day (though we didn't call them that at the time) for libqxt 7.2 (but not 7.0 or 8.1), I could tell you immediately whether we were impacted. In 2023 I can't even tell you how many versions of OpenSSL are on my laptop right now as I type.


Stack management is a bullshit job, and the downstream will hit the libqxt-7 vs libqxt-8 problem anyway.


> Project Foo uses libqxt version 7. Project Bar uses libqxt version 8

Pretty much fixed by asdf or any sane manager. Also, two containers aren't slow at all, but rebuilding one completely from scratch is.

The latter is a good use case. It's no big deal to include a commit/version number and a list of packages/libraries used in a lock file.

The problems I have with Docker are: no lock files to pin versions, and no integration with other package management tools. For example, I want to install my dependencies as a step in Docker; I don't want to manually copy over the list of dependencies from my lockfiles (Gemfile, package.json, etc). The reason I'd want this is that it speeds up builds.
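
Both of those have at least partial workarounds today, for what it's worth: pin the base image by digest, and copy only the lockfile before the install step so that layer stays cached until the lockfile changes. A sketch with Node (the digest is a placeholder; the same shape works for Gemfiles etc.):

    # pin the base image by digest instead of a floating tag
    FROM node:20-slim@sha256:<digest-you-resolved>

    WORKDIR /app
    # manifest + lockfile first: this layer (and npm ci) stays cached
    # until package-lock.json changes
    COPY package.json package-lock.json ./
    RUN npm ci

    # then the rest of the source
    COPY . .
    CMD ["node", "server.js"]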


Exactly, Docker + Compose is the best way of running server software these days.

I keep my compose files in source control and I have got a CI server building images for anything that doesn’t have first party images available.

Updates are super easy as well: just update the pinned version at the top of the compose file (if not using latest), then `docker-compose pull` followed by `docker-compose up -d`.
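
i.e. something along these lines (image name made up):

    # docker-compose.yml - the only thing to touch on upgrade is the tag
    services:
      app:
        image: ghcr.io/example/app:1.42.0
        restart: unless-stopped
        volumes:
          - ./config:/etc/app
    # upgrade: bump the tag, then `docker-compose pull && docker-compose up -d`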

The entire thing is so much more stable and easier to manage than the RedHat/Ubuntu/FreeBSD systems I used to manage.

(I use Alpine Linux + ZFS for the host OS)


I spent the last couple of days trying to set up some software, wading through open source repositories, fixing broken deps, pinning minor versions, browsing SO for obscure errors... I wish I had a Docker container instead.


> `apt remove` will often leave other things lying around.

If you mean configuration files, then this is by design.

`apt purge` removes those as well.


I just found about a half a gigabyte of files from programs I uninstalled two laptops ago on my machine, and files from a Steam game I got a refund for because it would crash after fifteen minutes. It’s frustrating.


> What's the alternative to running a big project like, say, Sourcegraph through a docker compose instance? You have to set up ~10 services yourself, including Redis, Postgres, some logging thing, blah blah. I do not believe this is ever easier than `docker compose up -d`.

Maybe my recollection is just fuzzy, but it seems to me back in the day many projects just had fewer dependencies and more of them were optional. "For larger installations and/or better performance, here's where to configure your redis instance."

Instead now you try and run someone's little self-hosted bookmark manager and it's "just" docker-compose up to spin up the backend API, frontend UI, Postgres, redis, elasticsearch, thumbor, and a KinD container that we use to dynamically run pods for scraping your bookmarked websites!

I'd almost _rather_ that sort of setup be reserved for stuff where it's worth investing the time to set it up.

All of this complexity is easier to get up this way, but that doesn't make it easier to properly manage or a _good_ way to do things. I'd much rather run _one_ instance of Postgres, set up proper backups _once_, perform upgrades _once_, etc. Even if I don't care about the hardware resource usage, I do care about my time. How do I point this at an external postgres instance? Unfortunately, the setup instructions for many services these days start _and end_ at `docker-compose up`.

And this idea of "dockerfiles as documentation" should really die. There are often so many implicit assumptions baked into them as to make them a complete minefield for use as a reference. And unless you're going to dig into a thousand lines of bash scripts, they're not going to answer the questions you actually need answers to like "how do I change this configuration option?".


> figured out how to uninstall a piece of software. `apt remove` will often leave other things lying around.

What's the Docker way of uninstall? In most cases Docker packaged software uses some kind of volume or local mount to save data. Is there a way to remove these when you remove the container? What about networks? (besides running prune on all available entities)


You can `docker rm -v <container>` to remove a container + its volumes. For larger stuff I typically just read the docker-compose file with all of that listed in one place, so it's pretty easy to just `docker rm` the volumes and networks. With apt I have no idea what files were added or modified and there's no simple "undo".
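
For a whole compose stack the teardown is one command; roughly:

    # stop and remove the stack's containers and networks, plus its volumes
    docker compose down --volumes
    # or, for a single container, remove it together with its anonymous volumes
    docker rm -v <container>
    # optionally reclaim now-unused images as well
    docker image prune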


I wasn't aware of '-v' so thanks for that.

You can list package content with 'dpkg -L', although, granted, it doesn't cover files that were somehow created by the install script. Also 'apt purge' removes all files including config.


It’s very nice, although I had a hard time getting the btrfs driver to actually remove the backing store a year or so back.


> All the traditional distribution methods have barely even figured out how to uninstall a piece of software. `apt remove` will often leave other things lying around.

Sounds like an issue that needs to be fixed instead of working around it. Also dependency hell. Few distros manage those hard problems nicely but they do exist.


> A week ago I wanted to learn more about data engineering - got a whole Apache Airflow cluster set up with a single compose file, took a few minutes.

Off-topic, but how did you like it? Tried it out a couple of years ago and felt like it overcomplicates things for probably 99% of use cases, and the overhead is huge.


Over-complicated, I agree. "This cluster could have been a Makefile"

There probably are good reasons to use it when you have complex distributed DAGs though.


> All the traditional distribution methods have barely even figured out how to uninstall a piece of software.

The most traditional way is to compile under /usr/local/application_name and symlink to /usr/local/(s)bin. Remove the folder and the links, and you're done.
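
Concretely, that flow looks something like this (version and paths made up):

    # build into its own prefix
    ./configure --prefix=/usr/local/foo-1.2
    make && sudo make install
    # expose it on PATH
    sudo ln -s /usr/local/foo-1.2/bin/foo /usr/local/bin/foo

    # uninstall
    sudo rm /usr/local/bin/foo && sudo rm -rf /usr/local/foo-1.2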

> `apt remove` will often leave other things lying around.

"remove" is designed to leave config and database files in place, assuming that you might want to install it later without losing data. apt has "purge" option for the last decade which removes anything and everything completely.

> What's the alternative to running a big project like, say, Sourcegraph through a docker compose instance? You have to set up ~10 services yourself, including Redis, Postgres, some logging thing, blah blah. I do not believe this is ever easier than `docker compose up -d`.

Install on a single pet server, or on a couple of them. Configure them, forget them. If you fancy, spawn a couple of VMs and snapshot them daily. While it takes a bit more time, for a one-time job I don't mind.

Docker's biggest curse is that it enables many bad practices and advertises them as best practices. Why enable SSL on a service when I can add an SSL-terminating container? Why tune a piece of software when I can spawn five more copies of it for scaling? Etc.

Docker is nice when it's packaged and documented well, and good for packaging stable meta-utilities (e.g. a documentation pipeline which runs daily and terminates), but using it as a silver bullet and accepting its bad practices as gold standards is wasting resources, space, and time, and creating security problems at the same time.

Basically you are okay with installing containers blindly but not with installing services and learning apt purge. It just comes down to familiarity.

Obligatory XKCD: https://xkcd.com/1988/


> Anyway, what I really wanted to complain a bit about is the realm of software intended to be run on servers.

Okay.

> I'm not sure that Docker has saved me more hours than it's cost

I'm not sure what's the alternative for servers here. Containers have certainly saved me a lot of headache and created very little overhead. N=1 (as the OP seems to be, too).

> The problem is the use of Docker as a lowest common denominator [...] approach to distributing software to end users.

Isn't the issue specific for server use? Are you running random images from the internet on your servers?

> In the worst case, some Docker images provide no documentation at all

Well, in the same vein as my last comment, Docker is not a silver bullet for everything. You still have to take care of what you're actually running.

Honestly the discussion is valid, but I think the OP aimed at "the current state of things" and hit a very valuable tool that doesn't deserve some of the targeted criticism I read here.

edit: my two cents for those who can't be bothered and expect that, just because it's a container, everything will magically be solved: use official images and those from Bitnami. There, you're set.


> I'm not sure what's the alternative for servers here.

NixOS/nixpkgs: isolated dependencies/services, easy to override if needed, configs follow a relatively consistent pattern (main options exposed, others can be passed as text), and service files can do isolation by whitelisting paths without going to a full-blown self-contained-OS container.

> Are you running random images from the internet on your servers?

Many home server users do this. In business use, unless you invest lots of time into this, a part of your services is still effectively a random image from the internet.

> and those from Bitnami

Yes, that's a random image from the internet.


> Yes, that's a random image from the internet.

By that definition, you are running it on a random OS, random processor with some random network infra.


Kinda. It depends what your risk tolerance is.

But seriously, what's your business relationship to bitnami? What are the guarantees about keeping those images up to date? What are the guarantees about the feature set provided? How long will the specific image be available publicly/free? Is the base system guaranteed to stay the same? What about architecture support?


I mostly agree with GGP, but GP has a point. I'm a semi-frequent user of Docker at work and personally, and I still don't know what the fuck "Bitnami" is. I'm gonna guess now (and check later) they're probably some corporate body that fell out of whatever kerfuffle happened with Docker licensing a year or so ago; but otherwise, "Bitnami" sounds to me like "Bincrafters" from the Conan/C++ world - no fucking clue who they are (the name doesn't help), but everyone sure likes to depend on what sounds like a random third party.


It's always been surprising to me how little people seem to care about the provenance of their images. It's even more surprising that infosec isn't forcing developers to start their images `FROM scratch`.


Are you compiling your OS distro `FROM scratch`?

There's always a certain level of trust, and for many, Docker containers are just as trustworthy as distros from trusted orgs or volunteers.


Like the other person said, Nix solves that problem.

You are in control of the entire supply chain with Nix.


If you have sources, you have full control, using any distribution method. Some Nix packages are without sources, so this is not true for Nix.


Supply chain management isn't about sources per se (though source makes it easier). What you need is hashes and signatures for every dependency at every step.


If you have sources, you can do anything you want at every step. If you haven't, then you can do fewer things at some steps. Some Nix packages have no sources, so full control is not available for them.


> Are you compiling your OS distro `FROM scratch`?

As a matter of fact, yes.


There once was a man named Terry Davis...


Or for a less out-there example, Ken Thompson's classic "Reflections on Trusting Trust". At some point unless you are literally producing all of the hardware and software yourself you have to trust someone. The challenge is figuring out where that line of acceptable risk lies for you. It's going to be very different for an indie game dev vs a FinTech company vs the US DoD.


I imagine anyone with a BA in CS has written an OS from scratch.

How many systems-on-chip are in a modern computer? Not just the main system and CPU, but every little chip and controller, the boot system; every board seems to have a little OS.

In regards to security, it's all about analysing risk and trade-offs.

For me, using containers from known vendors is a risk I'm willing to take.


> I imagine anyone with a BA in CS has written an OS from scratch.

Not even close. Very few courses require anything that advanced and only some of those are non-optional.


But that could be said about each and every dependency.

And some (very rare) companies do enforce that; everyone else has to build up a bit of trust.


> Isn't the issue specific for server use? Are you running random images from the internet on your servers?

Well exactly, that's what the author is writing about.

The whole article is dedicated to the problem of Docker being used as a distribution method, that is, as a replacement for, say, a Debian package.

So in order to use that software you need to run a Docker image from the internet which is often poorly made and incompatible with your infrastructure. Had a package been available you'd simply do "apt-get install" inside your own image built with your infrastructure in mind.


> use official images and those from Bitnami

In other words, random images from the internet.

> You still have to take care of what you’re actually running.

This is the central thesis of OP, though. Pre-made/official images are not very good and docker in general doesn’t provide any means to improve/control quality.


You know who really knows how to package software? Mark Russinovich, Nir Sofer, and all the others who gave us beautiful utilities in standalone EXEs that don't require any dependencies.

For the longest time I stayed on older versions of .NET so any version of Windows since 2003 could run my software out of the box. Made use of ILMerge or a custom AssemblyResolve handler to bundle support DLL's right into my single-file tools - it wasn't hard.

I have no complaints about Docker, but I do find that where I used to be able to download simple zip files and place their contents into my project, I now just get a black-box Docker link with zero documentation, and that makes me sad.


And then you just make sure all the libraries you use have no vulnerabilities ever, so they don’t need a way to be updated. Smart!


They get updated after I conduct regression testing and release a new build of my software.


And you will never be slow in updating the software (or disappear), so it doesn't matter that you're essentially creating a mini distro you have to keep updating forever! And also the user magically gets notified that there is a new version, or it automatically updates the CD the user is running it from.

And the same goes for the other 20 apps the user uses, of course, which all need things like an SSL library. They all have responsible maintainers that can be trusted to promptly regression test, build, package and release every update in the libraries the user doesn't know they are using. You'd think it is impractical but actually it's very easy. Apparently.


After my company "disappears" as you've suggested it's only a matter of time before said libraries, despite their best efforts, introduce application-breaking changes. Short of open-sourcing the whole thing (which actually is a possible contingency plan in the cards) all I'd be doing is foisting an unsolvable problem onto my users.

Even if I wasn't embedding DLL's into my binary it's not like users would be dropping in updated copies of them alongside my app.

I understand what you're getting at but it only works if you can outsource package management to competent distro maintainers (not a thing on Windows), and ultimately in my own experience as a user with decades of computing experience I've had a heck of a lot more problems from faulty updates than I ever have from vulnerabilities.


So instead of the application no longer starting, they get an application that starts fine but quietly allows them to get hacked and become part of a botnet and perpetuate ransomware.


Exactly my thoughts. The Linux guys have been discussing the merits of package management and various related systems since I started getting interested in computers.

Yet after all this time they have not come close to something as simple as the double-click-to-run .exe or self-installing binary you can find on Windows (macOS also has completely self-contained apps). So having managed Linux servers and relatives' machines, I'm a bit confused that we are still here, discussing the merits of stupid packaging software that follows some sort of ideal but never actually works properly (at least, scales very badly and reacts poorly to changes).

Everything he said about Docker is true, but it also applies to the regular package management in various Linux distros. In the age of very fast upload bandwidth and very affordable storage, Docker is even more suspicious next to a regular lightweight VM: it doesn't have as good security separation, is more annoying to reproduce, and requires more setup; a bit-for-bit VM copy is also extremely simple. I believe one reason we got Docker is because they couldn't figure out how to partition hardware at the machine level efficiently, so instead they partitioned CPUs. Easier to manage in hardware, more of a pain in software...

But the reason no user-facing operating system ever uses this kind of software management is that it never works, no matter the ideals, a lot like communism. What a waste of time, I guess at least it makes for some fun discussion from time to time.


>One of the great sins of Docker is having normalized running software as root. Yes, Docker provides a degree of isolation, but from a perspective of defense in depth running anything with user exposure as root continues to be a poor practice.

>Perhaps one of the problems with Docker is that it's too easy to use

If you've ever had to make a non-root Docker image (or an image that runs properly with the `--read-only` flag), it's not as trivial and fast to get things going. If that were the default, perhaps Docker wouldn't have been so successful in getting engineers of all types and levels to adopt it?

It's rare to find tooling in the DevOps/SRE world that's easy to just get started with productively, so docker's low barrier to entry is an exception IMO. Yes, the downside is you get a lot of poorly-made `Dockerfiles` in the wild, but it's also easy to iterate and improve them, given that there's a common ground. It's a curse I suppose, but I'd rather have a well-understood curse than the alternative being an arbitrary amount of bespoke curses.
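
For reference, the non-root part itself is only a couple of lines; the friction is mostly file ownership, ports below 1024, and finding every path the app insists on writing to. A sketch (Alpine-flavoured, names made up):

    FROM alpine:3.19
    # unprivileged user plus a writable data dir it owns
    RUN adduser -D -u 10001 app && mkdir -p /data && chown app:app /data
    COPY --chown=app:app ./server /usr/local/bin/server
    USER app
    # listen on an unprivileged port; with a read-only root FS run it like:
    #   docker run --read-only --tmpfs /tmp -v appdata:/data <image>
    EXPOSE 8080
    ENTRYPOINT ["/usr/local/bin/server"]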


Running all your software as root is the IT equivalent of sales guys slapping themselves on the back for selling something for free.

It’s all fun and games until the bills come due.


> One of the basic concepts shared by most Linux systems is centralization of dependencies. Libraries should be declared as dependencies, and the packages depended on should be installed in a common location for use of the linker. This can create a challenge: different pieces of software might depend on different versions of a library, which may not be compatible. This is the central challenge of maintaining a Linux distribution, in the classical sense: providing repositories of software versions that will all work correctly together.

Maybe someone with more knowledge of Linux history can explain this for me, because I never understood it: Why is it so important that there must always only be one single version of a library installed on the entire system? What keeps a distribution from identifying a library by its name and version and allowing application A to use v1.1 and application B to use v1.2 at the same time?

Instead the solution of distros seems to be to enforce a single version and then to bend the entire world around this restriction - which then leads to unhappy developers that try to sidestep distro package management altogether and undermine the (very reasonable) "all software in a distro is known to work together" invariant.

So, why?


If there's a chain of dependencies (libraries depending on other libraries), a single process might end up with different versions of the same library in its memory. That's not going to work, since the interface/API of the library is typically not versioned.


True, but shouldn't this issue already show up when a developer builds/tests the application on their own machine?


For security (and general bug fixing). If a security issue is found you want to only fix it in one place. The container alternative is tracking down how all the containers were built, which might be very varied and some not even reproducible, and fixing them all.


Indeed, containers suck for this: not only do you have to look into each container separately, there is also no requirement that all containers use the same structure at all, as you said.

Yet this is exactly what happens in practice if the one-version-for-all paradigm is impractical for developers.

So it would be in the interest of distros to give some way in that regard: Keep the centralised dependency dogma in place, but allow multiple independent versions and configurations be stored.

Then bug fixing might be slightly harder than with a single version, because you might have to fix 3 versions instead of one, but still much easier than monkey-patching half a dozen containers.

(It would still be good to "nudge" developers towards a canonical version, i.e. by sending them reminders when the software uses a lower version or a different configuration - or if you want even a warning that everything below a certain minimum version will be rejected)


Distros do this already, at least some do. It depends on the distro exactly how it is done, but Debian certainly allows multiple versions of libraries to be packaged (with variant names: https://packages.debian.org/buster/libreadline7 vs https://packages.debian.org/bullseye/libreadline8), and in Fedora, compat-* packages are used to store previous versions of libraries (eg: https://koji.fedoraproject.org/koji/packageinfo?packageID=23...)

There's usually some soft or hard insistence that the package that depends on the old version of the dependency should have a plan to update to the new version. Definitely in Fedora we don't "like" compat libraries, although we tolerate them.

An interesting fact is the original RPM specification was designed to allow parallel installation of different versions of the same-named package. However at a distro level it turned out to be quite difficult to use this, because you have to be really careful about file conflicts. So now it is disallowed, except for the kernel package.


Why not become a maintainer and solve this problem for us? It looks like you have the vision.

Fedora had support for modularity: https://docs.fedoraproject.org/en-US/modularity/ . Join the Fedora project, please.


What's with the passive-aggressiveness?

The idea is as basic as you can get - and has probably been thought up by every dev after their first major dependency conflict. I'm pretty sure distro maintainers know about it too.

So my question is just what the problems are that prevent adoption, even after decades of dealing with the problem, even in the face of growing threats from devs to abandon the distro model altogether.


The Fedora project abandoned modularity because nobody needs it. Now I have found a developer who needs modularity, so he can save modularity for all of us. I'm politely asking you to join the Fedora project and just maintain the solution. Where did you find the aggression?


>Doing anything non-default with networks in Docker Compose will often create stacks that don't work correctly on machines with complex network setups.

I run into this often. Docker networking is a mess.


Depending on the load and use case, encapsulating docker itself in an lxc container or a standalone vm can be a semi maintainable and separated solution.


I like this approach for running multiple services on customer servers/clouds. For my own cloud, though, I'll use some orchestrator that makes sense.


Thanks. I'd be interested to learn what direction your orchestrator search goes in.

My thought usually is how to make any environment the equivalent of an appliance and require as little upkeep as possible.

I have been contemplating how Terraform may be able to sit in a private/hybrid cloud design, but haven't looked into it much.


Docker in lxc is also a huge mess when it comes to storage drivers with some backends...


I appreciate the heads up on this in case I run into it.

If you have any examples of what to look out for it would be great to know if it hits my use case


You are not using podman, right? It's the only implementation to give me headaches on the networking side.


No, not using podman. I hear podman has performance issues with shared tmpfs volumes.


Dockerfiles are really, really simple. They do essentially three things: set a base image, set environment variables, and run scripts. And then, as sort of a meta-thing, they prove that the steps in the Dockerfile actually work, when you start the container and see that it works.

If you don't want to run in Docker, a Dockerfile is still a perfect setup script. Open it up and see what it does, and use that as install instructions.
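
i.e. even a made-up example like this reads as install instructions whether or not you ever run Docker:

    # start from Debian 12
    FROM debian:bookworm-slim
    # this variable configures the app
    ENV APP_LISTEN_PORT=8080
    # install these packages
    RUN apt-get update \
     && apt-get install -y --no-install-recommends nginx \
     && rm -rf /var/lib/apt/lists/*
    # put the files here
    COPY ./site /var/www/html
    # start it like this
    CMD ["nginx", "-g", "daemon off;"]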


I’ve debugged project Dockerfiles to discover that they were pulling dependencies from URLS with “LATEST” in them. A Dockerfile isn’t really proof that anything currently works.


I remember how things were before docker. Better is not a word I'd use for that.

It sucked. Deploying software meant dealing with lots of operating system and distribution specific configuration, issues, bugs, etc. All of that had to be orchestrated with complicated scripts. First those were hand written, and later we got things like chef and puppet. Docker wiped most of that out and replaced it with simple build time tools that eliminate the need for having a lot of deploy time tools that take ages to run, are very complex to maintain, etc.

I also love to use it for development a lot. It allows me to use lots of different things that I need without having to bother installing those things. Saves a lot of time.

Docker gives us a nice standard way to run and configure whatever. Mostly configuration only gets hard when the underlying software is hard to configure. That's usually a problem with the software, not with docker. These days if you are building something that requires fiddling with some configuration file, it kind of is broken by design. You should design your configuration with docker in mind and not force people to have to mount volumes just so they can set some properties.
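
In practice that mostly means exposing settings as environment variables with sane defaults, so the compose file (or a plain `docker run -e`) is all the configuration anyone touches; e.g. (made-up service):

    services:
      app:
        image: example/app:1.0
        environment:
          - APP_DB_URL=postgres://db:5432/app
          - APP_LOG_LEVEL=info
        # no config-file volume mount needed just to flip a couple of settings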

The reason docker is so widespread is that it is so obviously a good idea and there hasn't been anyone that came along with something better that actually managed to get any traction worth talking about. Most of the docker alternatives tend to be compatible with docker to the point that the differences are mostly in how they run dockerized software.

And while I like docker, I think Kubernetes is a hopelessly over engineered and convoluted mess.


> Deploying software meant dealing with lots of operating system and distribution specific configuration

But Docker didn't solve that at all, unless you consider "we only support Linux, so just run Linux or fuck you" as a "solution". And starting a Linux VM on Windows or macOS doesn't really count (and comes with a lot of issues).

Containers as a concept are fine. Docker as an implementation is not very good. These are people who released a program in 2014 that can only run as root, which should say a thing or two about the engineering ethos. What is this, 1982?

> All of that had to be orchestrated with complicated scripts.

Hacking deploy scripts (and configure scripts and Makefiles) wasn't fun either, but at least I could understand it. Hacking on Docker once something goes wrong is pretty much impossible.

It really is hugely over-engineered for what it does. You really really don't need a million lines of code to build and run containers on Linux.

I've had a number of machines where binary JSON files in /var/lib/docker got corrupted and the only solution I've ever been able to find is "completely wipe away the lot and start from scratch". The entire overlayfs thing they have can really go haywire for reasons I've never been able to reproduce or figure out (and the "solution" is similar: wipe away /var/lib/docker and start from 0), and things like that. It all "works", but it's a hugely untransparent black box where your only solution when things go wrong is to shrug and give up (or spend days or even weeks on figuring it all out).

I've had enough issues and outright bugs that I probably spent more time on Docker and docker-compose than I saved. At my last job I just bypassed the "official Docker development environment" with a few small shell scripts to run things locally, because it was a never-ending source of grief. I only had to support my own Linux system so I had it a bit easy, but it wasn't much more than running our programs with the right flags. The platform-specific stuff was "my-pkg-manager install postgres redis", and I don't see what's so hard about that.


How is writing those complicated scripts but in a dockerfile or docker-compose file any better?

Software written with docker in mind is easier to manage because it generally follows better design principles, such as separating configuration from state, being failure tolerant, treating the network as opaque, etc. This software would be easy to deploy without using docker as well.

If you're trying to deploy some complex piece of software which doesn't follow these principles, it's exactly as hard or even harder with docker. Unless you outsource the work to random people on the internet, but then you are not building production systems.

Containers are great for lots of things and containerization in general has forced developers to write better software, but there really isn't a lot of difference in difficulty in running that webapp in a container vs just running it on the machine directly.


You don’t get config drift or “Works on my machine” with docker, compared to scripts on the host.


> These sorts of things usually take longer to get working than equivalent software distributed as a conventional Linux package or to be built from source.

Yes, but they are done once, and the devs are forced to ship the Docker image. The stupid amount of time we've spent looking for that one package dependency because someone forgot that they installed something to make a project work... Or the classic: we set up SSL, and no one knows how they set up SSL once they are done, etc.

Docker forces a lot of the infrastructure decisions that devs make in their sandbox to be actually well defined. Not that it makes their choices any more sane, safe, or secure. But at least someone can take a look at the mess and replicate it, as many times as they want, quickly, break it, fix it, upgrade it, without ever requesting a dev VM build, having sysops install prereqs, etc.

Is Docker work? Everything is work. Do I think Docker should be the default go-to? No, I'd like for people to use app services and simply perform the build on the app node, but that magic is even harder to debug and troubleshoot.


The original method to distribute library-independent software was "cc --static", née "cc -s"...

The world was a much simpler place so long ago.


Or use applications written in Go. Not a user myself, but all the stuff like Vault and such are just an executable. What is even funnier is Go apps tend to have the simplest containers; for example, kaniko seems to be a Go app with kaniko as init and nothing else.

One could also argue that apps that mandate running in Docker probably have some ridiculous dependency issues or a too-clever-by-half runtime.


It wasn't, because the C compiler has to generate the dependency tree for every compilation unit, which then needs to be included into the Makefile.


> Making things worse, a lot of Docker images try to make configuration less painful by providing some sort of entry-point shell script that generates the full configuration from some simpler document provided to the container.

I see this as the container world reinventing the wheel of reasonable defaults for software that has long since lost sight of that. Nginx and Apache are two of the worst offenders, which won't just serve files out of a directory without a few dozen lines of config.
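
For the curious, those entry-point wrappers are usually just a templating step before exec'ing the real daemon; a sketch (file names made up, using envsubst):

    #!/bin/sh
    # entrypoint.sh: render the real config from env vars, then hand off
    set -eu
    : "${LISTEN_PORT:=8080}"
    export LISTEN_PORT
    envsubst '$LISTEN_PORT' < /etc/app/app.conf.template > /etc/app/app.conf
    exec "$@"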


I think most of the comments here largely miss the point of the article. The guy doesn't complain about Docker as a whole and doesn't criticise its usage for software deployment, which is normally the main application.

He complains about Docker being used as a software distribution method, that is, as a replacement for, say, a Debian package, pip package, npm package etc.

So in order to use that software you need to run a Docker image from the internet which is often poorly made and is incompatible with your infrastructure. Had a package been available you'd simply do "install" inside your own image built with your infrastructure in mind.

With that I agree completely. Docker and, even worse, docker-compose are terrible ways of distributing software and should never be used except for demos and for rare cases where software is not distributed to the customer in the normal sense but rather deployed directly into their system.

Docker and docker-compose are still very good methods of software deployment.


This, but I want to add, I think the root cause is less due to the ease of writing a Dockerfile vs writing a deb, rpm, etc. and more due to the low cost of hosting. Whatever low-quality Dockerfile you write, you can sign up on Docker Hub, build and push there, and you're done. GitHub Packages doesn't support deb or rpm, and anyway for better or for worse, packages are tightly coupled to the distribution they're packaged for. That means either getting your package into the package repository of each distribution you want to target, or hosting your own package repository, which is non-trivial in both financial and labor cost.

Docker has a lot more crap, but it did dramatically lower the barrier of entry, which is a good thing. The proper response isn't to bemoan the lower barrier of entry, but to attempt to lower the barrier of entry of traditional packaging.


>He complains about Docker being used as a software distribution method, that is as a replacement for say Debian package, pip package, npm package etc.

If that is a valid complaint, why does he choose two examples where that is not the case? Nextcloud AIO is just one option among many and certainly not the "standard way" of hosting your nextcloud instance. Coincidentally I came from hosting nextcloud the "standard way" and I'm really glad AIO exists and I don't have to manage nonsense like nextcloud ending support for the latest Debian php version. And Home Assistant is mainly distributed as the OS variant with the docker version being the step child afterthought that barely functions.


Don’t tell this guy about Helm charts.

Oh boy, if you thought Docker images overriding default configuration options was bad, wait until you add yet another layer of new config parameters on top that do absolutely nothing at all, just translate down into an environment variable that the Docker image translates into the application config. The ever increasingly popular ones from Bitnami are the worst in this regard.

I always get questioned for reinventing the wheel when writing my own Dockerfiles. In the end it always ends up more maintainable. The problem is these premade charts put a wheel inside a wheel inside a wheel. If you get a flat tire, you have to open up all of them at once to understand what is going on.


I have rarely seen end user software being distributed as a docker image, only server applications. Occasionally I see CLI tools that are hard to package available as docker images, but those are generally done because the maintainer hasn't provided precompiled binaries.


Docker keeps reminding me of people in the past shipping VM images, or sometimes physical machines, often with rather inappropriate desktop hardware and software. So do these points:

> The problem is the use of Docker as a lowest common denominator, or perhaps more accurately lowest common effort, approach to distributing software to end users.

Shipping physical machines probably is even lower than that though. Or even the VMs.

> One of the great sins of Docker is having normalized running software as root.

I am not as familiar with Docker practices, but unfortunately, AFAICT, people did that frequently on regular systems as well, just to not bother with permissions. (Edit: now I recalled people also not following the FHS and storing things in the root directory inside Docker images, but sometimes it was/is similar without containers as well, and inside a container it does not clutter the host system, at least).

> Having "pi" in the name of a software product is a big red flag in my mind, it immediately makes me think "they will not have documented how to run this on a shared device."

This approach is similar to shipping physical machines. Or at least maintaining odd legacy software on a dedicated machine.

I think a rather pessimistic view is that proper packaging switched to Docker or single-purpose machines, but an optimistic one is that those are the unnecessary VMs and larger single-purpose machines that were replaced by Docker and RPi. Maybe there is a little of both going on.


>AFAICT, people did that frequently on regular systems as well, just to not bother with permissions.

This was an easy way to tell if you were dealing with a Muppet. If you saw this you would know that the software was going to be a problem.


Unpopular Opinion: Linux got it wrong and Windows got it right. (Or at least closer to right.) Programs shouldn't use centralized dependencies. Programs should ship ALL of their dependencies. Running a program should be as simple as downloading a zip, extracting, and clicking/typing run.

Docker exists because building and running software is so outrageously complex that it requires a full system image. And it turns out Docker didn't actually solve it after all!


It's much easier to ship more than one program, with all their dependencies packaged, because of lower cost of maintenance per program/library. It is also easier to automate downloading and unpacking than do that manually for every package. If you want to try this idea, just download a Linux distribution.


Doesn't that fail the minute one of your programs decides it needs a different package version, or one that doesn't exist at all?

It looks more efficient and easier at first, but ultimately it becomes less and less maintainable, and the "dumb", inefficient approach of shipping all the things at once becomes better.

It's like in real life: the more you are dependent on different people/incomes, the more your life becomes complicated and unlivable. Just ask someone who needs multiple jobs to survive. But we went ahead and implemented just that into software. Kinda madness...


There are a lot of use cases for Docker. I think this article is complaining about a few specific things in more complex applications that probably shouldn't be deployed that way, because you're not just exposing port 80; it's a whole tool deployed to end users, like an appliance. But that's not my primary use case for Docker.

For me, the best thing about Docker is that it's brought the average developer experience from "oh, I think I have an install.sh for that around somewhere" to mostly repeatable builds that are mostly self documenting. Any time a tool is self documenting it's a win. If you want the cake, you have to write down the recipe. That's huge. It's forcing lazy devs (which we all are) to just write it down. At this point the amount of "weird bearded guy tribal knowledge" that is now documented in a Dockerfile somewhere is a treasure trove.

Things still break all the time for a million dumb reasons but as a least common denominator it's a great place to start. It's not a solution for everything and it sounds like that's what this article is about. Docker+Compose is not great for everything, so don't use it for those situations. But it's so much better than what was before.


Without reading the article, I would also agree that Docker has made things better for me rather than worse. If someone else spent the effort to create a Dockerfile for their app, it will reduce the amount of issues I have trying to deploy it greatly. At least at that point they have figured out the majority of dependencies required to run their app, then I only have to troubleshoot the details rather than starting from scratch for whatever server distribution that I'm running it on.


> mostly repeatable builds that are mostly self documenting

"Mostly" is going a bit far there.

If you want those things you should use nix to build your docker images, but you're going to have to want them pretty badly.


I distribute server-side software and it was a pain to provide the infra requirements. At a higher level, users easily miss what is yours and what is another tech's; they just don't care to tell the difference. It's not a matter of documenting, it's just not their concern.

That drives many issues, and it snowballs as soon as inexperienced users start to circulate bad practices. In an effort to help other users they often spread more damage.

With Docker I was able to take charge of the infra, which erased all the uncertainties in my next layer, but it spawned the uncomfortable need of learning Docker to use my stuff, which users were very reluctant about.

The best distribution method is to pack a binary release. Not only is the package lightweight, it doesn't need any fancy instructions. You can keep Docker for your internal use; don't ship it to end users.


Having worked at Red Hat and worked on many Docker / Kubernetes systems, I agree with some parts of the article, but my view is that the world is going through a transition phase right now, moving to containerised systems.

Take, for example, running something like 3Scale (an API gateway) in Docker or Kubernetes. It can be a nightmare to configure and run 3Scale using containers, with the multiple memory limits and other container-specific issues. Far easier to get 3Scale running without containers.

So many software systems were not designed in the Docker era; going forward, many container applications will be designed to be easier to configure and use in the container world, thanks to a "Container/Docker native" mindset when designing the system in the first place.


99.9% of the problems you spoke to, which are very real, could be solved if the people building the software would just understand one thing: a container is not a mini VM. It is not in any way, shape, or form a virtual machine. If what you need is a lightweight virtual machine, build that. Do not build a container because it's the latest and greatest buzzword. But instead I see large monolithic applications shoved into a container, and then I hear a multitude of complaints about performance issues etc. You may be able to drive a nail with a screwdriver, but it's not a good idea.


That only applies to Docker.

LXC, for example, designed containers to behave like a VM.
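
A rough sketch with the classic lxc-* tools (the distro/release/arch values are just examples):

  lxc-create -n demo -t download -- --dist ubuntu --release jammy --arch amd64   # full-system rootfs
  lxc-start -n demo    # boots its own init, much like a lightweight VM
  lxc-attach -n demo   # get a shell inside it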


In the old days, the age-old cry of the beleaguered developer was “My code isn’t buggy, it works on my machine!”, to which the response was “We’re not shipping your machine to the customer!”. Well, science marches on, and we’ve invented a way to do exactly that. Instead of, you know, writing actually robust and simple-to-deploy software.

See also: <https://blog.brixit.nl/developers-are-lazy-thus-flatpak/>


> Instead of, you know, writing actually robust and simple-to-deploy software.

This is unfortunately much harder to do than building Docker images. If you're trying to ship a robust and simple-to-deploy app, you will fail at some point - you just haven't seen a system where that happens yet. You can't be robust against unknown unknowns, and what a system will look like in a couple of years (the next LTS) can be very surprising.


Hacking on some code and making random changes until it works (”It works! Ship it!”) is much easier (or at least feels easier) than designing the code carefully and using tests to verify the correctness of our code. Yet for good reasons we do it the harder way. For mostly the same reasons, I’m very wary of a system image of some pre-installed software instead of packaged software which is meant to run in a variety of environments, and to be easily configured to do so.

Note also that a system which is robust and adaptive to different environments today is also robust to different future environments. And nobody can escape the march of time. Everybody needs updates. TLS updates. Time zone updates. Security fixes. I’d rather have a system designed with robustness against differences as a primary concern than a system where it’s assumed that it will run in an unchanging static universe.


but it is much easier to test something in a container, and even to update everything in a container "fearlessly".

simply considering that we had untouchable, unreproducible and totally undocumented servers running for years in basements/cabinets as a meme (classic anecdote, common experience) can allow us to infer that laziness is not inherent to containerization


> but it is much easier to test something in a container, and even to update everything in a container "fearlessly".

Test, maybe. Develop and modify, not so much.

> simply considering that we had untouchable, unreproducible and totally undocumented servers running for years in basements/cabinets as a meme (classic anecdote, common experience) can allow us to infer that laziness is not inherent to containerization

Yes, but this is a cautionary tale, not an example to be followed.


Has any great design been invented that isn't tied to hardcoded paths preventing multiple versions of the same library and would allow easy linking to any version? Nix? Anything else?


Yes, Nix. And it works great.
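
To make the "works great" part concrete, a quick sketch (python311/python312 are current nixpkgs attribute names and may drift over time): two versions coexist because each lives under its own hash-addressed store path.

  nix-shell -p python311 --run 'python --version'   # one version...
  nix-shell -p python312 --run 'python --version'   # ...and another, no conflict
  ls /nix/store | grep -i python                    # each sits in its own /nix/store/<hash>- path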


I can't believe what I am reading. It took you more time to set up nextcloud with docker compose? What? For my hobby stuff, I can usually try something out really quickly by giving a compose a quick read and pasting it into portainer for a test.

I have my gripes with Docker, especially in my professional work: I am not a fan of how much it obscures images, nor of its shibboleth approach to the CLI-daemon interface (it should be a `dockerctl` command in my mind), but I really can't believe that you would want to set up multi-process platforms manually. When you want to connect to a database, and have it be ephemeral? I have rooted around in enough application servers to understand that docker-compose and Dockerfiles are a very sane approach.
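
For reference, the kind of compose file I mean is usually on the order of this sketch (images, ports and credentials are placeholders, not a vetted deployment):

  cat > docker-compose.yml <<'EOF'
  services:
    app:
      image: nextcloud:latest
      ports:
        - "8080:80"    # web UI on the host
      depends_on:
        - db
    db:
      image: postgres:16
      environment:
        POSTGRES_PASSWORD: change-me
      volumes:
        - db-data:/var/lib/postgresql/data
  volumes:
    db-data:
  EOF
  docker compose up -d     # the whole stack, database included
  docker compose down -v   # ...and throw it all away after the test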

The real issue I am seeing in the comments is that people would prefer a maintained package for the software; they would rather use `apt install` or `dnf install` than mess with containers. That has been discussed to death, and we get the same answer every time: yes, that would be a nice world to live in, but it is a fantasy to imagine anyone doing that much maintenance work for packages.


The curse of Docker is that it allows devs to be lazy: rather than making sure their configuration and deployment are straightforward, they get to keep all their mess working (longer).

Of course, if you have a really complex setup, Docker is invaluable. But if all you are using it for is to make the deprecation warnings go away on your single-executable app, that's just abuse of the tool for the wrong reasons.


I'm a developer with pretty limited "serious" DevOps experience. I can do tweaks here and there, create some simple cloud pipelines, etc. I'm definitely not fit to set up any enterprise-grade DevOps or manage a server.

I run a home server just for fun, with quite a lot of applications.

Docker and images from linuxserver.io are the best thing that happened to me in this realm. I can try out new applications very quickly, I can reinstall the OS on my server and get all my things up and running in no time. Linuxserver.io images have a standardised configuration which is great. There is no way in hell I could manage to set up this many applications without Docker, set up SSL for all of this, have fancy 2FA and SSO and have it so quickly redeployable. Guess I'm the target audience for this, it has been a lot of fun tinkering in a low-friction environment that Docker has provided me with.


I sympathize with pretty much everything in this article, in particular the configuration/filesystem pains.

I can't remember the last time I tried to get a file or folder into or out of a container without running into some sort of issue, usually involving permissions, and I feel like there really should be a better solution for this.
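
For what it's worth, the workarounds I keep falling back on look roughly like this ("web", the paths and the alpine image are placeholders for whatever you're actually dealing with):

  docker cp web:/etc/nginx/nginx.conf ./nginx.conf   # copy a file out of a running container
  docker cp ./nginx.conf web:/etc/nginx/nginx.conf   # ...and push it back in
  # Run as your own UID/GID so files written to the bind mount keep your ownership
  docker run --rm -it --user "$(id -u):$(id -g)" -v "$PWD/data:/data" alpine:3 sh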


I do DevOps and have never once used Compose in my entire career. Single containers on my machine (in userspace with Podman) and Kubernetes on servers. 99% of the time there's an official Helm chart that installs any dependencies and has sane defaults, which you just tweak to your needs. This guy is doing it wrong.
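
For anyone who hasn't seen that workflow, it's roughly this (bitnami/redis is just a familiar example chart, not an endorsement, and the value name comes from that chart):

  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm repo update
  # Install with the chart's defaults, overriding only what you need
  helm install my-redis bitnami/redis --set auth.password=change-me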


Great read. If I had to quibble, it would be with

> Even if 90-day ephemeral TLS certificates and a general atmosphere of laziness have deteriorated our discipline in this regard, private key material should be closely guarded. It should be stored in only one place and accessible to only one principal. You don't even have to get into these types of lofty security concerns, though. TLS is also sort of complicated to configure.

Any time there is complexity and only the One True Priest can manage it, there will be tears.

There had better be a break glass solution backed by Righteous Documentation (a hypothetical substance I heard about this one time) if we are boxed in to a One True Priest situation.


piggybacking on this, due to the ease of getting a cert for a subdomain, basically one cert per app ... just have one cert per "compose stack"

and it's perfectly acceptable to run them with rootless docker-in-docker or on separate VMs to get security. the one true sacred nginx in front of everything is nice, but it also has access to everything.

of course, the lack of public IPv4 addresses in many homelab/selfhosted/hobbyist situations is the true forcing factor. (and getting a wildcard cert is very easy with certbot nowadays, so I understand the lure.)
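
for reference, the wildcard route is roughly this (example.com is a placeholder; wildcards need a DNS-01 challenge, shown here with certbot's manual plugin, though DNS-provider plugins can automate it):

  # Prove control of the DNS zone, get one cert covering the apex and every subdomain
  certbot certonly --manual --preferred-challenges dns -d 'example.com' -d '*.example.com'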


His points about networking and the file system are spot on. It still blows my mind that the default user in a container is root and that dealing with the UID issue is such a pain. The only solutions I have seen can only be described as hacks.
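
The usual hack amounts to something like this sketch (the UID 1000 is an assumption about the host user, and the image/user names are made up):

  cat > Dockerfile <<'EOF'
  FROM debian:bookworm-slim
  # Bake in a non-root user; 1000 is assumed to match the host user's UID
  RUN useradd --uid 1000 --create-home appuser
  USER appuser
  CMD ["sleep", "infinity"]
  EOF
  docker build -t nonroot-example .   # containers from this image no longer run as root by default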


Has any great design been invented that isn't tied to hardcoded paths preventing multiple versions of the same library etc and would allow easy linking to any version? Nix? Anything else?


I don't know if this is normal or if anyone else does it but I usually either download binaries or compile from source and move the executable to /usr/local/bin/ and create a symlink. Lets me easily switch between versions. I avoid using a package manager for anything where I want control over the version and installation.

- curl -fSLJO $RELEASE

- tar xvf $DOWNLOAD.tar.gz && cd $DOWNLOAD

- make

- mv $EXECUTABLE /usr/local/bin/$EXECUTABLE-$VERSION

- ln -s /usr/local/bin/$EXECUTABLE-$VERSION /usr/local/bin/$EXECUTABLE

- # chmod 750, chown root:$appuser, etc

Works great for everything I've tried thus far. Redis, HAProxy, Prometheus exporters, and many more.


Yes, these "great designs" are invented at a constant rate. I have seen about 30 of them, or about one per year on average. For example, Fedora invented and then abandoned its modular design recently.


I think a lot of your comments here are very true. I don’t think Docker made my world worse and I certainly don’t think it cost me more time, but I also sort of agree with the author. I came into TypeScript late, and to “tame” the wildness of the ecosystem I set up a lot of opinionated systems for our developers. I didn’t dictate them, it was a collaborative process where we worked long and hard to agree on how we would utilise the Node ecosystem as well as the 10ish rules that we’ve changed as opposed to the “global” strict eslint rules. Rules which have themselves been born out of a decade of different rulesets like the Airbnb ones most people in the JS community will have heard of. My approach to containers has been similar.

We don’t use docker-compose and never have, as an example; we went with Terraform, though today we’re using Bicep, and we make heavy use of Dapr sidecars. Which makes working with infrastructure sort of easy. Easy to lock down and opinionate, at least. You can get most templates directly from Azure’s GitHub, but you can obviously also build your own, and then it’s simply a matter of having some decent template projects so that your developers can be up and running in a few minutes whenever they need to start a new project. Obviously there is a cost when you’re moving legacy projects into this sort of infrastructure, but it’s not like it’s really that resource consuming if it was already containerised, and it’s not too bad even if it wasn’t.

Now, that’s how we do it. It’s also how I think everyone should do it, but it’s not how you “have” to do it. In many ways, you’re as free to utilize the container ecosystems as freely as you can with the Node ecosystem. Well maybe not as free as that, but still free enough to create a lot of horror stories. Which is exactly what has happened with containers in many organisations. As such I think the author has a valid point. I’m not sure where we would go without the freedom though. Our exact setup works for us, you might not agree with our choices and neither of us would be wrong. So while some opinionated processes might be compatible with container infrastructure, others will need to remain “free”. I’m certainly with a lot of you in that it was worse before containers. I’ve also done work with organisations where container deployments worked so poorly that it was clearly not better, however, and I suspect those pipelines are far more common than given credit in this comment section.


There's no merit to this complaint that Docker complicates configuration. Actually, it simplifies things a great deal.

The author should consider how configuration sits at different locations, and is sometimes even split up differently across distributions for the same software package (Apache, PostgreSQL, etc.), while a Docker image has it all at a well-known location within the image, and it doesn't matter where you use that image.
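
One concrete instance of that (this follows the stock postgres image's documented convention; POSTGRES_PASSWORD is required by that image, and the values are placeholders):

  docker run -d \
    -e POSTGRES_PASSWORD=change-me \
    -v "$PWD/postgresql.conf:/etc/postgresql/postgresql.conf:ro" \
    postgres:16 -c config_file=/etc/postgresql/postgresql.conf
  # The override lands at the same path no matter which host or distro runs the image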


One problem he doesn't mention is with the libraries / dependencies themselves. They should offer stable APIs and ABIs so that you don't have to have software that needs to depend on specific versions in the first place. Poor software development practices in certain ecosystems encourage this churn.


The author needs to look into confd. I wrote about it in 2017: https://andrewwippler.com/2017/11/28/reusable-containers-wit...


It's funny how WebAssembly can help overcome most of the issues mentioned in the blog post (packaging, configuration, portability) if addressed properly.

That's the main reason Wasmer [1] was created :)

[1] https://wasmer.io


It seems to me that many of the points in the article are not really problems with Docker, but problems with Linux.

I.e. the way Linux handles multiple versions of libraries and of course the fragmentation hell that is Linux distributions.

Docker then is just a bandaid for these problems.


Linux (the kernel) can run any libraries in any combination; you are talking about distributions. Support for multiple versions of a library is not a goal for a typical distribution. Typically, this is necessary for commercial software vendors, which have money to throw at the problem. If commercial software vendors need this so badly, they can sign a support contract with a distributor, so a dedicated team of maintainers will take care of this library problem for them.

In the open-source world, it's an order of magnitude easier to patch the source than to introduce two versions of the same library. In rare cases, a -compat package is created, which is then abandoned as soon as possible.


Did the author say anything actually negative about Docker? All I saw was "Docker can be suboptimal" but nothing to suggest that it has been anything other than at least a partial improvement?


>Consider Windows: the operating system's most alarming defect in the eyes of many "Linux people" is its lack of package management

Windows has WinGet.


WinGet isn't really a package manager.

A package manager's role is, well, managing packages.

WinGet is just a repo with installers for the listed programs, but dependency management and the installation itself are handled by the installers, not WinGet. (I don't know whether the ms-store package format is an exception to that, but still.)


>WinGet isn't really a package manager.

Microsoft literally calls it, along with some complementary services, the "Windows Package Manager".

>A package manager's role is, well, managing packages.

Which is what the name refers to: what it is managing.

>but dependency management and the installation itself are handled by the installers

The package manifest allows for dependencies on other packages.


It's great that Microsoft calls it that, but that doesn't really matter.

WinGet is just as much a package manager as the "add or uninstall apps" tab in Windows settings is. It's just an interface; the actual act of installing is done by a different program.


Docker containers and images have saved me a lot of shipping pain on the web side of things.

I haven't come across them being used as a package manager or distribution mechanism yet.


We are probably going to keep going around the packaging wheel until we come up with one that does first-class sandboxing. Any bets on WebAssembly?


> distributing an application only as a Docker image is often evidence of a relatively immature project

Or, you know, a contractual requirement. And, as far as those go, I actually kind of like it: you ship a well-tested container, and the only thing the infra dept at the customer site has to configure-via-the-environment is the URL for (or path to, via any kind of supported file system mounted into the container, which most IT shops can still just about manage) the instance configuration file.

This reduces most initial troubleshooting to "well, what does the instance log say about retrieving and parsing the configuration file?", which, trust me, is way preferable over what you get with most other deployment methods I've been involved with...
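
In shell terms the pattern being described is roughly this (the image name and the CONFIG_PATH variable are hypothetical stand-ins for whatever the vendor actually documents):

  docker run -d \
    -v /srv/vendor/instance.yml:/config/instance.yml:ro \
    -e CONFIG_PATH=/config/instance.yml \
    vendor/appliance:1.2.3
  # One mount and one env var; everything else ships inside the well-tested image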


Clearly we need another layer between the host and the docker-compose spaghetti. Some lightweight (true, no docker-in-docker bs) VM that exists to lock up & let whatever is in those yamls kick and scream around.

I wish I knew if I am being sarcastic here...


I hate Docker (honestly, I can't believe the macOS version requires a Linux VM :rolleyes:), but it basically is the best way to run server software. The only major alternative is Nix, which might actually work out.


What about AWX, then? It moved from shipping as a simple docker compose setup to a Kubernetes operator! I'd much rather it hadn't.


Obligatory Hitler uses Docker post: https://www.youtube.com/watch?v=PivpCKEiQOQ

Lol @ "I'm moving everyone to Windows! Don't cry, you can run bash on Windows 10 now."


While the article makes some good valid points about docker's shortcomings, it is full of strong assertions made entirely from the author's blind spots.

Comes off as a boomer rant more than anything.


How’s this for a boomer rant. I never used Docker because in my day we didn’t use virtual machines. We FTP’ed files to the PHP server, on port 21. Somewhere along the way all these Gen-Z kids started writing GraphQL APIs in TypeScript transpiled Cofee’node and pushing the Docker VMs to the Nomad Kubernetes Azure. I didn’t read the article but the curse of Docker is that kids think they’re better than me. After all I’ve accomplished! I had to buy a 165hz monitor to keep up with them in Counter Strike: Go, and my cholesterol is at an all time max.


What those kids probably did, which you few admins couldn't do while using "your" FTP and other antiquated tools, was to reduce the friction for anyone who wanted to maintain a server.

Embrace new technology or be left behind. I can't think of anything in the past decade that has made more strides in software distribution than docker has.


Administering a server full of random docker-compose.yml's downloaded from the Internet is a nightmare.

We got rid of Docker on all our production servers. Docker is technical debt in a box.


My experience exactly. I honestly don't know how someone can think that Docker is a solution to anything. At best it's a shortcut for the lazy and incompetent, but it doesn't lead to a bright future. Of course, the people setting up this kind of crap are never really responsible for actually maintaining the thing properly, so they don't have to care. Oh, and it creates a lot of job positions that are kind of interchangeable regardless of actual competency. The whole thing is a poor attempt at Fordism for software development.


The curse of that website: it made me dizzy just scrolling.

Containers are amazing, they're bigger than docker, end of story.



