You already got a few good answers, but I'll echo them: you can do reproducible builds in containers, and nothing's stopping you from using nix inside containers. But you're at the mercy of all the different package managers that people will end up using (apt, npm, pip, make, curl, etc). So your system is only as good as the worst one.
I inherited a dozen or so Docker containers a while back that I tried to maintain. Literally none of them would build; they all required going down the rabbit hole of troubleshooting some build error in some transitive dependency. So most of them never got updated, and the problem got worse over time until they were abandoned.
Nix is different because it was a radically ambitious idea: penetrate deep into how all software is built, and fix issues however many layers down the stack it takes. Its authors set out to boil the ocean, and somehow succeeded. Containers give up, and paper over the problems by adding another layer of complexity on top. Nix is also complex, but it solves a much larger problem, and it goes much deeper to address the root causes of issues.
Yeah, the environment is bit-for-bit identical in dev and prod. Any difference is an opportunity for bugs.
OK, there's one concession: there's an env var that indicates whether it's a dev or a prod environment. We try to use it sparingly. It's useful for things like not reporting exceptions that originate in a dev environment.
Basically, there's a default.nix file in the repo, and you run nix-shell and it builds and launches you into the environment. We don't depend on anything outside of the environment. There's also a dev.nix and a prod.nix, with that single env var different. There's nothing you can't run and test natively, including databases.
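A minimal sketch of what a default.nix like that might look like (package names and the env var name are illustrative, not from the actual repo; mkShell is the standard nixpkgs helper for shell environments):

```nix
# Hypothetical sketch only; a real setup would also pin nixpkgs to an
# exact revision so the environment is byte-for-byte reproducible.
let
  pkgs = import <nixpkgs> { };
in
pkgs.mkShell {
  buildInputs = [
    pkgs.python3
    pkgs.postgresql  # databases run inside the environment too, nothing external
  ];
  # the single env var that differs between dev.nix and prod.nix
  APP_ENV = "dev";
}
```

Running `nix-shell` in the repo root builds everything and drops you into that environment.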
Oh, it also works on macOS, but that's a different environment, because some dependencies don't make sense on macOS, so some stuff is missing.
No, we have address space randomization and hash table randomization since those happen at runtime. /dev/random works as you'd expect.
The immutability is just at build time. So Chrome and Firefox aren't able to seed a unique ID into their binaries like you might be accustomed to. Funny story: we had a Python dependency that would try to update itself when you imported it. I noticed because it would raise an exception when it was on a read-only mount.
We use Python. If we were writing in a compiled language, we'd use the same compiler toolchain as everyone else, but with the versions of all of our dependencies pinned exactly by Nix. We have some C extensions and compile TypeScript, and deploy those build artifacts. In the case of JavaScript, our node_modules is built by Nix, and our own code is built by `webpack --watch` in development.
I don't know Nix and can't comment on that, but in my experience, when I've inherited containers that couldn't build, it was usually because the image was orphaned from its parent Dockerfile (i.e. someone wrote a Dockerfile, pushed an image built from it, but never committed the Dockerfile anywhere, so now the image is orphaned and unreproducible), or because the container had been mutated after being brought up, with `docker exec` or similar.
Assuming that the container's Dockerfile is persisted somewhere in source control, that the base image used by that Dockerfile is tagged with a version whose upstream hasn't changed, and that the container isn't modified from the image that Dockerfile produced, you get extremely reproducible builds with extremely explicit dependencies.
That said, I definitely see the faults in all of this (the base image version is mutable unless you pin by a sha256 digest, which few Dockerfiles actually do; containers can be mutated after startup; containers running as root is still a huge problem; etc.), but this is definitely a step up from running apps in VMs. Now that I'm typing this out, I'm surprised that buildpacks or Chef's Habitat didn't take off; they solve a lot of these problems while providing similar reproducibility and isolation guarantees.
So as a quick example from my past experiences, using an Ubuntu base image is fraught. If you don't pin to a specific version (e.g. pinning to ubuntu:20.04 instead of ubuntu:focal-20220316), then you're already playing with a build that isn't reproducible (since the image you get from the ubuntu:20.04 tag is going to change). If you do pin, you have a different problem: your apt database, 6 months from now, will be out of date and lots of the packages in it will no longer exist as that specific version. The solution is "easy": run an "apt update" early on in your Dockerfile... except that goes out onto the Internet and again becomes non-deterministic.
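For what it's worth, the base-image side of this can be tightened by pinning to a content digest rather than a tag; a digest can't be silently re-pointed upstream (the digest below is a placeholder, not a real one):

```dockerfile
# Tag pinning: the tag can be re-pointed upstream, so this can still drift.
FROM ubuntu:focal-20220316

# Digest pinning: an immutable reference to the exact image contents.
# (placeholder digest; get the real one from `docker pull` or `docker inspect`)
FROM ubuntu@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

That fixes the base image, but it does nothing for the apt-database aging problem described above.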
To make it much more reproducible, you need to do it in two stages: first, pin to a specific upstream version, install all of the packages you want, and tag the output image. Then you pin to that tag and use it to prep your app. That's... probably... going to be pretty repeatable. The only downside is that if there is, say, a relevant security update released by Ubuntu, you've now got to rebuild both your custom base and your app, and hope that everything still works.
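Something like this is what I mean by two stages (image names, tags, and package lists are made up for illustration):

```dockerfile
# Stage one: Dockerfile.base. Build once, then tag and push the result, e.g.
#   docker build -f Dockerfile.base -t myorg/ubuntu-base:2022-03-16 .
FROM ubuntu:focal-20220316
RUN apt-get update && apt-get install -y --no-install-recommends \
      libcurl4 ca-certificates \
 && rm -rf /var/lib/apt/lists/*
```

```dockerfile
# Stage two: the app's Dockerfile builds FROM the frozen base, so no
# network-dependent apt step runs at app build time.
FROM myorg/ubuntu-base:2022-03-16
COPY . /app
CMD ["/app/run"]
```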
Yup, that can, indeed, be a problem. Relying on apt (for example) is generally a bad idea, which is why you'd want to vendor everything your app needs if a specific version of, say, libcurl is something your app requires.
This, along with the supply chain issues you mentioned, is why some maintainers are moving toward distroless base images instead, though these can be challenging to debug when things go wrong, because they're extremely minimal, to the point of not having shells.
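A sketch of the distroless pattern, using images from Google's distroless project (the app path and file names are made up):

```dockerfile
# The build stage has a full userland; the final image has neither apt nor a shell.
FROM python:3.10-slim AS build
COPY app.py /app/app.py

FROM gcr.io/distroless/python3-debian11
COPY --from=build /app /app
ENTRYPOINT ["/app/app.py"]
```

This is where the debugging pain comes in: `docker exec <container> sh` fails against the final image, because there's no shell to exec.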