Hacker News

This. SO MUCH THIS. We use Docker for our CI, which is mostly okay. But we devs don't have root on our workstations, so we can't just load the Docker image and test stuff in the CI env. Enter podman: I've written a small Perl wrapper around it, and now everyone can just call `ci-chroot` to drop into a CI-equivalent shell. They can get virtual root (default) or their normal user (--user), and the script takes care of everything (binding some useful folders, fetching/updating our internal image, starting/stopping the actual container, cleaning up, ...). The only change necessary to our [Manjaro] environment was adding podman to the package list and a small bash script that generates `/etc/sub{u,g}id` from our LDAP.
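For illustration only, a rough sketch of what such a wrapper could look like. The image name, the mounts, and the flag handling are all hypothetical placeholders; the actual script is in Perl and does more:

```shell
#!/bin/sh
# Hypothetical sketch of a "ci-chroot"-style podman wrapper.
# Registry, image, and mount points are made-up examples.

IMAGE="registry.example.com/ci/base:latest"   # assumed internal image

# Assemble podman arguments for the requested mode:
#   "root" (default) -> virtual root via the user namespace
#   "user"           -> keep the caller's uid inside the container
ci_chroot_args() {
    mode="${1:-root}"
    args="run --rm -it -v $HOME/src:/src"
    [ "$mode" = "user" ] && args="$args --userns=keep-id"
    printf '%s\n' "$args"
}

# The actual invocation would then be something like:
#   podman pull "$IMAGE"                          # fetch/update the image
#   podman $(ci_chroot_args "$1") "$IMAGE" /bin/bash
```

The `--userns=keep-id` flag is what distinguishes the "normal user" mode: without it, rootless podman maps the caller to root inside the container (virtual root); with it, the caller keeps their own uid.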


I'm intrigued that you must run Manjaro and kind of want to work with you because of it. :)

Manjaro is great. Best Linux experience I've had so far. It's good that people recognize it.


It's just great. But I also like Arch@home.


I tried Arch but they are a bit too trigger-happy with the rolling releases and this has caused problems for me in the past.

I recognize it's a minor pet peeve and would not try to convince anyone, though. It's just that I found Manjaro's release policy more to my liking (that, and my unwillingness to keep fixing a broken installation, which, granted, was easy every time yet took time nonetheless).


> we devs don't have root on our workstations

What's the justification for this... while at the same time allowing the use of containers?


With root it's easy to mess up the system badly ("I'll just fix that small error") and/or let all systems slowly diverge. And if something fails on my workstation, it will fail on many others as well and needs a coordinated solution. Also, giving every dev root is a security liability, especially when there is absolutely no work reason that requires us to become root. So only a few "core IT" people have root access.

I don't see how that contradicts the use of containers? When we only used docker for the CI we didn't have access to it, because escalating to root from within docker is still pretty easy. But with podman the containers run with our privileges (and thanks to gid/uid mapping, I can run our installer "as root" in the container and see if it sets up everything correctly, while in reality the process runs under my uid).
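That uid/gid mapping relies on subordinate-uid ranges in `/etc/sub{u,g}id`. A minimal sketch of a generator in the spirit of the one mentioned upthread, with a plain user/uid pair standing in for the LDAP lookup and arbitrary offset/range choices:

```shell
# Hedged sketch: emit one /etc/subuid-style entry per user.
# The poster's version reads LDAP; here the user name and uid are
# passed in directly. The 100000 base and 65536 range size are
# common conventions but arbitrary choices in this sketch.
subuid_line() {
    user="$1"
    uid="$2"
    # Give each user a disjoint 65536-wide range so mappings never overlap.
    start=$((100000 + (uid - 1000) * 65536))
    printf '%s:%d:65536\n' "$user" "$start"
}

# Example:
#   subuid_line alice 1000   ->  alice:100000:65536
#   subuid_line bob   1001   ->  bob:165536:65536
```

With such a range in place, rootless podman can map "root inside the container" onto an unprivileged subordinate uid of the calling user.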

Disclaimer: This is in a professional environment. At home I despise containers and prefer to have all my services installed and easily updated by the system's package manager.


Developing anything without root on your own machine these days must be an absolute nightmare. If I may ask, what stack are you using? I can imagine Java, but that's about it. Go, Python, Node, etc.: for all of them I've needed root access at some point, to test some packages for instance.

I would guess developers must be using a lot of "workarounds" without management's knowledge.


For Python, you can get most packages as non-root via `pip install --user $package`, but you are right: you might need root if these packages depend on system libraries (which they often wrap).
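For illustration, the non-root install and where it lands ("requests" is just an example package name, not something from the thread):

```shell
# --user installs into the per-user site directory under ~/.local
# instead of the system site-packages, so no root is required:
#
#   pip install --user requests     # "requests" is only an example
#
# Where that per-user site directory actually is:
python3 -m site --user-site
```

Anything installed this way is visible only to the installing user, which fits a no-root-on-workstations policy.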

However, root arguably should not be a requirement. AFAIK Nix allows you to install stuff as a normal user (and so does dnf on Fedora, but it depends on the package).


C++. And it's no problem :) I don't need privileged ports, and I don't need write access to the OS.

Only thing that annoyed me was when I did some work on an internal web tool (php+js) and had no php-fpm & httpd. But by now I've got a container for that.


Well, I suppose if your dependencies are very locked down it might work in C++.

I would guess, though, that having to change the prefix and default locations in all tarballs you download must be quite a pain. You can't install DEBs or any other tools either?


We have all deps in gits. A build tool takes care of everything: I get all necessary source gits checked out, configured (e.g. with a changed prefix) and built. If I change something (which is my job), rebuilding is just a `make` away, and it rebuilds only the changed parts. Pretty frictionless and painless.
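A minimal sketch of the prefix dance such a tool automates, assuming autoconf/CMake-style deps and an arbitrary example location under `$HOME` (none of these paths come from the thread):

```shell
# Root-free install of a source dependency: everything goes under a
# prefix in $HOME, so no system directory is ever touched.
PREFIX="$HOME/.local/ci-deps"        # arbitrary example location

# Typical per-dep build steps (shown as comments, dep-specific):
#   ./configure --prefix="$PREFIX" && make && make install
#   cmake -DCMAKE_INSTALL_PREFIX="$PREFIX" .. && make install

# The remaining step is pointing later builds at that prefix:
export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH"
export LD_LIBRARY_PATH="$PREFIX/lib:$LD_LIBRARY_PATH"
```

Once a build tool generates these steps for every dep, "change the prefix in all tarballs" stops being a manual chore.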

We use some external stuff like sqlite, Qt, ... but that's all versioned in our gits as well (-> easily reproducible builds). Since we sell a commercial product we can't just add random deps anyway. Plus, I think there is very little of our code that would benefit from being replaced by an external lib.

Relevant tools are on a global package list. We can get stuff added there on short notice, and with few questions asked. E.g. when I needed some lib for an FPGA dev kit, that was a matter of minutes.

Uaaah, Firefox on Android is messing up the input again (can only append, not edit). I guess I should just hit that "reply" button then.


That actually sounds pretty cool to work with, then, if you truly have support from the company to work like that. Sadly, that's not how it works in most places.


Why does it matter what happens to your machine?

The big corp I work for has no issue with any of this so I genuinely don’t understand the motivation for this restriction.

Your machine shouldn’t have anything important on it, and if it does, I’m not sure that root/non-root will protect you from that.


It's only because your environment is fragile, running stuff on bare metal instead of in a virtual environment.

Why not just give everyone an isolated space on a server with something like systemd-nspawn and let people do whatever they want, including using docker inside? If someone screws up, that's a lesson for them, and their environment can be quickly redeployed from a daily backup of the entire space or from a base image.


I gave your post some thought, but there is not too much I can really say. You throw around a conclusion/assertion ("it's fragile") that's just plain wrong, and then propose a perceived solution to that non-problem ("put people on sliced servers") even though our workloads are poorly suited for that AND it's mostly orthogonal to the stated "problem" to begin with.


In big companies there isn't always a good justification for a restriction ;-)


Either you lack the courage to voice a concern, in which case being strangled by such limitations is your problem, or there's actually a valid concern that you're not aware of.


Queue "you don't know me" ;) if I was strangled, I would voice concern very much. But as noted several times, everything is setup so that we never need root. The only thing that was an issue, was docker for about a year (we used chroot before that). That's why I had the sysadmin install podman (no discussion necessary) and why I build a wrapper for super easy usage (which is globally available now).

And please note that getting a CI-like env is usually not necessary. I only need that exact env to prebuild things that should run in the CI, e.g. target-specific GCCs. And we can't upgrade the CI because some customers pay good money to have our product run on ancient Linux versions (safety-critical industries; once something is certified, you use it a looooong time).




