He missed a big one: you have no way to stop Linux distributions from hacking up your software, and you'll suffer the consequences of whatever they do.
I hit this with procps. (a package with ps, top, vmstat, free, kill...) It was horrifically demotivating, and it helped end my run as the maintainer, which lasted from roughly 1997 to 2007. (the other big issue was real life intruding, with me joining a start-up and having 5 kids)
I had plans for command line option letters, carefully paying attention to compatibility with other UNIX-like systems, and then Red Hat would come along and patch their package to block my plans. They were changing the interface without even asking for a letter allocation. I was then sort of stuck. I could ignore them, but then my users would have all sorts of compatibility problems and Red Hat would likely keep patching in their own allocation. I could accept the allocation, letting Red Hat have control over my interface.
Red Hat sometimes added major bugs. I'd get lots of complaints in my email. These would be a mystery until I figured out that the user had a buggy Red Hat change.
Patches would often be hoarded by Linux distributions. I used to regularly download packages and open them up to look for new crazy patches. Sometimes I took the patches, sometimes I ignored the patches, and sometimes I wrote my own versions. What I could never do was reject patches. The upstream software maintainer has no ability to do that.
The backlog of unresolved troubles of this sort kept growing, making me really miserable. Eventually I just gave up on trying to put out a new release. That was painful, since I'd written ps itself and being the maintainer had become part of my identity. Letting go was not easy.
Maybe it had to happen at some point, since I now have more than twice as many kids, but I will be forever bitter about how Red Hat didn't give a damn about the maintainer relationship.
Hmm. That's painful indeed.
Sorry that you had such a dispiriting experience with the 'procps' package.
For what it's worth, allow me to share my experience of being at Red Hat (I joined after 2008, so I can only speak about that period onwards). One of the reasons that keep me at Red Hat is the iron-clad (with some sensible exceptions, e.g. security embargoes) principle of Upstream First.
I see every day (and the community can verify -- the source is out there) maintainers upholding that value. And several times over the years I've seen maintainers, including yours truly, vigorously reject requests for 'downstream-only' patches or other deviations from upstream. When there are occasional exceptions, they need extraordinary justifications; either that, or those downstream patches are irrelevant in the context of upstream.
I've learned enormously from observing inspiring maintainers at Red Hat (many of whom are also upstream maintainers) do the delicate tango of balancing the upstream and downstream hats.
So if it's any consolation, please know that for every aberration, there are thousands of other packages that cooperate peacefully with relevant upstreams.
I had reserved -M for security labels, to be compatible with Trusted IRIX, and I think this was clear in the source code at the time. I intended to avoid any additional library dependency if possible, because ps is a critical tool that must not break. If I couldn't manage without a weird SELinux library, then I'd dlopen() it only if needed.
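Roughly, the idea looks like this (an illustrative sketch, not actual procps code; getpidcon() is the real libselinux call for reading a process's security context, but the rest of the names here are my own):

    /* Illustrative sketch, not the actual procps code: load libselinux
     * only when the user asks for security labels, so a plain "ps"
     * never depends on the library being installed. */
    #include <dlfcn.h>
    #include <stddef.h>
    #include <sys/types.h>

    typedef int (*getpidcon_fn)(pid_t pid, char **context);

    static getpidcon_fn lookup_getpidcon(void)
    {
        void *h = dlopen("libselinux.so.1", RTLD_LAZY);
        if (!h)
            return NULL;   /* no SELinux library: just omit the label column */
        return (getpidcon_fn)dlsym(h, "getpidcon");
    }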
Red Hat swiped both -Z and Z, giving them the same meaning. For at least one of those (probably -Z, but this was a long time ago) my plan was to use it for compatibility with a different feature of another OS. There are only 52 possible command option letters, not counting weirdness like non-ASCII and punctuation, and most are already taken. Now 3 of them, almost 6% of the possible space, are redundantly dedicated to an obscure feature. An added annoyance was that -Z got wrongly thrown into ps's list of POSIX-standard options, which can affect parsing in subtle ways.
One day I discovered this as it was being shipped in RHEL.
A more recent, and amusing, issue is the storm of security bugs that recently hit procps. They actually predate my involvement with procps, likely going all the way back to the early 1990s. I eventually got notice. I responded on the Bugzilla, correcting some misunderstandings and pointing out better ways to fix the problems. I even do software security work these days, professionally, so I would be the ultimate expert on security bug fixes for procps. My helpfulness got me blocked from looking at the Bugzilla, and then Red Hat proceeded to ship slightly bone-headed patches for the security problems.

BTW, last I checked there were still DoS vulnerabilities because Red Hat ignored my advice. Turning the 32-bit value into a 64-bit value may prevent an integer wrap-around exploit, but that just means the system will swap until the OOM killer strikes. The value should have stayed 32-bit, with protection added to avoid even approaching such a large value. You probably don't need more than 17 bits. The stuff with escape expansion is also bad, in a different way. Instead of papering over the problem, the math should have been corrected.
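To make that concrete, here is a hedged sketch of the kind of fix I mean (SANE_LIMIT and alloc_field are hypothetical names, not anything Red Hat shipped):

    /* Hedged sketch, not the shipped patch: keep the field 32-bit and
     * refuse values that are nowhere near sane, instead of widening to
     * 64 bits and letting a huge allocation swap the box until the OOM
     * killer strikes. SANE_LIMIT is a hypothetical cap (~17 bits). */
    #include <stdint.h>
    #include <stdlib.h>

    #define SANE_LIMIT (1u << 17)

    static void *alloc_field(uint32_t count, uint32_t size)
    {
        if (count == 0 || count > SANE_LIMIT || size > SANE_LIMIT)
            return NULL;             /* reject outright, no wrap-around risk */
        return calloc(count, size);  /* calloc also checks count*size overflow */
    }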
Some of Red Hat's dual upstream/downstream maintainers are possessively trying to take over the upstream projects while contributing few changes of real importance.
Others are great human beings.
But on the whole, skepticism of Red Hat is healthy (this also applies to the NIH software that they push onto most Linux distributions).
That sounds really awful, and it's really disheartening to see the (ex)maintainer of such a core piece of Linux distributions being treated that way.
I work for a Linux distribution (SUSE) and I assure you that we don't all act this way. I've had to carry my fair share of downstream patches in openSUSE/SUSE packages, but I always make sure to submit the patches upstream in parallel. Quite a few people I know from Red Hat (and other distributions like Debian, Ubuntu, etc.) do the same. It's possible that times have changed since then, or that it depends on which package maintainers you are dealing with, but I hope it hasn't soured your opinion of all Linux distribution maintainers.
One thing that is a constant problem is that users of distributions keep submitting bugs to upstream bug-trackers. If there was one thing I wish I could change about user behaviour, it would be this. Aside from the fact that the bug might be in a downstream patch (resulting in needless spam of upstream), package maintainers are also usually better bug reporters than users because they are more familiar with the project and probably are already upstream contributors.
> If there was one thing I wish I could change about user behaviour, it would be this.
Speaking from a user perspective, I don't think that would be wise. If distributions just stopped feature patching, things would be a lot simpler.
I mean, if a feature is good, everyone should have it, not just the users of one distribution. So please submit the patch upstream, discuss it and wait for the next release. That way, there is no real problem when users submit to upstream bug-trackers. The only exception is (time-critical) security patching. But those patches should be removed again as soon as upstream solves the issue.
The scenario you are wishing for looks like an obvious solution to the issues you are having but will just make everything worse in the long run (slow and fragmented).
I am an upstream maintainer of quite a few projects as well as a downstream maintainer of quite a few packages, so I've seen this from both sides of the exchange.
Sometimes the user-facing issue isn't an actual patch, it's the default configuration (which might be distribution-specific) or any other host of non-code changes (default security policy and so on). These things should be reported to the distribution, but often get reported upstream which acts as spam.
And note that most downstream patches are backported bugfixes, not feature patches -- in my experience those are the exception.
> So please submit the patch upstream, discuss it and wait for the next release.
What if you're being paid for support and a customer needs a fix which you need to backport (it's not a CVE but it's a serious issue)? This is the problem we find ourselves in very often, and is why we push patches upstream but also patch our downstream packages so that the issue is resolved for our users. Some projects have release life-cycles that are ~6 months apart -- how do you explain to users that they need to wait 6 months in order for a fix which is already written and merged to be shipped to them? If we only ship it to those customers then it's not fair to the rest of our users nor the rest of the community (this is why SUSE has a Factory-first policy where all SLE changes must land in openSUSE first).
And finally (in rare cases) the upstream might not accept patches that are required in order for some distribution features to work because they fundamentally disagree with the feature. What are we supposed to do in those cases? Either way, someone will complain because the best solution (merge it upstream) is closed off.
While it would be ideal if downstream patching were unnecessary, that's not the world we live in. Again, I do submit patches upstream religiously -- but it's not as cut-and-dried as you might think.
How about using some of that revenue to pay the project maintainer to incorporate the patch upstream and make a release?
I've seen this very often in my personal experience: a company says exactly what you're saying now, "We have paying customers who demand certain patches, but the upstream project may be unable or unwilling to patch and release... so we go downstream-only." Alternatively, some downstream consumers simply "throw code over the wall" in the form of an upstream patch. However, those patches are sometimes duct-tape solutions which may not fit into the overall architecture/vision of the project maintainer(s). It's not fair to say "accept this or else...", where the 'or else' is a downstream deviation (which in turn sometimes forces the upstream's hand).
The ethical way to do this is to work with upstream, whether through direct compensation or more back-and-forth instead of code-over-the-wall.
I think it's incredibly unfair to assume that I'm not acting in good faith when I send patches upstream. I was contributing to the free software community long before I was getting paid to do so. I spend almost all of my working hours doing upstream maintainership work or writing patches. I have PRs that are several years old, and I am still working on getting them merged upstream. I'm also just a developer; I'm not in a position to start spending money on projects on behalf of my employer. I do happen to donate to plenty of upstream projects and foundations, but that's money out of my own pocket.
The issue is that there are some (rare) cases where upstream is completely unwilling to merge a patch for philosophical reasons.
If you want me to be blunt -- the example I'm thinking of is Docker (which doesn't have a cash-flow problem), which has refused outright on many occasions to merge patches that allow for build-time secrets. On SLE we need this because in order to use a SLE image on a SLE machine you need to "register" it, and the only way to sanely register it automatically is to bind-mount some files into all containers on-build as well as on-run (which you cannot do with upstream Docker today). Red Hat spent a long time trying to get a patch like this upstream, as did we.
To be clear, I don't mean you personally. I don't know you or anything about your work, so I'm speaking in a purely general sense about a position you were representing.
I don't think all upstream contributions are in bad faith. I'm just saying there tend to be competing priorities, which leads to some instances of hostility.
As for the Docker example, I don't know what the right answer is without digging deeper. My naive thought is to write a SLE/RHEL shim/plugin style component to allow functionality that's missing. This allows keeping the upstream vanilla, while not having to fork into something without the brand recognition. If that doesn't work, forking as `rhel-docker` or `sle-docker` doesn't seem that bad to me. Ubuntu does this with all the bcc-tools.
This is of course predicated on having tried all the other solutions first (paying an upstream developer, working with upstream in a good back-and-forth to incorporate a patch, etc.). In the end, if the project decides something is against their philosophical viewpoint, they're perfectly entitled to not accept patches. At that point, I don't think the best solution is to fork, patch downstream, and release as if it were the vanilla upstream.
All these problems are very easy to solve. Suse and Red Hat could stop trying to have it both ways: either you get the flexibility of a fork, or you get the brand recognition of the upstream project. But you want both - the upstream brand to get users, and the downstream flexibility so that you can “differentiate” and sell support.
This problem is entirely created by distributions and their “packagers know best” philosophy.
You say downstream patching is necessary. Go ahead and strip the upstream trademark from every package which you patch downstream, replace it with new Suse-specific trademarks, and let’s see what users prefer!
This isn't a SUSE or Red Hat caused issue; it's a user thing. Users want it both ways. Users want the flexibility to assume a project is the same as the upstream (but maybe feature-frozen at a specific version), while also wanting all the security holes and bugs to not exist. Users are paying to have it both ways, and whether or not it's feasible, the market for it exists, and SUSE/Red Hat are just trying to fill the gap.

If it were entirely the vendors, then we would see significantly more adoption of fast-moving OSes, but that's still not a very popular model for production servers, even in the age of the cloud, containers, etc.
Fork the project, prefix the command with your company name and tell them to use that command with the patch applied if they need it today. This solves all your problems.
I think you're over-estimating how many users are willing to change their workflow or scripts to accommodate things like this. They will switch away from your distribution to one which patches the actual package.
I also don't agree this would result in fewer upstream bug reports -- "suse-foobar" will still result in bug reports for "foobar" (I've seen some cases of this happening). You'd need to rename the project entirely so that users don't know what the upstream GitHub repo is, and that's even more anti-community than any other suggestion.
> and that's even more anti-community than any other suggestion.
Shipping a fork that is different from the version that is created by the original author is also very anti-community. At least make it clear that you ship something different from the program that the author maintains.
I disagree that all patches are somehow ethically wrong (bugfixes and security patches are obvious counterexamples). Not to mention that if the author felt otherwise, they wouldn't have released the code under a license that allowed you to modify and redistribute your modifications.
But making massive changes to a project that are incompatible with upstream is definitely not a good thing to do without reason.
> I disagree that all patches are somehow ethically wrong (bugfixes and security patches are obvious counterexamples). Not to mention that if the author felt otherwise, they wouldn't have released the code under a license that allowed you to modify and redistribute your modifications.
The fact that the author does not forbid it does not mean that he/she wants changes or even encourages them. It just means that the author believes that the downsides of completely disallowing changes are even worse.
There are free software licenses that require you to rename the project if you modify it.
But I digress. This whole discussion is about trade-offs -- if you cannot get the patch upstream but you need to ship it what is the next best thing. I would contend that patching it is better than patching and renaming because renaming doesn't help solve the problem (unless you are very radical and rename the project entirely) and makes things less convenient for users.
And note that distribution users are part of the community of people using the software.
The solution is to either 1) patch it and rename your fork completely, or 2) ship upstream unmodified and use the same contribution process as everyone else to get patches in.
That is how everyone else does it. Only distributions are somehow exempt from fork etiquette. Hold yourself to the same standards as everyone else, and the problem goes away.
I disagree that everyone else does this except distributions.
Companies apply their own patches to projects all the time (as an upstream maintainer I've been asked several times to help debug a patch that some company has used internally). Almost every company using Linux has patches on top of it that are for their specific project (all versions of Android have a forked Linux kernel). GitHub uses patched versions of Git (though one of their engineers is also incredibly prolific upstream). And so on.
The reason why people think distributions are the only ones doing it is because we maintain all of the software that is available for a full Linux system. So instead of only having patches for just one or two projects, we have patches for (probably) ~50% of packages in our distribution (most are bugfixes but there are plenty of not-just-bugfix examples). I think some folks just like to bash distributions because no matter what decisions we make we're going to piss someone off.
But again, we don't apply downstream patches because we like it. In fact downstream patches are an outright headache because we have to rebase them on version upgrades and so on.
Of course companies patch open-source software for their own use. But they either don’t distribute it, or they do so under a different trademark.
Just to focus on your own examples:
- GitHub patches Linux for their own private use. They do not distribute any Linux derivatives, and they don't profit from the Linux trademark.
- Android does distribute a Linux derivative, and it is heavily patched, but it is distributed and marketed under the trademark “Android”. Google does not profit from the Linux trademark.
And that’s the difference. People don’t buy Android phones because they’re running Linux. But they buy Suse and Red Hat distributions specifically to get Linux.
So Suse and Red Hat are the only businesses which I know of, that are allowed to fork upstream software, modify it aggressively, and still profit from the upstream trademark.
The vast majority of free software projects do not have a registered trademark. In cases where a free software project does have a trademark, distributions usually will rename the package (distributions do have lawyers and they will usually kick up a fuss in cases like this).
The case of the Linux mark is really weird, because basically all distributions are given license to use it but almost everyone still specifies that the trademark is owned by Linus.
(Also my example for GitHub was their fork of git, not Linux.)
Didn’t you mention elsewhere in this thread that Suse and Red Hat patched Docker, not just to backport fixes but to add features which upstream explicitly didn’t want merged? Surely Docker has a registered trademark. So following your reasoning, Suse and Red Hat should have stopped using the Docker trademark. Yet they didn’t. That example seems to contradict your argument that distributions are very careful to respect registered trademarks, while considering unregistered trademarks to be basically a free-for-all.
This is meant to be temporary. If it is truly crucial, users will happily update. If it's not, they can wait. Submit the patch upstream; if it is accepted, you can then deprecate your code and create an alias; if it's not, congratulations, you get to maintain a fork.

This changes the culture from clobbering the original package maintainer to one where you can adapt when necessary but are still a good community member. The "foobar" people can point users to "suse-foobar" as a solution until everything has been resolved.
It's not the number of bug reports that matters; it's that, as an end user, you can quickly come to a resolution.
This is also how non-standard and standards-track CSS features work. Vendors promote CSS properties like -webkit-marquee-rainbow-animation-spectrum while waiting for all the browser vendors to agree on the final spec for marquee-rainbow-animation-spectrum.
For Unix tools, downstream vendors could choose to rename the binary (redhat-procps) or rename the flags (procps --rh-c) if it wasn't incredibly urgent to claim a single-letter flag name.
How happy would you be to use a system where 50-75% of commands start with "debian-" or "ubuntu-" or "redhat-" or "suse-" (assuming that all patches have to obey this renaming rule)? As a user, I personally wouldn't want to use a system like that and I would switch distributions to one where I don't have to repetitively type out the name of my distribution into a shell.
I think you've completely missed the point. It should be very rare to need to ship a fork of a project to end users. Patches should first be targeted at the upstream project, but in the rare case where it needs to be used today you should follow this path instead of messing up the API. By doing this you can explicitly set expectations that the changes you are making are temporary and will be deprecated immediately once you get back in sync with master.
As an end user this is ideal, I can get a fix shipped today with the tradeoff that I will have to do a bit of maintenance in the future. Any other way leads to long waits, or chaos.
If you would personally not use a system like this and would prefer one that didn't rename things, even if they were missing the patches, I think that goes to show that the patches are not that valuable.
I'm not an open source developer, but it seems like a good solution is for the original publisher of the package to maintain their vision, take whatever feedback they deem useful, and ignore what they don't feel is useful. If Red Hat or the other distributions want to keep maintaining their patches, let them; that's what they're being paid for. If it ends up fragmenting the Linux ecosystem, which IMO it does, then the distributions should do more introspection and cooperate more to reduce fragmentation.
While distributions give variety and diversity - sometimes a good thing - I would love it if Linus would get all distributions in a room and force them to agree on a whole set of issues to eliminate silly differences between distributions. And if they don't/won't agree, they don't get to use the Linux trademark.
It's human nature to think your way of doing things is best. But I'm not sure the multitude of idiosyncratic differences between distributions is really advantageous to users. It does lock users into a distribution, because, as you said, who wants to go through all their scripts and rename every instance of ps to rh-ps, then go back and rename everything again when the patch is accepted?
I do think the idea of paying the original maintainers (from the company, not from you personally) has a lot of merit. After all, that's where the stuff was born; it's Red Hat's "raw materials" supply.
Fedora comes very close. But I'm biased, as a user of (and contributor to) it for about ten years.
Don't take my word for it, read the related documentation[1]:
"The Fedora Project focuses, as much as possible, on not deviating from upstream in the software it includes in the repository. The following guidelines are a general set of best practices, and provide reasons why this is a good idea, tips for sending your patches upstream, and potential exceptions Fedora might make. The primary goal is to share the benefits of a common codebase for end users and developers while simultaneously reducing unnecessary maintenance efforts."
The linked[1] documentation also answers, with specific examples, the question of: "What are deviations from upstream?"
I doubt it. It's unreasonable. "Upstream first, patch it if that fails" is good enough. There are widely used packages that have not had a release in 5 years, and distros have to carry around patches just to make them compile with OpenSSL 1.1, or fix known CVEs, or whatever.
Distributions don't apply patches just for the sake of it or because we enjoy it -- it makes packaging more annoying for us when we have to rebase patches each version bump. But sometimes it's necessary and it would be a silly limitation to not allow yourself to patch software which is under a license that explicitly gives you permission to modify it.
I would argue any rolling-release or bleeding-edge distribution is pretty much the closest you'll get to "pure upstream". Stable distributions have more patches by necessity, and enterprise distributions even more so.
> Arch [Linux] strives to keep its packages as close to the original upstream software as possible. Patches are applied only when necessary to ensure an application compiles and runs correctly with the other packages installed on an up-to-date Arch system.
Specifically, Slackware patches to get software to build, because software will often work for the original developer but not with a different set of dependencies. Otherwise everything else stays default. I think this is the closest anyone can reasonably get to strict-upstream and still have stable releases.
As the parent of a startup and one singular demanding child, I have to ask... 10?! Wow! Never mind why, I want to know: how do you get anything done? Do you have staff? Do you have personal time to yourself, or with your wife? How many soccer games do you attend in a given week? How many bedrooms does your house have? How many of your kids are turning out to be programmers?
Forget maintaining software, I want to know how you maintain your existence. I don't think I could survive.
I only have four small kids, but you're approaching this from the single kid direction.
Once you have two, you stop caring as much about the first precious one - because you have two precious ones. So sometimes one has to wait.
Repeat that a few times, and you'll arrive at your answer.
With more kids, it becomes clear that there are things you simply cannot do, e.g. own an 11-bedroom house or drive each single kid to soccer. So the kids will have to do something else. Like playing with each other.
After they learn to walk and talk, you don't have less time with two kids, especially if the age difference is small enough that they can play with each other. A single kid has only his parents to interact with.
At ten kids I guess you have a - somewhat misbehaving - staff. Half of the kids are capable of helping out with the other half or with other chores around the house. All they need is a manager :)
I cannot say much about the personal time though - I'm still in the 'learn to walk and talk' phase, and with 3 toddlers in a small house there is virtually none. However, I read on Reddit that after 20 years every kid will be at some kind of university and I will finally have free time for playing games and talking to my wife. That sounds nice :)
I also want to know how he manages to do any work with 5 or even 10 kids.
I realised I no longer had any spare time once I had my first daughter. Then we had another and I realised I must have had so much spare time with just the one kid...
How lots of my family and friends manage time with 3 or more kids, I don't know, as you are then outnumbered.

5 or more seems impossible. You must either be a super-efficient drill sergeant or the opposite: super relaxed, letting the kids sort themselves out. Or rich, delegating it to nannies/au pairs etc.
Now that we have two kids, I do appreciate that they mostly spend their time at home playing with each other, as opposed to when we had just the one and always had to play with them. I guess that scales well with more kids.
But I still feel guilty that I am not always joining in, and we have to be more selective about which school performances and the like either one of us can attend. I am not sure my conscience could handle missing out on lots of these, as more kids would mean allocating much less of my time to each kid.
From zero to one child the change in regular life is astounding. You become a parent, and in the process you lose a lot of your free time and all spontaneity from your life. From one to two children you will notice that you had some free time left with one, but now you really _really_ don't have any more. No more evening movies or hour-long coffees with your wife. You wonder what childless people even do with the ultra metric shitton of time they have, as you cannot imagine it anymore. You simply don't have enough time for two kids, and you have to learn to manage and optimize what you have to do as best you can.
From two to three children... nothing really changes anymore! You won't get more responsibilities as you already have them, and no less time as you already don't have any :)
> Now that we have two kids, I do appreciate that they mostly spend their time at home playing with each other, as opposed to when we had just the one and always had to play with them. I guess that scales well with more kids.
The traditional way is to delegate age-appropriate amount of caring for the younger children to the older children.
With 5 kids myself, this is indeed true. Delegation is important, and "zone defense" became really important. "Man to man" was out of the question.
But if you are already making 2 school lunches, how much longer does 5 really take? You are already making breakfast, lunch, and dinner each day; how hard is it to add more food, really? There is cost involved, but making the meal isn't that much harder, at least for us.
It's 10 to 13 kids, depending on how you count. (adult dependent, unborn, miscarriage)
It's just my wife and I, with her staying home, and no staff or personal time. We homeschool them until they can get a 3 or better on AP Chemistry or AP Biology. My kids get funny looks going in for those tests at age 10 to 12. After that, as early as 6th grade, they can start dual-enrollment at the local college. It's free until high school graduation, which is sometimes enough time to get an AA degree. We don't do organized sports, but they all have unicycles. There are some organized activities to attend: scouting (BSA and AHG), a homeschooling group that does history/gym/art/writing together, a free band day camp in summer, and a big road trip every few years.
I only have 4 bedrooms. I ended up putting the kids all in a much bigger room, leaving the bedrooms for other purposes. More interesting is the "car", with 5 rows of seats. It is 3 tons empty, 5 tons full. We go through 2 gallons of milk per day. I can spend $1000 on 3 or 4 carts of groceries. We can finish a pair of chickens or a mid-size turkey in one sitting.
I haven't had much luck turning kids into programmers. At one point I got several to enjoy Scratch, but then I found that the computers were severely abused to waste time on junk like the "Tanki Online" video game and the "Annoying Orange" videos. I had to take away the computers. Recently kid #2, age 17, decided to choose that career. I've mostly taught him C now. He does that on Ubuntu and for his TI-84 Plus CE. I think this has created a programming aversion in some other kids, because they saw how excited I got to finally be teaching C and didn't want to spend all their time with me. For the oldest 5 the career choices are unknown (hurry up...), programmer, lawyer (physics undergrad), unknown, and midwife. BTW, programming the TI-84 Plus CE in C is pretty wild. You get a 24-bit int.
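If you're curious, a quick check (assuming the CE C toolchain's standard headers behave normally) makes the point:

    /* On the ez80-based TI-84 Plus CE toolchain this reports a 24-bit
     * int, i.e. INT_MAX = 8388607; on a typical PC it reports 32 bits. */
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        printf("int: %d bits, INT_MAX = %d\n",
               (int)(sizeof(int) * CHAR_BIT), INT_MAX);
        return 0;
    }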
It helps to be close to work. I'm less than a mile away, 3 minutes by car or maybe 16 if walking. It helps to work only 40 hours per week, with an extremely flexible schedule.
Right now the main source of stress is kids wandering away from homework and chores. I don't want to have to stand over them in one room, waiting and watching as they work. I want to go do other things.
I know... I know a lot of people who come from huge families, and from what their parents tell me, when you have more kids they become a bit easier to handle, because they sort of take care of themselves. And of course you don't drive them to 10 football games every weekend, because you're not made of time :)
One of the things I really like about Arch Linux is their policy of not making unrequired changes to upstream software. The only changes usually made are those necessary to keep the software compatible with the other software on a rolling release system, and these changes are usually temporary.
Ditto. Arch has the right attitude. I still have a bitter taste from a 0-day hack of one of my Debian servers, due to what I believed was a Debian-only patch. I wish more distros followed the Arch approach. Red Hat is particularly bad, I think - I have had problems with the CentOS frankenkernel that I could never reproduce on other systems.
So, maybe this is not quite the same thing, but wasn't it Arch (and only Arch?) that made python point to python3? And in doing so went against what the upstream Python project explicitly specified to do? I remember dealing with a few users who ran into this.
I believe they made this change before the official advice was contrary to it.
Also, if you download and compile Python from python.org, isn't it also just 'python' that you get as your executable? If so, Arch is just following upstream, and the Python devs should follow their own advice.
Arch doesn't seem to be patching this, though you can see in the PKGBUILD that Arch explicitly creates 'python3' as a link to 'python' (with a comment lamenting that this isn't done by default):
Off topic but I have to say as an occasional dabbler this is something that drives me crazy whenever I take a look. So much talk of Python 2 being obsolete and irrelevant to new devs. Yet the command 'python' points to Python 2. Seriously?
"in preparation for an eventual change in the default version of Python, Python 2 only scripts should either be updated to be source compatible with Python 3 or else to use python2 in the shebang line."
So it will change eventually. But why rush to break things?
Similar story here. And it's not just distributions. There are web dev shops out there that edit OSS to add some functionality without even bothering to change the version or the user facing information. It makes for plenty of head scratchers when their customers report a bug in a piece of software you wrote.
I'm very happy with downstream patches. They come 90% from Red Hat, rarely from Debian or SUSE. The Red Hat patches are usually 100x better than other contributors' patches. Debian is usually the worst.
When I maintained ~250 Cygwin packages I acted as a downstream also, and had to deal with upstream maintainers. Usually a horror, maybe because I came from Cygwin, which was a crazy hack. Only PostgreSQL said "such a nice port". For Perl, GCC, Python... you were just a crazy one to be ignored.
I've been packaging for Debian for a decade; I've run into a lot of unfriendly upstreams that would refuse to make any change to allow packaging, and I've never seen other Debian Developers treat upstream developers poorly.
> He missed a big one: you have no way to stop Linux distributions from hacking up your software, and you'll suffer the consequences of whatever they do.
I remember the frustration when I found that NixOS maintainers downright crippled certain build systems to force people to use Nix...
Their goal was probably to get the thing building within the constraints of their own time, and the constraints of the Nix ecosystem.
You can choose whether you want to invest the time to fix those build scripts and re-enable the other build systems, or not -- totally up to you. But it's uncharitable to suggest that the Nix maintainers have some kind of secret agenda.
Of course they have an agenda. Nix is one of the most opinionated Linux distros in existence. They have an absolute insistence on reproducible builds.
What I said was that they didn't have a secret agenda. They are completely up front about reproducibility. Code that violates this principle needs to be fixed (or disabled, as in this Rebar example).
If that doesn't meet your needs, then just move on. Nix isn't a distro that will please everyone.
rebar3 itself supports reproducible builds via lock files. Applying patches like this instead of embracing the right way that the upstream designed for this exact purpose is just ignorance.
I would be totally cool with any incompatible patches that distro makes for its needs, as long as the binary is renamed. But they shipped a broken binary and called it rebar3. Not "rebar3-nix", but "rebar3". People try using this binary for development and it doesn't work, they go to the upstream and the upstream spends time investigating someone else's hacks.
Dude, Debian does this same exact thing to get reproducible builds. They generally submit things upstream; I'm not aware of whether Nix attempts to do the same, but I can also imagine their patches for reproducible builds might not make sense outside of Nix. This particular patch is done differently than how Debian attempts it (they try to make no changes to the code if possible), but this patch doesn't seem like a bad approach either (it makes the change obvious, at least).
Something like "ps -axu" will fail the UNIX/POSIX/SysV parsing, then restart in a pure BSD mode with the "-" ignored. Something like "ps -aux" will too, unless there is a user named "x". There is also a PS_PERSONALITY environment variable to force the parser to act in a particular way.
That fallback hack was needed to transition people over to a standards-compliant syntax. I couldn't support all the old syntax. Prior to 1997, something like "ps -ef" would parse as "ps ef" does today, which is not standards-compliant. People were unwilling to just switch over without the compatibility hacks. There were instances of things like "ps -aux" all over the place, including in people's private shell scripts. People couldn't resist typing it.
I got pushback just for including the warning on stderr when somebody caused the parser to kick in to compatibility mode.
More of a general question: is there any license or addendum clause which could have remedied this? Linux distribution creators love adhering to specific license requirements. Would a choice between distributing vanilla software vs not distributing have been more appealing?
That would make the program not free/open source (e.g. CC-BY-ND is not considered free), so free-only distros such as Fedora would simply not have considered it.
Trademarks can control this while remaining open source. If you ship an official package it's Firefox™ but if you ship a hacked-up version it's Iceweasel. Of course everyone will tell you you're evil if you try to do this.
Without trying to minimize this experience, I can't help but wonder if newer tools and techniques have mitigated some of this problem. For example GitHub pull requests make it much easier to collaborate on one version rather than an upstream, downstream relationship.