
My first reaction: 800GB, who committed that?! The size alone screams that something is wrong. To be fair, even with basic Dockerfiles it's easy to build up a lot of junk. But there should be a general size limit in any workflow that alerts when something grows out of proportion. We had this in our shop just a few weeks ago. A Docker image for some AI training grew too big and nobody got alerted about the image's final size. It got committed and pushed to JFrog. From there the image synced to a lot of machines. JFrog informed us that something was off with the amount of data we were shuffling around. So on the one hand this should not happen, but it seems it can easily end up in production without warning.
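As a rough illustration of the guardrail I mean: a minimal sketch of a CI-side size check that could run right before the push step. The script name, image tag, and the 20 GiB threshold are hypothetical; it only assumes a local docker CLI.

```python
# size_check.py -- hypothetical CI guardrail: fail the build if the image is too large.
import subprocess
import sys

MAX_IMAGE_BYTES = 20 * 1024**3  # assumed threshold: 20 GiB


def image_size_bytes(tag: str) -> int:
    """Return the size in bytes that `docker image inspect` reports for a local image."""
    out = subprocess.run(
        ["docker", "image", "inspect", "--format", "{{.Size}}", tag],
        check=True, capture_output=True, text=True,
    )
    return int(out.stdout.strip())


if __name__ == "__main__":
    tag = sys.argv[1]  # e.g. "registry.example.com/ai-training:latest" (hypothetical)
    size = image_size_bytes(tag)
    gib = size / 1024**3
    if size > MAX_IMAGE_BYTES:
        print(f"{tag} is {gib:.1f} GiB, over the {MAX_IMAGE_BYTES / 1024**3:.0f} GiB limit")
        sys.exit(1)  # block the push and alert instead of syncing the image everywhere
    print(f"{tag} is {gib:.1f} GiB, within the limit")
```

Running something like this as the last step before `docker push` would at least have raised the alarm before the image ever reached the registry.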


Given that JFrog bills on egress for these container images, I'm sure you guys saw an eye-watering bill for the privilege of distributing your bloated container.


Yes. But to be fair, we got a warning the very next day.


I feel, or rather fear, that the world will tumble and crack under the sheer amount of code we produce that can't be maintained, because at some point no single human can understand all the stuff that was written.

At the moment, though, I also code on and off with an agent. I'm not ready or willing to only vibe code my projects. For one, I had tons of examples where the agent gaslit me, only to turn around at the last stage. And in some cases the code output was too result-focused and didn't consider the broader, more general usage. And sure, that's in part because I hold it wrong: I don't specify ten million markdown files etc. But it's a feedback-loop system. If I don't trust the results, I don't jump in deeper. And I feel a lot of developers have no issue with jumping ever deeper: writing MCPs, now CLIs, and describing projects with custom markdown files. But I think we really need both camps. Otherwise we don't move forward.


> I feel, or rather fear, that the world will tumble and crack under the sheer amount of code we produce that can't be maintained, because at some point no single human can understand all the stuff that was written.

IMO the best advice in life is to try not to be fearful of things that happen to everyone and that you can't change.

Good news! What you are afraid of will happen, but it'll happen to everyone all at once, and nothing you can do can change it.

So you no longer need to feel fear. You can skip right on over to resignation. (We have cookies, for we are cooked)


I generally like to use it. But I have one project in the org which simply can't work with it, because the internal build system expects a normal .git directory at the root. That means I have to rewrite some of the build code that isn't aware of this git feature. And yes, we use a library to read from git, but not the git CLI or a more recent compatible one that understands that the current worktree is not the main one.
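For context, a linked git worktree replaces the top-level .git directory with a small pointer file, which is exactly what trips up tooling that assumes a directory. A minimal sketch of how build code could resolve the real git directory in both cases (the function name and error handling are just illustrative):

```python
# Resolve the actual git directory, whether the checkout is a normal clone or a
# linked worktree. In a worktree, `.git` is a one-line file such as
# "gitdir: /path/to/main/.git/worktrees/<name>" instead of a directory.
from pathlib import Path


def resolve_git_dir(repo_root: str) -> Path:
    dot_git = Path(repo_root) / ".git"
    if dot_git.is_dir():
        return dot_git  # ordinary clone: .git is the repository directory itself
    content = dot_git.read_text().strip()
    if content.startswith("gitdir:"):
        return Path(content[len("gitdir:"):].strip())
    raise ValueError(f"unrecognized .git entry at {dot_git}")
```

Shelling out to `git rev-parse --git-dir` gives the same answer when the git CLI is available; the point is just that the build code has to account for the pointer-file case.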


I really like to write programs in Rust. But my stance has changed a bit over the years, ever since other languages caught up a bit. On top of that, I'm very skeptical about whether the rewrite of an ancient tool brings more or less security. I don't know the apt source code or how it actually works behind the CLI interface, so I leave that judgement to the pros. But there seems to be a very strong move to rewrite all core systems in Rust. My issue with that is the fact that these tools don't even invent anything new, or change / improve the status quo. I understand that it's hard to introduce a new system without breaking other stuff. But our systems are still based on decisions from the telegraph age. Layers on top of layers on top of layers.


I've heard two arguments for these rewrites that don't always come up in these discussions. There are fair counterpoints to both, but I think they add valuable dimensions to the conversation, or perhaps explain why a rewrite may not seem justified without them.

* It's becoming increasingly difficult to find new contributors who want to work with very old code bases in languages like C or C++. Some open source projects have said they rewrote to Rust just to attract new devs.

* Reliability can be proven through years in use but security is less of a direct correlation. Reliability is a statistical distribution centered around the 'happy path' of expected use and the more times your software is used the more robust it will become or just be proven to be. But security issues are almost by definition the edgiest edge cases and aren't pruned by normal use but by direct attacks and pen testing. It's much harder to say that old software has been attacked in every possible way than that it's been used in every possible way. The consequences of CVEs may also be much higher than edge case reliability bugs, making the justification for proactive security hardening much stronger.


Yeah, I get the point about attracting young blood. But I wonder whether the coreutils which have been rewritten were rewritten by the original maintainers? And again, the question: why not simply write something new, with a modern architecture etc., rather than drop-in replacements?

On your second point: I wonder how the aviation, space, and car industries do it. They rely heavily on tested / proven concepts. What do they do when introducing a new type of material to replace another one? Or when a complete assembly workflow gets updated?


> And again, the question: why not simply write something new?

The world isn't black or white. Some people write Rust programs with the intent to be drop-in compatible programs of some other program. (And, by the way, that "some other program" might itself be a rewrite of an even older program.)

Yet others, such as myself, write Rust programs that may be similar to older programs (or not at all), but definitely not drop-in compatible programs. For example, ripgrep, xsv, fd, bat, hyperfine and more.

I don't know why you insist on a world in which Rust programs are only drop-in compatible rewrites. Embrace the grey and nuanced complexity of the real world.


> And again, the question: why not simply write something new?

There is a ton of new stuff getting written in Rust. But we don't have threads like this on HN when someone announces a new piece of infra written in Rust, only when there's a full or partial rewrite.

Re automotive and other legacy industries, there's heavy process around both safety and security. Performing HARAs and TARAs, assigning threat or safety levels to specific components and functions, deep system analysis, adding redundancy for safety, coding standards like MISRA, etc. You don't get a lot of assurances for "free" based on time-proven code. But in defense there's already a massive push towards memory safe languages to reduce the attack surface.


> why not simply write something new?

Because of backwards compatibility. You don't rewrite Linux from scratch to fix old mistakes; that's making a new system altogether. And I'm pretty sure there are some people doing just that. But still, there's value in rewriting the things we have now in a future-proof language, so we have a better but working system until the new one is ready.


Sorry, I'll answer this one because I feel people got a bit hung up on the "new" thing. Might be a language barrier. I really do understand the reasons, backwards compatibility and so on. The point I tried to make is that we spend tons of time either maintaining software that was written or "born" 50 or so years ago, or rewriting things in the same spirit. I mixed my comments with the security aspect, which might have muddled a lot of what I tried to say with the "new" part. One sees this on HN as well. I love the UNIX philosophy and also the idea of POSIX. But they are treated as if they were the holy grail of OS design, and in the case of POSIX the only true cross-platform schema. Look also at the boot steps a CPU has to run through to start up: it pretends to be a 40-year-old variant and then enables features piece by piece. Well, I hope I've made my point clearer :)


Writing tools that are POSIX compatible doesn't mean one puts it on the pedestal of the "holy grail of OS design." I've certainly used POSIX to guide design aspects of things I build. Not because I think POSIX is the best. In fact, I think it's fucking awful and I very much dislike how some people use it as a hammer to whinge about portability. But POSIX is ubiquitous. So if you want your users to have less friction, you can't really ignore it.

And by the way, Rust didn't invent this "rewrite old software" idea. GNU did it long before Rust programmers did.


Yes, but GNU did it to put them under the GPL. Or that was my understanding.


So then your original comment should be amended to say, "and this is actually all fine when the authors use a license I personally like." So it's not actually the rewriting you don't like, but the licensing choices. Which you completely left out of your commentary.

You also didn't respond to my other rebuttal, which points out a number of counter-examples to your claim.

From my view, your argument seems very weak. You're leaving out critical details and ignoring counterpoints that don't confirm your bias.


Sorry, I didn't respond by intention. The thing with the license I actually didn't bring up because I totally forgot about that part of the discussion. I saw comments a few weeks back going into the fact that it's not just a Rust rewrite but also a relicense, with maybe shady intent. I don't know. I don't know much about this. To your comment: I don't know where I actually made any claims? Nor did I claim that a rewrite is fine when it's changing to a license I like. I just stated that the reason back then was not to rewrite in a more modern language with better security. I wasn't around when this happened and have no real opinion on whether at the time I would have liked or disliked the move. As it stands, the net positive was obviously great; otherwise a Linux as we know it might have been longer in the making. Or never happened. And yes my argument is weak because I’m actually not an expert on core utility development. I just voiced my feelings about the fact that we seem to move slowly forward or stand still in development rather than progressing to something else. And it seems that others see it differently and/or have a better perspective on that.


> And yes my argument is weak because I’m actually not an expert on core utility development.

Yes, and I'm trying to point out why your argument is weak. You said things like this:

> But I wonder whether the coreutils which have been rewritten were rewritten by the original maintainers? And again, the question: why not simply write something new, with a modern architecture etc., rather than drop-in replacements?

And it honestly just comes across as rather rude. People do write things that are new. People also write things that re-implement older interfaces. Sometimes those people are even the same people.

Like, why are you even questioning this? What does it matter to you? People have been re-implementing older tools for decades. This isn't a new thing.

> Nor did I claim that a rewrite is fine when it’s changing to a license I like.

When I pointed it out, your response was, "oh yeah but they did this other thing that made it all okay."


Damn, I wanted to write “not with intention”.


Inviting inexperienced amateurs to wide-reaching projects does not seem to be a prudent recipe. Nay, it is a recipe for disaster.


> Or change / improve the status quo

uutils/coreutils is MIT-licensed and primarily hosted on GitHub (with issues and PRs there) whereas GNU coreutils is GPL-licensed and hosted on gnu.org (with mailing lists).

EDIT: I'm not expressing a personal opinion, just stating how things are. The license change may indeed be of interest to some companies.


So a change for the worse.

The GPL protects the freedom of the users while MIT-licensed software can be easily rug-pulled or be co-opted by the big tech monopolists.

Using GitHub is unacceptable, as it bans users from many countries. You are excluding devs around the world from contributing. Plus, it is owned by Microsoft.

So we replaced a strong copyleft license and a solid decentralized workflow with a centralized repo that depends on the whims of Microsoft and the US government and that is somehow a good thing?


> The GPL protects the freedom of the users while MIT-licensed software can be easily rug-pulled or be co-opted by the big tech monopolists.

That is not at all true. If someone were to change the license of a project from MIT to something proprietary, the original will still exist and be just as available to users. No freedom is lost.


With GPL I can compile my own copy and use it with their software. They have to allow that. They also have to give me their sources, changes included.

MIT is a big joke at the expense of the open-source community.


I mean, sadly, even though I hate the bans, the exclusion is really insignificant in the grand scheme of things, and most people consider the benefits GitHub brings an acceptable tradeoff. I am sadly one of those people: I am fairly young (25) and my introduction to git happened through GitHub, so I am really used to it. Though I am also developing a code forge as a hobby project, and maybe something serious in the long term.

There is also another crowd that completely aligns with US foreign policy and has the same animosity towards those countries' citizens (I've seen a considerable number of examples of this).

As for the license part, I really don't get the argument. How can a coreutils rewrite get rug-pulled? This is not a hosted service where a MinIO-like situation [1] [2] can happen, and there are always the original utils to fall back on if something like that were to occur.

[1] http://news.ycombinator.com/item?id=45665452 [2] https://news.ycombinator.com/item?id=44136108


Two GNU coreutils maintainers, including myself, monitor the issues and PRs on a GitHub mirror that we have [1]. Generally the mailing list is preferred, though, since more people follow it.

[1] https://github.com/coreutils/coreutils


People have to learn on some project. Why not something that’s simple to test against? You know what it should do, so let’s rewrite it!

Whether the rewrite should be adopted to replace the original is certainly a big discussion. But simply writing a replacement isn’t really worth complaining about.


I don't get your argument here. 10 isn't a huge number in my book, but of course I don't know what else that entails. I would opt for a secure process change over a soft local workflow restriction that may or may not be followed by all individuals. And I would definitely protect my CI system in the same way as local machines. Depending on the nature of the CI, these machines can have quite broad access rights. This really depends on how you do CI and how lax the security is.


I'll do soft local workflow restriction right away.

The secure process change might take anywhere from a day to months.


I was tempted to upgrade from my 2nd gen AirPods Pro, which I have used since they launched. Still the same set. But I wanted to wait for some real-world reviews. I also have the Max and love them, but sound quality and ANC are not as great as on the in-ears, which is rather funny. I have no real need to switch yet, but I actually thought about giving them a try around Christmas. But I planned to use mine for running and air travel as well…


What do you mean? How people manage to run with noise cancellation? Or how it is that they don't lose them?

I run with my AirPods Pro 2 and have no issues. I have some other in-ear buds where fit is also not an issue, but thumping sounds while running make them unusable.


Years ago I was a convert to open ear bone conduction by Shokz (then Aftershokz) but the band was a little annoying and now I use the Huawei Freeclips which I am very happy with. Bose also have an open ear product.

My priority with exercise is peripheral awareness so I would never compromise that with in-ears anymore


I understand. I think it very much depends on the environment. I usually run in parks, not on the street. I also trust my eyes more than my ears when doing runs on more trafficked routes. The Apple AirPods have a great transparency mode. I tried bone conduction headphones and they weren't for me. I know the new models are kind of hybrids now. But I also love the fact that I can hear myself. I've had tons of headphones over the years, and I think for me the AirPods Pro 2 are just the most versatile.


Weird POV, considering your ears cover 360 degrees, which your eyes will never be able to do.


Well, in a big city sounds can be deceiving. It also depends on how trained your hearing is. I guess I would have a hard time if I ended up going blind. In any case, what I meant is that I use my eyes, and with that also turn my head, to look over my shoulder and check for cars etc. In most cases it's best to have eye contact with a driver who is currently making a turn, to make sure they're actually seeing you.


I am blind, so no eye contact with drivers. And I am still alive, despite usually going out alone as a pedestrian. However, I guess I benefit a lot from the Austrian "Vertrauensgrundsatz", which basically translates as "principle of trust". When acquiring a driver's license, you are drilled to take extra care around disabled or obviously incapacitated pedestrians. That basically means: if you hit a blind person, or even an obviously drunk person, you are at fault, no matter what.


I mean people have different degrees of hearing.


I think he means POSIX. I didn't check, but in some cases POSIX only covers some of the options a tool provides, not all of them. It's a hard lesson I learned while keeping shell scripts portable between Linux and macOS.


Yep. I was slightly incorrect in my original message, though. SUSv2 (1997) specified egrep and fgrep but marked them LEGACY. POSIX.1-2001 removed them.

The only place that doesn't support 'grep -E' and 'grep -F' nowadays is Solaris 10. But if you are still using that, you will certainly run into many other missing options.

[1] https://pubs.opengroup.org/onlinepubs/007908775/xcu/egrep.ht... [2] https://pubs.opengroup.org/onlinepubs/007908775/xcu/fgrep.ht...


"GNU grep implemented a change that breaks pre-existing scripts using a 46 year old API, but it's OK because the required workaround works everywhere but Solaris 10" seems like not a great statement of engineering design to me.


"GNU grep added a warning to inform you of the deprecation which happened 28 years ago, but only to stderr, and still works like you expect", does to me.


Meh. Look, it broke code. "Still works like you expect" is 100% false.

The deprecation argument is at least... arguable. It was indeed retired from POSIX. But needless deprecation is itself a smell in a situation where you can't audit all the code that uses it. Don't do that. It breaks stuff. It broke the updates in the linked article too. If you have an API, leave it there absent extremely strong arguments for its removal.


I also don’t get the hate. I personally prefer the older one but I also don’t see the big issue.

I have more of a problem with the menu structure than with the glass effect.


Same here. I don't mind it at all. If I had to choose I'd go for the previous design language, but I don't personally get the hate.


I really like the idea. I also grew up in a household with tons of physical media to explore. I still have my Blu-ray collection, but it's mainly sitting on the shelf because I honestly don't know what else to put there.

But reading all the comments from people doing something similar with alternative products etc., I'm wondering how they do this legally. I mean, I can't just download stuff from Apple Music and play it offline on some random player. Same with most other streaming providers. Or are you accepting the grey zone here, saying you pay for the service so it doesn't matter? Or are you happily buying all the content on some other medium / from DRM-free stores to put it on these alternative players? I specifically mean solutions where one needs some form of copy of the files.


I use Apple Music and have made a very similar setup to the article. Instead of the NFC pointing to a Plex URL, I have it trigger an Automation to play the relevant album on Apple Music. Works well, plays instantly, feels magical, and most of all it's got rid of the 'what should I listen to' friction so I now find my home is filled with music way more often. Downside of this approach is it only works on my own phone.

This article (not mine) explains the Apple Music/Automation approach – https://hicks.design/journal/moo-card-player


There are still legitimate options to buy media and download MP3s.


Yes, I know. But audiobooks bought outright are way more expensive than getting them from Audible with the credit system over the course of a year, for instance. I used to buy my albums rather than having a streaming service subscription, but I sadly caved in. I just wonder whether all who report they do this for their kids etc. really go out and buy all these great records and audiobooks, because for me there is a reason to have a subscription. An album on iTunes costs roughly 10€. For that I can listen for a whole month to whatever albums I want. Sure, the album is somewhat mine when I purchase it (definitely when bought on a physical medium). At the moment I purchase my favorite movies digitally even though I could watch them on Netflix and co.


It seems to me like this is more about nostalgia than current music. I'm considering doing something similar with my cd collection which is original from the 90s/00s with some back catalogue 60s-90s stuff thrown in. I listen to most modern music via streaming, but still buy the odd new release album that really matters to me (literally one or two a year).


I've ripped a bunch of CDs, which I think is not technically legal here, but I have no moral issue with it, and we'd bought some.

Other than that, with setups like Music Assistant you can stream from these services; it's just a different trigger. I know that's not quite what you asked, but it's a clean solution to play on the speakers you'd already stream to.


Why get hung up on the legality of something like this, assuming you're just going to use it for personal use in your home?

It's not morally wrong to take music you pay for and use it in a perfectly reasonable - and fun - way.


Well, because at least from a German perspective one can get into a lot of trouble when going the non-legal route. Of course the question is how you do it, etc. My question was whether people in fact accept the grey zone or start to purchase from alternative sources instead of using streaming services.

