
What point have we reached? All I see is HN drowning in insufferable, identical-sounding posts about how everything has changed forever. Meanwhile at work, in a high stakes environment where software not working as intended has actual consequences, there are... a few new tools some people like using and think they may be a bit more productive with. And the jury's still out even on that.

The initial excitement of LLMs has significantly cooled off, the model releases show rapidly diminishing returns if not outright equilibrium and the only vibe-coded software project I've seen get any actual public use is Claude Code, which is riddled with embarrassing bugs its own developers have publicly given up on fixing. The only thing I see approaching any kind of singularity is the hype.

I think I'm done with HN at this point. It's turned into something resembling moltbook. I'll try back in a couple of years when maybe things will have changed a bit around here.


It's no coincidence HN is hosted by a VC. VC-backed tech is all about boom-bust hype cycles analogous to the lever pull of a giant slot machine.

> The initial excitement of LLMs has significantly cooled off, the model releases show rapidly diminishing returns if not outright equilibrium and the only vibe-coded software project I've seen get any actual public use is Claude Code, which is riddled with embarrassing bugs its own developers have publicly given up on fixing. The only thing I see approaching any kind of singularity is the hype.

I am absolutely baffled by this take. I work in an objectively high stakes environment (Big 3 cloud database provider) and we are finally (post Opus 4.5) seeing the models and tools become good enough to drive the vast majority of our coding work. Devops and livesite are harder problems, but even there we see very promising results.

I was a skeptic too. I was decently vocal that AI could work for single devs but would never scale to large, critical enterprise codebases and systems. I was very wrong.


> I work in an objectively high stakes environment (Big 3 cloud database provider) and we are finally (post Opus 4.5) seeing the models and tools become good enough to drive the vast majority of our coding work

Please name it. If it’s that good, you shouldn’t be ashamed of doing so and we can all judge by ourselves how the quality of the service evolves.


> you shouldn’t be ashamed of doing so and we can all judge by ourselves how the quality of the service evolves.

That's kinda my bar at this point. On YouTube there are so many talks and other videos of people using technology X to build Y software or manage Z infrastructure. But here all we get is slop, toys that should have been a shell script, or vague claims like GP's.

Even ed(1) is more useful than what has been presented so far.


> I think I'm done with HN at this point.

On the bright side, this forum is gonna be great fun to read in 2 or 3 years, whether the AI dream takes off, or crashes to the ground.


I'm not looking forward to the day when the public commons is trashed by everyone and their claudebot, though perhaps the segmentation of discourse will be better for us in the long run, given how most social media sites operate.

Same as it was for "blockchain" and NFTs. Tech "enthusiasts" can be quite annoying, until whatever they hype is yesterday's fad. Then they jump on the next big thing. Rinse, repeat.

I am not in a high stakes environment and I work on one-person-sized projects.

But for months I have almost stopped writing actual lines of code myself.

The frequency and quality of my releases have improved. I got very good feedback on those releases from my customer base, and the number of bugs reported is no larger than for code I wrote personally.

The only downside is that I no longer know the code inside out. Even if I read it all, it feels like code written by a co-worker.


Feels like code written by a co-worker. No different from working on any decent-sized codebase anywhere.

I've stopped writing code too. Who the fuck wants to learn yet ANOTHER new framework. So much happier with llm tools.


You have your head in the sand. Anyone making this claim in 2026 hasn’t legitimately tried these tools.

The excitement hasn't cooled off where I'm working.

Honestly, I'm personally happy to see so many naysayers online, it means I'm going to have job security a little longer than you folks.


I make mission-critical software for robust multi-robot control, in production, flying real robots every day.

16% of our production codebase is generated by Claude or another LLM.

Just because you can’t do it doesn’t mean other people can’t

Denial is a river


CTO at Gambit AI? How generous of you to talk your book while insulting us. At least we know what to avoid.

What does “talk my book” mean?

I don’t have a book

Edit: Apparently a financial term to mean “talk up your stock” which…if you don’t think that’s a good metric then why would you consider it talking my book lol cmon mayne


My guess: Their UASs run modified PX4 firmware.

Do we make UAS’?

Please tell me more


Yikes.

> using complementary overhangs and toehold sequences to generate a 3-way heteroduplex, ligate knick, and then remove barcode duplex

At first I thought this was about olympic figure skating, but after a bit of googling I think:

Complementary overhang - https://en.wikipedia.org/wiki/Sticky_and_blunt_ends

Toehold sequences: https://en.wikipedia.org/wiki/Toehold_mediated_strand_displa...

Ligate (ligase?) knick (nick?) - https://en.wikipedia.org/wiki/Nick_(DNA)

Barcode - https://en.wikipedia.org/wiki/DNA_barcoding

Heteroduplex - https://en.wikipedia.org/wiki/Heteroduplex


This accepts the idea that the flickering problem (which is what that comment was about) is to do with slow rendering. It isn't.

The solution to the flickering is almost certainly trivial and it's in the open source Ink library that Claude Code uses. I outlined it in [1].

Basically, Ink clears lines before rendering the page. That's not how you render TUIs if you don't want them to flicker. All you have to do is write the lines, and include a clear to end-of-line at the end of each one. That means you overwrite what's there and only erase what is removed. Where nothing changes, nothing visibly happens. My comment [1] contains links to the source that needs changing, and I think it would probably be a single-digit line PR to fix it. I'm not going to do so because I neither use Claude Code nor really approve of it.
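In escape-code terms the non-flickering pattern is tiny. A sketch (plain Node, not Ink's actual code; the function is made up for illustration):

    // Redraw a region that previously occupied `prevHeight` rows, never blanking it first.
    function redraw(lines: string[], prevHeight: number): void {
      process.stdout.write(
        (prevHeight > 0 ? `\x1b[${prevHeight}A\r` : "") +   // cursor up to the top of the region
        lines.map((l) => l + "\x1b[K\n").join("") +         // overwrite each line, erase its tail
        "\x1b[J"                                            // erase whatever is left below
      );
    }

Unchanged cells are simply overwritten with the same characters, so there is never a blank frame to see. (Lines longer than the terminal width need extra care, but that's the gist.)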

It's hilarious that the quoted comment framed the issue as if it's rendering to a vsynced framebuffer and only has 16ms to do so, and everyone went with it. That's not how TUIs work at all. It's writing to stdout, ffs.

[1] https://news.ycombinator.com/item?id=46853395


That's what they said, but as far as I can see it makes no sense at all. It's a console app. It's outputting to stdout, not a GPU buffer.

The whole point of react is to update the real browser DOM (or rather their custom ASCII backend, presumably, in this case) only when the content actually changes. When that happens, surely you'd spurt out some ASCII escape sequences to update the display. You're not constrained to do that in 16ms and you don't have a vsync signal you could synchronise to even if you wanted to. Synchronising to the display is something the tty implementation does. (On a different machine if you're using it over ssh!)

Given their own explanation of react -> ascii -> terminal, I can't see how they could possibly have ended up attempting to render every 16ms and flickering if they don't get it done in time.

I'm genuinely curious if anybody can make this make sense, because based on what I know of react and of graphics programming (which isn't nothing) my immediate reaction to that post was "that's... not how any of this works".


Claude Code is written in React and uses Ink for rendering. "Ink provides the same component-based UI building experience that React offers in the browser, but for command-line apps. It uses Yoga to build Flexbox layouts in the terminal,"

https://github.com/vadimdemedes/ink


I figured they were doing something like Ink, but interesting to know that they're actually using Ink. Do you have any evidence that's the case?

It doesn't answer the question, though. Ink throttles to at most 30fps (not 60 as the 16ms quote would suggest, though the "at most" is far more important). That's done to prevent it churning out vast amounts of ASCII, preventing issues like [1], not as some sort of display-sync behaviour where missing the frame deadline would be expected to cause tearing/jank (let alone flickering).

I don't mean to be combative here. There must be some real explanation for the flickering, and I'm curious to know what it is. Using Ink doesn't, on its own, explain it AFAICS.

Edit: I do see an issue about flickering on Ink [2]. If that's what's going on, the suggestion in one of the replies to use the alternate screen sounds reasonable, and it has nothing to do with having to render in 16ms. There are tons of TUI programs out there that manage to update without flickering.

[1] https://github.com/gatsbyjs/gatsby/issues/15505

[2] https://github.com/vadimdemedes/ink/issues/359


How about the ink homepage (same link as before), which lists Claude as the first entry under

Who's Using Ink?

    Claude Code - An agentic coding tool made by Anthropic.


Great, so probably a pretty straightforward fix, albeit in a dependency. Ink does indeed write ansiEscapes.clearTerminal [1], which, per the docs, will "Clear the whole terminal, including scrollback buffer. (Not just the visible part of it)" [2]. (Edit: even the eraseLines here [4] will cause flicker.)

Using alternate screen might help, and is probably desirable anyway, but really the right approach is not to clear the screen (or erase lines) at all but just write out the lines and put a clear to end-of-line (ansiEscapes.eraseEndLine) at the end of each one, as described in [3]. That should be a pretty simple patch to Ink.
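In terms of the ansi-escapes API linked below, the alternative looks something like this (a sketch of the approach, not an actual Ink patch; the frame bookkeeping is made up):

    import ansiEscapes from "ansi-escapes";

    let previousLineCount = 0;

    function writeFrame(lines: string[]): void {
      let out = "";
      if (previousLineCount > 0) out += ansiEscapes.cursorUp(previousLineCount);
      out += ansiEscapes.cursorLeft;
      for (const line of lines) {
        out += line + ansiEscapes.eraseEndLine + "\n";  // overwrite, clear only the old tail
      }
      out += ansiEscapes.eraseDown;                     // drop lines the new frame no longer has
      process.stdout.write(out);
      previousLineCount = lines.length;
    }

Nothing is cleared ahead of the write, so cells that don't change never go blank.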

Likening this to a "small game engine" and claiming they need to render in 16ms is pretty funny. Perhaps they'll figure it out when this comment makes it into Claude's training data.

[1] https://github.com/vadimdemedes/ink/blob/e8b08e75cf272761d63...

[2] https://www.npmjs.com/package/ansi-escapes

[3] https://stackoverflow.com/a/71453783

[4] https://github.com/vadimdemedes/ink/blob/e8b08e75cf272761d63...


Idempotence of an operation means that if you perform it a second (or third, etc.) time, it won't do anything. The "action" all happens the first time and further attempts do nothing. E.g. switching a light switch on could be seen as "idempotent" in a sense: you can press the bottom edge of the switch again, but it's not going to click again and the light isn't going to become any more on.

The concept originates in maths, where it's functions that can be idempotent. The canonical example is projection operators: if you project a vector onto a subspace and then apply the same projection again, you get the same vector back. In computing the term is sometimes used fairly loosely, by analogy, as in the light switch example above. Sometimes, though, there is a mathematical function involved that is idempotent in the strict mathematical sense.
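A tiny illustration (the function names are made up for the example): a function f is idempotent when f(f(x)) equals f(x) for every x.

    // Illustrative only: turnOn and projectX are idempotent, toggle is not.
    const turnOn = (_light: boolean) => true;   // on stays on, however many times you apply it
    const toggle = (light: boolean) => !light;  // flips every time, so not idempotent

    // The canonical maths example: projection onto the x-axis.
    type Vec = { x: number; y: number };
    const projectX = (v: Vec): Vec => ({ x: v.x, y: 0 });

    const v: Vec = { x: 3, y: 4 };
    console.log(projectX(projectX(v))); // { x: 3, y: 0 }, same as projecting once
    console.log(turnOn(turnOn(false))); // true
    console.log(toggle(toggle(false))); // false again: toggling twice undoes itself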

A form of idempotence is implied in "retries ... can't produce duplicate work" in the quote, but it isn't the whole story. Atomicity, for example, is also implied by the whole quote: the idea that an operation always either completes in its entirety or doesn't happen at all. That's independent of idempotence.


I agree the title should be changed, but as I commented on the dupe of this submission, learning is not something that happens as a beginner, student or "junior" programmer and then stops. The job is learning, and after 25 years of doing it I learn more per day than ever.


The study doesn't argue that you stopped learning.


I didn't say it did. I just pointed out that learning effectively isn't only a concern for "inexperienced developers still gaining knowledge".


An important aspect of this for professional programmers is that learning is not something that happens as a beginner, student or "junior" and then stops. The job is learning, and after 25 years of doing it I learn more per day than ever.


I've reached a steady state where the rate of learning matches the rate of forgetting


How old are you? At 39 (20 years of professional experience) I've forgotten more things in this field than I'm comfortable with today. I find it a bit sad that I've completely lost my Win32 reverse engineering skills I had in my teens, which have been replaced by nonsense like Kubernetes and aligning content with CSS Grid.

And I must admit my appetite for learning new technologies has lessened dramatically in the past decade; to be fair, it gets to the point where most new ideas are just rehashings of older ones. When you know half a dozen programming languages or web frameworks, the next one takes you a couple of hours to get comfortable with.


> I've forgotten more things in this field than I'm comfortable with today. I find it a bit sad that I've completely lost my Win32 reverse engineering skills I had in my teens

I'm a bit younger (33) but you'd be surprised how fast it comes back. I hadn't touched x86 assembly for probably 10 years at one point. Then someone asked a question in a modding community for an ancient game and after spending a few hours it mostly came back to me.

I'm sure if you had to reverse engineer some win32 applications, it'd come back quickly.


SoftICE gang represent :-)

That's a skill unto itself, and I mean the general stuff doesn't fade, or at least comes back quickly. But there's a lot of the tail end that's just difficult to recall because it's obscure.

How exactly did I hook Delphi apps' TForm handling system instead of breakpointing GetWindowTextA and friends? I mean... I just cannot remember. It wasn't super easy either.


I want to second this. I'm 38 and I used to do some debugging and reverse engineering during my university days (2006-2011). Since then I've mainly avoided looking at assembly since I mostly work in C++ systems or HLSL.

These last few months, however, I've had to spend a lot of time debugging via disassembly for my work. It felt really slow at first, but then it came back to me and now it's really natural again.


You can’t keep infinite knowledge in your brain. You forget skills you don’t use. Barring some pathology, if you’re doing something every day you won’t forget it.

If you’ve forgotten your Win32 reverse engineering skills I’m guessing you haven’t done much of that in a long time.

That said, it’s hard to truly forget something once you’ve learned it. If you had to start doing it again today, you’d learn it much faster this time than the first.


> You can’t keep infinite knowledge in your brain.

For what it’s worth—it’s not entirely clear that this is true: https://en.wikipedia.org/wiki/Hyperthymesia

The human brain seemingly has the capability to remember (virtually?) infinite amounts of information. It’s just that most of us… don’t.


You can't store an infinite amount of entropy in a finite amount of space outside of a singularity; or at least, attempting to do so would create one.

Compression/algorithms don't save you here either. The algorithm for pi is very short, but pulling up any particular random digit of pi still requires the expenditure of some particular amount of entropy.


It's entirely possible for this to be literally false, but practically true

The important question is can you learn enough in a standard human lifetime to "fill up your knowledge bank"?


1) That's not infinite, just vast

2) Hyperthymesia is about remembering specific events in your past, not about retaining conceptual knowledge.


https://www.youtube.com/watch?v=8kUQWuK1L4w

The APL inventor says that he was developing not a programming language but a notation to express as many problems as possible. He found that expressing more and more problems in the notation first made the notation grow, and then its size started to shrink.

To develop conceptual knowledge (the point where one's "notation" starts to shrink) one has to have a good memory (for re-expressing more and more problems).


The point is that this particular type of exceptional memory has nothing to do with conceptual knowledge, it's all about experiences. This particular condition also makes you focus on your own past to an excessive amount, which would distract you from learning new technologies.

You can't model systems in your mind using past experiences, at least not reliably and repeatedly.


You can model systems in your mind using past experience with different systems, reliably and repeatedly.


No you can't.

Your lived experience is not a systematic model of anything, what this type of memory gives you is a vivid set of anecdotes describing personally important events.


> It’s just that most of us… don’t.

Ok, so my statement is essentially correct.

Most of us can not keep infinite information in our brain.


It's not that you forget, it's more that it gets archived.

If you moved back to a country you hadn't lived or spoken its language in for 10 years, you would find yourself that you don't have to relearn it, and it would come back quickly.

Also, capacity is supposedly close to unlimited: you encode information more efficiently as you learn, which makes raw volume limits largely moot.


I do take your point. But the point I’m trying to emphasize is that the brain isn’t like a hard drive that fills up. It’s a muscle that can potentially hold more.

I’m not sure if this is in the Wikipedia article, but when I last read about this, years ago, there seemed to be a link between Hyperthymesia and OCD. Brain scans suggested the key was in how these individuals organize information in their brains, so that it’s easy for them to retrieve.

Before the printing press became widespread, it was normal for scholars to memorize entire books. I absolutely cannot do this. When technology made memorization less necessary, our memories shrank. Actually shrank, not merely a change in which facts we focus on.

And to be clear, I would never advocate going back to the middle ages! But we did lose something.


There must be some physical limit to our cognitive capacity.

We can “store” infinite numbers by using our numeral system as a generator of sorts for whatever the next number must be without actually having to remember infinite numbers, but I do not believe it would be physically possible to literally remember every item in some infinite set.

Sure, maybe we’ve gotten lazy about memorizing things and our true capacity is higher (maybe very much so), but there is still some limit.

Additionally, the practical limit will be very different for different people. Our brains are not all the same.


I agree, it can’t literally be infinite; I shouldn’t have said that. But it may be effectively infinite. My strong suspicion is that most of us are nowhere close to whatever the limit is.

Think about how we talk about exercise. Yes, there probably is a theoretical limit to how fast any human could run, and maybe Olympic athletes are close to that, but most of us aren’t. Also, if you want your arms to get stronger, it isn’t bad to also exercise your legs; your leg muscles don’t somehow pull strength away from your arm muscles.


> your leg muscles don’t somehow pull strength away from your arm muscles.

No, but the limiting factor is the amount of stored energy available in your body. You could exhaust your energy stores using only your legs and be left barely able to use your arms (or anything else).

If we’ve offloaded our memory capacity to external means of rapid recall (e.g. the internet), then what have we gained in return? Breadth of knowledge? Increased reasoning abilities? More energy for other kinds of mental work? Because there’s no cheating thermodynamics; even thinking uses energy. Or are we simply radiating away that unused energy as heat and wasting that potential?


It is also a matter of choice. I don’t remember any news trivia, I don’t engage with "people news" and, to be honest, I forget a lot of what people tell me about random subjects.

It has two huge benefits: nearly infinite memory for truly interesting stuff, and still looking friendly to people who tell me the same stuff all the time.

Side-effect: my wife is not always happy that I forgot about "non-interesting" stuff which are still important ;-)


> When you know half a dozen programming languages or web frameworks, the next one takes you a couple hours to get comfortable with.

Teach yourself relational algebra. It will invariably lead you to optimization problems, and those will just as invariably lead you to equality saturation, which is most effectively implemented with... a generalized join from relational algebra!

Also, relational algebra implements content-addressable storage (CAS), which is essential for the dataflow computing paradigm. Thus, you will have a window into CPU design.
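(For anyone who hasn't touched this since a databases course, here's the basic operation in toy form: a nested-loop natural join. A real engine, and certainly an equality-saturation engine, would use something far smarter, hash-based or worst-case optimal, but the idea is just "keep the row pairs that agree on all shared attributes".)

    // Toy natural join, illustrative only: O(|r|*|s|) nested loops.
    type Row = Record<string, string | number>;

    function naturalJoin(r: Row[], s: Row[]): Row[] {
      const out: Row[] = [];
      for (const a of r) {
        for (const b of s) {
          const shared = Object.keys(a).filter((k) => k in b);
          if (shared.every((k) => a[k] === b[k])) out.push({ ...a, ...b });
        }
      }
      return out;
    }

    // Joining on the shared "dept" attribute:
    const employees = [{ name: "ada", dept: "db" }, { name: "bob", dept: "web" }];
    const depts = [{ dept: "db", floor: 3 }];
    console.log(naturalJoin(employees, depts)); // [{ name: "ada", dept: "db", floor: 3 }]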

At 54 (36 years of professional experience) I find these rondos fascinating.


> I must admit my appetite in learning new technologies has lessened dramatically in the past decade;

I felt like that for a while, but I seem to be finding new challenges again. Lately I've been deep-diving on data pipelines and embedded systems. Sometimes I find problems that are easy enough to solve by brute force, but elegant solutions are not obvious at all. It's a lot of fun.

It could be that you're way ahead of me and I'll wind up feeling like that again.


That's one of several possibilities. I've reached a different steady state - one where the velocity of work exceeds the rate at which I can learn enough to fully understand the task at hand.


But just think, there's a whole new framework that isn't better but is trendy. You can recycle a lot of your knowledge and "learn new things" that won't matter in five years. Isn't that great?


I use spaced repetition for stuff I care for.

I use remnote for that.

I write cards and quizzes for all kinds of stuff, and I tend to retain it for years after practicing it with the low friction of spaced repetition.
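(For anyone unfamiliar, the core of spaced repetition is just exponentially growing review intervals. A rough SM-2-style sketch, not RemNote's actual algorithm, with made-up numbers:)

    // Each successful review pushes the next one further out; a lapse resets the interval.
    type Card = { intervalDays: number; ease: number };

    function review(card: Card, remembered: boolean): Card {
      if (!remembered) return { intervalDays: 1, ease: Math.max(1.3, card.ease - 0.2) };
      return { intervalDays: Math.round(card.intervalDays * card.ease), ease: card.ease };
    }

    // A new card remembered five times in a row:
    let card: Card = { intervalDays: 1, ease: 2.5 };
    for (let i = 0; i < 5; i++) {
      card = review(card, true);
      console.log(card.intervalDays); // 3, 8, 20, 50, 125 days between reviews
    }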


To fix that, you basically need to switch specialty or focus. That's a difficult thing to do if you're employed, of course.


I worked as an "advisor" for programmers in a large company. Our mantra there was that programming and the development of software are mainly about acquiring knowledge (i.e. learning?).

One take-away for us from that viewpoint was that knowledge in fact is more important than the lines of code in the repo. We'd rather lose the source code than the knowledge of our workers, so to speak.

Another point is that when you use consultants, you get lines of code, whereas the consultancy company ends up with the knowledge!

... And so on.

So, I wholeheartedly agree that programming is learning!


>One take-away for us from that viewpoint was that knowledge in fact is more important than the lines of code in the repo. We'd rather lose the source code than the knowledge of our workers, so to speak.

Isn't this the opposite of how large tech companies operate? They can churn developers in and out very quickly, hire-to-fire, etc... but the codebase lives on. There is little incentive to keep institutional knowledge. The incentives are PRs pushed and value landed.


That might be the case in the USA, but this was in a country with practically no firing.


> We'd rather lose the source code than the knowledge of our workers, so to speak.

Isn't a large amount of required institutional knowledge typically a problem?


It was a "high tech domain", so institutional knowledge was required, problem or not.

We had domain specialists with decades of experience and knowledge, and we looked at our developers as the "glue" between domain knowledge and computation (modelling, planning and optimization software).

You can try to make this glue have little knowledge, or lots of knowledge. We chose the latter and it worked well for us.

But I was only in that one company, so I can't really tell.



Very cool! Thanks


It can be, I guess, but I think it's more about solving problems. You can fix a lot of people's problems by shipping different flavors of the same stuff that's been done before. It feels more like a trade.

People naturally try to use what they've learned but sometimes end up making things more complicated than they really needed to be. It's a regular problem even excluding the people intentionally over-complicating things for their resume to get higher paying jobs.


> The job is learning...

I could have sworn I was meant to be shipping all this time...


Have you been nothing more than a junior contributor all this time? Because as you mature professionally your knowledge of the system should also be growing


It seems to me that nowadays software engineers move around a lot more, either within a company or to other companies. Furthermore, companies do not seem to care, and they are always stuck in a learning loop where engineers are competent enough to make modifications and add new code, but lack the deep insight needed to improve the fundamental abstractions of the system. Meanwhile, even seniors with 25+ years of experience are noobs when they approach a new system.


> AI can code as well as Torvalds

He used it to generate a little visualiser script in python, a language he doesn't know and doesn't care to learn, for a hobby project. It didn't suddenly take over as lead kernel dev.


It's like watching the standards-based web commit suicide.


As the immediate responder to this comment, I claim to be the next guy. I love systemd.


I don't like a few pieces, or Mr. Lennart's attitude to some bugs/obvious flaws, but it's by far better than old sysv or really any alternative we have.

Doing complex flows like "run an app to load keys from a remote server to unlock an encrypted partition" is far easier under systemd, and it has a dependency system robust enough to trigger that mount automatically if an app needing it starts.
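Roughly like this (a sketch with made-up unit names, paths and script, not a drop-in config):

    # fetch-luks-key.service (hypothetical): a oneshot that pulls the key
    [Unit]
    Description=Fetch LUKS key from remote server
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/local/bin/fetch-key.sh

    # data.mount: only attempted once the key service has succeeded
    [Unit]
    Requires=fetch-luks-key.service
    After=fetch-luks-key.service

    [Mount]
    What=/dev/mapper/data
    Where=/data
    Type=ext4

Any service that declares RequiresMountsFor=/data then pulls the whole chain in automatically when it starts.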

