acedTrex's comments | Hacker News

I love this take; I feel similarly. LLMs can never rob me of my personal enjoyment of computing.

Sure, they might make my work life hell, but I've always considered my programming hobby to be completely separate from work.


Seeing projects with first commits from 3-4 years ago feels like finding pre-nuclear-testing steel: for anything newer, no strong proof exists that the project was not conceived as slop.

LLM bots are gonna start backdating commits to look more legit.

Yep, I absolutely expect this to happen. The quality signals that humans use are going to be forever in flux now as the humans try to stay ahead of the bots.

Compiled code is meant for the machine; written code is for other humans.

For better or worse, a lot of people seem to disagree with this, and believe that humans reading code is only necessary at the margins, similarly to debugging compiler outputs. Personally I don't believe we're there yet (and may not get there for some time) but this is where comments like GP's come from: human legibility is a secondary or tertiary concern and it's fine to give it up if the code meets its requirements and can be maintained effectively by LLMs.

I rarely see LLMs generate code that is less readable than the rest of the codebase it's been created for. I've seen humans who are short on time or economic incentive produce some truly unreadable code.

Of more concern to me is that when it's unleashed on the ephemera of coding (Jira tickets, bug reports, update logs) it generates so much noise you need another AI to summarize it for you.


The main coding agent failure modes I've seen:

- Proliferation of utils/helpers when there are already ones defined in the codebase. Particularly a problem for larger codebases

- Tests with bad mocks and bail-outs due to missing things in the agent's runtime environment ("I see that X isn't available, let me just stub around that...")

- Overly defensive off-happy-path handling, returning null or the semantic "empty" response when the correct behavior is to throw an exception that will be properly handled somewhere up the call chain (see the sketch at the end of this comment)

- Locally optimal design choices with very little "thought" given to ownership or separation of concerns

All of these can pretty quickly turn into a maintainability problem if you aren't keeping a close eye on things. But broadly I agree that, line for line, frontier LLM code is generally better than what humans write, and miles better than what a stressed-out human developer with a short deadline usually produces.
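To make the off-happy-path item above concrete, here's a minimal sketch; the functions and names are hypothetical, not from any real codebase:

    from typing import Any

    # Agent-style "defensive" handling: swallows the bad input and returns a
    # semantically empty value, so callers can't distinguish "user has no
    # orders" from "no such user".
    def get_user_orders_defensive(db: dict[str, list[Any]], user_id: str) -> list[Any]:
        if user_id not in db:
            return []  # silently masks the error
        return db[user_id]

    # Usually the correct behavior: fail loudly at the point of error and let
    # an exception handler further up the call chain decide what to do.
    def get_user_orders(db: dict[str, list[Any]], user_id: str) -> list[Any]:
        if user_id not in db:
            raise KeyError(f"unknown user_id: {user_id}")
        return db[user_id]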


Oh god, the bad mocks are the worst. Tell it not to make mocks and it creates "placeholders"; tell it not to create mocks or placeholders and it creates "stubs". Drives me mad...

To add to this list:

- Duplicate functions when you've asked for a slight change of functionality (e.g. write_to_database and write_to_database_with_cache), never actually updating all the calls to the old function, so you have a split codebase (sketched at the end of this comment).

- In a similar vein, the backup code path of "else: do a stupid static default" instead of erroring, which would be much more helpful for debugging.

- Strong desires to follow architecture choices it was trained on, regardless of instruction. It might have been trained on some presumably high-quality, large, enterprise-y codebases, but I'm just trying to write a short little throwaway program that doesn't need the complexity. KISS seems anathema to coding agents.
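To make the duplicate-function item above concrete, a minimal sketch (the Store class and function bodies are made up for illustration):

    from typing import Any

    class Store:
        def __init__(self) -> None:
            self.db: list[Any] = []
            self.cache: dict[str, Any] = {}

        # The original function; half the codebase still calls this.
        def write_to_database(self, key: str, record: Any) -> None:
            self.db.append(record)

        # The agent-added near-duplicate for the "slight change": old call
        # sites are never migrated, so writes that go through
        # write_to_database silently bypass the cache.
        def write_to_database_with_cache(self, key: str, record: Any) -> None:
            self.cache[key] = record
            self.db.append(record)

The fix the agent tends to avoid is changing write_to_database itself (or giving it a cache flag with a sensible default) and updating every caller.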


I'm sort of happy to see all these things I run into listed out as issues other people have, so I know it's not just me experiencing and being bothered by these behaviors.

All of these bother me, but the null/default-value returns drive me insane. It makes the code more verbose and difficult to follow, and in many cases makes the code force its way through problems that should be making it stop. Please, LLM, please just throw an exception!

And Sturgeon tells us 90% of people are wrong, so what can you do.

Compiled natural language is meant for the machine; written natural language is for other humans.

If AI is the key to compiling natural language into machine code like so many claim, then the AI should output machine code directly.

But of course it doesn't do that, because we can't trust it the way we do a traditional compiler. Someone has to validate its output, meaning it most certainly IS meant for humans. Maybe that will change someday, but we're not there yet.


Yeah, I hate the idea that there's a difference. Code to me has always been as expressive of a person as normal prose. With LLMs you lose a lot of vital information about the programmer's personality. It leads to worse outcomes because it makes the failures less predictable.

Code _can_ be expressive, but it also can not be; it depends on its purpose.

Some code I cobbled together to pass a badly written assignment at school. Other code I curated to be beautiful for my own benefit or someone else’s.

I think the better analogy in writing would be… using an LLM to draft a reply to a hawkish car dealer you’re trying to not get screwed by is absolutely fine. Using it to write a birthday card for someone you care about is terrible.


All code is expressive: if a person emitted it, it is expressive of their state of mind, their values, and their context.

Phooey.

I am perfectly willing to take that risk. Hell, I'll even throw ten bucks on it while we're here.

I love this trend of posting vibe-coded-in-a-day slop apps to every place on the internet.

Really makes me want to be here.


We're all hustling to stay relevant because we're caught in a cutthroat game of musical chairs as our industry gets automated. There's little incentive to polish anything before releasing, since 90% of the time you'll just get ignored regardless, so you might as well see whether your project/landing page hooks anyone before burning a bunch of time.

I polish stuff before promoting because I'm averse to reputation damage, but every day that I see people not doing that and getting upvotes anyhow makes the practice harder to justify.


Man, y'all are in it for all the wrong reasons. Though I suppose this is a YCOMBINATOR site after all, lmao.

These projects would rather miss out on a few good people in order to stop the bad ones than accept the alternative.

I think there are better alternatives; we'll let the market weed things out.

For example, I will keep making them spin their wheels and burn tokens/money: a sort of honeypot, an adversarial shadowban. This is even better for disincentivizing them.

Will automate it if it ever gets bad


You can already hardcode the SHA of a given workflow in the ref, and arguably should do that anyway.
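For example, in a workflow file (a sketch; the commit hash shown is illustrative, not a real actions/checkout commit):

    # .github/workflows/ci.yml
    on: push
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          # Pin to a full commit SHA instead of a mutable tag like @v4; the
          # trailing comment records the human-readable version for reviewers.
          - uses: actions/checkout@1111111111111111111111111111111111111111 # v4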

It doesn't work for transitive dependencies, though, so you're reliant on third-party composite actions doing their own SHA locking.

You can also configure a policy for it [0], and there are many OSS tools for auto-converting your workflows into pinned-hash ones. I guess OP is upset it's not in the gh CLI? Maybe a valid feature to have there, even if it's just a nicety.

[0] https://github.blog/changelog/2025-08-15-github-actions-poli...


I will do what I know gives me the best possible and fastest outcome over the long term, a 5-10 year period.

And that remains largely neovim and by hand. The process of typing code gives me a deeper understanding of the project that lets me deliver future features FASTER.

I'm fundamentally convinced that my investment in deep, long-term grokking of a project will allow me to surpass primarily-LLM projects in raw velocity over the long term.

It also stands to reason that any task I deem NOT to further my goal of learning or deep understanding, and that can be done by an LLM, I will use the LLM for. And as it turns out there are a TON of those tasks, so my LLM usage is incredibly high.


> I will do what I know gives me the best possible and fastest outcome over the long term, a 5-10 year period. And that remains largely neovim and by hand. The process of typing code gives me a deeper understanding of the project that lets me deliver future features FASTER. I'm fundamentally convinced that my deep long-term understanding of a project will allow me to surpass primarily-LLM projects over the long term.

I have never thought of that aspect! This is a solid point!


This is exactly what I’m doing expressed much more succinctly than I could have done myself. Thanks!

I love that take and sympathise deeply with it. I have also concluded that I should focus my manual work on the areas I can learn from, and try to automate the rest away as much as possible.

I often find myself in the role of the old guy advising a team to slow down a bit, and invest in doing things better now.

I generally frame this as: Are you optimizing for where you will be in 6 months, or 2 years?


Yeah, using agents and having them do the work means not only a lot of context switching; I actually don't have any context.

Idk what the median lifespan of a piece of code / project / employee tenure is, but it's probably way less than 10 years, which makes that "long-term investment" pretty pointless in most cases.

Unsuccessful projects: way less than 10 years

Successful projects: quite often much longer than 10 years

Code quality doesn't matter until lots of people start using what you wrote and you need to maintain/extend/change it

God, it's a depressing thought that whatever work you do is just a throwaway no one will use. That shouldn't be your end goal.


> God it's a depressing thought that whatever work you do is just a throwaway no-one will use

I didn't say that.

In fact if your code doesn't significantly change over time it probably means your project wasn't successful.


Maybe we're talking about different things?

That's one of the biggest benefits of software quality and the long-term investment: how easy is your thing to change?


Right, but that usually means higher-quality software design, and less so the exact low-level details of function A or function B (in most cases).

If anything I'd claim using LLMs can actually free up your time to really focus on the proper design of the software.

I think the disconnect here is that people bashing LLMs don't understand that any decent engineer isn't just going around vibe coding, but is instead creating a well-thought-out design (with or without AI) and using LLMs to speed up the implementation.


This is the way. I think we're in for some rough years at first, but then what you described will settle into "best practice" (I hate that term). I look forward to the really bizarre bugs and incidents that make the news in the next 2-3 years. …Well, as long as they're not from my teams, hah :)

> really bizarre bugs and incidents that make the news in the next 2-3 years

I take it that you are not using Windows 11


If you can't deliver features faster with AI assistance then you're either using it wrong or working on very specialized software that AI can't handle yet.

I haven't seen any evidence yet that using AI is improving developer performance, just a bunch of people who "feel" like it does.

I'm still on the fence about codegen, but it's certainly helpful for explaining code quickly without manually stepping through it, and for providing quick access to docs.

I've built a SaaS (with paying customers) in a month that would easily have taken me 6 months to build at this level of quality and features. AI wrote, I'd say, 99.9% of the code. Without AI I wouldn't even have attempted this, because it would have been too large a task.

In addition, for my old product, which is 5+ years old, AI now writes 95%+ of the code for me. The programming itself now takes a small percentage of my time, freeing me up for other tasks.


No one serious is claiming 6x productivity improvements at close-to-equal quality.

This is proving GP's point that you're going off feels and/or exaggerating.


Quality is better both from a user and a code perspective.

From a user perspective, I often implement a feature and then just throw it away, no worries, because I can reimplement it in an hour based on my findings. No sunk cost. I can also implement very small details that I'd otherwise have to backlog. This leads to a higher-quality product for the user.

From a code standpoint, I frequently do large refactors that also would never have been worth it by hand. I have a level of test coverage that would be infeasible for a one-man show.


> I have a level of test coverage that would be infeasible for a one man show.

When a metric becomes a target, it ceases to be a good metric.


Cool. What's the product? Like, do you have a link to it or something?

It's boring, glorified CRUD for SMBs in a certain industry, focused on compliance and workflows specific to my country. Think your typical inventory, ticketing, and CRM, plus industry-specific features.

Boring stuff from a programming standpoint, but stuff that helps businesses, so they pay for it.


Okay, but where's the product? You described the product, but didn't share it.

> Anthropic is successfully coding Claude using Claude.

Claude is one of the buggiest pieces of shit I have ever used. They had to BUY the creators of Bun to fix the damn thing. It is not a good example of your thesis.


You and the GP are conflating Claude, the company, or its flagship model, Claude Opus, with Claude Code, a state-of-the-art coding assistant that admittedly has a slow and buggy React-based TUI (output quality is still very competitive).
