
I do this. But the killer use case for me is having it write all the boilerplate and implement some half-working stuff; that keeps my attention on the issue, which lets me complete more complex things.

A recent example is when I implemented a (Kubernetes) CSI driver that makes /nix available in a container, so you can run an empty image and skip a lot of infrastructure you'd otherwise have to manage.

I talked to it a bit and eventually it wrote a Nix derivation that runs the CSI codegen for Python and packages it so I could import it. Then I asked it to implement the gRPC interface it had generated, and managed to get a "Hello World" when mounting the volume (just an empty dir). I also asked it to generate the YAML for the StorageClass, CSIDriver, Deployment and DaemonSet.

So the LLM left me with a do-nothing CSI driver in Python (rather than Go, which is what everything in the Kubernetes world is implemented in) that I could then rewrite to run a Nix build and copy store paths into a folder that's mounted into the container.
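To give a feel for that hand-written part, here's a rough sketch of the idea (hypothetical function and variable names; it assumes Nix 2.4+ for --print-out-paths and uses nix-store --query --requisites for the runtime closure):

    # Build a Nix installable and copy its runtime closure into the volume
    # directory that the kubelet mounts into the container.
    import pathlib
    import shutil
    import subprocess

    def publish_nix_closure(installable: str, target_path: str) -> None:
        # Build without a result symlink and capture the output store path.
        out_path = subprocess.run(
            ["nix", "build", "--no-link", "--print-out-paths", installable],
            check=True, capture_output=True, text=True,
        ).stdout.strip()

        # Query the full runtime closure of that output.
        closure = subprocess.run(
            ["nix-store", "--query", "--requisites", out_path],
            check=True, capture_output=True, text=True,
        ).stdout.split()

        # Copy each store path into <target>/nix/store so the container can
        # mount the volume at /nix and run binaries from it.
        store_dir = pathlib.Path(target_path) / "nix" / "store"
        store_dir.mkdir(parents=True, exist_ok=True)
        for path in closure:
            src = pathlib.Path(path)
            dest = store_dir / src.name
            if dest.exists():
                continue
            if src.is_dir():
                shutil.copytree(src, dest, symlinks=True)
            else:
                shutil.copy2(src, dest, follow_symlinks=False)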

Sure, implementing a gRPC interface might not be the hardest thing in hindsight, but I'd never done it before, and there's now a fully functional(ish) implementation of what I described.

It even managed to switch gRPC implementations: the Python one was funky with protoc versions in Nix (the Python package bundles its own grpc codegen, which is silly), so I asked it to do the same thing with grpclib instead, which worked.
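Roughly the shape the result takes (a sketch, not the actual code: csi_pb2 and csi_grpc are assumed names for what the grpclib codegen emits from csi.proto, and this is just the hello-world handler that creates an empty dir):

    import os

    from grpclib.server import Stream

    import csi_grpc  # assumed name of the generated grpclib services module
    import csi_pb2   # assumed name of the generated messages module

    class NodeService(csi_grpc.NodeBase):
        # grpclib handlers receive a Stream rather than (request, context).
        async def NodePublishVolume(self, stream: Stream) -> None:
            request = await stream.recv_message()
            # Hello-world version: just make sure the target path exists as an
            # empty dir; the rewrite replaces this with a Nix build + copy.
            os.makedirs(request.target_path, exist_ok=True)
            await stream.send_message(csi_pb2.NodePublishVolumeResponse())

        async def NodeUnpublishVolume(self, stream: Stream) -> None:
            await stream.recv_message()
            await stream.send_message(csi_pb2.NodeUnpublishVolumeResponse())

        # The other Node RPCs (NodeGetInfo, NodeGetCapabilities, etc.) need
        # stubs too, since the generated base class declares them as abstract.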



I hear a lot of people talk about LLMs writing the "boilerplate" and wonder why they haven't abstracted that away in the first place.

Maybe my brain has been permanently altered by hacking Lisp.


The problem with Lisp (or at least Clojure) is that abstracting away the boilerplate requires you to correctly identify the boilerplate.

It’s nontrivial to structure your entire AST so that the parts you abstract away are the parts you’re not going to need direct access to three months later. And I never really figured out, or saw anyone else figure out, how to do that in a way which establishes a clear pattern for the rest of your team to follow.

Especially when it comes to that last part, I've found pragmatic OOP with functional elements, like Ruby, or task-specific FP, like Elm, to be more useful than Clojure at work or various Lisps for hobby projects, because the patterns for identifying boilerplate are built in. Personal opinion, of course.


Yes, good tooling shouldn't have boilerplate. Minimizing LOC (within reason, not code golf) is the best thing you can do for maintainability. Unfortunately, things like Java are popular too.


I hear you. But removing boilerplate via abstraction (Lisp) is very different from generating it on demand (LLMs). The former is obviously qualitatively better, but it requires up-front design, implementation, testing, etc. The latter is qualitatively insufficient, but it gets you there with very little effort plus some manual fixes.


> The latter is qualitatively insufficient, but it gets you there with very little effort plus some manual fixes.

I remember years ago, when I worked at a large PC OEM, I had a conversation with one of our quality managers -- if an updated business process consumes half the resources, but fails twice as often, have you improved your efficiency, or just broken even?

"Qualitatively insufficient, but gets you there" sounds like a contradiction in terms, assuming "there" is a well-defined end state you're trying to achieve within equally well-defined control limits.


But LLMs can help with the former too.


Boilerplate that is primary (written by hand rather than generated from concise generation logic) is counterproductive: instant tech debt.

The way to use AI is to get help writing that generation logic, not to just get it to crank out boilerplate.

You're not winning just because AI is taking the manual work out of cranking out primary boilerplate.


There’s necessary complexity like error handling, authz, some observability things, etc. which can’t be trivially abstracted away and needs to be present and adjusted for each capability/feature.


Boilerplate can be a slog to write but a breeze to read.


I've stopped writing "real" code for the most part; I just bang out some pseudocode like:

    read all files in directory ending in .tmpl
    render these as go templates
    if any with kind: deployment
      add annotation blah: bar
    publish to local kubeapi using sa account foo
 
and tell it to translate that into whatever language I need.
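A rough sketch of what a translation into Python might look like (illustrative names; the Go-template rendering is stubbed out since that's the language-specific bit, and it assumes the official kubernetes client's create_from_dict):

    import pathlib

    import yaml                                   # PyYAML
    from kubernetes import client, config, utils  # official kubernetes client

    def render_template(text: str) -> str:
        # Stub: the pseudocode asks for Go template semantics, which would need
        # a dedicated renderer (or a Go implementation) in a real version.
        return text

    def publish_templates(directory: str) -> None:
        # Uses whatever kubeconfig/context is active; running in-cluster under
        # the "foo" service account would use load_incluster_config() instead.
        config.load_kube_config()
        api = client.ApiClient()

        for path in sorted(pathlib.Path(directory).glob("*.tmpl")):
            manifest = yaml.safe_load(render_template(path.read_text()))
            if manifest.get("kind") == "Deployment":
                metadata = manifest.setdefault("metadata", {})
                metadata.setdefault("annotations", {})["blah"] = "bar"
            utils.create_from_dict(api, manifest)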

So I control the logic, and it handles the syntax.

Asking it to solve problems for you never really seems to work, but it remembers syntax, and whether I need one kind of reader interface over another, or whatever.

It can't help me with code reviews though, so I spend most of my time reading code instead of remembering syntax. I'm OK with that.


It can solve problems, as long as they're practical or have been done before.


LLMs are more advanced than that, as shown by my example; I don't think you'll find many CSI drivers written in Python on the internet.


Yeah, that's what works for me too. LLMs are a nightmare for debugging but a breeze for this.

Another good use case: have it read a ton of code and summarize it. If you're dealing with an area of a legacy application that's new to you, and trying to fix a problem in how it interacts with a complex open-source library, have the LLM read them both and summarize how they work (while fact-checking it along the way).




