
If you ask two humans to explain a problem to you, and human 1 takes an hour to explain what human 2 explained in 5 minutes… everyone would consider human 1 LESS ‘productive’ than human 2.

But what if human 2 was wrong?

What if both were wrong and human 3 simply said ‘I don’t know’?

LoC is a measure ripe for ignorance-driven managerial abuse.

We’ve all seen senior devs explain concepts to junior devs, increasing their understanding and productivity while they themselves ‘produced’ zero lines of code.

Yes, zero LoC may point to laziness; or to proper preparation.

All this is so obvious. LoC are easy to count but otherwise have hardly any value.



LOC is a bad quality metric, but it's a reasonable proxy in practice.

Teams generally don't keep merging code that "doesn't work" for long... prod will break, users will push back fast. So unless the "wrongness" of the AI-generated code is buried so deeply that it only shows up way later, higher merged LOC probably does mean more real output.

It's just not directly correlated; there is some bloat associated too.

That caveat applies to human-written code too, which we tend to forget. There's bloat and noise in the metric, but it's not meaningless.


Agreed, there is some correlation between productivity and LoC. That said, the correlation is weak and says nothing about quality (if anything, quality might be inversely correlated, which too would be a very weak signal).


For instance, if I push 10 kloc reimplementing a library I would have used had I not been using AI, then yes, I have pushed much more code, but I was not more productive.
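
To make that concrete, here is a small hypothetical sketch (the CSV example and function names are mine, not from the comment): both functions below do the same job, but the hand-rolled one "produces" far more lines than the one that leans on Python's standard-library csv module, and it would keep growing as edge cases surface.

    # Hypothetical illustration: two ways to read a CSV file into a list of dicts.
    # Both "work"; only one inflates the LoC count.

    import csv
    from typing import Dict, List

    # Version A: hand-rolled parser. Imagine this ballooning to thousands of
    # lines once quoting, escaping, encodings, and edge cases get handled.
    def read_csv_by_hand(path: str) -> List[Dict[str, str]]:
        rows: List[Dict[str, str]] = []
        with open(path, newline="") as f:
            lines = f.read().splitlines()
        if not lines:
            return rows
        header = lines[0].split(",")
        for line in lines[1:]:
            values = line.split(",")  # naive split: breaks on quoted commas
            rows.append(dict(zip(header, values)))
        return rows

    # Version B: use the library that already exists. A handful of lines,
    # correct quoting behavior for free, far less code to maintain.
    def read_csv_with_stdlib(path: str) -> List[Dict[str, str]]:
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

Measured by merged LoC, version A looks like more "output"; measured by what the team actually gained, it's the opposite.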



