Hacker News | brianjeong's comments

I'd question the article's main thesis. If agents were trained on human-generated data and "subtracted" human conventions (markdown, file systems, git), is there really such a thing as "native" agent UX that's different from human UX? Or have we just discovered which human conventions happen to align well with transformer attention patterns?


It's a fair question - I think the fact that they have abilities we don't (reading 200k tokens instantly, cloning themselves, ...) would suggest they will have quirks and differences.

What downstream implications that will have in an AX sense is certainly arguable, but I would put forward that we're already seeing it with effective harnesses such as Claude Code. The experience the agent has there is quite different to how you'd build an IDE for a human.


You could say the same about Pokemon - the models still struggle quite a bit.


I think there's a somewhat valid perspective that the (N+1)th model can simply clean up the previous model's mess.

Essentially a bet that the rate of model improvement is going to be faster than the rate of decay from bad coding.

Now this hurts me personally to see, as someone who actually enjoys having quality code, but I don't see why it doesn't have a decent chance of holding.

