
This is an interesting thread. There are many instances of "this is bad, doesn't work, don't like it", and many instances of "it works reasonably well here, look: <url>".

Seems like a consistent pattern.



It’s a propaganda and psyop operation on HN if you ask me. This stuff is laughably bad and I wonder who would actually use it for real work beyond a “huh this is cool” at first glance.

HN is super susceptible to propaganda in the AI age unfortunately; I think at this point a lot of the comments and posts on here are from bots as well


There was an article here on how LLMs are like gambling: sometimes you get great payouts and oftentimes not, and as Psych 101 taught us, that kind of intermittent reward is addictive.


Interesting point, never thought of it like that, and I think there is some truth to that view. On the other hand, IIRC, intermittent reinforcement works best when the outcome is pure chance (you have no control over the likelihood of reward) and the probability falls within some range (the optimum is not 50%, I think, could be wrong).

I don't think either of these is true of LLMs. You obviously can improve their results with the right prompt + context + model choice, to a pretty large degree. The probability is hard to quantify, so I won't try. Let's just say you wouldn't call yourself addicted to your car because it has a 1% chance of stranding you in the middle of nowhere and a 99% chance of a reward. Where the threshold lies, I'm not sure.
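The schedule being debated above is easy to play with directly. Here is a minimal sketch (my own illustration, not from the thread) that simulates independent trials paying off with probability p, which is the simplest model of the variable-ratio reward the gambling comparison assumes; the function name and the choice of probabilities are hypothetical.

```python
import random

def simulate_schedule(p, trials, seed=0):
    """Simulate a variable-ratio-like reward schedule: each trial
    pays off independently with probability p. Returns the observed
    hit rate and the longest run of consecutive misses."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    dry = longest_dry = 0
    for _ in range(trials):
        if rng.random() < p:
            hits += 1
            dry = 0
        else:
            dry += 1
            longest_dry = max(longest_dry, dry)
    return hits / trials, longest_dry

# Compare a mid-range probability against a near-certain one: the
# mid-range schedule produces unpredictable streaks of misses, the
# pattern the parent comment associates with intermittent reward.
for p in (0.3, 0.5, 0.9):
    rate, streak = simulate_schedule(p, 10_000)
    print(f"p={p}: observed rate={rate:.3f}, longest dry streak={streak}")
```

Even this toy model shows why the car analogy is different in kind: at p = 0.99 the dry streaks all but vanish, so there is no unpredictable payout pattern left to reinforce.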



