
I feel this article should be paired with this other one [1] that was on the frontpage a few days ago.

My impression is that there are currently two tendencies: one to "over-anthropomorphize" LLMs and treat them like conscious or even superhuman entities (encouraged by AI tech leaders and AGI/Singularity folks), and another to oversimplify them and view them as literal Markov chains that just got lots of training data.

Maybe those articles could help guard against both extremes.

[1] https://www.verysane.ai/p/do-we-understand-how-neural-networ...



Previously when someone called out the tendency to over-anthropomorphize LLMs, a lot of the answers amounted to, “but I like doing it, therefore we should!”

I’ll be the first to say one should pick their battles. But hearing that over and over from a crowd like this, which can be quite pedantic, is very telling.


This very comment thread demonstrates how utterly hopeless it is to try to educate the believers. It has developed into a full-blown religion by now.



