Hacker News

I haven't read it, but Blindsight did a very good job of explaining LLMs. https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)

I am saying that LLMs DON'T process information, at least not in the way people are implying.

The best I can suggest is to please try building with an LLM. It's very hard to explain the nuance, and I am not the only one struggling with it.



I understand the nuance, I think you're just wrong. LLMs augmented with memory are Turing complete [1], so they do process information. This distinction you're trying to draw is a mirage.

[1] https://pub.towardsai.net/llms-and-memory-is-definitely-all-...
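The Turing-completeness argument above can be sketched concretely: an LLM that can read and write external memory only needs to reliably reproduce a transition table to simulate a Turing machine. In the minimal sketch below, the "LLM" is stubbed by a hypothetical deterministic lookup (`llm_step`, standing in for one model call), and the external memory plays the role of the tape; the machine shown just inverts a binary string and halts.

```python
# Sketch of the Turing-completeness claim: a model that maps
# (state, current symbol) -> (write, move, next state) plus external
# read/write memory can simulate any Turing machine.
from collections import defaultdict

# Transition table for a tiny TM that inverts a binary string, then halts.
# In the argument, a trained LLM would play the role of this table.
RULES = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),  # "_" is the blank symbol
}

def llm_step(state, symbol):
    """Stand-in for one model call: choose an action from the context."""
    return RULES[(state, symbol)]

def run(tape_str, max_steps=1000):
    # The external memory: an unbounded tape, blank outside the input.
    tape = defaultdict(lambda: "_", enumerate(tape_str))
    head, state = 0, "scan"
    for _ in range(max_steps):
        if state == "halt":
            break
        new_symbol, move, state = llm_step(state, tape[head])
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in range(len(tape_str)))

print(run("0110"))  # -> 1001
```

The point of the sketch is that the model itself can stay a fixed, finite function; unboundedness comes entirely from the memory it is allowed to read and write, which is why the memory augmentation matters for the claim.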


Augmented with memory and perhaps a bit more: recursive control of their own attentional mechanisms.


You gotta read it!



