
I'm actually in the first camp. I believe our brains are really LLMs on steroids, and the logic rules are just part of our "prompt".

What we need is an LLM that iterates over its own output until it feels correct. Right now LLM output is like a random thought in my mind, which might be true or not. Before writing a forum post I'd think it over twice, and maybe I'd rewrite it before submitting. When I'm solving a complex problem, it can take weeks and thousands of iterations. Even reading a math proof can take a lot of effort. LLMs should learn to do the same. I think that's the key to imitating human intelligence.
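Roughly, the loop I have in mind looks like this (a minimal Python sketch; `generate` and `critique` are hypothetical stand-ins for real model calls, not any actual API):

    # Sketch of a draft -> critique -> revise loop.
    # Both model calls below are hypothetical stubs, not a real API.

    def generate(prompt: str) -> str:
        """Hypothetical model call: produce a draft answer."""
        return f"draft answer to: {prompt}"

    def critique(draft: str) -> tuple[bool, str]:
        """Hypothetical self-check: does the draft feel correct?"""
        return True, "no issues found"

    def refine(prompt: str, max_iters: int = 10) -> str:
        draft = generate(prompt)
        for _ in range(max_iters):
            ok, feedback = critique(draft)
            if ok:  # the model "feels" the answer is correct
                return draft
            # Fold the critique back into the prompt and try again.
            draft = generate(prompt + "\nPrevious draft: " + draft
                             + "\nProblems to fix: " + feedback)
        return draft

The point is that the stopping condition lives inside the model, the way it does for us, rather than being a fixed number of samples.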


