That's what the "reasoning" models do, effectively. Some LLM services hide or summarize that part for you, others return it verbatim, and of course you get the full thing if you're running a local reasoning model.
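
As a minimal sketch of the "full thing locally" case: assuming Ollama is running on its default port with a model like deepseek-r1, which wraps its chain of thought in <think>...</think> tags, you can split the verbatim reasoning trace from the final answer yourself. The prompt and tag format here are assumptions tied to that model family; other local models may format the trace differently.

    import re
    import requests

    # Query a local reasoning model through Ollama's HTTP API
    # (assumes Ollama at the default localhost:11434).
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "deepseek-r1", "prompt": "Why is the sky blue?", "stream": False},
    )
    text = resp.json()["response"]

    # DeepSeek-R1-style models emit the reasoning verbatim inside <think> tags;
    # pull it out and keep the rest as the final answer.
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

    print("--- reasoning ---")
    print(reasoning)
    print("--- answer ---")
    print(answer)

Hosted APIs that return the trace typically expose it as a separate field instead of inline tags, which is why some services can summarize or drop it before it reaches you.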

