Hacker News
TeMPOraL · 7 months ago · on: Eleven v3
That's what the "reasoning" models do, effectively. Some LLM services hide or summarize that part for you, others return it verbatim, and of course you get the full thing if you're running a local reasoning model.
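For context, "the full thing" from a local model is typically the raw chain-of-thought inlined in the output. A minimal sketch of separating it from the final answer, assuming the `<think>…</think>` delimiter convention some open reasoning models use (the exact markers are model-specific, so treat this as illustrative):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a local reasoning model's raw output into its chain-of-thought
    and the final answer, assuming a <think>...</think> delimiter convention.
    Returns ("", text) when no reasoning block is present."""
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()  # everything after the closing tag
    return reasoning, answer
```

A hosted API that "hides" reasoning is effectively doing this split server-side and discarding (or summarizing) the first element before the response reaches you.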