
I want that for privacy reasons and for resource reasons.

And running this on a small hardware device shouldn't add any meaningful latency.



Privacy isn't a concern when everything is local


Yes it is.

Malware, bugs, etc. can happen.

And I might not want to have to disable it for every guest, either.


If the AI is local, it doesn't need to be on an internet connected device. At that point, malware and bugs in that stack don't add extra privacy risks* — but malware and bugs in all your other devices with microphones etc. remain a risk, even if the LLM is absolutely perfect by whatever standard that means for you.

* unless you put the AI on a robot body, but that's then your own new and exciting problem.


There is no privacy difference between a local LLM listening versus a local wake word model listening.
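To make that comparison concrete, here is a minimal sketch (all names and components hypothetical, no real library assumed) of the gated pipeline both setups use: a tiny always-on wake-word model hears every audio frame, and the larger local model only ever sees audio captured after the wake word fires. In either case the same microphone loop is always listening; swapping the small gate model for a full LLM doesn't change what audio is exposed, only which model processes it.

```python
# Hypothetical sketch of a local wake-word-gated assistant pipeline.
# No network I/O anywhere: both the gate and the "LLM" run on-device.

from dataclasses import dataclass, field

@dataclass
class Frame:
    """One chunk of microphone audio (placeholder for real samples)."""
    samples: tuple = ()
    contains_wake_word: bool = False  # stand-in for acoustic detection

def wake_word_detector(frame: Frame) -> bool:
    """Tiny always-on model: sees every frame, emits only yes/no."""
    return frame.contains_wake_word

def local_llm(frames: list) -> str:
    """Larger model: only runs on audio captured after the gate fires."""
    return f"handled utterance of {len(frames)} frames locally"

def pipeline(stream: list, utterance_len: int = 3) -> list:
    """Feed a stream of frames through the gate; buffer and hand off
    post-wake-word audio to the local LLM, then re-arm the gate."""
    responses, buffer, armed = [], [], True
    for frame in stream:
        if armed:
            if wake_word_detector(frame):  # LLM never sees this frame
                armed = False
        else:
            buffer.append(frame)
            if len(buffer) == utterance_len:
                responses.append(local_llm(buffer))
                buffer, armed = [], True
    return responses
```

The point of the sketch: replacing `wake_word_detector` with a model that is itself an LLM changes the compute cost of the always-on stage, not its privacy surface, since both variants listen to exactly the same frames.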



