
Almost?

I've been running a programming LLM locally with a 200k context length, using system RAM.
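A minimal sketch of what a launch like that might look like, assuming llama.cpp's `llama-server` as the runtime; the model filename is a placeholder, and the flags assume a model whose architecture actually supports a 200k window:

```shell
# Hypothetical invocation, not the commenter's exact setup.
# -c sets the context window size in tokens,
# -ngl 0 keeps every layer on the CPU so weights live in system RAM,
# --mlock pins the weights in RAM so they aren't swapped out.
llama-server -m ./codellama-34b.Q4_K_M.gguf -c 200000 -ngl 0 --mlock
```

The trade-off is speed: keeping everything in system RAM avoids VRAM limits but runs far slower than a GPU-resident model.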

It's also an abliterated model, so I get none of the moralizing or forced ethics either. I ask, and it answers.

I even have it hooked up to my HomeAssistant, and can trigger complex actions from there.
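One plausible way to wire that up, sketched under the assumption that the LLM emits tool calls which get translated into Home Assistant's documented REST API (`POST /api/services/<domain>/<service>` with a long-lived access token). The host, token, and entity below are placeholders, not details from the thread:

```python
# Sketch: turning an LLM tool call into a Home Assistant service call.
HA_URL = "http://homeassistant.local:8123"  # placeholder host


def build_service_call(domain: str, service: str, entity_id: str, token: str):
    """Build the URL, headers, and JSON body for a Home Assistant
    /api/services/<domain>/<service> POST request."""
    url = f"{HA_URL}/api/services/{domain}/{service}"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = {"entity_id": entity_id}
    return url, headers, body


# A tool-call handler would then send this with e.g.
#   requests.post(url, headers=headers, json=body)
url, headers, body = build_service_call("light", "turn_on", "light.office", "TOKEN")
print(url)
```

Chaining several such calls is how a single LLM response can trigger a "complex action" spanning multiple entities.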



What model are you using and what kind of hardware are you running it on?




