
I run the Docker container locally. As far as I can tell, it doesn't call home or anything (from reading the source and from opensnitch). It is just a cgo-wrapped llama.cpp that provides an HTTP API. It CAN fetch models from their library, but you can just as easily load in your own GGUF-formatted llama models. They implement a Docker-like layers mechanism for model configuration that is pretty useful.
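A minimal sketch of what that layered model configuration looks like, assuming the tool in question is Ollama: the model name, the GGUF path, and the parameter values below are illustrative, not taken from the comment.

```shell
# Wrap a local GGUF file as a named model via a Modelfile,
# whose directives stack like Dockerfile layers.
cat > Modelfile <<'EOF'
FROM ./my-model.Q4_K_M.gguf
PARAMETER temperature 0.7
EOF

# Build and run the model (requires the server running locally):
# ollama create my-model -f Modelfile
# ollama run my-model "hello"
#
# Or hit the HTTP API directly:
# curl http://localhost:11434/api/generate \
#   -d '{"model": "my-model", "prompt": "hello", "stream": false}'
```

Swapping the `FROM` line for a library model name instead of a local path is what triggers the fetch-from-their-library behavior; pointing it at a local `.gguf` keeps everything offline.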

