Hugging Face has many, many fine-tunes of Llama from different people, which don't ask for that.
The requirement has always been more of a fig leaf than anything else. It allows Facebook to say that everyone who downloaded from them agreed to some legal/ethics nonsense. Like all click-through license agreements, nobody reads it; it's there so Facebook can pretend to be shocked when misuse of the model comes to light.
This repo doesn't seem to say anything about what it does with the data you pass it. There's no privacy policy or anything. It being open source doesn't necessarily mean it isn't sending all my data somewhere, and I didn't see anything in the repo stating that everything definitely stays local.
I would be much more concerned if it had a privacy policy, to the point where just having one means I probably wouldn't use it. That is not common practice for Free Software that runs on your machine. The only network operations Ollama performs are for managing LLMs (i.e., downloading Mistral from their server).
> I didn't see anything in the repo stating that everything definitely stays local.
If this is your concern, I'd encourage you to read the code yourself. If you find it meets the bar you're expecting, then I'd suggest submitting a PR that updates the README to answer your question.
I run the Docker container locally. As far as I can tell, it doesn't call home or anything (from reading the source and from watching it with OpenSnitch). It is just a cgo-wrapped llama.cpp that provides an HTTP API. It CAN fetch models from their library, but you can just as easily load your own GGUF-formatted Llama models. They implement a Docker-like layers mechanism for model configuration that is pretty useful.
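To give a concrete idea of that layers mechanism: you describe a model in a Modelfile, where each directive becomes a layer, much like a Dockerfile. A minimal sketch (the GGUF path, parameter value, and model name below are just placeholders):

# Modelfile
FROM ./my-model.gguf
PARAMETER temperature 0.7
SYSTEM You are a helpful assistant.

Then build and run it with:

ollama create my-model -f Modelfile
ollama run my-model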
A privacy policy isn't typical for code that you build and run yourself on your own machine; it's something that services publish to explain what they will do with the data you send them.
Some will do it anyway for PR/marketing, but if the creator does not interact with, have access to, or collect your data, they have no obligation to publish a privacy policy.
brew install ollama
brew services start ollama
ollama pull mistral
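Once the pull finishes, you can sanity-check it from the terminal (the prompt is just an example):

ollama run mistral "Why is the sky blue?"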
You can query Ollama via HTTP. It provides a consistent interface for prompting, regardless of the model.
https://github.com/ollama/ollama/blob/main/docs/api.md#reque...
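For example, a generation request looks roughly like this (Ollama listens on port 11434 by default; the model name and prompt are placeholders):

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

Setting "stream" to false asks for one JSON response at the end instead of the default token-by-token streaming.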