
Ollama is really easy.

brew install ollama

brew services start ollama

ollama pull mistral

You can query Ollama over HTTP. It provides a consistent interface for prompting, regardless of model.

https://github.com/ollama/ollama/blob/main/docs/api.md#reque...
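For example, a minimal request against the local API looks something like this (a sketch, assuming the default port 11434 and that you've pulled mistral as above):

  curl http://localhost:11434/api/generate -d '{
    "model": "mistral",
    "prompt": "Why is the sky blue?",
    "stream": false
  }'

Swap the "model" field for anything else you've pulled and the same request shape keeps working.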



For Windows users without brew, there's a Windows installer:

https://ollama.com/download/windows

WinGet and Scoop apparently also have it. Chocolatey doesn't seem to.


Works very well in WSL2 too. I prefer that, since you have to start the server manually, so it doesn't just sit in the background.
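Starting it by hand looks roughly like this (a sketch, assuming the ollama binary is installed inside the WSL2 distro):

  # terminal 1: run the server in the foreground
  ollama serve

  # terminal 2: chat with a pulled model
  ollama run mistral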


Unfortunately, it lacks batching support (n > 1), which is necessary for Loom-like applications.



How does Ollama distribute these models? The downloads page on the Llama website, https://llama.meta.com/llama-downloads/, has a heading that reads:

> Request access to Llama

Which to me gives the impression that access is gated and by application only. But Ollama downloaded it without so much as a "y".

Is that just Meta's website UI? Registration isn't actually required?


It doesn't distribute the models itself. The app pulls from Hugging Face.


Hugging Face's Llama 2 page says, "You need to share contact information with Meta to access this model":

https://huggingface.co/meta-llama/Llama-2-7b

Ollama's installer didn't ask me for any contact info.


Hugging Face has many, many fine-tunes of Llama from different people, which don't ask for that.

The requirement has always been more of a fig leaf than anything else. It allows Facebook to say everyone who downloaded from them agreed to some legal/ethics nonsense. Like all click-through license agreements nobody reads it; it's there so Facebook can pretend to be shocked when misuse of the model comes to light.


I mean Llama itself, not a fine-tune. Ollama seemed to download and run Llama 2 with no agreement.


Because this field is a baby, noncompliance is rampant.


This repo doesn't seem to say anything about what it does with the data you pass it. There's no privacy policy or anything? It being open source doesn't necessarily mean it isn't passing all my data somewhere. I didn't see anything in the repo stating that everything definitely stays local.


I would be much more concerned if it had a privacy policy, to the point where just having one means I probably wouldn't use it. That is not common practice for Free Software that runs on your machine. The only network operations Ollama performs are for managing models (i.e., downloading Mistral from their server).


> I didn't see anything in the repo stating that everything definitely stays local.

If this is your concern, I'd encourage you to read the code yourself. If you find it meets the bar you're expecting, then I'd suggest you submit a PR which updates the README to answer your question.


I run the Docker container locally. As far as I can tell, it doesn't call home or anything (from reading the source and from watching it with OpenSnitch). It's just a cgo-wrapped llama.cpp that provides an HTTP API. It CAN fetch models from their library, but you can just as easily load your own GGUF-formatted Llama models. They implement a Docker-like layers mechanism for model configuration that is pretty useful.
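Roughly what that looks like in practice (the docker run line is the commonly documented one; the GGUF path and model name are made up for illustration, and exact behavior with a containerized server vs. the native CLI may differ):

  # run the server in a container, persisting models in a named volume
  docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

  # describe your own GGUF model with a Docker-like Modelfile
  printf 'FROM ./my-local-model.gguf\nPARAMETER temperature 0.7\n' > Modelfile

  # register it with the server listening on localhost, then run it
  ollama create my-local-model -f Modelfile
  ollama run my-local-model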


It is free, so why not just audit the source personally to put your mind at ease, and build locally if that's a major concern of yours?


And since you did the audit yourself, you could help out the maintainers by adding a privacy policy.


A privacy policy isn't typical for code that you build and run yourself on your own machine; it's something that services publish to explain what they will do with the data you send them.


This.

Some will do it anyway for PR/marketing, but if the creator does not interact with, have access to, or collect your data, they have no obligation to have a privacy policy.


I associate brew with macOS (where a 3090 would not venture).

But it seems there's a Linux brew as well.



