From 5eae393fd12ea7d0f30b7b8eb823b97a68e2af69 Mon Sep 17 00:00:00 2001
From: Mirko Lenz
Date: Tue, 9 Apr 2024 13:55:37 +0200
Subject: [PATCH] docs: update ollama instructions

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 68eac20..44a4bed 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@ Nix wrapper for running LLMs behind an OpenAI-compatible API proxy.
 ## Ollama Usage
 
 ```shell
-# make sure to pull the models first
-nix run github:recap-utr/nixllm#ollama -- pull MODEL_NAME
 CUDA_VISIBLE_DEVICES=0 OLLAMA_HOST=0.0.0.0:6060 nix run github:recap-utr/nixllm#ollama -- serve
+# then, in another terminal, pull the models before performing API requests
+OLLAMA_HOST=0.0.0.0:6060 nix run github:recap-utr/nixllm#ollama -- pull MODEL_NAME
 ```
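
After the updated instructions, an API request against the proxy might look like the sketch below. This is a minimal example, assuming Ollama's OpenAI-compatible `/v1/chat/completions` route is reachable on the host and port configured above, and with `MODEL_NAME` standing in for a model that has already been pulled:

```shell
# hypothetical request once `serve` is running and MODEL_NAME has been pulled;
# the endpoint path assumes Ollama's OpenAI-compatible API
curl http://localhost:6060/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MODEL_NAME",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```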