Siri LLama is an Apple Shortcut that accesses locally running LLMs through Siri or the Shortcuts UI on any Apple device connected to the same network as your host machine. It uses Langchain 🦜🔗 and supports open-source models from both Ollama 🦙 and Fireworks AI 🎆.
Download Shortcut from HERE
- Install the requirements with `pip install -r requirements.txt`.
- Install Ollama for your machine; you have to run `ollama serve` in the terminal to start the server.
- Pull the models you want to use, for example:

  ```
  ollama run llama3 # chat model
  ollama run llava  # multimodal
  ```

- In `config.py`, set `OLLAMA_CHAT`, `OLLAMA_VISUAL_CHAT`, and `OLLAMA_EMBEDDINGS_MODEL` to the models you pulled from Ollama (see the sketch below).
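For reference, here is a minimal sketch of what those Ollama entries in `config.py` might look like. The variable names come from the steps above, but the model tags (including the embeddings model) are only placeholders for whatever you actually pulled:

```python
# config.py -- illustrative sketch, not the project's canonical defaults
OLLAMA_CHAT = "llama3"                        # chat model pulled with `ollama run llama3`
OLLAMA_VISUAL_CHAT = "llava"                  # multimodal model pulled with `ollama run llava`
OLLAMA_EMBEDDINGS_MODEL = "nomic-embed-text"  # placeholder: any embedding model served by Ollama
```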
- Get your Fireworks API key and put it in `fireworks_models.py`.
- In `config.py`, set `FIREWORKS_CHAT`, `FIREWORKS_VISUAL_CHAT`, and `FIREWORKS_EMBEDDINGS_MODEL` to the models you want to use from Fireworks AI, and set your `FIREWORKS_API_KEY`.
- In `config.py`, set `MEMORY_SIZE` (how many previous messages to remember) and `ANSWER_SIZE_WORDS` (how many words to generate in the answer); see the sketch below.
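As with the Ollama settings, a minimal, illustrative sketch of the Fireworks-related entries in `config.py` follows. The model identifiers, key value, and numbers are placeholders, not the project's definitive configuration:

```python
# config.py -- illustrative sketch; replace with the Fireworks models and key you actually use
FIREWORKS_API_KEY = "fw-..."  # placeholder; keep your real key out of version control
FIREWORKS_CHAT = "accounts/fireworks/models/llama-v3-8b-instruct"  # example chat model id
FIREWORKS_VISUAL_CHAT = "accounts/fireworks/models/firellava-13b"  # example multimodal model id
FIREWORKS_EMBEDDINGS_MODEL = "nomic-ai/nomic-embed-text-v1.5"      # example embeddings model id

MEMORY_SIZE = 5          # how many previous messages to remember
ANSWER_SIZE_WORDS = 100  # roughly how many words to generate in each answer
```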
- Download or clone the repo.
- Set the provider (Ollama / Fireworks) in `app.py`.
- Run the Flask app with `python3 app.py`.
- On your Apple device, download the shortcut from here. Note that you must run the shortcut through Siri to "talk" to it; otherwise it will prompt you to type text.
- Run the shortcut through Siri or the Shortcuts UI. The first time you run the shortcut, you will be asked to enter the IP address and port number shown in the terminal:
```
>>> python app.py
...
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5001
 * Running on http://192.168.1.134:5001
Press CTRL+C to quit
```
In the example above, the IP address is `192.168.1.134` and the port is `5001` (the default port is specified by Flask; change the line in `main.py` if needed).
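If the shortcut can't connect, you can sanity-check that the Flask server is reachable from another machine on the same network. The sketch below only confirms reachability (the actual route the shortcut posts to is defined in `app.py`), and the address is a placeholder for whatever your own terminal printed:

```python
# reachability check (sketch) -- run from another machine on the same network
import requests

HOST = "http://192.168.1.134:5001"  # use the IP and port printed by `python3 app.py`

try:
    # Any HTTP response (even a 404) means the Flask server is reachable;
    # a timeout or connection error points to a network or firewall problem.
    resp = requests.get(HOST, timeout=5)
    print("Server reachable, status:", resp.status_code)
except requests.exceptions.RequestException as err:
    print("Could not reach the server:", err)
```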
- If you are using Siri to interact with the shortcut, saying "Good Bye" will stop Siri.
- Even though we access the Flask app (not the Ollama server directly), some Windows users who have Ollama installed under WSL have to make sure the Ollama server is exposed to the network. Check this issue for more details.
- When running the shortcut for the first time from Siri, it should ask for permission to send data to the Flask server. If it doesn't work (especially on iOS 17.4), first try running the shortcut and sending a message from the iOS Shortcuts app to trigger the permissions dialog, then try running it through Siri again.
SiriLLama should, in principle, work with any LLM provider, including OpenAI, Claude, etc., but make sure you first install the corresponding Langchain packages and set the models in `config.py`.
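As an example, a minimal sketch of creating an OpenAI chat model through Langchain is shown below. `langchain-openai` and `ChatOpenAI` are standard Langchain components, but how SiriLLama wires such a model into its chains is defined in the repo, so treat this as an illustration rather than a supported configuration:

```python
# illustrative sketch -- requires `pip install langchain-openai` and an OPENAI_API_KEY env var
from langchain_openai import ChatOpenAI

# A Langchain chat model that could stand in for the Ollama/Fireworks ones.
chat_model = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

print(chat_model.invoke("Say hello in five words.").content)
```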
- Running SiriLLama from outside your local network is possible with a tool called ngrok, which exposes one or more ports on your local machine. Step-by-step tutorial:
- Start ngrok from cmd/terminal with `ngrok http localhost:5001`.
- It will give you an https link, something like https://xyzz-xxx-xxx-xxx-xxx.ngrok-free.app
- In the shortcut you downloaded earlier, insert the link from ngrok without https:// and leave the port number field empty.
- Now you should be able to run SiriLLama from outside your network. (If you can't get a valid response or something goes wrong, try pasting the ngrok link into Safari and allowing the connection within the browser; a quick tunnel check is also sketched below.)
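If you want to verify the tunnel independently of the shortcut, a small sketch along the lines of the reachability check above follows. The URL is a placeholder from your own ngrok output, and the `ngrok-skip-browser-warning` header only matters on ngrok's free tier, which otherwise serves a warning page to unrecognized clients:

```python
# tunnel check (sketch) -- replace the placeholder URL with the one ngrok printed
import requests

NGROK_URL = "https://xyzz-xxx-xxx-xxx-xxx.ngrok-free.app"

resp = requests.get(
    NGROK_URL,
    headers={"ngrok-skip-browser-warning": "1"},  # skip the free-tier warning page
    timeout=10,
)
print("Tunnel reachable, status:", resp.status_code)
```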
- Using the multimodal feature is only possible with images that aren't in HEIF format. You can change this in your camera settings (it won't affect your existing photos): under Formats, choose Most Compatible and you are good to go.
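If you want to send existing HEIF photos anyway, one option outside SiriLLama itself is converting them to JPEG first. The sketch below uses the third-party `pillow-heif` and `Pillow` packages, which are assumptions here and not dependencies of this project:

```python
# HEIF -> JPEG conversion (sketch) -- requires `pip install pillow pillow-heif`
from pillow_heif import register_heif_opener
from PIL import Image

register_heif_opener()  # lets Pillow open .heic/.heif files

img = Image.open("photo.heic")                # placeholder path to a HEIF photo
img.convert("RGB").save("photo.jpg", "JPEG")  # save in a format the multimodal model accepts
```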