Siri LLama

Siri LLama is an Apple Shortcut that accesses locally running LLMs through Siri or the Shortcuts UI on any Apple device connected to the same network as your host machine. It uses Langchain 🦜🔗 and supports open source models from both Ollama 🦙 and Fireworks AI 🎆

Download Shortcut from HERE

🟣 Simple Chat Video 🎬

🟣 Multimodal Video 🎬

🟣 RAG Video 🎬

Getting Started

Requirements

pip install -r requirements.txt

Ollama Installation🦙

  1. Install Ollama for your machine; you have to run ollama serve in the terminal to start the server

  2. Pull the models you want to use, for example

ollama run llama3 # chat model
ollama run llava # multimodal
  3. In config.py, set OLLAMA_CHAT, OLLAMA_VISUAL_CHAT, and OLLAMA_EMBEDDINGS_MODEL to the models you pulled from Ollama, as in the sketch below
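
For example, config.py might look like this. The first two names match the models pulled above; the embeddings model name is an assumption (any Ollama embeddings model works, pull it the same way):

# config.py — a minimal sketch of the Ollama settings
OLLAMA_CHAT = "llama3"                        # chat model pulled above
OLLAMA_VISUAL_CHAT = "llava"                  # multimodal model pulled above
OLLAMA_EMBEDDINGS_MODEL = "nomic-embed-text"  # assumption: any Ollama embeddings model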

Fireworks AI Installation🎆

  1. Get your Fireworks API key and put it in fireworks_models.py

  2. In config.py, set FIREWORKS_CHAT, FIREWORKS_VISUAL_CHAT, and FIREWORKS_EMBEDDINGS_MODEL to the models you want to use from Fireworks AI, and set your FIREWORKS_API_KEY (see the sketch below)
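
A sketch of what those settings might look like; the model IDs below are assumptions, so check the Fireworks AI model catalog for the exact names you want:

# config.py — illustrative Fireworks settings; model IDs are assumptions
FIREWORKS_CHAT = "accounts/fireworks/models/llama-v3-8b-instruct"
FIREWORKS_VISUAL_CHAT = "accounts/fireworks/models/firellava-13b"
FIREWORKS_EMBEDDINGS_MODEL = "nomic-ai/nomic-embed-text-v1.5"
FIREWORKS_API_KEY = "your-fireworks-api-key"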

Config

In config.py, set MEMORY_SIZE (how many previous messages to remember) and ANSWER_SIZE_WORDS (how many words to generate in the answer)
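
The exact values are up to you; these are illustrative:

# config.py — illustrative values
MEMORY_SIZE = 5          # remember the last 5 messages
ANSWER_SIZE_WORDS = 100  # aim for answers of about 100 words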

Running SiriLlama 🟣🦙

  1. Download or clone the repo

  2. set the provider (Ollama / Fireworks) in app.py

  3. Run the flask app using

>>> python3 app.py
  4. On your Apple device, download the shortcut from here. Note that you must run the shortcut through Siri to "talk" to it; otherwise it will prompt you to type text.

  5. Run the shortcut through Siri or the Shortcuts UI. The first time you run the shortcut, you will be asked to enter your IP address and the port number shown in the terminal

>>> python app.py
...
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5001
 * Running on http://192.168.1.134:5001
Press CTRL+C to quit

In the example above, the IP address is 192.168.1.134 and the port is 5001 (the default port is set by Flask; change the line in app.py if needed)
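
If 5001 is taken, the usual Flask pattern is a one-line change. This is a sketch assuming SiriLLama starts the server the standard way, not a copy of the repo's actual code:

# app.py — typical Flask startup (an assumption about this repo's layout)
from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)  # change the port here if needed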

  6. If you are using Siri to interact with the shortcut, saying "Good Bye" will stop Siri.

Common Issues 🐞

  • Even though we access the Flask app (not the Ollama server directly), some Windows users who have Ollama installed under WSL have to make sure ollama serve is exposed to the network (typically by setting OLLAMA_HOST=0.0.0.0 before starting it). Check this issue for more details
  • When running the shortcut for the first time from Siri, it should ask for permission to send data to the Flask server. If it doesn't work (especially on iOS 17.4), first try running the shortcut + sending a message from the iOS Shortcuts app to trigger the permissions dialog, then try running it through Siri again.

Other LLM Providers 🤖🤖

SiriLLama should in principle work with any LLM provider, including OpenAI, Claude, etc., but first make sure you have installed the corresponding Langchain packages and set the models in config.py
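
For example, wiring in OpenAI through Langchain would look roughly like this; the class and package come from langchain-openai, and how the chat model is plugged into SiriLLama's chain is an assumption, not code from this repo:

# a sketch, assuming SiriLLama's chain accepts any Langchain chat model
from langchain_openai import ChatOpenAI  # pip install langchain-openai

chat_model = ChatOpenAI(model="gpt-4o-mini", api_key="your-openai-key")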

SiriLLama on public network 🌎

  • Running SiriLLama outside your local network is possible with a tool called ngrok, which exposes one or more ports on your local machine. Step-by-step tutorial:
    1. Start the ngrok tunnel from cmd/terminal with the following command:
ngrok http localhost:5001
    2. It will give you an https link, something like https://xyzz-xxx-xxx-xxx-xxx.ngrok-free.app
    3. In the shortcut you downloaded earlier, insert the link from ngrok without https:// and leave the port number field empty
    4. Now you should be able to run SiriLLama from outside your network. (If you are unable to get a valid response or something else went wrong, try pasting the ngrok link into Safari and allowing the connection within the browser)

Good to know 💡💡

  • Using the multimodal feature is only possible with images that aren't in HEIF format. You can change this in your camera settings (it won't affect your existing photos): under Formats, choose Most Compatible and you are good to go.
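
Alternatively, existing HEIF photos can be converted before sending them. A sketch using the pillow-heif package (not part of this repo; pip install pillow pillow-heif):

# convert a HEIF/HEIC image to JPEG so the multimodal model can read it
from PIL import Image
from pillow_heif import register_heif_opener

register_heif_opener()  # lets Pillow open .heic files
Image.open("photo.heic").convert("RGB").save("photo.jpg")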
