This is a repository template that sets up Flowise with llama.cpp. The idea is that you develop LLM logic in Flowise, then build an app that connects to the generated API.
Create a repository from this template, then clone with submodules:
git clone --recurse-submodules -j8 https://github.com/you/your_app.git
Create a .env file, then start Docker:
cp .env.example .env
docker compose up
This will download the models listed in models.txt, then spin up Flowise and a llama.cpp inference server. Connect to localhost:3369, then watch these videos to learn how to use it.
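Once you have saved a flow, Flowise exposes it as a prediction endpoint that your app can POST questions to. A minimal sketch of such a client, assuming the chatflow ID is the hypothetical placeholder below (copy yours from the Flowise UI) and that Flowise is reachable on localhost:3369 as configured here:

```python
import json
import urllib.request

# Hypothetical chatflow ID -- replace with the ID shown in the Flowise UI.
CHATFLOW_ID = "your-chatflow-id"
API_URL = f"http://localhost:3369/api/v1/prediction/{CHATFLOW_ID}"

def ask(question: str) -> str:
    """POST a question to the Flowise prediction endpoint and return the answer text."""
    payload = json.dumps({"question": question}).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    # Flowise typically returns the generated text under the "text" key.
    return body.get("text", "")
```

For example, `ask("Summarize this repo")` would return whatever the flow behind that chatflow ID generates.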