From f5836eca66f5c6185beae9be3b5b0f21c35c16f5 Mon Sep 17 00:00:00 2001
From: WeberJulian
Date: Wed, 13 Dec 2023 12:07:26 +0100
Subject: [PATCH] Update readme

---
 README.md | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 48313ae..b525352 100644
--- a/README.md
+++ b/README.md
@@ -2,12 +2,12 @@
 
 ## 1) Run the server
 
-### Recommended: use a pre-built container
+### Use a pre-built image
 
 CUDA 12.1:
 
 ```bash
-$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121
+$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121
 ```
 
 CUDA 11.8 (for older cards):
@@ -16,6 +16,12 @@ CUDA 11.8 (for older cards):
 $ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
 ```
 
+CPU (not recommended):
+
+```bash
+$ docker run -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cpu
+```
+
 Run with a fine-tuned model:
 
 Make sure the model folder `/path/to/model/folder` contains the following files:
@@ -27,14 +33,18 @@ Make sure the model folder `/path/to/model/folder` contains the following files
 $ docker run -v /path/to/model/folder:/app/tts_models --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
 ```
 
-## Not Recommended: Build the container yourself
+Setting the `COQUI_TOS_AGREED` environment variable to `1` indicates you have read and agreed to
+the terms of the [CPML license](https://coqui.ai/cpml). (Fine-tuned XTTS models are also under the [CPML license](https://coqui.ai/cpml).)
+
+## Build the image yourself
 
 To build the Docker image with PyTorch 2.1 and CUDA 11.8:
 `DOCKERFILE` may be `Dockerfile`, `Dockerfile.cpu`, `Dockerfile.cuda121`, or your own custom Dockerfile.
 
 ```bash
-$ cd server
+$ git clone git@github.com:coqui-ai/xtts-streaming-server.git
+$ cd xtts-streaming-server/server
 $ docker build -t xtts-stream . -f DOCKERFILE
 $ docker run --gpus all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 xtts-stream
 ```
@@ -46,7 +56,7 @@ the terms of the [CPML license](https://coqui.ai/cpml). (Fine-tuned XTTS models
 
 Once your Docker container is running, you can test that it's working properly. You will need to run the following code from a fresh terminal.
 
-### Clone `xtts-streaming-server`
+### Clone `xtts-streaming-server` if you haven't already
 
 ```bash
 $ git clone git@github.com:coqui-ai/xtts-streaming-server.git
@@ -63,8 +73,7 @@ $ python demo.py
 ### Using the test script
 
 ```bash
-$ cd xtts-streaming-server
-$ cd test
+$ cd xtts-streaming-server/test
 $ python -m pip install -r requirements.txt
 $ python test_streaming.py
 ```
\ No newline at end of file
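
---

Editor's note on the testing section this patch touches: `test_streaming.py` drives the server's streaming API over HTTP. The sketch below shows the general shape of such a client in plain Python using only the standard library. It is not taken from this patch: the `/tts_stream` path, the payload fields, and the output filename are assumptions for illustration; the running server's own API docs are the authority on the real endpoint names and request schema.

```python
import json
import urllib.request


def save_stream(chunks, out_path):
    """Write an iterable of audio byte chunks to a file; return bytes written."""
    total = 0
    with open(out_path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)
            total += len(chunk)
    return total


def stream_tts(base_url, payload, out_path="out.wav"):
    """POST a synthesis request and stream the audio response to disk.

    NOTE: "/tts_stream" and the payload schema are hypothetical here;
    check the server's API documentation for the actual interface.
    """
    req = urllib.request.Request(
        f"{base_url}/tts_stream",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Read the body incrementally instead of buffering it whole,
        # which is the point of a streaming endpoint.
        return save_stream(iter(lambda: resp.read(4096), b""), out_path)
```

`save_stream` is split out from the network call so the chunk-handling logic can be exercised without a running server.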