Update readme

WeberJulian committed Dec 13, 2023
1 parent ae5d0a8, commit f5836ec
Showing 1 changed file (README.md) with 16 additions and 7 deletions.

## 1) Run the server

### Use a pre-built image

CUDA 12.1:

```bash
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121
```

CUDA 11.8 (for older cards):
CUDA 11.8 (for older cards):

```bash
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
```

CPU (not recommended):

```bash
$ docker run -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cpu
```
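
Whichever image you start, you can sanity-check that the server is reachable before going further. A minimal probe in Python (the default `-p 8000:80` mapping from the commands above is assumed; any HTTP response, even an error status, means the server process is listening):

```python
import urllib.error
import urllib.request


def server_is_up(base_url: str = "http://localhost:8000", timeout: float = 5.0) -> bool:
    """Return True if something is answering HTTP at base_url."""
    try:
        urllib.request.urlopen(base_url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server answered, just not with a 2xx/3xx status -- it is up.
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused / timed out: nothing is listening there.
        return False


if __name__ == "__main__":
    print("up" if server_is_up() else "not reachable")
```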

Run with a fine-tuned model:

Make sure the model folder `/path/to/model/folder` contains the required model files (for an XTTS fine-tune, typically `config.json`, `model.pth`, and `vocab.json`):
$ docker run -v /path/to/model/folder:/app/tts_models --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
```
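
Before mounting a fine-tuned model, it can save a failed container start to verify the folder from the host first. A small sketch (the file names below are typical of an XTTS fine-tune export and are an assumption here; adjust them to your own model):

```python
from pathlib import Path

# Typical contents of an XTTS fine-tune export -- adjust to your model folder.
EXPECTED_FILES = ("config.json", "model.pth", "vocab.json")


def missing_model_files(folder: str) -> list[str]:
    """Return the expected files that are absent from folder."""
    root = Path(folder)
    return [name for name in EXPECTED_FILES if not (root / name).is_file()]


if __name__ == "__main__":
    missing = missing_model_files("/path/to/model/folder")
    if missing:
        print("missing:", ", ".join(missing))
    else:
        print("model folder looks complete")
```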

Setting the `COQUI_TOS_AGREED` environment variable to `1` indicates that you have read and agreed to
the terms of the [CPML license](https://coqui.ai/cpml). (Fine-tuned XTTS models are also covered by the [CPML license](https://coqui.ai/cpml).)

## Build the image yourself

To build the Docker image yourself, with PyTorch 2.1 and CUDA 11.8:

`DOCKERFILE` may be `Dockerfile`, `Dockerfile.cpu`, `Dockerfile.cuda121`, or your own custom Dockerfile.

```bash
$ git clone [email protected]:coqui-ai/xtts-streaming-server.git
$ cd xtts-streaming-server/server
$ docker build -t xtts-stream . -f DOCKERFILE
$ docker run --gpus all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 xtts-stream
```

Once the Docker container is running, you can check that it is working properly by running the following commands from a fresh terminal.

### Clone `xtts-streaming-server` if you haven't already

```bash
$ git clone [email protected]:coqui-ai/xtts-streaming-server.git
```

### Using the demo

```bash
$ cd xtts-streaming-server
$ python demo.py
```

### Using the test script

```bash
$ cd xtts-streaming-server/test
$ python -m pip install -r requirements.txt
$ python test_streaming.py
```
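
`test_streaming.py` plays audio back as the server produces it. As a rough sketch of what such a streaming client does (the endpoint path and JSON fields below are assumptions, not the server's documented API; check `test_streaming.py` for the real request schema):

```python
import json
import urllib.request


def stream_tts(text: str, base_url: str = "http://localhost:8000"):
    """Yield raw audio chunks from a (hypothetical) streaming TTS endpoint."""
    # "/tts_stream" and the payload fields are placeholders: see
    # test_streaming.py for the server's actual endpoint and schema.
    payload = json.dumps({"text": text, "language": "en"}).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/tts_stream",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        while chunk := resp.read(4096):
            # Chunks arrive while synthesis is still running, which is what
            # makes the first audio playable with low latency.
            yield chunk
```

Because `stream_tts` is a generator, no request is sent until you start iterating over its result.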
