Commit f5836ec (1 parent: ae5d0a8). Showing 1 changed file with 16 additions and 7 deletions.
@@ -2,12 +2,12 @@
## 1) Run the server
### Use a pre-built image
CUDA 12.1:
```bash
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121
```
CUDA 11.8 (for older cards):
@@ -16,6 +16,12 @@
```bash
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
```
CPU (not recommended):
```bash
$ docker run -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cpu
```
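Whichever variant you run, the model can take a while to load before the port starts accepting connections. A small poll loop avoids firing requests too early (a minimal sketch; `wait_for_server` is a name invented here, and `localhost:8000` assumes the default port mapping from the commands above):

```python
import socket
import time

def wait_for_server(host="localhost", port=8000, timeout=60.0):
    """Poll until a TCP connection to the server succeeds, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the container is listening.
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)  # server still starting; retry
    return False
```

Call `wait_for_server()` before running any of the test clients described later.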
Run with a fine-tuned model:

Make sure the model folder `/path/to/model/folder` contains the following files:
@@ -27,14 +33,18 @@
```bash
$ docker run -v /path/to/model/folder:/app/tts_models --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
```
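A missing file in the mounted folder usually surfaces only as a failed container start, so it can help to verify the folder contents before running. A minimal sketch (`check_model_folder` is a name invented here; pass it the file names listed above):

```python
from pathlib import Path

def check_model_folder(folder, required_files):
    """Return the required file names that are missing from the model folder."""
    root = Path(folder)
    return [name for name in required_files if not (root / name).is_file()]
```

If the returned list is non-empty, fix the folder before starting the container.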
Setting the `COQUI_TOS_AGREED` environment variable to `1` indicates you have read and agreed to
the terms of the [CPML license](https://coqui.ai/cpml). (Fine-tuned XTTS models are also under the [CPML license](https://coqui.ai/cpml).)
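The same run configuration can also be kept in a Compose file instead of a long `docker run` line. This is a sketch, not part of the project: the service name is illustrative, it assumes the pre-built CUDA 12.1 image from above, and the GPU reservation syntax requires a recent Docker Compose:

```yaml
services:
  xtts:
    image: ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121
    environment:
      - COQUI_TOS_AGREED=1   # you have read and agreed to the CPML license
    ports:
      - "8000:80"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```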
## Build the image yourself
To build the Docker image with PyTorch 2.1 and CUDA 11.8:

`DOCKERFILE` may be `Dockerfile`, `Dockerfile.cpu`, `Dockerfile.cuda121`, or your own custom Dockerfile.
```bash
$ git clone [email protected]:coqui-ai/xtts-streaming-server.git
$ cd xtts-streaming-server/server
$ docker build -t xtts-stream . -f DOCKERFILE
$ docker run --gpus all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 xtts-stream
```
@@ -46,7 +56,7 @@

Once your Docker container is running, you can test that it's working properly. You will need to run the following code from a fresh terminal.

### Clone `xtts-streaming-server` if you haven't already
```bash
$ git clone [email protected]:coqui-ai/xtts-streaming-server.git
```
@@ -63,8 +73,7 @@

### Using the test script
```bash
$ cd xtts-streaming-server/test
$ python -m pip install -r requirements.txt
$ python test_streaming.py
```
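If the test script fails, it is worth ruling out the simplest cause first: whether the server is answering HTTP at all. A lightweight smoke check (a sketch; `server_is_up` is a name invented here, and `/docs` assumes the server exposes FastAPI's standard interactive docs page, which is not confirmed by this README):

```python
import urllib.request
from urllib.error import HTTPError

def server_is_up(base_url="http://localhost:8000"):
    """Return True if an HTTP server answers at base_url at all."""
    try:
        with urllib.request.urlopen(base_url + "/docs", timeout=5):
            return True
    except HTTPError:
        return True   # server responded, even if with an HTTP error status
    except OSError:
        return False  # connection refused or timed out
```

A `False` here points at the container or port mapping rather than at the streaming code.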