diff --git a/README.md b/README.md
index 44d42a8bb..cb94770cd 100644
--- a/README.md
+++ b/README.md
@@ -64,8 +64,8 @@ llama | [`sharktank/sharktank/models/llama/`](https://github.com/nod-ai/SHA
 
 ## SHARK Users
 
-If you're looking to use SHARK check out our [User Guide](docs/README.md).
+If you're looking to use SHARK check out our [User Guide](docs/user_guide.md).
 
 ## SHARK Developers
 
-If you're looking to develop SHARK, check out our [Developer Guide](docs/developer.md).
\ No newline at end of file
+If you're looking to develop SHARK, check out our [Developer Guide](docs/developer_guide.md).
\ No newline at end of file
diff --git a/docs/developer.md b/docs/developer_guide.md
similarity index 100%
rename from docs/developer.md
rename to docs/developer_guide.md
diff --git a/docs/README.md b/docs/user_guide.md
similarity index 87%
rename from docs/README.md
rename to docs/user_guide.md
index 858780611..bdd95203c 100644
--- a/docs/README.md
+++ b/docs/user_guide.md
@@ -15,7 +15,7 @@ Our current user guide requires that you have:
 
 ## Set up Environment
 
-You will need a recent version of Python. We recommend also setting up a Python environment.
+This section will help you install Python and set up a Python environment with venv.
 
 Officially we support Python versions: 3.11, 3.12, 3.13, 3.13t
 
@@ -33,7 +33,7 @@ which python3.11
 
 ### Create a Python Environment
 
-This guide assumes you'll be using pyenv. Setup your pyenv with the following commands:
+Setup your Python environment with the following commands:
 
 ```bash
 # Set up a virtual environment to isolate packages from other envs.
@@ -44,10 +44,7 @@ source 3.11.venv/bin/activate
 ## Install SHARK and its dependencies
 
 ```bash
-pip install transformers
-pip install dataclasses-json
-pip install pillow
-pip install shark-ai
+pip install shark-ai[apps]
 ```
 
 Temporarily, you may need an update to your `shortfin` install.
@@ -64,11 +61,11 @@ python -m shortfin_apps.sd.server --help
 
 ## Quickstart
 
-### Run the SD Server
+### Run the SDXL Server
 
-Run the [SD Server](../shortfin/python/shortfin_apps/sd/README.md#Start SD Server)
+Run the [SDXL Server](../shortfin/python/shortfin_apps/sd/README.md#Start-SDXL-Server)
 
-### Run the SD Client
+### Run the SDXL Client
 
 ```
 python -m shortfin_apps.sd.simple_client --interactive
diff --git a/shortfin/python/shortfin_apps/sd/README.md b/shortfin/python/shortfin_apps/sd/README.md
index 5ade5d27e..3397be6cf 100644
--- a/shortfin/python/shortfin_apps/sd/README.md
+++ b/shortfin/python/shortfin_apps/sd/README.md
@@ -1,14 +1,13 @@
-# SD Server and CLI
-
-This directory contains a SD inference server, CLI and support components.
+# SDXL Server and CLI
+This directory contains a [SDXL](https://stablediffusionxl.com/) inference server, CLI and support components. More information about SDXL on [huggingface](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).
 
 ## Install
 
 For [nightly releases](../../../../docs/nightly_releases.md)
 
-For our [stable release](../../../../docs/README.md)
+For our [stable release](../../../../docs/user_guide.md)
 
-## Start SD Servier
+## Start SDXL Server
 
 The server will prepare runtime artifacts for you.
 By default, the port is set to 8000. If you would like to change this, use `--port` in each of the following commands.
@@ -23,7 +22,7 @@ python -m shortfin_apps.sd.server --device=amdgpu --device_ids=0 --build_prefere
 INFO - Application startup complete.
 INFO - Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
 ```
-## Run the SD Client
+## Run the SDXL Client
 
 - Run a CLI client in a separate shell:
 ```
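Taken together, the hunks above reduce the user-facing workflow to roughly the following. This is a minimal sketch assembled from the commands visible in the diff; the venv creation line is an assumption (only `source 3.11.venv/bin/activate` appears in the context), and the full server launch flags live in the SDXL README.

```bash
# Create and activate an isolated environment (Python 3.11 per the guide).
# Assumption: the venv is created roughly like this; only the activate line
# is visible in the diff context.
python3.11 -m venv 3.11.venv
source 3.11.venv/bin/activate

# Install shark-ai with the apps extra, as the updated guide instructs
# (replaces the old per-package transformers/dataclasses-json/pillow installs).
pip install shark-ai[apps]

# Confirm the shortfin SDXL app is available.
python -m shortfin_apps.sd.server --help

# With the server running (see the SDXL README hunk above for the launch
# command and device flags), run the interactive CLI client in a separate shell.
python -m shortfin_apps.sd.simple_client --interactive
```

Collapsing the four `pip install` lines into the single `shark-ai[apps]` extra presumably pulls in the app-level dependencies (transformers, dataclasses-json, pillow) through the package metadata rather than leaving them to the user.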