Add SHARK user guide to root of docs directory #528

Merged (4 commits) on Nov 15, 2024
65 changes: 4 additions & 61 deletions README.md
@@ -61,68 +61,11 @@ Model name | Model recipes | Serving apps
SDXL | [`sharktank/sharktank/models/punet/`](https://github.com/nod-ai/SHARK-Platform/tree/main/sharktank/sharktank/models/punet) | [`shortfin/python/shortfin_apps/sd/`](https://github.com/nod-ai/SHARK-Platform/tree/main/shortfin/python/shortfin_apps/sd)
llama | [`sharktank/sharktank/models/llama/`](https://github.com/nod-ai/SHARK-Platform/tree/main/sharktank/sharktank/models/llama) | [`shortfin/python/shortfin_apps/llm/`](https://github.com/nod-ai/SHARK-Platform/tree/main/shortfin/python/shortfin_apps/llm)

## Development tips

Each sub-project has its own developer guide. If you would like to work across
projects, these instructions should help you get started:
## SHARK Users

### Setup a venv
If you're looking to use SHARK, check out our [User Guide](docs/user_guide.md).

We recommend setting up a Python
[virtual environment (venv)](https://docs.python.org/3/library/venv.html).
The project is configured to ignore `.venv` directories, and editors like
VSCode pick them up by default.
## SHARK Developers

```bash
python -m venv .venv
source .venv/bin/activate
```

### Install PyTorch for your system

If no explicit action is taken, the default PyTorch version will be installed.
This will give you a current CUDA-based version, which takes longer to download
and includes other dependencies that SHARK does not require. To install a
different variant, run one of these commands first:

* *CPU:*

```bash
pip install -r pytorch-cpu-requirements.txt
```

* *ROCm:*

```bash
pip install -r pytorch-rocm-requirements.txt
```

* *Other:* see instructions at <https://pytorch.org/get-started/locally/>.

### Install development packages

```bash
# Install editable local projects.
pip install -r requirements.txt -e sharktank/ shortfin/

# Optionally clone and install the latest editable iree-turbine dep in deps/,
# along with nightly versions of iree-base-compiler and iree-base-runtime.
pip install -f https://iree.dev/pip-release-links.html --upgrade --pre \
iree-base-compiler iree-base-runtime --src deps \
-e "git+https://github.com/iree-org/iree-turbine.git#egg=iree-turbine"
```

See also: [`docs/nightly_releases.md`](./docs/nightly_releases.md).

### Running tests

```bash
pytest sharktank
pytest shortfin
```

### Optional: pre-commits and developer settings

This project is set up to use the `pre-commit` tooling. To install it in
your local repo, run: `pre-commit install`. After this point, when making
commits locally, hooks will run. See https://pre-commit.com/
If you're looking to develop SHARK, check out our [Developer Guide](docs/developer_guide.md).
65 changes: 65 additions & 0 deletions docs/developer_guide.md
@@ -0,0 +1,65 @@
# SHARK Developer Guide

Each sub-project has its own developer guide. If you would like to work across
projects, these instructions should help you get started:

### Setup a venv

We recommend setting up a Python
[virtual environment (venv)](https://docs.python.org/3/library/venv.html).
The project is configured to ignore `.venv` directories, and editors like
VSCode pick them up by default.

```bash
python -m venv .venv
source .venv/bin/activate
```

### Install PyTorch for your system

If no explicit action is taken, the default PyTorch version will be installed.
This will give you a current CUDA-based version, which takes longer to download
and includes other dependencies that SHARK does not require. To install a
different variant, run one of these commands first:

* *CPU:*

```bash
pip install -r pytorch-cpu-requirements.txt
```

* *ROCm:*

```bash
pip install -r pytorch-rocm-requirements.txt
```

* *Other:* see instructions at <https://pytorch.org/get-started/locally/>.

### Install development packages

```bash
# Install editable local projects.
pip install -r requirements.txt -e sharktank/ shortfin/

# Optionally clone and install the latest editable iree-turbine dep in deps/,
# along with nightly versions of iree-base-compiler and iree-base-runtime.
pip install -f https://iree.dev/pip-release-links.html --upgrade --pre \
iree-base-compiler iree-base-runtime --src deps \
-e "git+https://github.com/iree-org/iree-turbine.git#egg=iree-turbine"
```

See also: [nightly_releases.md](nightly_releases.md).

### Running tests

```bash
pytest sharktank
pytest shortfin
```
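
If you only need part of a suite, pytest's standard selection flags work here as well (the keyword below is just an illustration):

```bash
# Run one sub-project's tests with verbose output.
pytest sharktank -v

# Select a subset of tests by keyword expression (the pattern is illustrative).
pytest sharktank -k "llama"
```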

### Optional: pre-commits and developer settings

This project is set up to use the `pre-commit` tooling. To install it in
your local repo, run: `pre-commit install`. After this point, when making
commits locally, hooks will run. See https://pre-commit.com/
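
A typical flow looks like this (running against all files is optional, but handy on a fresh checkout):

```bash
# Install the git hooks once per clone.
pre-commit install

# Optionally run every configured hook over the whole tree.
pre-commit run --all-files
```
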
115 changes: 115 additions & 0 deletions docs/user_guide.md
@@ -0,0 +1,115 @@
# SHARK User Guide

> [!WARNING]
> This is still pre-release, so the artifacts listed here may be broken.

These instructions cover the latest stable release of SHARK. For a more bleeding-edge build, please install the [nightly releases](nightly_releases.md).

## Prerequisites

Our current user guide requires that you have:
- Access to a computer with an installed AMD Instinct™ MI300x Series Accelerator
- Installed a compatible version of Linux and ROCm on the computer (see the [ROCm compatibility matrix](https://rocm.docs.amd.com/en/latest/compatibility/compatibility-matrix.html))
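
As a quick sanity check that ROCm can see the accelerator (assuming ROCm is already installed), you can run:

```bash
# List the GPU agents ROCm detects; an MI300x shows up as a gfx942 device.
rocminfo | grep -i gfx

# Show accelerator status and driver information.
rocm-smi
```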


## Set up Environment

This section will help you install Python and set up a Python environment with venv.

Officially we support Python versions: 3.11, 3.12, 3.13, 3.13t

The rest of this guide assumes you are using Python 3.11.

### Install Python

To install Python 3.11 on Ubuntu:

```bash
sudo apt install python3.11 python3.11-dev python3.11-venv

which python3.11
# /usr/bin/python3.11
```

### Create a Python Environment

Setup your Python environment with the following commands:

```bash
# Set up a virtual environment to isolate packages from other envs.
python3.11 -m venv 3.11.venv
source 3.11.venv/bin/activate
```

## Install SHARK and its dependencies

```bash
pip install shark-ai[apps]
```

Temporarily, you may need an update to your `shortfin` install.
Install the latest pre-release with:
```bash
pip install shortfin --upgrade --pre -f https://github.com/nod-ai/SHARK-Platform/releases/expanded_assets/dev-wheels
```
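
To confirm which `shortfin` version ended up in your environment, you can ask pip:

```bash
pip show shortfin
```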

### Test the installation

```bash
python -m shortfin_apps.sd.server --help
```

## Quickstart

### Run the SDXL Server

Run the [SDXL Server](../shortfin/python/shortfin_apps/sd/README.md#Start-SDXL-Server)

### Run the SDXL Client

```bash
python -m shortfin_apps.sd.simple_client --interactive
```

Congratulations! At this point you can experiment with the server and client to fit your use case.

### Update flags

Please see `--help` for both the server and client for full usage instructions. Here's a quick snapshot.

#### Update server options:

| Flags | Options |
|---|---|
| --host HOST | |
| --port PORT | Server port |
| --root-path ROOT_PATH | |
| --timeout-keep-alive | |
| --device | local-task, hip, amdgpu (only amdgpu is supported in this release) |
| --target | gfx942, gfx1100 (only gfx942 is supported in this release) |
| --device_ids | |
| --tokenizers | |
| --model_config | |
| --workers_per_device | |
| --fibers_per_device | |
| --isolation | per_fiber, per_call, none |
| --show_progress | |
| --trace_execution | |
| --amdgpu_async_allocations | |
| --splat | |
| --build_preference | compile, precompiled |
| --compile_flags | |
| --flagfile FLAGFILE | |
| --artifacts_dir ARTIFACTS_DIR | Where to store cached artifacts from the Cloud |
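
For example, a server launch on a single MI300x might look like the following (flag values are illustrative; see the table above for what this release supports):

```bash
python -m shortfin_apps.sd.server \
  --device=amdgpu \
  --target=gfx942 \
  --device_ids=0 \
  --build_preference=precompiled \
  --port=8000
```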

#### Update client with different options:

| Flags | Options |
|---|---|
| --file | |
| --reps | |
| --save | Whether to save images generated by the server |
| --outputdir | Output directory to store images generated by SDXL |
| --steps | |
| --interactive | |
| --port | Port used to interact with the server |
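
For example, a non-interactive run that saves its generated images might look like this (flag values are illustrative):

```bash
python -m shortfin_apps.sd.simple_client \
  --save \
  --outputdir=gen_imgs \
  --steps=20 \
  --reps=2 \
  --port=8000
```
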
31 changes: 8 additions & 23 deletions shortfin/python/shortfin_apps/sd/README.md
@@ -1,30 +1,13 @@
# SD Server and CLI
# SDXL Server and CLI

This directory contains a SD inference server, CLI and support components.
This directory contains an [SDXL](https://stablediffusionxl.com/) inference server, CLI, and support components. More information about SDXL is available on [Hugging Face](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).

## Install

## Quick start
For [nightly releases](../../../../docs/nightly_releases.md)
For our [stable release](../../../../docs/user_guide.md)

In your shortfin environment,
```
pip install transformers
pip install dataclasses-json
pip install pillow
pip install shark-ai

```

Temporarily, you may need an update to your `shortfin` install.
Install the latest pre-release with:
```
pip install shortfin --upgrade --pre -f https://github.com/nod-ai/SHARK-Platform/releases/expanded_assets/dev-wheels
```

```
python -m shortfin_apps.sd.server --help
```

# Run on MI300x
## Start SDXL Server
The server will prepare runtime artifacts for you.

By default, the port is set to 8000. If you would like to change this, use `--port` in each of the following commands.
@@ -39,6 +22,8 @@ python -m shortfin_apps.sd.server --device=amdgpu --device_ids=0 --build_prefere
INFO - Application startup complete.
INFO - Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```
## Run the SDXL Client

- Run a CLI client in a separate shell:
```
python -m shortfin_apps.sd.simple_client --interactive