From f5be8a7f03310e9d2f884d0a716c4b2046d2a56f Mon Sep 17 00:00:00 2001
From: dzier
Date: Wed, 27 May 2020 16:40:06 -0700
Subject: [PATCH] Update version in documents to 20.03.1

---
 docs/build.rst      | 4 ++--
 docs/client.rst     | 4 ++--
 docs/install.rst    | 4 ++--
 docs/quickstart.rst | 4 ++--
 docs/run.rst        | 4 ++--
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/docs/build.rst b/docs/build.rst
index 6b2a7aa0e6..c17b0f652e 100644
--- a/docs/build.rst
+++ b/docs/build.rst
@@ -59,7 +59,7 @@ change directory to the root of the repo and checkout the release
 version of the branch that you want to build (or the master branch if
 you want to build the under-development version)::

-  $ git checkout r20.03
+  $ git checkout r20.03.1

 Then use docker to build::

@@ -106,7 +106,7 @@ CMake, change directory to the root of the repo and checkout the release
 version of the branch that you want to build (or the master branch if
 you want to build the under-development version)::

-  $ git checkout r20.03
+  $ git checkout r20.03.1

 Next you must build or install each framework backend you want to
 enable in the inference server, configure the inference server to
diff --git a/docs/client.rst b/docs/client.rst
index 5e3b80c652..dc576bb4d9 100644
--- a/docs/client.rst
+++ b/docs/client.rst
@@ -74,7 +74,7 @@ want to build (or the master branch if you want to build the
 under-development version). The branch you use for the client build
 should match the version of the inference server you are using::

-  $ git checkout r20.03
+  $ git checkout r20.03.1

 Then, issue the following command to build the C++ client library and
 a Python wheel file for the Python client library::
@@ -112,7 +112,7 @@ of the repo and checkout the release version of the branch that you
 want to build (or the master branch if you want to build the
 under-development version)::

-  $ git checkout r20.03
+  $ git checkout r20.03.1

 Ubuntu 16.04 / Ubuntu 18.04
 ...........................
diff --git a/docs/install.rst b/docs/install.rst
index 99669ea9a3..7ebec4c3fe 100644
--- a/docs/install.rst
+++ b/docs/install.rst
@@ -51,7 +51,7 @@ the most recent version of CUDA, Docker, and nvidia-docker. After
 performing the above setup, you can pull the Triton Inference Server
 container using the following command::

-  docker pull nvcr.io/nvidia/tritonserver:20.03-py3
+  docker pull nvcr.io/nvidia/tritonserver:20.03.1-py3

-Replace *20.03* with the version of inference server that you want to
+Replace *20.03.1* with the version of inference server that you want to
 pull.
diff --git a/docs/quickstart.rst b/docs/quickstart.rst
index 4c443e71cd..dd19adb0bf 100644
--- a/docs/quickstart.rst
+++ b/docs/quickstart.rst
@@ -58,7 +58,7 @@ following prerequisite steps:
   sure to select the r<xx.yy> release branch that corresponds to the
   version of the server you want to use::

-  $ git checkout r20.03
+  $ git checkout r20.03.1

 * Create a model repository containing one or more models that you
   want the inference server to serve. An example model repository is
@@ -112,7 +112,7 @@ GitHub repo and checkout the release version of the branch that you
 want to build (or the master branch if you want to build the
 under-development version)::

-  $ git checkout r20.03
+  $ git checkout r20.03.1

 Then use docker to build::

diff --git a/docs/run.rst b/docs/run.rst
index 2336ea3dcb..734db80cf0 100644
--- a/docs/run.rst
+++ b/docs/run.rst
@@ -62,7 +62,7 @@ sure to checkout the release version of the branch that corresponds
 to the server you are using (or the master branch if you are using a
 server build from master)::

-  $ git checkout r20.03
+  $ git checkout r20.03.1
   $ cd docs/examples
   $ ./fetch_models.sh

@@ -103,7 +103,7 @@ you pulled from NGC or built locally::

   $ nvidia-docker run --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8000:8000 -p8001:8001 -p8002:8002 -v/path/to/model/repository:/models <tritonserver image name> tritonserver --model-repository=/models

 Where *<tritonserver image name>* will be something like
-**nvcr.io/nvidia/tritonserver:20.03-py3** if you :ref:`pulled the
+**nvcr.io/nvidia/tritonserver:20.03.1-py3** if you :ref:`pulled the
 container from the NGC registry <section-installing-prebuilt-containers>`, or
 **tritonserver** if you :ref:`built it from source <section-building>`.
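
A quick way to sanity-check a bump like this is to grep the docs for any stale
version strings and then confirm that the new references actually resolve. A
minimal sketch, assuming the r20.03.1 release branch and the 20.03.1-py3 NGC
image have been published (the grep pattern is illustrative, not from the
patch)::

  # List any leftover r20.03 / 20.03-py3 references in the docs;
  # r20\.03([^.]|$) avoids matching the new r20.03.1 string.
  $ git grep -nE 'r20\.03([^.]|$)|20\.03-py3' docs/

  # Confirm the bumped branch and container tag exist
  $ git checkout r20.03.1
  $ docker pull nvcr.io/nvidia/tritonserver:20.03.1-py3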