
Update version in documents to 20.03.1
dzier committed May 27, 2020
1 parent 221ee61 commit f5be8a7
Showing 5 changed files with 10 additions and 10 deletions.
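The change itself is mechanical: every `r20.03` reference in the docs becomes `r20.03.1`. As a hedged sketch, a bump like this could be scripted with `sed` over the `.rst` files (the `/tmp` working copy and variable names below are illustrative; how this commit was actually produced is not shown):

```shell
# Sketch only: OLD, NEW, and the /tmp working copy are assumptions,
# not part of this commit.
OLD='r20\.03'
NEW='r20.03.1'
mkdir -p /tmp/triton-docs
printf '$ git checkout r20.03\n' > /tmp/triton-docs/build.rst
# Rewrite every occurrence of the old release tag in the .rst files.
find /tmp/triton-docs -name '*.rst' -exec sed -i "s/${OLD}/${NEW}/g" {} +
cat /tmp/triton-docs/build.rst
```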
4 changes: 2 additions & 2 deletions docs/build.rst
@@ -59,7 +59,7 @@ change directory to the root of the repo and checkout the release
version of the branch that you want to build (or the master branch if
you want to build the under-development version)::

- $ git checkout r20.03
+ $ git checkout r20.03.1

Then use docker to build::

@@ -106,7 +106,7 @@ CMake, change directory to the root of the repo and checkout the
release version of the branch that you want to build (or the master
branch if you want to build the under-development version)::

- $ git checkout r20.03
+ $ git checkout r20.03.1

Next you must build or install each framework backend you want to
enable in the inference server, configure the inference server to
4 changes: 2 additions & 2 deletions docs/client.rst
@@ -74,7 +74,7 @@ want to build (or the master branch if you want to build the
under-development version). The branch you use for the client build
should match the version of the inference server you are using::

- $ git checkout r20.03
+ $ git checkout r20.03.1

Then, issue the following command to build the C++ client library and
a Python wheel file for the Python client library::
@@ -112,7 +112,7 @@ of the repo and checkout the release version of the branch that you
want to build (or the master branch if you want to build the
under-development version)::

- $ git checkout r20.03
+ $ git checkout r20.03.1

Ubuntu 16.04 / Ubuntu 18.04
...........................
4 changes: 2 additions & 2 deletions docs/install.rst
@@ -51,7 +51,7 @@ the most recent version of CUDA, Docker, and nvidia-docker.
After performing the above setup, you can pull the Triton Inference
Server container using the following command::

- docker pull nvcr.io/nvidia/tritonserver:20.03-py3
+ docker pull nvcr.io/nvidia/tritonserver:20.03.1-py3

- Replace *20.03* with the version of inference server that you want to
+ Replace *20.03.1* with the version of inference server that you want to
pull.
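The install.rst hunk above pins a new container tag in the pull command. As an aside (the `VERSION` and `IMAGE` variables are mine, not from the NVIDIA docs), deriving the tag from one variable would keep a future bump to a single edit:

```shell
# Illustrative only: VERSION and IMAGE are assumptions, not part of
# the documented install steps.
VERSION='20.03.1'
IMAGE="nvcr.io/nvidia/tritonserver:${VERSION}-py3"
# The pull command from install.rst, with the tag substituted in:
echo "docker pull ${IMAGE}"
```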
4 changes: 2 additions & 2 deletions docs/quickstart.rst
@@ -58,7 +58,7 @@ following prerequisite steps:
sure to select the r<xx.yy> release branch that corresponds to the
version of the server you want to use::

- $ git checkout r20.03
+ $ git checkout r20.03.1

* Create a model repository containing one or more models that you
want the inference server to serve. An example model repository is
@@ -112,7 +112,7 @@ GitHub repo and checkout the release version of the branch that you
want to build (or the master branch if you want to build the
under-development version)::

- $ git checkout r20.03
+ $ git checkout r20.03.1

Then use docker to build::

4 changes: 2 additions & 2 deletions docs/run.rst
@@ -62,7 +62,7 @@ sure to checkout the release version of the branch that corresponds to
the server you are using (or the master branch if you are using a
server build from master)::

- $ git checkout r20.03
+ $ git checkout r20.03.1
$ cd docs/examples
$ ./fetch_models.sh

@@ -103,7 +103,7 @@ you pulled from NGC or built locally::
$ nvidia-docker run --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8000:8000 -p8001:8001 -p8002:8002 -v/path/to/model/repository:/models <tritonserver image name> tritonserver --model-repository=/models

Where *<tritonserver image name>* will be something like
- **nvcr.io/nvidia/tritonserver:20.03-py3** if you :ref:`pulled the
+ **nvcr.io/nvidia/tritonserver:20.03.1-py3** if you :ref:`pulled the
container from the NGC registry
<section-installing-prebuilt-containers>`, or **tritonserver** if
you :ref:`built it from source <section-building>`.
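The run.rst hunk updates the image name inside a long `nvidia-docker run` invocation. A sketch of parameterizing that command so the tag and model-repository path are easy to swap (variable names are assumptions; `/path/to/model/repository` is the placeholder from run.rst):

```shell
# Illustrative wrapper around the run.rst command; not from the docs.
IMAGE='nvcr.io/nvidia/tritonserver:20.03.1-py3'
MODEL_REPO='/path/to/model/repository'  # placeholder path from run.rst
CMD="nvidia-docker run --rm --shm-size=1g --ulimit memlock=-1 \
--ulimit stack=67108864 -p8000:8000 -p8001:8001 -p8002:8002 \
-v${MODEL_REPO}:/models ${IMAGE} tritonserver --model-repository=/models"
# Print rather than execute, since running needs a GPU host with Docker:
echo "$CMD"
```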
