
Commit

Fix typo
nv-hwoo committed Oct 14, 2023
1 parent e61c16e commit f1113b3
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion src/c++/perf_analyzer/docs/llm.md
@@ -133,7 +133,7 @@ python profile.py -m vllm --prompt-size-range 100 500 200 --max-tokens 256 --ign
 >
 > This benchmark relies on the feature that will be available from `23.10` release
 > which is on its way soon. You can either wait until the `23.10` container
-> is ready or build Perf Analyzer from the latest `main` branch (see [build from source instructions](install.md#build-from-source).
+> is ready or build Perf Analyzer from the latest `main` branch (see [build from source instructions](install.md#build-from-source)).
 
 In this benchmarking scenario, we want to measure the effect of continuous
 batch size on token-to-token latency. We systematically issue requests to the
