Update README.md

README.md

```diff
@@ -30,7 +30,7 @@ A higher output tokens throughput indicates a higher throughput of the LLM infer
 
 test script [token_benchmark_ray.py](https://github.com/ray-project/llmperf/blob/main/token_benchmark_ray.py)
 
-
+
 For each provider, we perform:
 - Total number of requests: 100
 - Concurrency: 1
@@ -38,7 +38,8 @@ For each provider, we perform:
 - Expected output length: 1024
 - Tested models: claude-instant-v1-100k
 
-
+```
+python token_benchmark_ray.py \
 --model bedrock/anthropic.claude-instant-v1 \
 --mean-input-tokens 1024 \
 --stddev-input-tokens 0 \
```