Update README.md
README.md
CHANGED
@@ -25,7 +25,7 @@ A higher output tokens throughput indicates a higher throughput of the LLM infer
 
 #### Run Configurations
 
-testscript[token_benchmark_ray.py](https://github.com/ray-project/llmperf/blob/main/token_benchmark_ray.py)
+testscript [token_benchmark_ray.py](https://github.com/ray-project/llmperf/blob/main/token_benchmark_ray.py)
 
 ```
 For each provider, we perform:
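The linked script is the benchmark driver from ray-project/llmperf. As a rough sketch of how such a run configuration might look when filled in: the flag names below follow the upstream llmperf README, while the model name, token counts, request counts, and environment variables are placeholder values that depend on the provider being tested.

```bash
# Hypothetical run configuration; flag names per the upstream llmperf README,
# all values are placeholders for the provider under test.
export OPENAI_API_KEY="<your key>"
export OPENAI_API_BASE="<provider endpoint>"

python token_benchmark_ray.py \
  --model "meta-llama/Llama-2-7b-chat-hf" \
  --mean-input-tokens 550 \
  --stddev-input-tokens 150 \
  --mean-output-tokens 150 \
  --stddev-output-tokens 10 \
  --max-num-completed-requests 150 \
  --num-concurrent-requests 1 \
  --timeout 600 \
  --llm-api openai \
  --results-dir "result_outputs"
```

The per-run metrics, including output tokens throughput, are then written under the given results directory, which is where the throughput figures referenced above would be collected from.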