---
license: apache-2.0
---

Utilizing [LLMPerf](https://github.com/ray-project/llmperf), we have benchmarked a selection of LLM inference providers.
Our analysis focuses on evaluating their performance, reliability, and efficiency under the following key metrics:

- Output tokens throughput, which represents the average number of output tokens returned per second. This metric is important for applications that require high throughput, such as summarization and translation, and is easy to compare across different models and providers.
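For illustration only — this is not LLMPerf's implementation, and the record fields below are hypothetical — the per-request metrics (output token throughput, time to first token, inter-token latency) can be computed from output-token timestamps roughly like this:

```python
# Hypothetical per-request timing records: request start time and the arrival
# time (in seconds) of each output token. Not LLMPerf's actual schema.
requests = [
    {"start": 0.0, "token_times": [0.3, 0.35, 0.4, 0.45], "num_output_tokens": 4},
    {"start": 0.0, "token_times": [0.5, 0.6, 0.7, 0.8], "num_output_tokens": 4},
]

def ttft(req):
    # Time to first token: first output-token timestamp minus request start.
    return req["token_times"][0] - req["start"]

def output_throughput(req):
    # Output tokens per second over the full generation window.
    duration = req["token_times"][-1] - req["start"]
    return req["num_output_tokens"] / duration

def itl(req):
    # Inter-token latency: mean gap between consecutive output tokens.
    times = req["token_times"]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)

for r in requests:
    print(round(ttft(r), 3), round(output_throughput(r), 3), round(itl(r), 3))
```

Note how a provider that delays the first token (worse TTFT) can report smoother gaps between the remaining tokens (better-looking ITL) — the bias called out in the caveats below.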
For each provider, we perform:

```
...
    --llm-api litellm
```
51 |
|
52 |
+
We ran the LLMPerf clients on an AWS EC2 (Instance type: i4i.large) from an on-premise Kubernetes Bastion host.
|
53 |
+
The results were up-to-date of January 19, 2023, 3pm KST. You could find the detailed results in the [raw_data](raw_data) folder.
|
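A leaderboard-style summary can be derived from such per-request measurements by reporting quantiles rather than means. This is a hypothetical sketch — the field name and file schema are assumed, not the actual format of the raw_data files:

```python
import json
import statistics

# Hypothetical raw_data record: one TTFT measurement (seconds) per request.
raw = json.loads('{"ttft_s": [0.21, 0.25, 0.30, 0.34, 0.40, 0.55, 0.62, 0.80]}')

samples = sorted(raw["ttft_s"])
# statistics.quantiles with n=100 yields the 1st..99th percentile cut points.
pct = statistics.quantiles(samples, n=100, method="inclusive")
summary = {p: round(pct[p - 1], 3) for p in (50, 75, 95)}
print(summary)
```

Reporting p50/p95 rather than a single average makes tail behavior visible, which matters given the load- and time-of-day-dependent variance noted below.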
54 |
|
55 |
#### Caveats and Disclaimers
|
56 |
|
57 |
- The endpoints provider backend might vary widely, so this is not a reflection on how the software runs on a particular hardware.
|
58 |
- The results may vary with time of day.
|
59 |
+
- The results (e.g. measurement of TTFT) depend on client location, and can also be biased by some providers lagging on the first token in order to increase ITL.
|
60 |
- The results is only a proxy of the system capabilities and is also impacted by the existing system load and provider traffic.
|
61 |
+
- The results may not correlate with users’ workloads.
|
|
|
|