# llmperf-bedrock
---
license: apache-2.0
---

Using LLMPerf, we benchmarked a selection of LLM inference providers. Our analysis evaluates their performance, reliability, and efficiency under the following key metrics:

- **Output token throughput**, the average number of output tokens returned per second. This metric matters for high-throughput applications such as summarization and translation, and is easy to compare across models and providers.
- **Time to first token (TTFT)**, the time between sending a request and receiving the first output token. TTFT is especially important for streaming applications, such as chatbots.
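
Both metrics can be derived from a single streamed completion. The sketch below shows the idea; `stream_tokens` is a hypothetical stand-in for a provider's streaming API, not part of LLMPerf.

```python
# Sketch: deriving TTFT and output token throughput from one streamed request.
import time

def stream_tokens():
    # Hypothetical stand-in; a real client would yield tokens as they arrive.
    for tok in ["Hello", ",", " world", "!"]:
        time.sleep(0.01)
        yield tok

def measure_request(stream):
    start = time.monotonic()
    ttft = None
    n_tokens = 0
    for _ in stream:
        if ttft is None:
            # Time to first token: delay until the first token arrives.
            ttft = time.monotonic() - start
        n_tokens += 1
    total = time.monotonic() - start
    # Output token throughput: tokens returned per second over the request.
    throughput = n_tokens / total
    return ttft, throughput

ttft, tps = measure_request(stream_tokens())
```

Repeating this over many requests yields the per-request samples that the summary tables below aggregate.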

## Time to First Token (seconds)

For streaming applications, the TTFT is how long the user waits before the LLM returns the first token.

| Framework | Model | Median | Mean | Min | Max | P25 | P75 | P95 | P99 |
|-----------|-------|--------|------|-----|-----|-----|-----|-----|-----|
| bedrock | claude-instant-v1 | 1.21 | 1.29 | 1.12 | 2.19 | 1.17 | 1.27 | 1.89 | 2.17 |

## Output Tokens Throughput (tokens/s)

The output token throughput is measured as the average number of output tokens returned per second. We collect results by sending 100 requests to each LLM inference provider and calculate the mean output token throughput across those 100 requests. Higher values indicate a faster LLM inference provider.

| Framework | Model | Median | Mean | Min | Max | P25 | P75 | P95 | P99 |
|-----------|-------|--------|------|-----|-----|-----|-----|-----|-----|
| bedrock | claude-instant-v1 | 65.64 | 65.98 | 16.05 | 110.38 | 57.29 | 75.57 | 99.73 | 106.42 |
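
The summary rows above can be reproduced from the per-request samples with the standard library alone. A minimal sketch (the `samples` list here is synthetic, not the benchmark's raw data):

```python
# Sketch: computing the median/mean/min/max and percentile columns
# from a list of per-request measurements (e.g. TTFT in seconds).
import statistics

def summarize(samples):
    # quantiles(n=100) returns the 1st..99th percentile cut points.
    q = statistics.quantiles(samples, n=100)
    return {
        "median": statistics.median(samples),
        "mean": statistics.mean(samples),
        "min": min(samples),
        "max": max(samples),
        "p25": q[24], "p75": q[74], "p95": q[94], "p99": q[98],
    }

row = summarize([1.1, 1.2, 1.2, 1.3, 1.5, 2.0] * 20)  # 120 synthetic samples
```

The tail percentiles (P95, P99) are worth watching alongside the mean: a provider with a good average but a long tail will still feel slow to a noticeable fraction of users.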

## Run Configurations

Test script: `token_benchmark_ray.py`

For each provider, we use:
- Total number of requests:     100
- Concurrency:                  1
- Prompt token length:          1024
- Expected output length:       1024
- Tested model:                 claude-instant-v1-100k

```shell
python token_benchmark_ray.py \
    --model bedrock/anthropic.claude-instant-v1 \
    --mean-input-tokens 1024 \
    --stddev-input-tokens 0 \
    --mean-output-tokens 1024 \
    --stddev-output-tokens 100 \
    --max-num-completed-requests 100 \
    --num-concurrent-requests 1 \
    --llm-api litellm
```
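
One way to read the `--mean-output-tokens` / `--stddev-output-tokens` pair: each request targets an output length drawn around the mean. The sketch below illustrates that interpretation with a normal distribution clamped to positive integers; the exact sampling inside `token_benchmark_ray.py` may differ.

```python
# Sketch (assumption): per-request token budgets drawn from a normal
# distribution, as suggested by the mean/stddev flags above.
import random

def sample_output_length(mean=1024, stddev=100, rng=random.Random(0)):
    # Redraw until we get a positive integer token budget.
    while True:
        n = int(rng.gauss(mean, stddev))
        if n > 0:
            return n

lengths = [sample_output_length() for _ in range(100)]
```

With `--stddev-input-tokens 0`, the prompt length is fixed at exactly 1024 tokens while the output length varies per request.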

We ran the LLMPerf clients from an on-premise Kubernetes bastion host. The results are current as of January 19, 2023, 3pm KST. You can find the detailed results in the `raw_data` folder.

## Caveats and Disclaimers

- The endpoint providers' backends may vary widely, so this is not a reflection of how the software runs on any particular hardware.
- The results may vary with the time of day.
- The results (e.g., TTFT measurements) depend on client location and can also be biased by some providers lagging on the first token in order to increase ITL.
- The results are only a proxy for system capabilities and are also impacted by existing system load and provider traffic.
- The results may not correlate with users' workloads.