Model | #Model Parameters (B) | Draft (Assistant) | #Draft Parameters (B) | Task | Total Parameter Size (B) | Speculative Average time per input (ms) | Speculative Average time per token (ms) | Original Average time per input (ms) | Original Average time per token (ms) | Speedup | Command |
---|---|---|---|---|---|---|---|---|---|---|---|
meta-llama/Llama-2-7b-hf | 7 | TinyLlama/TinyLlama_v1.1 | 1 | summarization | 8 | 2,771.54 | 21.65 | 3,368.48 | 26.32 | 1.22 | python benchmark_decoder_summ.py meta-llama/Llama-2-7b-hf --aux-model TinyLlama/TinyLlama_v1.1 --dtype fp16 |
meta-llama/Llama-2-7b-hf | 7 | apple/OpenELM-270M | 0.27 | summarization | 7.27 | 2,607.82 | 20.37 | 4,221.14 | 32.98 | 1.62 | python benchmark_decoder_summ.py meta-llama/Llama-2-7b-hf --aux-model apple/OpenELM-270M --dtype fp16 |
meta-llama/Llama-2-7b-hf | 7 | apple/OpenELM-450M | 0.45 | summarization | 7.45 | 3,324.68 | 25.97 | 4,178.66 | 32.65 | 1.26 | python benchmark_decoder_summ.py meta-llama/Llama-2-7b-hf --aux-model apple/OpenELM-450M --dtype fp16 |
facebook/layerskip-llama2-7B | 7 | Early Exit @ Layer 4 | null | summarization | 7 | 2,548.4 | 19.91 | 3,306.73 | 25.83 | 1.297338 | python benchmark_decoder_summ.py facebook/layerskip-llama2-7B --aux-early-exit 4 --dtype fp16 |
meta-llama/Llama-2-13b-hf | 13 | meta-llama/Llama-2-7b-hf | 7 | summarization | 20 | 3,557.07 | 27.79 | 4,088.48 | 31.94 | 1.149334 | python benchmark_decoder_summ.py meta-llama/Llama-2-13b-hf --aux-model meta-llama/Llama-2-7b-hf --dtype fp16 |
meta-llama/Llama-2-13b-hf | 13 | TinyLlama/TinyLlama_v1.1 | 1 | summarization | 14 | 2,901.92 | 22.67 | 4,190.42 | 32.74 | 1.444199 | python benchmark_decoder_summ.py meta-llama/Llama-2-13b-hf --aux-model TinyLlama/TinyLlama_v1.1 --dtype fp16 |
meta-llama/Llama-2-13b-hf | 13 | apple/OpenELM-270M | 0.27 | summarization | 13.27 | 2,883.33 | 22.53 | 4,521.12 | 35.32 | 1.567688 | python benchmark_decoder_summ.py meta-llama/Llama-2-13b-hf --aux-model apple/OpenELM-270M --dtype fp16 |
meta-llama/Llama-2-13b-hf | 13 | apple/OpenELM-450M | 0.45 | summarization | 13.45 | 3,267.69 | 25.53 | 4,321.75 | 33.76 | 1.322366 | python benchmark_decoder_summ.py meta-llama/Llama-2-13b-hf --aux-model apple/OpenELM-450M --dtype fp16 |
facebook/layerskip-llama2-13B | 13 | Early Exit @ Layer 4 | null | summarization | 13 | 4,238.45 | 33.11 | 4,217.78 | 32.95 | 0.995168 | python benchmark_decoder_summ.py facebook/layerskip-llama2-13B --aux-early-exit 4 --dtype fp16 |
facebook/layerskip-llama2-13B | 13 | Early Exit @ Layer 8 | null | summarization | 13 | 2,459.61 | 19.22 | 4,294.98 | 33.55 | 1.745578 | python benchmark_decoder_summ.py facebook/layerskip-llama2-13B --aux-early-exit 8 --dtype fp16 |
facebook/layerskip-llama3.2-1B | 1 | Early Exit @ Layer 4 | null | summarization | 1 | 1,195.28 | 9.96 | 2,147.7 | 17.9 | 1.8 | python benchmark_decoder_summ.py facebook/layerskip-llama3.2-1B --aux-early-exit 4 --dtype fp16 |
meta-llama/Meta-Llama-3-8B | 8 | meta-llama/Llama-3.2-1B | 1 | summarization | 9 | 1,872.46 | 19.04 | 2,859.35 | 29.08 | 1.53 | python benchmark_decoder_summ.py meta-llama/Meta-Llama-3-8B --aux-model meta-llama/Llama-3.2-1B --dtype fp16 |
meta-llama/Meta-Llama-3-8B | 8 | meta-llama/Llama-3.2-3B | 3 | summarization | 11 | 2,814.82 | 28.63 | 2,825.36 | 28.73 | 1 | python benchmark_decoder_summ.py meta-llama/Meta-Llama-3-8B --aux-model meta-llama/Llama-3.2-3B --dtype fp16 |
facebook/layerskip-llama3-8B | 8 | Early Exit @ Layer 4 | null | summarization | 8 | 1,949.02 | 15.75 | 3,571.81 | 28.87 | 1.83 | python benchmark_decoder_summ.py facebook/layerskip-llama3-8B --aux-early-exit 4 --dtype fp16 |
meta-llama/Llama-2-70b-hf | 70 | meta-llama/Llama-2-13b-hf | 13 | summarization | 83 | 5,036.54 | 46.3 | 12,289.01 | 112.97 | 2.439957 | python benchmark_decoder_summ.py meta-llama/Llama-2-70b-hf --aux-model meta-llama/Llama-2-13b-hf --dtype fp16 |
meta-llama/Llama-2-70b-hf | 70 | meta-llama/Llama-2-7b-hf | 7 | summarization | 77 | 4,357.55 | 40.06 | 12,324.19 | 113.3 | 2.828258 | python benchmark_decoder_summ.py meta-llama/Llama-2-70b-hf --aux-model meta-llama/Llama-2-7b-hf --dtype fp16 |
meta-llama/Llama-2-70b-hf | 70 | TinyLlama/TinyLlama_v1.1 | 1 | summarization | 71 | 4,356.21 | 40.05 | 12,363.22 | 113.66 | 2.837953 | python benchmark_decoder_summ.py meta-llama/Llama-2-70b-hf --aux-model TinyLlama/TinyLlama_v1.1 --dtype fp16 |
facebook/layerskip-llama2-70B | 70 | Early Exit @ Layer 10 | null | summarization | 70 | 6,012.04 | 54.96 | 12,383.34 | 113.2 | 2.06 | python benchmark_decoder_summ.py facebook/layerskip-llama2-70B --aux-early-exit 10 --dtype fp16 |
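The commands in the last column all exercise the transformers assisted-generation API. As a rough illustration (not the `benchmark_decoder_summ.py` script itself), the sketch below shows the two setups measured above: standard speculative decoding with a separate draft model, and LayerSkip self-speculative decoding via an early-exit layer. The `assistant_early_exit` argument is assumed to be available in your transformers release, and the model choices are just examples taken from the table. The Speedup column is the ratio of the Original to the Speculative average time per token (e.g., 17.9 / 9.96 ≈ 1.8 for layerskip-llama3.2-1B).

```python
# Minimal sketch of the two decoding setups benchmarked above, using the
# transformers assisted-generation API (not the benchmark_decoder_summ.py script).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

prompt = "Summarize the following article: ..."  # placeholder input

# 1) Speculative decoding with a separate draft model: the small assistant
#    proposes tokens and the large target model verifies them in parallel.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
assistant = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama_v1.1", torch_dtype=torch.float16, device_map="auto"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# 2) Self-speculative decoding (LayerSkip): the model drafts with its own early
#    layers; `assistant_early_exit` (assumed available in recent transformers
#    releases) selects the exit layer, mirroring --aux-early-exit above.
ls_tokenizer = AutoTokenizer.from_pretrained("facebook/layerskip-llama2-7B")
ls_model = AutoModelForCausalLM.from_pretrained(
    "facebook/layerskip-llama2-7B", torch_dtype=torch.float16, device_map="auto"
)
ls_inputs = ls_tokenizer(prompt, return_tensors="pt").to(ls_model.device)
ls_outputs = ls_model.generate(**ls_inputs, assistant_early_exit=4, max_new_tokens=128)
print(ls_tokenizer.decode(ls_outputs[0], skip_special_tokens=True))
```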
# LayerSkip Assets
This dataset holds some of the assets for the blog post on LayerSkip.
PR: https://github.com/huggingface/blog/pull/2459
Contents:
- early_exit_self_speculative_decoding.ipynb: Notebook that dives deep into how LayerSkip works
- summarization.csv: A CSV containing the benchmark results for (self) speculative-decoding strategies, reproduced in the table above; a minimal loading sketch follows below
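To work with the raw numbers, the following sketch loads the CSV from the Hub; the repository id is a placeholder and the column names are assumed to match the table headers above.

```python
# Sketch for loading summarization.csv and re-deriving the speedup column.
# The repo id is a placeholder; column names are assumed to match the table above.
import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id="<owner>/<this-dataset>",  # replace with this dataset's repository id
    filename="summarization.csv",
    repo_type="dataset",
)
df = pd.read_csv(csv_path)

# Speedup is the ratio of per-token latencies: Original / Speculative.
df["recomputed_speedup"] = (
    df["Original Average time per token (ms)"]
    / df["Speculative Average time per token (ms)"]
)
print(df[["Model", "Draft (Assistant)", "Speedup", "recomputed_speedup"]])
```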
Thanks to Mostafa (the first author of LayerSkip) for the assets 🤗