JRosenkranz committed on
Commit c8b3371
1 Parent(s): f4c8757

Update README.md

Files changed (1)
  1. README.md +12 -13
README.md CHANGED
@@ -4,19 +4,18 @@ license: llama2
 
 ## Description
 
- This model as intended to be used as an accelerator for llama 13B (chat).
-
- It takes inspiration from the Medusa architecture and modifies the MLP into a multi-stage MLP,
- where each stage predicts a single token in the draft. Each stage takes as input both a state
- vector and sampled token embedding from the prior stage (the base model can be considered
- stage 0). The inputs are projected and passed through a LayerNorm/GeLU activation, forming a
- new state vector. This state vector is used to predict the next draft token, which, with the new
- state vector, acts as input for the next stage of prediction. We sample multiple tokens at each
- stage, and emit a tree of candidate suffixes to evaluate in parallel.
-
-
- Undlerlying implementation of Paged Attention KV-Cached and speculator can be found in https://github.com/foundation-model-stack/fms-extras
- Production implementation using `fms-extras` implementation can be found in https://github.com/tdoublep/text-generation-inference/tree/speculative-decoding
+ This model is intended to be used as an accelerator for llama 13B (chat) and takes inspiration
+ from the Medusa architecture and modifies the MLP into a multi-stage MLP, where each stage predicts
+ a single token in the draft. Each stage takes as input both a state vector and sampled token embedding
+ from the prior stage (the base model can be considered stage 0). The inputs are projected and passed
+ through a LayerNorm/GeLU activation, forming a new state vector. This state vector is used to predict
+ the next draft token, which, with the new state vector, acts as input for the next stage of prediction.
+ We sample multiple tokens at each stage, and emit a tree of candidate suffixes to evaluate in parallel.
+
+ ## Code
+
+ - Paged Attention KV-Cache / Speculator Implementations: https://github.com/foundation-model-stack/fms-extras
+ - Production Server with speculative decoding implementation: https://github.com/tdoublep/text-generation-inference/tree/speculative-decoding
 
 ## Samples
 
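To make the description in the new text concrete, here is a minimal PyTorch sketch of the multi-stage MLP speculator idea: each stage projects the prior state vector and the embedding of the previously sampled token, combines them through LayerNorm/GeLU into a new state, predicts a draft token from that state, and top-k sampling at each stage expands a tree of candidate suffixes. This is an illustrative approximation under those assumptions, not the actual `fms-extras` implementation; all class, method, and parameter names here (`SpeculatorStage`, `MLPSpeculator`, `generate_tree`, `topk`) are hypothetical.

```python
import torch
import torch.nn as nn


class SpeculatorStage(nn.Module):
    """One draft stage: fuses the prior state vector with the embedding of the
    previously sampled token via LayerNorm/GeLU, then predicts the next draft
    token. Names are hypothetical, not taken from fms-extras."""

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.state_proj = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.emb_proj = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.norm = nn.LayerNorm(hidden_dim)
        self.act = nn.GELU()
        self.head = nn.Linear(hidden_dim, vocab_size, bias=False)

    def forward(self, state: torch.Tensor, token_emb: torch.Tensor):
        # Project both inputs, combine, and pass through LayerNorm/GeLU
        # to form the new state vector for this stage.
        new_state = self.act(self.norm(self.state_proj(state) + self.emb_proj(token_emb)))
        return new_state, self.head(new_state)  # new state + draft-token logits


class MLPSpeculator(nn.Module):
    """Chains several stages; the base model's last hidden state plays stage 0."""

    def __init__(self, hidden_dim: int, vocab_size: int, n_stages: int = 3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden_dim)
        self.stages = nn.ModuleList(
            SpeculatorStage(hidden_dim, vocab_size) for _ in range(n_stages)
        )

    @torch.no_grad()
    def generate_tree(self, state: torch.Tensor, last_token: torch.Tensor,
                      topk=(4, 3, 2)) -> torch.Tensor:
        # state: [batch, hidden] final hidden state from the base model (stage 0).
        # last_token: [batch] the token the base model just sampled.
        b = state.size(0)
        state = state.unsqueeze(1)        # [b, 1, d]: a single path so far
        tokens = last_token.unsqueeze(1)  # [b, 1]
        paths = state.new_empty(b, 1, 0, dtype=torch.long)
        for stage, k in zip(self.stages, topk):
            new_state, logits = stage(state, self.emb(tokens))
            top = logits.topk(k, dim=-1).indices  # [b, n_paths, k] candidates
            # Branch every existing path k ways; children stay adjacent to parents.
            paths = torch.cat(
                [paths.repeat_interleave(k, dim=1), top.reshape(b, -1, 1)], dim=-1
            )
            state = new_state.repeat_interleave(k, dim=1)
            tokens = top.reshape(b, -1)
        return paths  # [b, prod(topk), n_stages]: tree of candidate suffixes
```

In a speculative-decoding loop of this shape, the candidate suffixes from `generate_tree` would be scored in a single batched forward pass of the base model (paged attention keeps the shared-prefix KV-cache cheap), and the longest accepted prefix is kept; see the `fms-extras` repository linked above for the production implementation.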