daviswer committed
Commit 714de30
1 Parent(s): 89dd337

Update new note

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -11,8 +11,8 @@ from the prior stage (the base model can be considered stage 0).
   The state vector from the base model provides contextual information to the accelerator,
   while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.
 
- Note: The underlying speculative model (untrained) is a generic model that could be used with any generative model to accelerate inference. Training
- is quite light-weight and may only require a few days to be fully pre-trained.
+ Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference.
+ Training is light-weight and can be completed in only a few days depending on base model size and speed.
 
   ## Repository Links
 
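For context on the note above: the README describes a speculator that conditions on the base model's state vector and on previously sampled tokens to produce draft n-grams. Below is a minimal, illustrative sketch of such an MLP speculator head. The class and parameter names (`MLPSpeculatorSketch`, `n_draft`, `emb_dim`) are hypothetical and this is not the repository's implementation.

```python
import torch
import torch.nn as nn


class MLPSpeculatorSketch(nn.Module):
    """Illustrative sketch of an MLP speculator (hypothetical, not the repo's code).

    At each draft step, the running state vector is combined with the embedding
    of the most recently sampled token, passed through a small MLP, and projected
    to logits for the next draft token.
    """

    def __init__(self, emb_dim: int, vocab_size: int, n_draft: int):
        super().__init__()
        self.n_draft = n_draft
        # One embedding / projection / output head per draft position (assumed layout).
        self.token_emb = nn.ModuleList([nn.Embedding(vocab_size, emb_dim) for _ in range(n_draft)])
        self.state_proj = nn.ModuleList([nn.Linear(emb_dim, emb_dim) for _ in range(n_draft)])
        self.heads = nn.ModuleList([nn.Linear(emb_dim, vocab_size) for _ in range(n_draft)])
        self.act = nn.GELU()

    def forward(self, state: torch.Tensor, last_token: torch.Tensor) -> torch.Tensor:
        """state: (batch, emb_dim) final hidden state from the base model.
        last_token: (batch,) id of the last token sampled by the base model.
        Returns greedy draft tokens of shape (batch, n_draft)."""
        drafts = []
        for i in range(self.n_draft):
            # Condition the state vector on the previously sampled/drafted token.
            state = self.act(self.state_proj[i](state) + self.token_emb[i](last_token))
            logits = self.heads[i](state)
            last_token = logits.argmax(dim=-1)  # greedy drafting; sampling also works
            drafts.append(last_token)
        return torch.stack(drafts, dim=1)
```

The drafted n-gram would then be verified in a single forward pass of the base model, with accepted tokens kept and the rest discarded, which is where the inference speed-up comes from.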