Text Generation
Transformers
Safetensors
English
stripedhyena
custom_code
Zymrael committed
Commit e4713f6
1 Parent(s): 79167ac

chore: more info in the readme

Files changed (1)
  1. README.md +10 -2
README.md CHANGED
@@ -6,10 +6,18 @@ language:
 
  ## StripedHyena-Hessian-7B (SH-7B)
 
+ ### About
+
+ One of the focus areas at Together Research is new architectures for long context, improved training, and inference performance over the Transformer architecture. Spinning out of a research program from our team and academic collaborators, with roots in signal processing-inspired sequence models, we are excited to introduce the StripedHyena models. StripedHyena is the first alternative model competitive with the best open-source Transformers of similar sizes in short- and long-context evaluations.
+
+ - Read more in [our blog](https://together-ai.webflow.io/blog/stripedhyena-7b)
+ - Play with the model on our playground!
+ - Dive into the details of our [standalone implementation](https://github.com/togethercomputer/stripedhyena)
 
  ### Model Architecture
 
  StripedHyena is a hybrid architecture composed of multi-head, grouped-query attention and gated convolutions arranged in [Hyena](https://arxiv.org/abs/2302.10866) blocks, different from traditional decoder-only Transformers.
  - Constant memory decoding in Hyena blocks via representation of convolutions as state-space models (modal or canonical form), or as truncated filters.
- - Lower latency to preprocess long prompts.
- - Improvements to training and inference compute-optimal scaling laws, compared to Transformers.
+ - Low latency, faster decoding, and higher throughput than Transformers.
+ - Improvements to training and inference-optimal scaling laws, compared to Transformers.
+ - Trained on sequences of up to 32k tokens, allowing it to process longer prompts.
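
The "constant memory decoding via representation of convolutions as state-space models (modal form)" bullet above can be illustrated with a toy sketch. The snippet below is not the StripedHyena implementation (see the standalone repo linked in the README); the number of modes, the poles `lam`, and the projections `B` and `C` are made up for illustration. It only shows why a long convolution whose filter is a sum of exponentials can be decoded token by token while carrying a fixed-size state instead of the full history.

```python
# Toy sketch: a long convolution in modal (diagonal state-space) form
# decodes with constant memory per step. Not the official implementation.
import numpy as np

rng = np.random.default_rng(0)
d_state = 16                           # toy number of modes (poles)
lam = rng.uniform(0.5, 0.99, d_state)  # real poles for simplicity
B = rng.normal(size=d_state)           # per-mode input projection
C = rng.normal(size=d_state)           # per-mode output projection

def filter_taps(T):
    # Convolution view: the implicit long filter k[t] = sum_i C_i * lam_i**t * B_i
    t = np.arange(T)[:, None]
    return (C * B * lam ** t).sum(axis=1)

def decode_stepwise(u):
    # Recurrent view: h_t = lam * h_{t-1} + B * u_t,  y_t = <C, h_t>.
    # Only the d_state-sized vector h is carried across steps.
    h = np.zeros(d_state)
    ys = []
    for u_t in u:
        h = lam * h + B * u_t
        ys.append(C @ h)
    return np.array(ys)

u = rng.normal(size=32)                                 # toy input sequence
y_conv = np.convolve(u, filter_taps(len(u)))[: len(u)]  # materializes the whole filter
y_rec = decode_stepwise(u)                              # constant memory per token
assert np.allclose(y_conv, y_rec)
```

The two views compute the same outputs; the recurrent one is what makes cache size independent of sequence length during generation.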
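
Since the repo is tagged `Transformers` and `custom_code`, loading presumably goes through `trust_remote_code=True`. A minimal sketch, assuming the hub id `togethercomputer/StripedHyena-Hessian-7B` and bf16 weights (both assumptions, not stated in this diff):

```python
# Hedged loading sketch for a custom_code checkpoint with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/StripedHyena-Hessian-7B"  # assumed hub id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumed dtype; adjust to your hardware
    device_map="auto",
    trust_remote_code=True,       # required for repos with custom modeling code
)

inputs = tokenizer("StripedHyena is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```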