---
license: apache-2.0
language:
- en
---

## StripedHyena-Nous-7B (SH-7B)

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/62a1306bbe7fa896d2c8de44/Bfjh77emDsWOY-VmfvU9C.png" width="60%" />
</p>

### About 

One of the focus areas at Together Research is building new architectures that improve on the Transformer in long-context modeling, training efficiency, and inference performance. Spinning out of a research program from our team and academic collaborators, with roots in **signal processing-inspired sequence models**, we are excited to introduce the **StripedHyena** models. StripedHyena is the **first alternative model competitive with the best open-source Transformers** of similar sizes in short and long-context evaluations.

**StripedHyena-Nous-7B (SH 7B)** is our **chat model** for this release, and was developed with our collaborators at [Nous Research](https://nousresearch.com/).

- Read more in [our blog](https://www.together.ai/blog/stripedhyena-7b).
- Play with the model on our playground!
- Dive into the details of our [standalone implementation](https://github.com/togethercomputer/stripedhyena), and our related research: [1](https://arxiv.org/abs/2302.10866), [2](https://arxiv.org/abs/2310.18780), [3](https://arxiv.org/abs/2311.05908).
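Below is a minimal usage sketch, not an official recipe from this card: it assumes the model is loaded through Hugging Face `transformers` with `trust_remote_code=True` (the repo ships custom modeling code), and the repo id, prompt format, and generation settings shown here are illustrative assumptions.

```python
# Minimal sketch: loading SH-7B with Hugging Face transformers.
# Repo id and prompt template below are assumptions, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/StripedHyena-Nous-7B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # illustrative; pick a dtype your hardware supports
    device_map="auto",
    trust_remote_code=True,       # required: the model uses custom architecture code
)

# Hypothetical instruction-style prompt; check the repo for the exact chat format.
prompt = "### Instruction:\nWhat is a state-space model?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```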

### Model Architecture

StripedHyena is a hybrid architecture composed of multi-head, grouped-query attention and gated convolutions arranged in [Hyena](https://arxiv.org/abs/2302.10866) blocks, different from traditional decoder-only Transformers.  
  - Constant-memory decoding in Hyena blocks via representation of convolutions as state-space models (modal or canonical form), or as truncated filters (see the sketch below).
  - Lower latency, faster decoding and higher throughput than Transformers.
  - Improvements to training and inference-optimal scaling laws, compared to optimized Transformer architectures such as Llama-2.
  - Trained on sequences of up to 32k tokens, allowing it to process longer prompts.
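To illustrate the constant-memory idea, here is a small numerical sketch, not the StripedHyena implementation: when a long convolution filter is a sum of exponentials (a modal state-space form), the same causal convolution can be computed recurrently, so decoding one token at a time only updates a fixed-size state instead of re-reading the whole sequence. All parameter values below are made up for the demonstration.

```python
# Sketch: a long causal convolution in modal (state-space) form can be decoded
# recurrently with a constant-size state, matching the explicit convolution.
import numpy as np

rng = np.random.default_rng(0)
L = 64        # sequence length
d_state = 8   # number of modes = size of the recurrent state

# Hypothetical filter parameters: stable poles and complex residues.
poles = 0.9 * np.exp(2j * np.pi * rng.random(d_state))
residues = rng.standard_normal(d_state) + 1j * rng.standard_normal(d_state)

# Explicit filter h[t] = Re( sum_n residues_n * poles_n**t )
t = np.arange(L)
h = (residues[None, :] * poles[None, :] ** t[:, None]).sum(axis=1).real

x = rng.standard_normal(L)

# 1) "Parallel" mode: materialize the filter and run a causal convolution.
y_conv = np.array([np.dot(h[: k + 1][::-1], x[: k + 1]) for k in range(L)])

# 2) "Recurrent" mode: constant-memory decoding with a d_state-sized state.
state = np.zeros(d_state, dtype=complex)
y_rec = np.empty(L)
for k in range(L):
    state = poles * state + x[k]                # s_t = Lambda s_{t-1} + x_t
    y_rec[k] = (residues * state).sum().real    # y_t = Re(C s_t)

assert np.allclose(y_conv, y_rec)
print("causal convolution and recurrent state-space decoding agree")
```

The recurrent pass keeps only `d_state` numbers per filter regardless of how long the prompt is, which is the property that gives Hyena blocks constant-memory decoding.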