---
base_model: meta-llama/Llama-2-7b-hf
inference: true
model_type: llama
datasets:
  - cerebras/SlimPajama-627B
tags:
- sparse
---

# Llama-2-7b-pruned50-retrained

This repo contains model files for a [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf) model pruned to 50% sparsity in one shot with [SparseGPT](https://arxiv.org/abs/2301.00774), then retrained by [Cerebras](https://huggingface.co/cerebras) on 45B tokens from SlimPajama while maintaining sparsity.

**Authors**: Neural Magic, Cerebras

## Usage

Below are code snippets to help you quickly get started running the model.

### Fine-tuning examples

Coming soon.

### Running the model

```python
# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("neuralmagic/Llama-2-7b-pruned50-retrained")
model = AutoModelForCausalLM.from_pretrained(
    "neuralmagic/Llama-2-7b-pruned50-retrained",
    device_map="auto",
)

input_text = "Write me a poem about Machine Learning."
# Move the inputs to the device the model was placed on by `device_map`
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Evaluation Benchmark Results

Model evaluation metrics and results.

| Benchmark                                      | Metric        | Llama-2-7b  | Llama-2-7b-pruned50-retrained |
|------------------------------------------------|---------------|-------------|-------------------------------|
| [MMLU](https://arxiv.org/abs/2009.03300)       | 5-shot, top-1 | xxxx        | xxxx                          |
| [HellaSwag](https://arxiv.org/abs/1905.07830)  | 0-shot        | xxxx        | xxxx                          |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | xxxx        | xxxx                          |
| [ARC-c](https://arxiv.org/abs/1803.05457)      |               | xxxx        | xxxx                          |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 5-shot        | xxxx        | xxxx                          |
| [HumanEval](https://arxiv.org/abs/2107.03374)  | pass@1        | xxxx        | xxxx                          |
| [GSM8K](https://arxiv.org/abs/2110.14168)      | maj@1         | xxxx        | xxxx                          |
| **Average**                                    |               | xxxx        | xxxx                          |

## Model Training Data

Coming soon.

## Sparsification

This model was pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
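Because the pruning is unstructured, the checkpoint stores ordinary dense tensors in which roughly half of the weight entries are exactly zero. As a minimal sketch (not part of the official SparseML tooling), here is one way to measure the weight sparsity of any loaded PyTorch module; `weight_sparsity` is an illustrative helper, shown on a toy layer rather than the full 7B model:

```python
import torch

def weight_sparsity(module: torch.nn.Module) -> float:
    """Fraction of weight entries that are exactly zero."""
    total = 0
    zeros = 0
    for name, param in module.named_parameters():
        if name.endswith("weight"):
            total += param.numel()
            zeros += (param == 0).sum().item()
    return zeros / total

# Toy example: zero out half of a linear layer's weights,
# mimicking 50% unstructured sparsity.
layer = torch.nn.Linear(4, 4, bias=False)
with torch.no_grad():
    mask = torch.zeros_like(layer.weight)
    mask.view(-1)[::2] = 1.0  # keep every other weight entry
    layer.weight.mul_(mask)

print(f"{weight_sparsity(layer):.2f}")  # → 0.50
```

Running the same helper over the model loaded above should report a value close to 0.50 for the pruned transformer weights.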