---
pipeline_tag: text-generation
inference: true
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
license: bigcode-openrail-m
datasets:
- bigcode/commitpackft
- bigcode/oasst-octopack
metrics:
- code_eval
library_name: transformers
tags:
- code
model-index:
- name: OctoCoder
  results:
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalSynthesize Python
    metrics:
    - name: pass@1
      type: pass@1
      value: 46.2
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalSynthesize JavaScript
    metrics:
    - name: pass@1
      type: pass@1
      value: 39.2
      verified: false
---

![Octopack](https://github.com/bigcode-project/octopack/blob/31f3320f098703c7910e43492c39366eeea68d83/banner.png?raw=true)

# OctoCoder

Play with the model on the [TODO Playground](https://huggingface.co/spaces/bigcode/bigcode-playground).

## Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)

## Model Summary

OctoCoder is an instruction-tuned code generation model created by fine-tuning StarCoder on CommitPackFT and OASST as part of the OctoPack project.

- **Repository:** [bigcode/octopack](https://github.com/bigcode-project/octopack)
- **Paper:** [TODO]()
- **Languages:** 80+ Programming languages

## Use

### Intended use

The model follows instructions provided in the input. We recommend prefacing your input with "Question: " and finishing with "Answer:", for example: "Question: Please write a function in Python that performs bubble sort.\n\nAnswer:"

**Feel free to share your generations in the Community tab!**

### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/octocoder"
device = "cuda" # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("Question: Please write a function in Python that performs bubble sort.\n\nAnswer:", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
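
By default `generate` only produces a short continuation, so for a complete answer you will usually want to pass an explicit generation length. The snippet below is a minimal sketch with illustrative decoding settings, not values recommended by the authors:

```python
# Continuing the example above; the decoding values are illustrative
# assumptions, not settings from the OctoPack paper.
outputs = model.generate(
    inputs,
    max_new_tokens=256,                    # leave room for a full function
    do_sample=True,                        # sample instead of greedy decoding
    temperature=0.2,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,   # use EOS for padding, as is common for causal LMs
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```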

## Training

### Model

- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective (see the config check after this list)
- **Steps:** 250k pretraining & 30 instruction tuning
- **Tokens:** 1 trillion pretraining & 2M instruction tuning
- **Precision:** bfloat16
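
These details can be cross-checked against the configuration exported with the checkpoint. The snippet below is a minimal sketch that assumes the Hub checkpoint ships a GPTBigCode-style config; attribute names may differ:

```python
from transformers import AutoConfig

# Load only the model configuration (no weights) and inspect it.
config = AutoConfig.from_pretrained("bigcode/octocoder")
print(config.model_type)                     # expected: "gpt_bigcode" (assumption)
print(getattr(config, "multi_query", None))  # True if multi-query attention is enabled
print(getattr(config, "torch_dtype", None))  # stored precision, if recorded in the config
```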

### Hardware

- **Pretraining:**
  - **GPUs:** 512 Tesla A100
  - **Training time:** 24 days
- **Instruction tuning:**
  - **GPUs:** 8 Tesla A100
  - **Training time:** 4 hours

### Software

- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM) & TODO
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)

## Citation

TODO