Repository-Level Pre-Trained OpenCoder (Collection)
All the checkpoints from Table 3 of the paper "On Pretraining for Project-Level Code Completion."
This model is derived from OpenCoder-1.5B-Base by applying additional context-extension fine-tuning. The repository context is composed using the Random .py composer; this and the other composers are described in the "On Pretraining for Project-Level Code Completion" paper (arXiv). Specifically, Section A.1 of the Appendix describes the context composition method, and Table 3 compares it with the other composers from the same collection.
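To give an intuition for what such a composer does, here is a minimal sketch of a random .py composer: it gathers the other .py files of a repository, shuffles them, and concatenates them into a context string up to a length budget. The function name, the file separator, and the truncation strategy are illustrative assumptions, not the exact implementation from the paper; see Section A.1 for the actual method.

import random
from pathlib import Path

def compose_random_py_context(repo_root: str, target_file: str, max_chars: int = 40_000) -> str:
    """Concatenate the repository's other .py files in random order as context.

    Illustrative sketch only: the real composer, separators, and length limits
    are described in Section A.1 of the paper's Appendix.
    """
    target = Path(target_file).resolve()
    files = [p for p in Path(repo_root).rglob("*.py") if p.resolve() != target]
    random.shuffle(files)  # "Random" composer: no ordering heuristic

    parts, total = [], 0
    for path in files:
        text = path.read_text(encoding="utf-8", errors="ignore")
        chunk = f"# {path.relative_to(repo_root)}\n{text}\n"
        if total + len(chunk) > max_chars:
            break  # stay within the context budget
        parts.append(chunk)
        total += len(chunk)
    return "".join(parts)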
We publish this checkpoint to support the reproducibility and accessibility of our research results.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "JetBrains-Research/OpenCoder-1.5B-Random-Py"
tokenizer_name = "infly/OpenCoder-1.5B-Base"

# Load the fine-tuned checkpoint and the base model's tokenizer.
model = AutoModelForCausalLM.from_pretrained(model_name,
                                             torch_dtype=torch.bfloat16,
                                             device_map="auto",
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, trust_remote_code=True)

# Generate a completion for a simple prompt.
inputs = tokenizer("# write a quick sort algorithm", return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=256)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
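For project-level completion, the composed repository context is placed in front of the prefix of the file being completed. The snippet below combines the composer sketch above with the model; the prompt layout, the paths, and the 16K token limit are assumptions for illustration, not the exact format used in the paper.

# Hypothetical paths, shown only to illustrate the prompt construction.
repo_context = compose_random_py_context("path/to/repo", "path/to/repo/utils/sorting.py")
file_prefix = "def quick_sort(arr):\n"

# Assumed layout: repository context files first, then the target file prefix.
prompt = repo_context + file_prefix
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=16384)
outputs = model.generate(**inputs.to(model.device), max_new_tokens=128)
# Print only the newly generated continuation of the target file.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))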
Base model: infly/OpenCoder-1.5B-Base