---
license: mit
license_link: https://huggingface.co/Vezora/WaveCoder-6.7b-Ultra-bf16/blob/main/LICENSE
language:
- en
library_name: transformers
datasets:
- humaneval
pipeline_tag: text-generation
tags:
- code
metrics:
- code_eval
---

## This is a re-upload of WaveCoder-Ultra in bf16, since the original model was uploaded in fp32 and no others are available. Licensing remains the same as the original base model.
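
For reference, converting an fp32 checkpoint to bf16 amounts to casting the weights on load and re-saving them. Below is a minimal sketch with 🤗 Transformers; the output directory name is illustrative, and this is not necessarily the exact procedure used for this upload:

```python
# Minimal sketch of an fp32 -> bf16 conversion; output path is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

src = "microsoft/wavecoder-ultra-6.7b"  # original fp32 checkpoint

# torch_dtype=torch.bfloat16 casts the fp32 weights to bf16 as they load.
model = AutoModelForCausalLM.from_pretrained(src, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(src)

# Re-save: the checkpoint on disk is now bf16, roughly half the fp32 size.
model.save_pretrained("wavecoder-ultra-6.7b-bf16")
tokenizer.save_pretrained("wavecoder-ultra-6.7b-bf16")
```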

<h1 align="center">
🌊 WaveCoder: Widespread And Versatile Enhanced Code LLM
</h1>

<p align="center">
<a href="https://arxiv.org/abs/2312.14187"><b>[📜 Paper]</b></a> •
<!-- <a href=""><b>[🤗 HF Models]</b></a> • -->
<a href="https://github.com/microsoft/WaveCoder"><b>[🐱 GitHub]</b></a>
<br>
<a href="https://twitter.com/TeamCodeLLM_AI"><b>[🐦 Twitter]</b></a> •
<a href="https://www.reddit.com/r/LocalLLaMA/comments/19a1scy/wavecoderultra67b_claims_to_be_the_2nd_best_model/"><b>[💬 Reddit]</b></a> •
<a href="https://www.analyticsvidhya.com/blog/2024/01/microsofts-wavecoder-and-codeocean-revolutionize-instruction-tuning/">[🍀 Unofficial Blog]</a>
<!-- <a href="#-quick-start">Quick Start</a> • -->
<!-- <a href="#%EF%B8%8F-citation">Citation</a> -->
</p>

<p align="center">
Repo for "<a href="https://arxiv.org/abs/2312.14187" target="_blank">WaveCoder: Widespread And Versatile Enhanced Instruction Tuning with Refined Data Generation</a>"
</p>

## 🔥 News

- [2024/04/10] 🔥🔥🔥 WaveCoder repo and models released at [🤗 HuggingFace](https://huggingface.co/microsoft/wavecoder-ultra-6.7b)!
- [2023/12/26] WaveCoder paper released.

## 💡 Introduction

WaveCoder 🌊 is a series of large language models (LLMs) for the coding domain, designed to solve code-related problems through instruction tuning. Its training dataset was generated from a subset of CodeSearchNet data with an LLM-based generator-discriminator framework we proposed, and covers four general code-related tasks: code generation, code summarization, code translation, and code repair.
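
As a rough illustration of that generator-discriminator idea (the prompts, filtering rule, and helper below are hypothetical sketches, not the pipeline from the paper):

```python
# Hypothetical sketch of an LLM-based generator-discriminator data pipeline.
# `llm` is any text-in/text-out callable (e.g. a chat-API wrapper); the real
# prompts and filtering criteria are described in the WaveCoder paper.
TASKS = ("code generation", "code summarization", "code translation", "code repair")

def build_instruction_data(code_snippets, llm):
    dataset = []
    for code in code_snippets:
        for task in TASKS:
            # Generator: propose an instruction/solution pair for this task.
            pair = llm(f"Write a {task} instruction and its solution for:\n{code}")
            # Discriminator: a second LLM pass filters out low-quality pairs.
            verdict = llm(f"Answer yes or no: is this pair correct and useful?\n{pair}")
            if verdict.strip().lower().startswith("yes"):
                dataset.append({"task": task, "example": pair})
    return dataset
```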

| Model | HumanEval | MBPP (500) | HumanEval<br>Fix (Avg.) | HumanEval<br>Explain (Avg.) |
| --- | --- | --- | --- | --- |
| GPT-4 | 85.4 | - | 47.8 | 52.1 |
| [🌊 WaveCoder-DS-6.7B](https://huggingface.co/microsoft/wavecoder-ds-6.7b) | 65.8 | 63.0 | 49.5 | 40.8 |
| [🌊 WaveCoder-Pro-6.7B](https://huggingface.co/microsoft/wavecoder-pro-6.7b) | 74.4 | 63.4 | 52.1 | 43.0 |
| [🌊 WaveCoder-Ultra-6.7B](https://huggingface.co/microsoft/wavecoder-ultra-6.7b) | 79.9 | 64.6 | 52.3 | 45.7 |

## 🪁 Evaluation

Please refer to WaveCoder's [GitHub repo](https://github.com/microsoft/WaveCoder) for inference, evaluation, and training code.

```python
# Load model directly
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/wavecoder-ultra-6.7b")
# torch_dtype=torch.bfloat16 loads the weights in bf16 (matching this
# re-upload) instead of upcasting them to fp32.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/wavecoder-ultra-6.7b",
    torch_dtype=torch.bfloat16,
)
```
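
A short generation example building on the snippet above; the prompt and decoding settings are illustrative:

```python
# Illustrative generation call; prompt and decoding settings are arbitrary.
prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) keeps the output deterministic, which is convenient for code tasks.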

## 📖 License

This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to their [License](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL).

## ☕️ Citation

If you find this repository helpful, please consider citing our paper:

```bibtex
@article{yu2023wavecoder,
  title={WaveCoder: Widespread and versatile enhanced instruction tuning with refined data generation},
  author={Yu, Zhaojian and Zhang, Xin and Shang, Ning and Huang, Yangyu and Xu, Can and Zhao, Yishujie and Hu, Wenxiang and Yin, Qiufeng},
  journal={arXiv preprint arXiv:2312.14187},
  year={2023}
}
```

## Note

WaveCoder models are trained on synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets.