|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- ajibawa-2023/OpenHermes-2.5-Code-290k |
|
language: |
|
- en |
|
tags: |
|
- code |
|
- finetune |
|
- synthetic data |
|
- text-generation-inference |
|
- conversational |
|
--- |
|
|
|
**OpenHermes-2.5-Code-290k-13B** |
|
|
|
OpenHermes-2.5-Code-290k-13B is a state-of-the-art Llama-2 fine-tune, trained on an additional code dataset.
|
This model is trained on my existing dataset [OpenHermes-2.5-Code-290k](https://huggingface.co/datasets/ajibawa-2023/OpenHermes-2.5-Code-290k). |
|
This dataset is an amalgamation of two datasets. I used [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5), a very high quality dataset made available by teknium. The other dataset is my own [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT).
|
The dataset is in Vicuna/ShareGPT format and contains around **1.29 million** conversations. I cleaned the dataset provided by Teknium and removed metadata fields such as "source" and "category". The dataset consists primarily of synthetically generated instruction and chat samples.
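
For reference, here is a minimal sketch of loading and inspecting the dataset with the `datasets` library. The `conversations`/`from`/`value` field names follow the usual ShareGPT layout and are assumed here, so check the dataset viewer for the exact schema.

```python
# Minimal sketch: load the dataset and print the turns of one ShareGPT-style record.
from datasets import load_dataset

ds = load_dataset("ajibawa-2023/OpenHermes-2.5-Code-290k", split="train")

sample = ds[0]
# Assumed ShareGPT layout: a "conversations" list of {"from": ..., "value": ...} turns.
for turn in sample["conversations"]:
    print(f'{turn["from"]}: {turn["value"][:80]}')
```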
|
|
|
This model has enhanced coding capabilities in addition to other capabilities such as **blogging, story generation, Q&A, and many more**.
|
|
|
**Training:** |
|
|
|
The entire model was trained on 4 x A100 80GB GPUs. Training for 2 epochs took **21 days**. The FastChat and DeepSpeed codebases were used for training. The base model is Meta's Llama-2.
|
|
|
|
|
This is a fully fine-tuned model. Links to quantized models will be added soon.
|
|
|
|
|
**GPTQ, GGUF, AWQ & Exllama** |
|
|
|
GPTQ: TBA |
|
|
|
GGUF: TBA |
|
|
|
AWQ: TBA |
|
|
|
Exllama v2: TBA |
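
Until these quantized releases are available, one interim option is loading the full checkpoint in 4-bit with bitsandbytes. A minimal sketch, assuming the repo id matches this card's title:

```python
# Sketch: 4-bit on-the-fly quantization so the 13B model fits on a single 24 GB GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ajibawa-2023/OpenHermes-2.5-Code-290k-13B"  # assumed repo id
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```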
|
|
|
|
|
|
|
|
|
|
|
**Example Prompt:** |
|
``` |
|
This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation. It can generate Story, Blogs ..... |
|
|
|
Context |
|
You are a helpful AI assistant. |
|
|
|
USER: <prompt> |
|
ASSISTANT: |
|
``` |
|
|
|
You can modify the above prompt as per your requirements. I have used the ShareGPT/Vicuna v1.1 format.
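
A minimal inference sketch that applies this prompt format with `transformers`; the repo id, example question, and generation settings are assumptions, not part of this card:

```python
# Sketch: build a Vicuna v1.1 style prompt and generate a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/OpenHermes-2.5-Code-290k-13B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

system = "You are a helpful AI assistant."
user_prompt = "Write a Python function that checks whether a number is prime."
prompt = f"{system}\n\nUSER: {user_prompt}\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```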
|
|
|
I want to say a special thanks to the open-source community for helping and guiding me to better understand AI/model development.
|
|
|
Thank you for your love & support. |
|
|
|
**Example Output** |
|
|
|
I will update soon. |