---
license: other
datasets:
- allenai/ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- winogrande
- nvidia/HelpSteer
- Intel/orca_dpo_pairs
- unalignment/toxic-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- allenai/ultrafeedback_binarized_cleaned
- Squish42/bluemoon-fandom-1-1-rp-cleaned
- LDJnr/Capybara
- JULIELab/EmoBank
- kingbri/PIPPA-shareGPT
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---
# A bagel, with everything
## Overview
An experimental fine-tune of yi-34b-200k using bagel.

This version underwent a subset of DPO, but is fairly censored. For a less censored version, try bagel-dpo-34b-v0.2.
## SFT data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check.*
- ai2_arc
  - Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- airoboros
  - Variety of categories of synthetic instructions generated by gpt-4.
- apps
  - Python coding dataset with 10k problems.
- belebele
  - Multi-lingual reading comprehension dataset.
- bluemoon
  - Roleplay data scraped from Bluemoon, then cleaned and formatted as ShareGPT.
- boolq
  - Corpus of yes/no questions (which can apparently be surprisingly difficult for AI to answer).
- capybara
  - Multi-turn dataset used to create the capybara models.
- cinematika (instruction and plain text)
  - RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- drop
  - More reading comprehension.
- emobank
  - Emotion annotations using the Valence-Arousal-Dominance scheme.
- gutenberg (plain text)
  - Books/plain text, again to make the model less boring; only a handful of examples, limited to those supported by chapterize.
- lmsys_chat_1m (only gpt-4 items, also used for DPO)
  - Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- mathinstruct
  - Composite dataset with a variety of math-related tasks and problem/question formats.
- mmlu
  - Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- natural_instructions
  - Millions of instructions from 1600+ task categories, sampled down substantially and stratified by task type (see the sampling sketch after this list).
- openbookqa
  - Question answering dataset.
- pippa
  - Deduped version of PIPPA in ShareGPT format.
- piqa
  - Physical interaction question answering.
- python_alpaca
  - Python instruction response pairs, validated as functional.
- rosetta_code
  - Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- slimorca
  - Collection of ~500k gpt-4 verified chats from OpenOrca.
- spider
  - SQL-targeted dataset.
- squad_v2
  - Contextual question answering (RAG).
- synthia
  - GPT-4 generated data using advanced prompting from Migel Tissera.
- winogrande
  - Fill in the blank style prompts.
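
To make the stratified sampling mentioned for natural_instructions concrete, here is a minimal sketch of downsampling by task type. It is illustrative only, not the actual bagel pipeline; the `task_name` column name and the per-task cap are assumptions.

```python
import random
from collections import defaultdict

from datasets import load_dataset

# Minimal sketch of stratified downsampling, not the actual bagel pipeline.
# Assumptions: the dataset exposes a "task_name" column, and the cap of 50
# examples per task is an arbitrary value chosen for illustration.
MAX_PER_TASK = 50

ds = load_dataset("Muennighoff/natural-instructions", split="train")

by_task = defaultdict(list)
for idx, task in enumerate(ds["task_name"]):
    by_task[task].append(idx)

keep = []
for indices in by_task.values():
    random.shuffle(indices)
    keep.extend(indices[:MAX_PER_TASK])  # cap each task so no single type dominates

sampled = ds.select(sorted(keep))
```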
## DPO data sources
- airoboros 3.1 vs airoboros 2.2.1
  - The creative/writing tasks from airoboros-2.2.1 were re-generated using gpt4-0314 and a custom prompt to get longer, more creative, less clichéd responses for airoboros 3.1, so we can use the shorter/boring version as the "rejected" value and the rerolled response as "chosen".
- helpsteer
  - Really neat dataset provided by the folks at NVIDIA with human annotation across a variety of metrics. Only items with the highest "correctness" value were used for DPO here, with the highest scoring output as "chosen" and a random lower scoring value as "rejected" (see the pairing sketch after this list).
- orca_dpo_pairs
  - Another interesting dataset by Intel, which provides various DPO pairs generated from prompts included in the SlimOrca dataset.
- toxic-dpo
  - **Highly toxic and potentially illegal content!** De-censorship, for academic and lawful purposes only, of course. Generated by llama-2-70b via prompt engineering.
- truthy
  - DPO pairs meant to increase truthfulness of the model, e.g. common misconceptions, and to differentiate between AI assistants and roleplayed humans in terms of corporeal awareness/locality/etc.
- ultrafeedback
  - One of the bits of magic behind the Zephyr model. Only the items with a chosen score of 8 or higher were included.
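
A rough sketch of the helpsteer pairing described above might look like the following. The `correctness` field comes from the dataset (HelpSteer scores each attribute from 0 to 4), but the exact selection logic here is my approximation, not the bagel code.

```python
import random
from collections import defaultdict

from datasets import load_dataset

# Rough sketch of the helpsteer pairing; the selection logic here is an
# approximation of what's described above, not the actual bagel code.
ds = load_dataset("nvidia/HelpSteer", split="train")

by_prompt = defaultdict(list)
for row in ds:
    by_prompt[row["prompt"]].append(row)

pairs = []
for prompt, rows in by_prompt.items():
    rows.sort(key=lambda r: r["correctness"], reverse=True)
    best = rows[0]
    worse = [r for r in rows if r["correctness"] < best["correctness"]]
    if best["correctness"] < 4 or not worse:  # 4 is the max HelpSteer score
        continue
    pairs.append({
        "prompt": prompt,
        "chosen": best["response"],
        "rejected": random.choice(worse)["response"],
    })
```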
Only the train splits were used (if a split was provided), and an additional pass of decontamination was performed using approximate nearest neighbor search (via faiss).
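
As a sketch of what such a decontamination pass could look like: embed the candidate training items and the benchmark test items, then drop any training item whose nearest test neighbor is too similar. The embedding model and the 0.95 threshold below are assumptions, not the values actually used.

```python
import faiss
from sentence_transformers import SentenceTransformer

# Sketch of nearest-neighbor decontamination; the embedding model and the
# similarity threshold are assumptions. Replace the placeholder lists with
# real data.
train_texts = ["example training item"]       # candidate training items
test_texts = ["example benchmark test item"]  # benchmark test-split items

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def embed(texts):
    vecs = encoder.encode(texts, convert_to_numpy=True)
    faiss.normalize_L2(vecs)  # normalized vectors: inner product == cosine
    return vecs

test_vecs = embed(test_texts)
train_vecs = embed(train_texts)

# IndexFlatIP is exact search; faiss also offers approximate indexes
# (e.g. IndexIVFFlat) for larger corpora.
index = faiss.IndexFlatIP(test_vecs.shape[1])
index.add(test_vecs)

scores, _ = index.search(train_vecs, 1)  # cosine similarity to nearest test item
clean = [t for t, s in zip(train_texts, scores[:, 0]) if s < 0.95]
```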
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used four: vicuna, llama-2, alpaca, and chat-ml (sorta). I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format (see the sketch at the end of this section).

This means each epoch of our fine-tune is effectively four epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{system prompt, if provided}
{instruction}

### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```
{bos}{role}
{text}
{eos}
```
If you really want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
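
For example, something like this untested sketch would rewrite the config (the model path is a placeholder, and it assumes the ChatML tokens already exist in the tokenizer's vocabulary):

```python
from transformers import AutoTokenizer

# Untested sketch: remap BOS/EOS to the ChatML tokens. "path/to/model" is a
# placeholder; save_pretrained rewrites tokenizer_config.json. This assumes
# <|im_start|> and <|im_end|> are already in the tokenizer's vocabulary.
tok = AutoTokenizer.from_pretrained(
    "path/to/model",
    bos_token="<|im_start|>",
    eos_token="<|im_end|>",
)
tok.save_pretrained("path/to/model")
```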
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>

{instruction} [/INST]
```
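
To make the "every instruction in every format" idea concrete, here is an illustrative sketch of expanding a single item into all four formats shown above. The function names and details are mine, not the actual bagel code.

```python
# Illustrative sketch; expands one item into all four prompt formats.
DEFAULT_SYSTEM = "A chat between a user and an unbiased, uncensored assistant."

def alpaca(system, instruction):
    preamble = ("Below is an instruction that describes a task. "
                "Write a response that appropriately completes the request.")
    sys_block = f"{system}\n" if system else ""
    return f"{preamble}\n\n### Instruction:\n{sys_block}{instruction}\n\n### Response:\n"

def vicuna(system, instruction):
    return f"{system or DEFAULT_SYSTEM}\nUSER: {instruction}\nASSISTANT: "

def chatml_ish(system, instruction, bos="<s>", eos="</s>"):
    # BOS/EOS stand in for <|im_start|>/<|im_end|>, per the section above.
    sys_block = f"{bos}system\n{system}\n{eos}\n" if system else ""
    return f"{sys_block}{bos}user\n{instruction}\n{eos}\n{bos}assistant\n"

def llama2(system, instruction):
    sys_block = f"<<SYS>>\n{system}\n<</SYS>>\n\n" if system else ""
    return f"[INST] {sys_block}{instruction} [/INST]"

def expand(item):
    """Render the same instruction in every prompt format."""
    system, instruction = item.get("system"), item["instruction"]
    return [fmt(system, instruction) for fmt in (alpaca, vicuna, chatml_ish, llama2)]

prompts = expand({"system": None, "instruction": "Name three fruits."})
```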