---
license: wtfpl
datasets:
  - teknium/openhermes
pipeline_tag: text-generation
---

# MAMBA (2.8B) 🐍 fine-tuned on OpenHermes
*(mamba-hermes logo)*
Model Card is still WIP!

## Base model info

Mamba is a new state space model architecture showing promising performance on information-dense data such as language modeling, where previous subquadratic models fall short of Transformers. It is based on the line of progress on [structured state space models](https://github.com/state-spaces/s4), with an efficient hardware-aware design and implementation in the spirit of [FlashAttention](https://github.com/Dao-AILab/flash-attention). A toy recurrence illustrating the state space structure is sketched at the end of this card.

## Dataset info

OpenHermes 13B is the first fine-tune of the Hermes dataset with a fully open-source data mix. It was trained on 242,000 entries of primarily GPT-4 generated data, drawn from open datasets across the AI landscape, including:

- GPTeacher - General Instruct, Roleplay v1, Roleplay v2, and Code Instruct Datasets, by Teknium
- WizardLM (v1, evol_instruct 70k), by WizardLM Team/nlpxucan
- Airoboros GPT-4 (v1.0), by JonDurbin
- Camel-AI's domain expert datasets, by the Camel-AI Team
- CodeAlpaca, by Sahil2801
- GPT4-LLM and Unnatural Instructions, by Microsoft

Filtering included removal of OpenAI refusals, disclaimers, "As an AI"-type examples, and more. The base dataset mix is identical to the original Nous-Hermes', minus the Nous-Instruct and PDACTL datasets, which were private. A minimal loading sketch appears at the end of this card.

## Usage

```sh
pip install transformers
pip install "causal-conv1d<=1.0.2"  # quote the constraint so the shell does not parse <= as a redirection
pip install mamba-ssm
```

```py
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

CHAT_TEMPLATE_ID = "HuggingFaceH4/zephyr-7b-beta"

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name = "clibrain/mamba-2.8b-instruct-openhermes"

eos_token = "<|endoftext|>"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.eos_token = eos_token
tokenizer.pad_token = tokenizer.eos_token
# Borrow Zephyr's chat template, since this tokenizer ships without one.
tokenizer.chat_template = AutoTokenizer.from_pretrained(CHAT_TEMPLATE_ID).chat_template

model = MambaLMHeadModel.from_pretrained(
    model_name, device=device, dtype=torch.float16)

history_dict: list[dict[str, str]] = []
prompt = "Tell me 5 sites to visit in Spain"
history_dict.append(dict(role="user", content=prompt))

input_ids = tokenizer.apply_chat_template(
    history_dict, return_tensors="pt", add_generation_prompt=True
).to(device)

out = model.generate(
    input_ids=input_ids,
    max_length=2000,
    temperature=0.9,
    top_p=0.7,
    eos_token_id=tokenizer.eos_token_id,
)

decoded = tokenizer.batch_decode(out)
# Keep only the assistant turn and strip the EOS marker.
assistant_message = (
    decoded[0].split("<|assistant|>\n")[-1].replace(eos_token, "")
)

print(assistant_message)
```

## Gradio Demo

```sh
git clone https://github.com/mrm8488/mamba-chat.git
cd mamba-chat

pip install -r requirements.txt
pip install -q gradio==4.8.0

python app.py \
  --model clibrain/mamba-2.8b-instruct-openhermes \
  --share
```

## Evaluations

Coming soon!

## Acknowledgments

Thanks to [mamba-chat](https://github.com/havenhq/mamba-chat/tree/main) for heavily inspiring our work.
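As promised in the Base model info section, here is a toy linear state space recurrence showing the `h_t = A·h_{t-1} + B·x_t`, `y_t = C·h_t` structure that S4-style models build on. It is an illustrative sketch only: the function name and shapes are invented for the example, and real Mamba layers use input-dependent (selective) parameters and a hardware-aware parallel scan rather than a Python loop.

```py
import torch

def ssm_scan(x, A, B, C):
    """Toy discrete state space recurrence: h_t = A h_{t-1} + B x_t, y_t = C h_t.

    Shapes: x (seq_len, d_in), A (d_state, d_state), B (d_state, d_in), C (d_out, d_state).
    """
    h = torch.zeros(A.shape[0])
    ys = []
    for x_t in x:                # sequential scan; Mamba parallelizes this on GPU
        h = A @ h + B @ x_t      # state update
        ys.append(C @ h)         # readout
    return torch.stack(ys)

# Tiny smoke test with random parameters.
y = ssm_scan(torch.randn(8, 4), 0.9 * torch.eye(16), torch.randn(16, 4), torch.randn(2, 16))
print(y.shape)  # torch.Size([8, 2])
```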
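And a minimal sketch for browsing the fine-tuning data with the 🤗 `datasets` library. The repository id comes from this card's metadata; the `train` split name is an assumption about the dataset layout.

```py
from datasets import load_dataset

# Assumes the dataset exposes a "train" split; adjust if it differs.
ds = load_dataset("teknium/openhermes", split="train")
print(len(ds))  # ~242k entries, per the Dataset info section above
print(ds[0])    # inspect one raw record
```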