---
license: apache-2.0
library_name: transformers
---

# Laser-Dolphin-Mixtral-2x7b-dpo

![laser_dolphin_image](./dolphin_moe.png)

**A new version will be uploaded soon.**

Credit to Fernando Fernandes and Eric Hartford for their project [laserRMT](https://github.com/cognitivecomputations/laserRMT)

This model is a medium-sized MoE implementation based on [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)

A 2x7b configuration offers better performance than a standard 7b model, even when loaded in 4-bit (~9 GB VRAM). Loaded in 4-bit, this 2x7b model scores 0.8270 on hellaswag, which is higher than the base model achieves on its own in full precision.

The process is outlined in this [notebook](https://github.com/cognitivecomputations/laserRMT/blob/main/examples/laser-dolphin-mixtral-2x7b.ipynb)

**The existing quants may produce unpredictable behavior; I am working on new quants since the model has been updated.**

Quantizations provided by [TheBloke](https://huggingface.co/TheBloke/laser-dolphin-mixtral-2x7b-dpo-GGUF)

## Code Example

Comment or uncomment the model definitions below to switch between full precision and 4-bit. In 4-bit the model should fit in about 9 GB of VRAM and still exceed the single 7B model by roughly 5-6 points. (A more explicit 4-bit configuration using `BitsAndBytesConfig` is sketched at the end of this card.)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_response(prompt):
    """
    Generate a response from the model based on the input prompt.

    Args:
        prompt (str): Prompt for the model.

    Returns:
        str: The generated response from the model.
    """
    # Tokenize the input prompt
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generate output tokens
    outputs = model.generate(**inputs, max_new_tokens=256, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id)

    # Decode the generated tokens to a string
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)

    return response

# Load the model and tokenizer
model_id = "macadeliccc/laser-dolphin-mixtral-2x7b-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit load (~9 GB VRAM); swap the comments to load in full precision instead
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
# model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a quicksort algorithm in python"

# Generate and print the response
print("Response:")
print(generate_response(prompt), "\n")
```

[colab](https://colab.research.google.com/drive/1cmRhAkDWItV7utHNqNANVZnqDqQNsTUr?usp=sharing) with usage example

## Eval

Evaluation [colab](https://colab.research.google.com/drive/1FpwgsGzCR4tORTxAwUxpN3PcP22En2xk?usp=sharing)

## Citations

Fernando Fernandes Neto and Eric Hartford. "Optimizing Large Language Models Using Layer-Selective Rank Reduction and Random Matrix Theory." 2024.

```bibtex
@article{sharma2023truth,
  title={The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction},
  author={Sharma, Pratyusha and Ash, Jordan T and Misra, Dipendra},
  journal={arXiv preprint arXiv:2312.13558},
  year={2023}
}
```

```bibtex
@article{gao2021framework,
  title={A framework for few-shot language model evaluation},
  author={Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and others},
  journal={Version v0.0.1, Sept.},
  year={2021}
}
```
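
## 4-bit Loading Sketch

As a minimal sketch of the ~9 GB 4-bit setup described above, the snippet below uses an explicit `BitsAndBytesConfig` instead of the bare `load_in_4bit=True` flag. The NF4 quantization type, `float16` compute dtype, and `device_map="auto"` are assumptions for illustration, not settings prescribed by this model card; adjust them for your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "macadeliccc/laser-dolphin-mixtral-2x7b-dpo"

# Assumed 4-bit settings (NF4 quantization, fp16 compute); not prescribed by the model card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place the quantized weights on the available GPU(s)
)

# Simple generation check
inputs = tokenizer("Write a quicksort algorithm in python", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With NF4 the memory footprint should be in the same ~9 GB range quoted above, though the exact number depends on sequence length and the attention implementation.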