---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- lazymergekit
- epfl-llm/meditron-7b
- microsoft/Orca-2-7b
---
# Medstral-7B

Medstral-7B is a Mixture of Experts (MoE) made with the following models using LazyMergekit:
* [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b)
* [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)
## 🧩 Configuration

```yaml
base_model: microsoft/Orca-2-7b
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: epfl-llm/meditron-7b
    positive_prompts:
      - "How does sleep affect cardiovascular health?"
      - "Could a plant-based diet improve arthritis symptoms?"
      - "A patient comes in with symptoms of dizziness and nausea..."
      - "When discussing diabetes management, the key factors to consider are..."
      - "The differential diagnosis for a headache with visual aura could include..."
    negative_prompts:
      - "Recommend a good recipe for a vegetarian lasagna."
      - "Give an overview of the French Revolution."
      - "Explain how a digital camera captures an image."
      - "What are the environmental impacts of deforestation?"
      - "The recent advancements in artificial intelligence have led to developments in..."
      - "The fundamental concepts in economics include ideas like supply and demand, which explain..."
  - source_model: microsoft/Orca-2-7b
    positive_prompts:
      - "Here is a funny joke for you -"
      - "When considering the ethical implications of artificial intelligence, one must take into account..."
      - "In strategic planning, a company must analyze its strengths and weaknesses, which involves..."
      - "Understanding consumer behavior in marketing requires considering factors like..."
      - "The debate on climate change solutions hinges on arguments that..."
    negative_prompts:
      - "In discussing dietary adjustments for managing hypertension, it's crucial to emphasize..."
      - "For early detection of melanoma, dermatologists recommend that patients regularly check their skin for..."
      - "Explaining the importance of vaccination, a healthcare professional should highlight..."
```
## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Technoculture/Medstral-7B"

# Load the tokenizer and build a text-generation pipeline,
# loading the model in 4-bit so it fits on smaller GPUs.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template, then sample a reply.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
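On recent versions of transformers, passing `load_in_4bit` through `model_kwargs` may raise a deprecation warning in favor of an explicit quantization config. A minimal equivalent sketch, assuming bitsandbytes is installed (the parameter choices here are illustrative, not recommendations from this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Technoculture/Medstral-7B"

# Explicit 4-bit quantization config, equivalent to load_in_4bit=True above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on available devices automatically
)
```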