---
language:
- en
license: mit
library_name: transformers
tags:
- chat
- phi
- phi3
- phi3.5
- finetune
base_model: microsoft/Phi-3.5-mini-instruct
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
model_name: calme-2.1-phi3.5-4b
pipeline_tag: text-generation
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
model-index:
- name: calme-2.1-phi3.5-4b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 56.59
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.1-phi3.5-4b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 36.11
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.1-phi3.5-4b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 14.43
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.1-phi3.5-4b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 12.53
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.1-phi3.5-4b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 9.77
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.1-phi3.5-4b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 32.61
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-2.1-phi3.5-4b
      name: Open LLM Leaderboard
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/calme-2.1-phi3.5-4b-GGUF

This is a quantized version of [MaziyarPanahi/calme-2.1-phi3.5-4b](https://huggingface.co/MaziyarPanahi/calme-2.1-phi3.5-4b), created using llama.cpp.

# Original Model Card

Calme-2 Models

# MaziyarPanahi/calme-2.1-phi3.5-4b

This model is a fine-tuned version of `microsoft/Phi-3.5-mini-instruct`, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.
## Use Cases

This model is suitable for a wide range of applications, including but not limited to:

- Advanced question-answering systems
- Intelligent chatbots and virtual assistants
- Content generation and summarization
- Code generation and analysis
- Complex problem-solving and decision support

# ⚡ Quantized GGUF

Here are the quants: [calme-2.1-phi3.5-4b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.1-phi3.5-4b-GGUF). A sketch of running one of these files locally appears at the end of this card.

# Prompt Template

This model uses the Phi-3 prompt template:

```
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain the Internet to a medieval knight?<|end|>
<|assistant|>
```

# How to use

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.1-phi3.5-4b")
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.1-phi3.5-4b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.1-phi3.5-4b")
```

A fuller generation sketch that applies the chat template is included at the end of this card.

# Ethical Considerations

As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.

# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.1-phi3.5-4b).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 27.01 |
| IFEval (0-Shot)     | 56.59 |
| BBH (3-Shot)        | 36.11 |
| MATH Lvl 5 (4-Shot) | 14.43 |
| GPQA (0-shot)       | 12.53 |
| MuSR (0-shot)       |  9.77 |
| MMLU-PRO (5-shot)   | 32.61 |
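# Appendix: Generation Sketch

For reference, here is a minimal sketch of chatting with the model via `tokenizer.apply_chat_template`, which renders the Phi-3 style `<|system|>`/`<|user|>`/`<|assistant|>` tags shown above automatically. The system prompt, dtype, and sampling settings are illustrative assumptions, not values shipped with this model:

```python
# Minimal generation sketch; dtype, device_map, and sampling settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/calme-2.1-phi3.5-4b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; use float16/float32 as your hardware allows
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain the Internet to a medieval knight?"},
]

# apply_chat_template renders the prompt template and appends the assistant tag
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```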
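Similarly, a sketch of running the GGUF quants linked above with `llama-cpp-python`. The local filename is a hypothetical placeholder; substitute whichever quant file you actually downloaded from the GGUF repository:

```python
# Sketch of local inference on a GGUF quant via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./calme-2.1-phi3.5-4b.Q4_K_M.gguf",  # hypothetical filename; use your downloaded quant
    n_ctx=4096,  # context window size
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```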