---
license: apache-2.0
language:
- en
base_model: louisbrulenaudet/Maxine-34B-stock
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
library_name: transformers
tags:
- mlx
- merge
- mergekit
- louisbrulenaudet/Maxine-34B-stock
- ConvexAI/Luminex-34B-v0.2
- fblgit/UNA-34BeagleSimpleMath-32K-v1
- chemistry
- biology
- math
pipeline_tag: text-generation
model-index:
- name: Maxine-34B-stock
  results:
  - task:
      type: text-generation
    metrics:
    - name: Average
      type: Average
      value: 77.28
    - name: ARC
      type: ARC
      value: 74.06
    - name: GSM8K
      type: GSM8K
      value: 72.18
    - name: Winogrande
      type: Winogrande
      value: 83.9
    - name: TruthfulQA
      type: TruthfulQA
      value: 70.18
    - name: HellaSwag
      type: HellaSwag
      value: 86.74
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
---
# mlx-community/Maxine-34B-stock
This model was converted to MLX format from [louisbrulenaudet/Maxine-34B-stock](https://huggingface.co/louisbrulenaudet/Maxine-34B-stock) using mlx-lm version 0.15.2.
Refer to the [original model card](https://huggingface.co/louisbrulenaudet/Maxine-34B-stock) for more details on the model.
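For reference, conversions like this one are typically produced with mlx-lm's `convert` utility. A minimal sketch, assuming quantization via the `-q` flag (the exact settings used for this checkpoint are not recorded here):

```bash
# Sketch only: the quantization settings are an assumption, not a record of
# how this checkpoint was actually produced.
python -m mlx_lm.convert --hf-path louisbrulenaudet/Maxine-34B-stock -q
```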
## Use with mlx
```bash
pip install -U mlx-lm
python -m mlx_lm.generate --model mlx-community/Maxine-34B-stock --prompt "hello" --max-tokens 100 --temp 0.0
```
```python
from mlx_lm import load, generate

# Load the converted weights and tokenizer from the Hub.
model, tokenizer = load("mlx-community/Maxine-34B-stock")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
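Since the underlying model is instruction-tuned, prompts are usually best formatted with the tokenizer's chat template. A minimal sketch, assuming the converted tokenizer ships a chat template:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Maxine-34B-stock")

# Wrap the user message in the model's chat template (assumed present)
# before generating.
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```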
## Citing & Authors
If you use this code in your research, please use the following BibTeX entry.
```bibtex
@misc{louisbrulenaudet2024,
  author = {Louis Brulé Naudet},
  title = {Maxine-34B-stock, an xtraordinary 34B model},
  year = {2024},
  howpublished = {\url{https://huggingface.co/mlx-community/Maxine-34B-stock}},
}
```
## Feedback
If you have any feedback, please reach out at louisbrulenaudet@icloud.com.