---
license: apache-2.0
language:
- en
base_model: louisbrulenaudet/Pearl-7B-0211-ties
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
library_name: transformers
tags:
- mlx
- merge
- mergekit
- louisbrulenaudet/Pearl-7B-slerp
- WizardLM/WizardMath-7B-V1.1
- cognitivecomputations/WestLake-7B-v2-laser
- CultriX/NeuralTrix-7B-dpo
- chemistry
- biology
- math
pipeline_tag: text-generation
model-index:
- name: Pearl-7B-0211-ties
  results:
  - task:
      type: text-generation
    metrics:
    - name: Average
      type: Average
      value: 75.11
    - name: ARC
      type: ARC
      value: 71.42
    - name: GSM8K
      type: GSM8K
      value: 70.66
    - name: Winogrande
      type: Winogrande
      value: 84.37
    - name: TruthfulQA
      type: TruthfulQA
      value: 71.46
    - name: HellaSwag
      type: HellaSwag
      value: 88.86
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
---

<center><img src='https://i.imgur.com/0xFTuAX.png' width='450px'></center>

# mlx-community/Pearl-7B

This model was converted to MLX format from [`louisbrulenaudet/Pearl-7B-0211-ties`](https://huggingface.co/louisbrulenaudet/Pearl-7B-0211-ties) using mlx-lm version **0.15.2**.
Refer to the [original model card](https://huggingface.co/louisbrulenaudet/Pearl-7B-0211-ties) for more details on the model.

## Use with mlx

```bash
pip install -U mlx-lm
python -m mlx_lm.generate --model mlx-community/Pearl-7B --prompt "hello" --max-tokens 100 --temp 0.0
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the converted weights and tokenizer.
model, tokenizer = load("mlx-community/Pearl-7B")

# Generate a completion; verbose=True streams tokens as they are produced.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
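Because Pearl-7B-0211-ties is an instruction-tuned merge, prompts generally work better when wrapped in the tokenizer's chat template, if the converted tokenizer ships one. A minimal sketch, assuming the tokenizer exposes a chat template (the `messages` content is a placeholder prompt):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Pearl-7B")

prompt = "Explain the difference between a list and a tuple in Python."

# Wrap the raw prompt in the model's chat template when one is available;
# otherwise fall back to the plain string.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```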

## Citing & Authors

If you use this code in your research, please use the following BibTeX entry.

```BibTeX
@misc{louisbrulenaudet2024,
  author =       {Louis Brulé Naudet},
  title =        {Pearl-7B-0211-ties, an extraordinary 7B model},
  year =         {2024},
  howpublished = {\url{https://huggingface.co/louisbrulenaudet/Pearl-7B-0211-ties}},
}
```

## Feedback

If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com).