---
base_model: ibm-granite/granite-3.0-3b-a800m-instruct-GGUF 
license: apache-2.0
pipeline_tag: text-generation
tags:
- language
- granite-3.0
quantized_model: AliNemati
inference: false
model-index:
- name: granite-3.0-2b-instruct
  results:
  - task:
      type: text-generation
    dataset:
      name: IFEval
      type: instruction-following
    metrics:
    - type: pass@1
      value: 52.27
      name: pass@1
    - type: pass@1
      value: 8.22
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: AGI-Eval
      type: human-exams
    metrics:
    - type: pass@1
      value: 40.52
      name: pass@1
    - type: pass@1
      value: 65.82
      name: pass@1
    - type: pass@1
      value: 34.45
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: OBQA
      type: commonsense
    metrics:
    - type: pass@1
      value: 46.6
      name: pass@1
    - type: pass@1
      value: 71.21
      name: pass@1
    - type: pass@1
      value: 82.61
      name: pass@1
    - type: pass@1
      value: 77.51
      name: pass@1
    - type: pass@1
      value: 60.32
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: BoolQ
      type: reading-comprehension
    metrics:
    - type: pass@1
      value: 88.65
      name: pass@1
    - type: pass@1
      value: 21.58
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: ARC-C
      type: reasoning
    metrics:
    - type: pass@1
      value: 64.16
      name: pass@1
    - type: pass@1
      value: 33.81
      name: pass@1
    - type: pass@1
      value: 51.55
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis
      type: code
    metrics:
    - type: pass@1
      value: 64.63
      name: pass@1
    - type: pass@1
      value: 57.16
      name: pass@1
    - type: pass@1
      value: 65.85
      name: pass@1
    - type: pass@1
      value: 49.6
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: GSM8K
      type: math
    metrics:
    - type: pass@1
      value: 68.99
      name: pass@1
    - type: pass@1
      value: 30.94
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: PAWS-X (7 langs)
      type: multilingual
    metrics:
    - type: pass@1
      value: 64.94
      name: pass@1
    - type: pass@1
      value: 48.2
      name: pass@1
---

**osllm.ai Models Highlights Program**

**We believe there's no need to pay per token when you have a GPU in your own computer.**

This program highlights new and noteworthy models from the community. Join the conversation on [Discord](https://discord.gg/2fftQauwDD).


**Model creator**: ibm-granite

**Original model**: granite-3.0-3b-a800m-instruct


[**README** of the original model](https://huggingface.co/ibm-granite/granite-3.0-3b-a800m-instruct/blob/main/README.md)

<p align="center">
  <a href="https://osllm.ai">Official Website</a> &bull; <a href="https://docs.osllm.ai/index.html">Documentation</a> &bull; <a href="https://discord.gg/2fftQauwDD">Discord</a>
</p>



<p align="center">
  <b>NEW:</b> <a href="https://docs.google.com/forms/d/1CQXJvxLUqLBSXnjqQmRpOyZqD6nrKubLz2WTcIJ37fU/prefill">Subscribe to our mailing list</a> for updates and news!
</p>


Email: support@osllm.ai


**Model Summary**:

Granite-3.0-3B-A800M-Instruct is a 3B parameter mixture-of-experts model (roughly 800M active parameters) finetuned from Granite-3.0-3B-A800M-Base using a combination of open-source instruction datasets with permissive licenses and internally collected synthetic datasets. The model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.

- **Developers:** Granite Team, IBM
- **GitHub Repository:** [ibm-granite/granite-3.0-language-models](https://github.com/ibm-granite/granite-3.0-language-models)
- **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- **Paper:** [Granite 3.0 Language Models](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf)
- **Release Date**: October 21st, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

**Supported Languages:**
English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.0 models for languages beyond these 12.

**Intended use:** 
The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.

*Capabilities*
* Summarization
* Text classification
* Text extraction
* Question-answering
* Retrieval Augmented Generation (RAG)
* Code related tasks
* Function-calling tasks
* Multilingual dialog use cases
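
**Quickstart (local inference)**

As a concrete example of the dialog and instruction-following use cases above, the sketch below chats with a local GGUF build of this model through [`llama-cpp-python`](https://github.com/abetlen/llama-cpp-python), the Python bindings for llama.cpp. It assumes `llama-cpp-python` is installed and that you have already downloaded a quantized `.gguf` file from this repository; the filename and generation settings shown are placeholders, not the only supported configuration.

```python
# Minimal local-inference sketch using llama-cpp-python (Python bindings for llama.cpp).
# The .gguf filename below is a placeholder -- point it at the quantized file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="granite-3.0-3b-a800m-instruct.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize why running LLMs locally can be useful."},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

If the GGUF file embeds a chat template, `create_chat_completion` applies it automatically; otherwise you may need to pass an explicit `chat_format`. The same file can also be loaded by the llama.cpp CLI and server.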




**About [osllm.ai](https://osllm.ai)**:

[osllm.ai](https://osllm.ai) is a community-driven platform that provides access to a wide range of open-source language models.

1. **[IndoxJudge](https://github.com/indoxJudge)**: A free, open-source tool for evaluating large language models (LLMs).  
It provides key metrics to assess performance, reliability, and risks like bias and toxicity, helping ensure model safety.

1. **[inDox](https://github.com/inDox)**: An open-source retrieval augmentation tool for extracting data from various  
document formats (text, PDFs, HTML, Markdown, LaTeX). It handles structured and unstructured data and supports both  
online and offline LLMs.

1. **[IndoxGen](https://github.com/IndoxGen)**: A framework for generating high-fidelity synthetic data using LLMs and  
human feedback, designed for enterprise use with high flexibility and precision.

1. **[Phoenix](https://github.com/Phoenix)**: A multi-platform, open-source chatbot that interacts with documents  
locally, without internet or GPU. It integrates inDox and IndoxJudge to improve accuracy and prevent hallucinations,  
ideal for sensitive fields like healthcare.

1. **[Phoenix_cli](https://github.com/Phoenix_cli)**: A multi-platform command-line tool that runs LLaMA models locally,  
supporting up to eight concurrent tasks through multithreading, eliminating the need for cloud-based services.




**Special thanks**

🙏 Special thanks to [**Georgi Gerganov**](https://github.com/ggerganov) and the whole team working on [**llama.cpp**](https://github.com/ggerganov/llama.cpp) for making all of this possible.



**Disclaimers**

[osllm.ai](https://osllm.ai) is not the creator, originator, or owner of any model featured in the Community Model Program. Each Community Model is created and provided by third parties. osllm.ai does not endorse, support, represent, or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate, or otherwise inappropriate or deceptive. Each Community Model is the sole responsibility of the person or entity who originated it. osllm.ai may not monitor or control the Community Models and cannot, and does not, take responsibility for any such model. osllm.ai disclaims all warranties or guarantees about the accuracy, reliability, or benefits of the Community Models. osllm.ai further disclaims any warranty that a Community Model will meet your requirements, be secure, uninterrupted, available at any time or location, error-free, or virus-free, or that any errors will be corrected. You are solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or your use of any other Community Model provided by or through [osllm.ai](https://osllm.ai).