amezasor committed
Commit 59bd5eb • 1 Parent(s): c9eb010
Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -16,16 +16,16 @@ base_model:
  Granite-3.1-8B-Instruct is an 8B parameter model finetuned from *Granite-3.1-8B-Base* using a combination of open-source instruction datasets with permissive licenses and internally collected synthetic datasets. This model is developed using a diverse set of techniques with a structured chat format, including supervised finetuning, model alignment using reinforcement learning, and model merging.
 
  - **Developers:** Granite Team, IBM
- - **GitHub Repository:** [ibm-granite/granite-3.1-8b-instruct]()
+ - **GitHub Repository:** [ibm-granite/granite-3.1-language-models](https://github.com/ibm-granite/granite-3.1-language-models)
  - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
- - **Paper:** [Granite 3.1 Language Models]()
+ - **Paper:** [Granite 3.1 Language Models (coming soon)](https://huggingface.co/collections/ibm-granite/granite-31-language-models-6751dbbf2f3389bec5c6f02d)
  - **Release Date**: December 18th, 2024
  - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
 
  **Supported Languages:**
  English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.1 models for languages beyond these 12 languages.
 
- **Intended use:**
+ **Intended Use:**
  The model is designed to respond to general instructions and can be used to build AI assistants for multiple domains, including business applications.
 
  *Capabilities*
@@ -88,20 +88,20 @@ Granite-3.1-8B-Instruct is based on a decoder-only dense transformer architectur
  | Number of KV heads | 8 | **8** | 8 | 8 |
  | MLP hidden size | 8192 | **12800** | 512 | 512 |
  | MLP activation | SwiGLU | **SwiGLU** | SwiGLU | SwiGLU |
- | Number of Experts | — | **—** | 32 | 40 |
+ | Number of experts | — | **—** | 32 | 40 |
  | MoE TopK | — | **—** | 8 | 8 |
  | Initialization std | 0.1 | **0.1** | 0.1 | 0.1 |
- | Sequence Length | 4096 | **4096** | 4096 | 4096 |
- | Position Embedding | RoPE | **RoPE** | RoPE | RoPE |
+ | Sequence length | 128K | **128K** | 128K | 128K |
+ | Position embedding | RoPE | **RoPE** | RoPE | RoPE |
  | # Parameters | 2.5B | **8.1B** | 1.3B | 3.3B |
- | # Active Parameters | 2.5B | **8.1B** | 400M | 800M |
+ | # Active parameters | 2.5B | **8.1B** | 400M | 800M |
  | # Training tokens | 12T | **12T** | 10T | 10T |
 
  **Training Data:**
  Overall, our SFT data is largely composed of three key sources: (1) publicly available datasets with permissive licenses, (2) internal synthetic data targeting specific capabilities, and (3) very small amounts of human-curated data. A detailed attribution of datasets can be found in the [Granite Technical Report]() and [Accompanying Author List]().
 
  **Infrastructure:**
- We train Granite 3.1 Language Models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs while minimizing environmental impact by utilizing 100% renewable energy sources.
+ We train Granite 3.1 Language Models using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.
 
  **Ethical Considerations and Limitations:**
  Granite 3.1 Instruct Models are primarily finetuned on instruction-response pairs, mostly in English, but also on multilingual data covering eleven languages. Although the model can handle multilingual dialog use cases, its performance might not match that on English tasks; in such cases, introducing a small number of examples (few-shot) can help the model generate more accurate outputs. While the model has been aligned with safety in mind, it may in some cases produce inaccurate, biased, or unsafe responses to user prompts, so we urge the community to apply proper safety testing and tuning tailored to their specific tasks.
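
The intended-use note above describes an instruction-following model with a structured chat format. A minimal sketch of that flow with Hugging Face `transformers`, assuming the published model ID `ibm-granite/granite-3.1-8b-instruct` and illustrative generation settings:

```python
# Minimal usage sketch (model ID and generation settings are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-3.1-8b-instruct"  # assumed HF model ID
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",           # requires the `accelerate` package
    torch_dtype=torch.bfloat16,  # an 8B model in bf16 fits on recent GPUs
)
model.eval()

# The structured chat format is applied by the tokenizer's chat template.
chat = [{"role": "user", "content": "List three uses of an AI assistant in business."}]
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=200)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0, input_ids.shape[1]:], skip_special_tokens=True))
```

Per the architecture table above, the sequence length is 128K tokens, so long documents can be passed in a single turn, memory permitting.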
 
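The limitations paragraph above suggests few-shot prompting for non-English tasks. One way to do this, reusing the tokenizer and model from the previous sketch, is to prepend worked examples as prior chat turns; the German-to-English pairs here are purely illustrative:

```python
# Few-shot sketch for a non-English task (example pairs are illustrative).
few_shot_chat = [
    {"role": "user", "content": "Übersetze ins Englische: Guten Morgen."},
    {"role": "assistant", "content": "Good morning."},
    {"role": "user", "content": "Übersetze ins Englische: Wie spät ist es?"},
    {"role": "assistant", "content": "What time is it?"},
    # The real query comes last and benefits from the in-context examples above.
    {"role": "user", "content": "Übersetze ins Englische: Das Modell unterstützt zwölf Sprachen."},
]
input_ids = tokenizer.apply_chat_template(
    few_shot_chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0, input_ids.shape[1]:], skip_special_tokens=True))
```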