---
language:
- ko
- en
- zh
- ja
license: other
library_name: transformers
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
tags:
- pytorch
---
# Gemma-Mling: Multilingual Gemma
> Update @ 2024.04.15: First release of Gemma-Mling 7B model
**Original Gemma Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B base version of the **Gemma-Mling** model,
continually pretrained mainly on Korean, English, Chinese, and Japanese text, plus a multilingual corpus covering about 500 languages.
**Resources and Technical Documentation**:
* [Original Google's Gemma-7B](https://huggingface.co/google/gemma-7b)
* [Training Code @ Github: Gemma-EasyLM](https://github.com/Beomi/Gemma-EasyLM)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Citation**
```bibtex
@misc{gemma_mling_7b,
  author = {Junbum Lee and Taekyoon Choi},
  title = {gemma-mling-7b},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/beomi/gemma-mling-7b}
}
```
**Model Developers**: Junbum Lee (Beomi) & Taekyoon Choi (Taekyoon)
## Model Information
### Usage
Below are a few code snippets to help you get started quickly. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-mling-7b")
model = AutoModelForCausalLM.from_pretrained("beomi/gemma-mling-7b")
input_text = "머신러닝과 딥러닝의 차이는"
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-mling-7b")
model = AutoModelForCausalLM.from_pretrained("beomi/gemma-mling-7b", device_map="auto")
input_text = "머신러닝과 딥러닝의 차이는"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
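If GPU memory is tight, the weights can also be loaded in half precision. The snippet below is an illustrative sketch rather than an official example from this card; the `torch.bfloat16` dtype and `max_new_tokens=128` are assumptions you can adjust.
```python
# Half-precision loading sketch (illustrative; use torch.float16 on GPUs
# without bfloat16 support).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-mling-7b")
model = AutoModelForCausalLM.from_pretrained(
    "beomi/gemma-mling-7b",
    device_map="auto",
    torch_dtype=torch.bfloat16,  # roughly halves memory vs. float32 weights
)

input_text = "머신러닝과 딥러닝의 차이는"
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```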
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated multilingual text in response to the input, such
as an answer to a question or a summary of a document.
## Implementation Information
Details about the model internals.
### Software
Training was done using [beomi/Gemma-EasyLM](https://github.com/Beomi/Gemma-EasyLM).
### Dataset
We trained on a mixture of datasets in multiple languages for roughly 100B tokens.
The released model is the best-performing checkpoint according to the evaluation results below.
For Korean and English, we used a sampled version of the llama2ko training dataset, combining the two languages at a 1:1 ratio (an illustrative sketch of this mixing appears after the table below).
| Dataset | Jsonl (GB) | Sampled |
|--------------------------|------------|---------|
| range3/cc100-ja | 96.39 | No |
| Skywork/SkyPile-150B | 100.57 | Yes |
| llama2ko dataset (ko/en) | 108.5 | Yes |
| cis-lmu/Glot500 | 181.24 | No |
| Total                    | 486.7      |         |
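As a rough illustration of the 1:1 Korean/English sampling mentioned above, the sketch below interleaves two streaming datasets with equal probability using the Hugging Face `datasets` library. This is not the actual training pipeline (training ran on [Gemma-EasyLM](https://github.com/Beomi/Gemma-EasyLM)); the dataset paths are placeholders.
```python
# Illustrative 1:1 language mixing with Hugging Face `datasets`.
# NOT the actual training pipeline; the dataset paths below are placeholders.
from datasets import load_dataset, interleave_datasets

ko = load_dataset("path/to/korean-corpus", split="train", streaming=True)
en = load_dataset("path/to/english-corpus", split="train", streaming=True)

# Draw from each stream with equal probability to approximate a 1:1 ratio.
mixed = interleave_datasets([ko, en], probabilities=[0.5, 0.5], seed=42)

for example in mixed.take(3):
    print(example)
```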
## Evaluation
Model evaluation metrics and results.
### Evaluation Scripts
- For Knowledge / KoBest / XCOPA / XWinograd
- [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) v0.4.2
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness && pip install -r requirements.txt && pip install -e .
lm_eval --model hf \
    --model_args pretrained=beomi/gemma-mling-7b,dtype="float16" \
    --tasks "haerae,kobest,kmmlu_direct,cmmlu,ceval-valid,mmlu,xwinograd,xcopa" \
    --num_fewshot "0,5,5,5,5,5,0,5" \
    --device cuda
```
- For JP Eval Harness
- [Stability-AI/lm-evaluation-harness (`jp-stable` branch)](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable)
```bash
git clone -b jp-stable https://github.com/Stability-AI/lm-evaluation-harness.git
cd lm-evaluation-harness && pip install -e ".[ja]"
pip install 'fugashi[unidic]' && python -m unidic download
cd lm-evaluation-harness && python main.py \
    --model hf-causal \
    --model_args pretrained=beomi/gemma-mling-7b,torch_dtype='auto' \
    --tasks "jcommonsenseqa-1.1-0.3,jnli-1.3-0.3,marc_ja-1.1-0.3,jsquad-1.1-0.3,jaqket_v2-0.2-0.3,xlsum_ja,mgsm" \
    --num_fewshot "3,3,3,2,1,1,5"
```
### Benchmark Results
| Category | Metric | Shots | Score |
|----------------------------------|----------------------|------------|--------|
| **Default Metric** | **ACC** | | |
| **Knowledge (5-shot)** | MMLU | | 61.76 |
| | KMMLU (Exact Match) | | 42.75 |
| | CMMLU | | 50.93 |
| | JMLU | | |
| | C-EVAL | | 50.07 |
| | HAERAE | 0-shot | 63.89 |
| **KoBest (5-shot)** | BoolQ | | 85.47 |
| | COPA | | 83.5 |
| | Hellaswag (acc-norm) | | 63.2 |
| | Sentineg | | 97.98 |
| | WiC | | 70.95 |
| **XCOPA (5-shot)** | IT | | 72.8 |
| | ID | | 76.4 |
| | TH | | 60.2 |
| | TR | | 65.6 |
| | VI | | 77.2 |
| | ZH | | 80.2 |
| **JP Eval Harness (Prompt ver 0.3)** | JcommonsenseQA | 3-shot | 85.97 |
| | JNLI | 3-shot | 39.11 |
| | Marc_ja | 3-shot | 96.48 |
| | JSquad (Exact Match) | 2-shot | 70.69 |
| | Jaqket (Exact Match) | 1-shot | 81.53 |
| | MGSM | 5-shot | 28.8 |
| **XWinograd (0-shot)** | EN | | 89.03 |
| | FR | | 72.29 |
| | JP | | 82.69 |
| | PT | | 73.38 |
| | RU | | 68.57 |
| | ZH | | 79.17 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, with input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Developers are encouraged to perform continuous
monitoring (using evaluation metrics and human review) and to explore de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
## Acknowledgement
Training was supported by the [TPU Research Cloud](https://sites.research.google/trc/) program.