---
library_name: keras-hub
license: gemma
language:
- en
tags:
- text-generation-inference
- text-classification
- text-conversation
- text-to-text-generation
pipeline_tag: text-generation
---
### Model Overview
Gemma is Google's family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Gemma models are available with and without instruction tuning and come in two sizes: 2 billion and 7 billion parameters. Gemma 1.1 is the latest weights refresh. See the model card below for benchmarks, data sources, and intended use cases.

Weights are released under the [Gemma License](https://www.kaggle.com/models/google/gemma/license/consent). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).

## Links

* [Gemma Quickstart Notebook](https://www.kaggle.com/code/nilaychauhan/get-started-with-gemma-using-kerasnlp)
* [Gemma API Documentation](https://keras.io/api/keras_hub/models/gemma/)
* [Gemma Model Card](https://www.kaggle.com/models/google/gemma)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)

## Installation

Keras and KerasHub can be installed with:

```shell
pip install -U -q keras-hub
pip install -U -q "keras>=3"
```

JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.
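
Keras 3 runs on any of these backends. As a minimal sketch, one way to choose a backend is to set the `KERAS_BACKEND` environment variable before the first Keras import (shown here with JAX; `"tensorflow"` and `"torch"` are the other valid values):

```python
import os

# Must be set before the first `import keras`.
os.environ["KERAS_BACKEND"] = "jax"

import keras
import keras_hub
```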

## Presets

The following model checkpoints are provided by the Keras team. Full code examples for each are available below.

| Preset name                | Parameters | Description                                                                                          |
|----------------------------------------|------------|----------------------------------------------|
| `gemma_2b_en`              | 2.51B      | 2 billion parameter, 18-layer, base Gemma model.                                                     |
| `gemma_instruct_2b_en`     | 2.51B      | 2 billion parameter, 18-layer, instruction tuned Gemma model.                                        |
| `gemma_1.1_instruct_2b_en` | 2.51B      | 2 billion parameter, 18-layer, instruction tuned Gemma model. The 1.1 update improves model quality. |
| `gemma_7b_en`              | 8.54B      | 7 billion parameter, 28-layer, base Gemma model.                                                     |
| `gemma_instruct_7b_en`     | 8.54B      | 7 billion parameter, 28-layer, instruction tuned Gemma model.                                        |
| `gemma_1.1_instruct_7b_en` | 8.54B      | 7 billion parameter, 28-layer, instruction tuned Gemma model. The 1.1 update improves model quality. |

## Prompts

Gemma models are released both as pretrained base models and as versions instruction tuned on turn-by-turn conversations. Base pretrained models (`gemma_2b_en`, `gemma_7b_en`) will complete sentences. The following are some example prompts:
- "My favorite brownie recipe is "
- "Why is the sky blue?"

Instruction tuned versions (with `instruct` in the preset name) should be prompted with examples that precisely match the training data. Specifically, you must alternate user and model turns that begin and end with special tokens. Newlines matter. See the following for an example:

```python
# Gemma chat turns are wrapped in special tokens; the trailing
# "<start_of_turn>model\n" cues the model to produce its reply.
start_of_turn_user = "<start_of_turn>user\n"
start_of_turn_model = "<start_of_turn>model\n"
end_of_turn = "<end_of_turn>\n"
prompt = start_of_turn_user + "You are a friendly assistant. Say hi." + \
    end_of_turn + start_of_turn_model
```
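
For instance, this prompt can be passed straight to `generate()` (a quick sketch using the smaller instruct preset from the table above):

```python
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_1.1_instruct_2b_en")
# Generation continues from the open model turn and stops at
# `max_length` tokens or an end-of-sequence token.
output = gemma_lm.generate(prompt, max_length=64)
```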

## Example Usage
```python
!pip install -U keras-hub
!pip install -U keras
```

```python
import keras
import keras_hub
import numpy as np
```

Use `generate()` to do text generation.
```python
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_1.1_instruct_7b_en")
gemma_lm.generate("Keras is a", max_length=30)

# Generate with batched prompts.
gemma_lm.generate(["Keras is a", "I want to say"], max_length=30)
```

Compile the `generate()` function with a custom sampler.
```python
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_1.1_instruct_7b_en")
gemma_lm.compile(sampler="top_k")
gemma_lm.generate("I want to say", max_length=30)

gemma_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
gemma_lm.generate("I want to say", max_length=30)
```

Use `generate()` without preprocessing.
```python
prompt = {
    # `2, 214064, 603` maps to the start token followed by "Keras is".
    "token_ids": np.array([[2, 214064, 603, 0, 0, 0, 0]] * 2),
    # Use `"padding_mask"` to indicate values that should not be overridden.
    "padding_mask": np.array([[1, 1, 1, 0, 0, 0, 0]] * 2),
}

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "gemma_1.1_instruct_7b_en",
    preprocessor=None,
)
gemma_lm.generate(prompt)
```
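
With `preprocessor=None`, `generate()` returns a dict of token ids rather than decoded text. A minimal sketch for recovering strings, assuming the preset's standalone tokenizer, is:

```python
# Decode the generated ids back to text with the matching tokenizer.
tokenizer = keras_hub.models.GemmaTokenizer.from_preset("gemma_1.1_instruct_7b_en")
text = tokenizer.detokenize(gemma_lm.generate(prompt)["token_ids"])
```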

Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_1.1_instruct_7b_en")
gemma_lm.fit(x=features, batch_size=2)
```

Call `fit()` without preprocessing.
```python
x = {
    "token_ids": np.array([[2, 214064, 603, 5271, 6044, 9581, 3, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 0]] * 2),
}
# Labels are the input ids shifted one position to the left.
y = np.array([[214064, 603, 5271, 6044, 9581, 3, 0, 0]] * 2)
# Zero the loss on padding positions.
sw = np.array([[1, 1, 1, 1, 1, 1, 0, 0]] * 2)

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "gemma_1.1_instruct_7b_en",
    preprocessor=None,
)
gemma_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
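
Rather than building these arrays by hand, a minimal sketch is to let the matching preprocessor produce the same `(x, y, sample_weight)` tuple from raw strings (assuming the standalone `GemmaCausalLMPreprocessor` API):

```python
preprocessor = keras_hub.models.GemmaCausalLMPreprocessor.from_preset(
    "gemma_1.1_instruct_7b_en",
    sequence_length=8,
)
# Returns tokenized inputs, left-shifted labels, and a padding-aware
# sample weight mask in one call.
x, y, sw = preprocessor(["The quick brown fox jumped."])
```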

## Example Usage with Hugging Face URI

```python
!pip install -U keras-hub
!pip install -U keras
```

```python
import keras
import keras_hub
import numpy as np
```

Use `generate()` to do text generation.
```python
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("hf://keras/gemma_1.1_instruct_7b_en")
gemma_lm.generate("Keras is a", max_length=30)

# Generate with batched prompts.
gemma_lm.generate(["Keras is a", "I want to say"], max_length=30)
```

Compile the `generate()` function with a custom sampler.
```python
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("hf://keras/gemma_1.1_instruct_7b_en")
gemma_lm.compile(sampler="top_k")
gemma_lm.generate("I want to say", max_length=30)

gemma_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
gemma_lm.generate("I want to say", max_length=30)
```

Use `generate()` without preprocessing.
```python
prompt = {
    # `2, 214064, 603` maps to the start token followed by "Keras is".
    "token_ids": np.array([[2, 214064, 603, 0, 0, 0, 0]] * 2),
    # Use `"padding_mask"` to indicate values that should not be overridden.
    "padding_mask": np.array([[1, 1, 1, 0, 0, 0, 0]] * 2),
}

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "hf://keras/gemma_1.1_instruct_7b_en",
    preprocessor=None,
)
gemma_lm.generate(prompt)
```

Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("hf://keras/gemma_1.1_instruct_7b_en")
gemma_lm.fit(x=features, batch_size=2)
```

Call `fit()` without preprocessing.
```python
x = {
    "token_ids": np.array([[2, 214064, 603, 5271, 6044, 9581, 3, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 0]] * 2),
}
# Labels are the input ids shifted one position to the left.
y = np.array([[214064, 603, 5271, 6044, 9581, 3, 0, 0]] * 2)
# Zero the loss on padding positions.
sw = np.array([[1, 1, 1, 1, 1, 1, 0, 0]] * 2)

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "hf://keras/gemma_1.1_instruct_7b_en",
    preprocessor=None,
)
gemma_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```