Divyasreepat committed 3a9b369 (parent: a32d139): Update README.md with new model card content
---
library_name: keras-hub
---
### Model Overview
Gemma is Google's family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Gemma models are available with and without instruction tuning and come in two sizes: 2 billion and 7 billion parameters. Gemma 1.1 is the latest weights refresh. See the model card below for benchmarks, data sources, and intended use cases.

Weights are released under the [Gemma License](https://www.kaggle.com/models/google/gemma/license/consent). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).

## Links

* [Gemma Quickstart Notebook](https://www.kaggle.com/code/nilaychauhan/get-started-with-gemma-using-kerasnlp)
* [Gemma API Documentation](https://keras.io/api/keras_hub/models/gemma/)
* [Gemma Model Card](https://www.kaggle.com/models/google/gemma)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)

## Installation

Keras and KerasHub can be installed with:

```
pip install -U -q keras-hub
pip install -U -q "keras>=3"
```

JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.

## Presets

The following model checkpoints are provided by the Keras team. Full code examples for each are available below.

| Preset name                | Parameters | Description                                   |
|----------------------------|------------|-----------------------------------------------|
| `gemma_2b_en`              | 2.51B      | 2 billion parameter, 18-layer, base Gemma model. |
| `gemma_instruct_2b_en`     | 2.51B      | 2 billion parameter, 18-layer, instruction tuned Gemma model. |
| `gemma_1.1_instruct_2b_en` | 2.51B      | 2 billion parameter, 18-layer, instruction tuned Gemma model. The 1.1 update improves model quality. |
| `gemma_7b_en`              | 8.54B      | 7 billion parameter, 28-layer, base Gemma model. |
| `gemma_instruct_7b_en`     | 8.54B      | 7 billion parameter, 28-layer, instruction tuned Gemma model. |
| `gemma_1.1_instruct_7b_en` | 8.54B      | 7 billion parameter, 28-layer, instruction tuned Gemma model. The 1.1 update improves model quality. |

## Prompts

Gemma models are made available both pretrained and instruction tuned on turn-by-turn conversations. Base pretrained models (`gemma_2b_en`, `gemma_7b_en`) will complete sentences. The following are some example prompts:
- "My favorite brownie recipe is "
- "Why is the sky blue?"

Instruction tuned versions (suffixed with `instruct`) should be prompted with examples that precisely match the training data. Specifically, you must alternate user and model turns that begin and end with special tokens. Newlines do matter. See the following for an example:

```python
start_of_turn_user = "<start_of_turn>user\n"
start_of_turn_model = "<start_of_turn>model\n"
end_of_turn = "<end_of_turn>\n"
prompt = start_of_turn_user + "You are a friendly assistant. Say hi." + \
    end_of_turn + start_of_turn_model
```
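Multi-turn conversations follow the same pattern: each completed turn is closed with `end_of_turn`, and the prompt ends with an open model turn for the model to complete. A minimal sketch (the `build_prompt` helper and the sample reply text are illustrative, not part of the keras-hub API):

```python
start_of_turn_user = "<start_of_turn>user\n"
start_of_turn_model = "<start_of_turn>model\n"
end_of_turn = "<end_of_turn>\n"

def build_prompt(turns):
    """Format alternating (role, text) turns into the Gemma chat template,
    leaving the prompt open at a model turn for generation."""
    prompt = ""
    for role, text in turns:
        start = start_of_turn_user if role == "user" else start_of_turn_model
        prompt += start + text + end_of_turn
    return prompt + start_of_turn_model

prompt = build_prompt([
    ("user", "Say hi."),
    ("model", "Hi! How can I help?"),  # hypothetical earlier model reply
    ("user", "Now say bye."),
])
```

The resulting string can be passed directly to `generate()` on an instruction tuned preset.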

### Example Usage
```python
!pip install -U keras-hub
!pip install -U keras
```

```python
import keras
import keras_hub
import numpy as np
```

Use `generate()` to do text generation.
```python
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_1.1_instruct_7b_en")
gemma_lm.generate("Keras is a", max_length=30)

# Generate with batched prompts.
gemma_lm.generate(["Keras is a", "I want to say"], max_length=30)
```

Compile the `generate()` function with a custom sampler.
```python
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_1.1_instruct_7b_en")
gemma_lm.compile(sampler="top_k")
gemma_lm.generate("I want to say", max_length=30)

gemma_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
gemma_lm.generate("I want to say", max_length=30)
```

Use `generate()` without preprocessing.
```python
prompt = {
    # `2, 214064, 603` maps to the start token followed by "Keras is".
    "token_ids": np.array([[2, 214064, 603, 0, 0, 0, 0]] * 2),
    # Use `"padding_mask"` to indicate values that should not be overridden.
    "padding_mask": np.array([[1, 1, 1, 0, 0, 0, 0]] * 2),
}

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "gemma_1.1_instruct_7b_en",
    preprocessor=None,
)
gemma_lm.generate(prompt)
```
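The padded arrays above can be built from variable-length token id lists with plain NumPy. A minimal sketch (the `pad_batch` helper is illustrative, not a keras-hub function):

```python
import numpy as np

def pad_batch(sequences, max_length):
    """Right-pad lists of token ids to `max_length`, returning the
    `token_ids` and `padding_mask` arrays that `generate()` expects."""
    token_ids = np.zeros((len(sequences), max_length), dtype="int32")
    padding_mask = np.zeros((len(sequences), max_length), dtype="int32")
    for i, seq in enumerate(sequences):
        token_ids[i, : len(seq)] = seq       # real tokens at the front
        padding_mask[i, : len(seq)] = 1      # 1 marks positions to keep
    return {"token_ids": token_ids, "padding_mask": padding_mask}

# Reproduces the batch of two identical "Keras is" prompts above.
prompt = pad_batch([[2, 214064, 603], [2, 214064, 603]], max_length=7)
```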

Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_1.1_instruct_7b_en")
gemma_lm.fit(x=features, batch_size=2)
```

Call `fit()` without preprocessing.
```python
x = {
    "token_ids": np.array([[2, 214064, 603, 5271, 6044, 9581, 3, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 0]] * 2),
}
y = np.array([[214064, 603, 5271, 6044, 9581, 3, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 1, 1, 0, 0]] * 2)

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "gemma_1.1_instruct_7b_en",
    preprocessor=None,
)
gemma_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```
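The labels `y` are just `x["token_ids"]` shifted one step left (each position predicts the next token), and `sw` zeroes out positions that have no real next token. A sketch of that relationship using the same example batch (not a keras-hub helper, just plain NumPy):

```python
import numpy as np

token_ids = np.array([[2, 214064, 603, 5271, 6044, 9581, 3, 0]] * 2)
padding_mask = np.array([[1, 1, 1, 1, 1, 1, 1, 0]] * 2)

# Labels: each position predicts the token id that follows it.
y = np.roll(token_ids, -1, axis=1)
y[:, -1] = 0
# Sample weights: only positions whose next token is real contribute to loss.
sw = np.roll(padding_mask, -1, axis=1)
sw[:, -1] = 0
```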

## Example Usage with Hugging Face URI

```python
!pip install -U keras-hub
!pip install -U keras
```

```python
import keras
import keras_hub
import numpy as np
```

Use `generate()` to do text generation.
```python
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("hf://keras/gemma_1.1_instruct_7b_en")
gemma_lm.generate("Keras is a", max_length=30)

# Generate with batched prompts.
gemma_lm.generate(["Keras is a", "I want to say"], max_length=30)
```

Compile the `generate()` function with a custom sampler.
```python
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("hf://keras/gemma_1.1_instruct_7b_en")
gemma_lm.compile(sampler="top_k")
gemma_lm.generate("I want to say", max_length=30)

gemma_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
gemma_lm.generate("I want to say", max_length=30)
```

Use `generate()` without preprocessing.
```python
prompt = {
    # `2, 214064, 603` maps to the start token followed by "Keras is".
    "token_ids": np.array([[2, 214064, 603, 0, 0, 0, 0]] * 2),
    # Use `"padding_mask"` to indicate values that should not be overridden.
    "padding_mask": np.array([[1, 1, 1, 0, 0, 0, 0]] * 2),
}

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "hf://keras/gemma_1.1_instruct_7b_en",
    preprocessor=None,
)
gemma_lm.generate(prompt)
```

Call `fit()` on a single batch.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
gemma_lm = keras_hub.models.GemmaCausalLM.from_preset("hf://keras/gemma_1.1_instruct_7b_en")
gemma_lm.fit(x=features, batch_size=2)
```

Call `fit()` without preprocessing.
```python
x = {
    "token_ids": np.array([[2, 214064, 603, 5271, 6044, 9581, 3, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 0]] * 2),
}
y = np.array([[214064, 603, 5271, 6044, 9581, 3, 0, 0]] * 2)
sw = np.array([[1, 1, 1, 1, 1, 1, 0, 0]] * 2)

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "hf://keras/gemma_1.1_instruct_7b_en",
    preprocessor=None,
)
gemma_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
```