---
license: gemma
library_name: transformers
tags:
- GGUF
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
quantized_by: andrijdavid
---
# gemma-7b-GGUF
- Original model: [gemma-7b](https://huggingface.co/google/gemma-7b)

<!-- description start -->
## Description

This repo contains GGUF format model files for [gemma-7b](https://huggingface.co/google/gemma-7b).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with numerous features and powerful extensions, including GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama), a lightweight and extensible framework for building and running language models locally, featuring a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/), an intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [localGPT](https://github.com/PromtEngineer/localGPT), an open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->

<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
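
As a sanity check on where these fractional bit counts come from, the short calculation below reproduces the 4.5 bpw figure for GGML_TYPE_Q4_K by counting the bits in one super-block. It assumes a single fp16 scale and fp16 min are also stored per super-block; treat it as a rough accounting rather than the exact llama.cpp memory layout.

```python
# Back-of-the-envelope check of the Q4_K bits-per-weight figure (illustrative only)
weights_per_superblock = 8 * 32             # 8 blocks x 32 weights = 256 weights
quant_bits = weights_per_superblock * 4     # 4-bit quants        -> 1024 bits
scale_min_bits = 8 * (6 + 6)                # 6-bit scale + 6-bit min per block -> 96 bits
superblock_bits = 2 * 16                    # assumed fp16 super-block scale and min -> 32 bits
total_bits = quant_bits + scale_min_bits + superblock_bits
print(total_bits / weights_per_superblock)  # 4.5 bits per weight
```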
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: LiteLLMs/gemma-7b-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download LiteLLMs/gemma-7b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```

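If you prefer to stay in Python, the same file can be fetched with the `huggingface_hub` library installed above. A minimal sketch (the filename is simply the example quant used throughout this README):

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from this repo into the current directory
path = hf_hub_download(
    repo_id="LiteLLMs/gemma-7b-GGUF",
    filename="Q4_0/Q4_0-00001-of-00009.gguf",
    local_dir=".",
)
print(path)
```
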
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download LiteLLMs/gemma-7b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install huggingface_hub[hf_transfer]
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/gemma-7b-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./Q4_0/Q4_0-00001-of-00009.gguf",  # Download the model file first
  n_ctx=8192,  # The max sequence length to use - Gemma was trained with an 8192-token context; longer sequence lengths require much more resources
  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
  "<PROMPT>",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True  # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

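For a quick orientation, below is a minimal sketch of pointing LangChain's `LlamaCpp` wrapper at one of these GGUF files. The guides above are the authoritative references; this sketch assumes a recent `langchain-community` release and llama-cpp-python installed as described earlier.

```python
# pip install langchain langchain-community llama-cpp-python
from langchain_community.llms import LlamaCpp

# Wrap the local GGUF file as a LangChain LLM
llm = LlamaCpp(
    model_path="./Q4_0/Q4_0-00001-of-00009.gguf",
    n_ctx=8192,        # Gemma's trained context length
    n_gpu_layers=35,   # Set to 0 if you have no GPU acceleration
    temperature=0.7,
)

print(llm.invoke("Write me a poem about Machine Learning."))
```
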
<!-- README_GGUF.md-how-to-run end -->

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: gemma-7b

# Gemma Model Card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)

This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it).

**Resources and Technical Documentation**:

* [Gemma Technical Report](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf)
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-7b-gg-hf)

**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)

**Authors**: Google

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.

### Context Length
Models are trained on a context length of 8192 tokens.

### Usage

Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.

#### Fine-tuning examples

You can find fine-tuning notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide:

* A script to perform Supervised Fine-Tuning (SFT) on the UltraChat dataset using [QLoRA](https://huggingface.co/papers/2305.14314) (a rough sketch of this style of setup is shown after this list)
* A script to perform SFT using FSDP on TPU devices
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on the English quotes dataset. You can also find the copy of the notebook [here](https://github.com/huggingface/notebooks/blob/main/peft/gemma_7b_english_quotes.ipynb).

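As a rough orientation only (this is a hedged sketch, not the script shipped in `examples/`; the trainer and hyperparameters below are placeholders), a QLoRA-style setup combines 4-bit loading with small trainable LoRA adapters:

```python
# pip install -U transformers peft bitsandbytes accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b", quantization_config=bnb_config, device_map="auto"
)

# Attach LoRA adapters; these hyperparameters are illustrative placeholders
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# ...then train with your preferred trainer (e.g. trl's SFTTrainer) on your dataset.
```
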
#### Running the model on a CPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Running the model on a GPU using different precisions

* _Using `torch.float16`_

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", revision="float16")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

* _Using `torch.bfloat16`_

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", device_map="auto", torch_dtype=torch.bfloat16)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Quantized Versions through `bitsandbytes`

* _Using 8-bit precision (int8)_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

* _Using 4-bit precision_

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", quantization_config=quantization_config)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Other optimizations

* _Flash Attention 2_

First make sure to install `flash-attn` in your environment: `pip install flash-attn`

```diff
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
+   attn_implementation="flash_attention_2"
).to(0)
```

### Inputs and outputs

* **Input:** Text string, such as a question, a prompt, or a document to be
  summarized.
* **Output:** Generated English-language text in response to the input, such
  as an answer to a question, or a summary of a document.

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:

* Web Documents: A diverse collection of web text ensures the model is exposed
  to a broad range of linguistic styles, topics, and vocabulary. Primarily
  English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
  programming languages, which improves its ability to generate code or
  understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
  reasoning, symbolic representation, and to address mathematical queries.

The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training
data:

* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
  applied at multiple stages in the data preparation process to ensure the
  exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
  reliable, automated techniques were used to filter out certain personal
  information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
  [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).

Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:

* Performance: TPUs are specifically designed to handle the massive computations
  involved in training LLMs. They can speed up training considerably compared to
  CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
  for the handling of large models and batch sizes during training. This can
  lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
  handling the growing complexity of large foundation models. You can distribute
  training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
  solution for training large models compared to CPU-based infrastructure,
  especially when considering the time and resources saved due to faster
  training.
* These advantages are aligned with
  [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).

### Software

Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture).

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.

ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.

Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:

| Benchmark | Metric | 2B Params | 7B Params |
| --------- | ------ | --------- | --------- |

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats
    such as poems, scripts, code, marketing copy, and email drafts.
  * Chatbots and Conversational AI: Power conversational interfaces for customer
    service, virtual assistants, or interactive applications.
  * Text Summarization: Generate concise summaries of a text corpus, research
    papers, or reports.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a
    foundation for researchers to experiment with NLP techniques, develop
    algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences,
    aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text
    by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the
    model's capabilities. Biases or gaps in the training data can lead to
    limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can
    handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and
    instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided
    (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle
    nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their
    training datasets, but they are not knowledge bases. They may generate
    incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability
    to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny, with input data pre-processing described and posterior evaluations
    reported in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines are provided for responsible use with the model; see the
    [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability
  * This model card summarizes details on the models' architecture,
    capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share
    innovation by making LLM technology accessible to developers and researchers
    across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
  human review) and the exploration of de-biasing techniques during model
  training, fine-tuning, and other use cases are encouraged.
* Generation of harmful content: Mechanisms and guidelines for content safety
  are essential. Developers are encouraged to exercise caution and implement
  appropriate content safety safeguards based on their specific product policies
  and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
  end-user education can help mitigate malicious applications of LLMs.
  Educational resources and reporting mechanisms for users to flag misuse are
  provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
  (Personally Identifiable Information). Developers are encouraged to adhere to
  privacy regulations with privacy-preserving techniques.

### Benefits

At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open model
alternatives.

<!-- original-model-card end -->