Update README.md
README.md
CHANGED
@@ -20,7 +20,7 @@ datasets:
 20      - **License:** Gemma
 21      - **Finetuned from model :** tomo1222/gemma-2-27b-bf16-4bit
 22
 23 -    tomo1222/gemma-2-27b-bf16-4bit : [google/gemma-2-27b](https://huggingface.co/google/gemma-2-27b) converted to 4-bit with BitsAndBytes for direct use with [Unsloth](https://github.com/unslothai/unsloth)
 23 +    tomo1222/gemma-2-27b-bf16-4bit : [google/gemma-2-27b](https://huggingface.co/google/gemma-2-27b) quantized to 4-bit with BitsAndBytes and saved as-is, so that it can be used directly with [Unsloth](https://github.com/unslothai/unsloth).
 24
 25      This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 26
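The change above completes the description of the base checkpoint: google/gemma-2-27b quantized to 4-bit with BitsAndBytes and saved unchanged so that Unsloth can load it directly. A minimal sketch of how such a checkpoint can be produced is shown below; the author's actual export script is not part of this diff, and the specific bnb_4bit_* settings and output path are assumptions.

```python
# Hypothetical sketch: quantize google/gemma-2-27b to 4-bit with BitsAndBytes
# and save the result as-is, so it can later be loaded directly (e.g. by Unsloth).
# The quantization settings below (nf4, bf16 compute) are assumptions, not taken
# from this diff.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # assumed quantization type
    bnb_4bit_compute_dtype=torch.bfloat16,  # bf16 compute, matching the repo name
)

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-27b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b")

# Save the quantized weights unchanged; the resulting folder plays the role of
# tomo1222/gemma-2-27b-bf16-4bit in this README.
model.save_pretrained("gemma-2-27b-bf16-4bit")
tokenizer.save_pretrained("gemma-2-27b-bf16-4bit")
```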
@@ -36,7 +36,7 @@ pip install -U ragatouille
 36      pip install fugashi unidic-lite
 37      ```
 38
 39 -    ### inference
 39 +    ### inference code using Google Colaboratory (L4)
 40      ```python
 41      from datasets import concatenate_datasets, load_dataset
 42      from unsloth import FastLanguageModel