pipeline_tag: text-generation
---

# Adapter for BERTIN-GPT-J-6B fine-tuned on Spanish jokes for joke generation
## Adapter Description
This adapter was created with the [PEFT](https://github.com/huggingface/peft) library; it allows the base model **BERTIN-GPT-J-6B** to be fine-tuned on the dataset **mrm8488/CHISTES_spanish_jokes** for **Spanish joke generation** using the **LoRA** method.
## Model Description
[BERTIN-GPT-J-6B](https://huggingface.co/bertin-project/bertin-gpt-j-6B) is a Spanish fine-tuned version of GPT-J 6B, a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.