This adapter was created with the [PEFT](https://github.com/huggingface/peft) library; it allows the base model **Bertin-GPT-J-6B-ES** to be fine-tuned on the **mrm8488/CHISTES_spanish_jokes** dataset for **Spanish joke generation** using the **LoRA** method.

## Model Description

[BERTIN-GPT-J-6B](https://huggingface.co/bertin-project/bertin-gpt-j-6B) is a Spanish fine-tuned version of GPT-J 6B, a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.

## Training data

Dataset from the [Workshop for NLP introduction with Spanish jokes](https://github.com/liopic/chistes-nlp).

[More Information Needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Training procedure

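The exact training configuration is not documented in this card. As an illustration only, a LoRA fine-tune of this kind with PEFT typically follows the sketch below; the hyperparameters (`r`, `lora_alpha`, batch size, epochs) and the dataset's `text` column name are assumptions, not the settings actually used for this adapter.

```py
import torch
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_id = "bertin-project/bertin-gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Wrap the frozen base model with small trainable low-rank (LoRA) matrices.
# These hyperparameters are illustrative defaults, not the values used here.
model = get_peft_model(model, LoraConfig(
    task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05))

jokes = load_dataset("mrm8488/CHISTES_spanish_jokes", split="train")
# The "text" column name is an assumption about the dataset schema.
jokes = jokes.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
                  batched=True, remove_columns=jokes.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-chistes", per_device_train_batch_size=4,
                           num_train_epochs=3, learning_rate=2e-4, fp16=True),
    train_dataset=jokes,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-chistes")  # saves only the adapter weights
```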
## How to use

```py
# TODO
```
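While the official snippet above is still a TODO, the standard PEFT pattern for loading a LoRA adapter on top of its base model should apply. A minimal sketch follows; note that the adapter repository id is a hypothetical placeholder, and only the base model id comes from this card.

```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bertin-project/bertin-gpt-j-6B"
adapter_id = "mrm8488/bertin-gpt-j-6B-ES-lora-chistes"  # hypothetical placeholder id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16,
                                             device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
model.eval()

prompt = "Cuéntame un chiste sobre informáticos:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64,
                            do_sample=True, top_p=0.9, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```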