---
license: apache-2.0
datasets:
- mrm8488/CHISTES_spanish_jokes
language:
- es
pipeline_tag: text-generation
---

# Adapter for BERTIN-GPT-J-6B-ES fine-tuned on Spanish jokes for joke generation

## Adapter Description

This adapter was created with the [PEFT](https://github.com/huggingface/peft) library and allows the base model **BERTIN-GPT-J-6B-ES** to be fine-tuned on the dataset **mrm8488/CHISTES_spanish_jokes** for **Spanish joke generation** using the **LoRA** method.

## Model Description

[BERTIN-GPT-J-6B](https://huggingface.co/bertin-project/bertin-gpt-j-6B) is a Spanish fine-tuned version of GPT-J 6B, a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.

## Training data

Dataset from the [Workshop for NLP introduction with Spanish jokes](https://github.com/liopic/chistes-nlp).

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Training procedure

## How to use

```py
# TODO
```