---
license: bigscience-bloom-rail-1.0
pipeline_tag: text-generation
library_name: transformers
tags:
- dolly
- bloomz
- Spanish
- French
- German
datasets:
- argilla/databricks-dolly-15k-multilingual
inference: false
widget:
- text: >-
    Below is an instruction that describes a task, paired with an input that
    provides further context.
    Write a response that appropriately completes the request.
    ### Instruction:
    Tell me about alpacas
language:
- es
- fr
- de
---
# DOLLcerberOOM: 3 x Dolly 🐑 + BLOOMz 💮
## Adapter Description
This adapter was created with the [PEFT](https://github.com/huggingface/peft) library by fine-tuning the base model **BigScience BLOOMz 7B1** with the **LoRA** method on the **Dolly dataset, translated to Spanish, French, and German by Argilla**.
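For reference, here is a minimal sketch of how a LoRA adapter like this one is typically set up with PEFT. The training procedure for this adapter is not yet published (see below), so the hyperparameters shown are illustrative assumptions, not the values actually used.

```python
# Illustrative sketch of the LoRA + PEFT setup; the hyperparameters below are
# assumptions for demonstration, not the values used to train this adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model_id = "bigscience/bloomz-7b1-mt"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # assumed LoRA rank
    lora_alpha=16,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["query_key_value"],   # BLOOM's fused attention projection
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable
```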
## Model Description
An instruction-tuned version of [BLOOMz 7B1 MT](https://huggingface.co/bigscience/bloomz-7b1-mt), itself derived from BLOOM (the BigScience Large Open-science Open-access Multilingual Language Model).
## Training data
This collection of datasets consists of machine-translated (and soon curated) versions of the `databricks-dolly-15k` [dataset](https://github.com/databrickslabs/dolly/tree/master/data) originally created by Databricks, Inc. in 2023.
The goal is to give practitioners a starting point for training open-source instruction-following models beyond English. However, since the translation quality is not perfect, we highly recommend dedicating time to curating and fixing translation issues. Below we sketch how to load the datasets into [Argilla for data curation and fixing](https://github.com/argilla-io/argilla). Additionally, we'll keep improving the datasets made available here with the help of different communities.
**We highly recommend dataset curation beyond proof-of-concept experiments.**
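The sketch below loads one of the translated splits with 🤗 `datasets` and logs it into Argilla for review. The config name `es`, the field names (`instruction`, `context`, `response`), and the `rg.init` credentials are assumptions; check the dataset card and your Argilla deployment for the exact values.

```python
# Minimal sketch, assuming a Spanish config named "es" and the original
# databricks-dolly-15k field names; check the dataset card for the exact layout.
from datasets import load_dataset
import argilla as rg

dolly_es = load_dataset("argilla/databricks-dolly-15k-multilingual", "es", split="train")

# Connect to a running Argilla instance (placeholder credentials).
rg.init(api_url="http://localhost:6900", api_key="argilla.apikey")

# Log each example as a Text2Text record so translations can be reviewed and fixed.
records = [
    rg.Text2TextRecord(
        text=f"{row['instruction']}\n\n{row['context']}".strip(),
        prediction=[row["response"]],
    )
    for row in dolly_es
]
rg.log(records=records, name="databricks-dolly-15k-es")
```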
### Supported Tasks and Leaderboards
TBA
### Training procedure
TBA
## How to use
TBA
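In the meantime, the standard PEFT pattern for loading a LoRA adapter on top of its base model should apply. In the sketch below, `<this-adapter-repo>` is a placeholder for this adapter's Hub repo id, the generation settings are assumptions, and the prompt follows the Alpaca-style template from the widget example above.

```python
# Minimal sketch, assuming the standard PEFT adapter-loading pattern.
# "<this-adapter-repo>" is a placeholder for this adapter's Hub repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "bigscience/bloomz-7b1-mt"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, "<this-adapter-repo>")

# Alpaca-style prompt, following the widget example above.
prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n### Instruction:\nTell me about alpacas\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```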
## Citation