VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct
Text Generation · Transformers · Safetensors · Dataset: argilla/distilabel-math-preference-dpo · 5 languages · mixtral · mistral · finetune · dpo · Instruct · augmentation · german · Mixture of Experts · conversational · text-generation-inference · Inference Endpoints · License: apache-2.0
Document/publish the training data and training procedure #7
Opened by kno10 on Jun 11

kno10 (Jun 11):
It would be important to document:
- how much data was used for finetuning
- how many samples were used for DPO (a quick way to check the tagged dataset's size is sketched below)
- the mixture of training data (in particular, which languages)
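
As a starting point, here is a minimal sketch for checking the size of the DPO dataset tagged on the model card. Whether argilla/distilabel-math-preference-dpo is the full DPO set actually used is an assumption only the maintainers can confirm:

```python
# Minimal sketch: inspect the size of the DPO dataset tagged on the model card.
# Assumption: argilla/distilabel-math-preference-dpo is (part of) the DPO data
# actually used; the full training mixture still needs to be documented.
from datasets import load_dataset

ds = load_dataset("argilla/distilabel-math-preference-dpo")

# Print the number of preference samples per split.
for split, data in ds.items():
    print(f"{split}: {data.num_rows} samples")
```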