---
title: README
emoji: π
colorFrom: pink
colorTo: indigo
sdk: static
pinned: false
---
This organization is a part of the NeurIPS 2021 demonstration "Training Transformers Together".
In this demo, we train a model similar to OpenAI DALL-E: a Transformer "language model" that generates images from text descriptions. Training happens collaboratively: volunteers from all over the Internet contribute using whatever hardware is available to them. We use LAION-400M, the world's largest openly available image-text-pair dataset, with 400 million samples. Our model is based on the dalle-pytorch implementation by Phil Wang, with a few tweaks to make it communication-efficient.
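The core idea behind collaborative training can be sketched in a few lines: each volunteer peer computes gradients on its own local batch, the gradients are averaged across peers, and every peer applies the same averaged update. This is an illustrative toy (the peer setup, model, and data here are invented for the example; the actual demo runs over the Internet via the hivemind library):

```python
# Toy sketch of collaborative training: peers compute local gradients,
# then average them (all-reduce style) before the shared optimizer step.
# Model and data are hypothetical: a 1-parameter model y = w * x.

def local_gradient(weights, batch):
    # Gradient of squared-error loss w.r.t. w on this peer's batch.
    w = weights[0]
    g = 0.0
    for x, y in batch:
        g += 2 * (w * x - y) * x
    return [g / len(batch)]

def average_gradients(per_peer_grads):
    # Average each gradient coordinate across all peers.
    n = len(per_peer_grads)
    return [sum(gs) / n for gs in zip(*per_peer_grads)]

def collaborative_step(weights, peer_batches, lr):
    grads = [local_gradient(weights, b) for b in peer_batches]
    avg = average_gradients(grads)
    return [w - lr * g for w, g in zip(weights, avg)]

# Three peers, each holding its own shard of data drawn from y = 2x.
peers = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
w = [0.0]
for _ in range(200):
    w = collaborative_step(w, peers, lr=0.05)
print(round(w[0], 2))  # converges toward 2.0
```

In the real demo, the averaging step is what has to be communication-efficient, since peers exchange updates over ordinary Internet connections rather than a datacenter network.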
See our website for details on how to join and how it all works.
This organization gathers people participating in the collaborative training and provides links to the necessary resources:
- Starter kits for Google Colab and Kaggle (an easy way to join the training)
- Dashboard (the current training state: loss, number of peers, etc.)
- Colab notebook for running inference
- Model weights (the latest checkpoint)
- Weights & Biases plots for aux peers (aggregating the metrics) and actual trainers (contributing with their GPUs)
- Code
- Dataset
Feel free to reach out to us on Discord if you have any questions!