license: apache-2.0
BloomChat V1.0
BloomChat-v1.0 is based on the BigScience Group's Bloom-176B model. It is instruction-tuned on a subset of 100k datapoints per data source from the OIG dataset provided by LAION, and then aligned using Dolly 2.0 and Oasst1.
Model Details
Model Description
- Developed by: SambaNova Systems and Together Computer
- Model type: Language Model
- Language(s): Multiple; see training data from Bloom-176B
- License: apache-2.0
- Instruction-tuned from model: BigScience Group Bloom-176B
Additional Information
- Blogpost: [More Information Needed]
Uses
Direct Use
[More Information Needed]
Downstream Use
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Bias, Risks, and Limitations
Like all LLMs, BloomChat has certain limitations:
- Hallucination: BloomChat may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: BloomChat may produce repetitive phrases or sentences, leading to responses that are less engaging and less informative.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: BloomChat may inadvertently generate responses containing inappropriate or harmful content.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
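A minimal sketch of loading and prompting BloomChat with Hugging Face transformers. The repository id, the `<human>`/`<bot>` prompt format, and the generation settings below are assumptions rather than details confirmed by this card, and a 176B-parameter model requires multi-GPU (or offloaded) inference in practice.

```python
# Minimal sketch: load BloomChat with Hugging Face transformers.
# NOTE: the repo id and prompt format below are assumptions; check the official
# model card for the exact values. device_map="auto" relies on accelerate to
# shard the 176B weights across the available devices.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sambanovasystems/BLOOMChat-176B-v1"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)

prompt = "<human>: What is the capital of France?\n<bot>:"  # assumed chat format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```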
Training Details
Training Data
- OIG dataset (provided by LAION): a subset of 100k datapoints per data source, used for instruction tuning
- Dolly 2.0 and Oasst1: used for alignment
Training Procedure
We trained BloomChat with SambaStudio, a platform built on SambaNova's in-house Reconfigurable Dataflow Unit (RDU). We started from Bloom-176B, an open-source multilingual 176B-parameter GPT-style model pretrained by the BigScience group.
Hyperparameters
Instruction-tuned Training on OIG
- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
- Optimizer: AdamW
- Grad accumulation: 1
- Epochs: 1
- Global Batch size: 128
- Batch tokens: 128 * 2048 = 262,144 tokens
- LR: 1e-5
- Weight decay: 0.1
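For reference, the OIG-phase settings above could be expressed roughly as Hugging Face TrainingArguments. This is only a sketch on conventional GPU tooling, not the actual SambaStudio/RDU configuration; the per-device batch size, worker count, and output directory are assumptions.

```python
# Rough sketch: the OIG instruction-tuning hyperparameters expressed as Hugging
# Face TrainingArguments. This is NOT the actual SambaStudio/RDU configuration;
# it only mirrors the numbers listed above (global batch 128, 1 epoch, lr 1e-5,
# weight decay 0.1, no gradient accumulation).
from transformers import TrainingArguments

# Assumption: 16 data-parallel workers, so 128 / 16 = 8 samples per device.
oig_args = TrainingArguments(
    output_dir="bloomchat-oig",       # hypothetical output directory
    per_device_train_batch_size=8,
    gradient_accumulation_steps=1,    # grad accumulation: 1
    num_train_epochs=1,               # epochs: 1
    learning_rate=1e-5,               # LR: 1e-5
    weight_decay=0.1,                 # weight decay: 0.1
    optim="adamw_torch",              # AdamW optimizer
)
```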
Instruction-tuned Training on Dolly 2.0 and Oasst1
- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
- Optimizer: AdamW
- Grad accumulation: 1
- Epochs: 3
- Global Batch size: 128
- Batch tokens: 128 * 2048 = 262,144 tokens
- LR: 1e-5
- Weight decay: 0.1
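The two fine-tuning phases differ only in the number of epochs and the data used. A compact summary of the numbers above, as a plain Python dict:

```python
# Summary of both fine-tuning phases, restating the hyperparameters listed above.
hyperparams = {
    "oig_instruction_tuning": {
        "global_batch_size": 128,
        "batch_tokens": 128 * 2048,   # 262,144 tokens per step
        "epochs": 1,
        "optimizer": "AdamW",
        "lr": 1e-5,
        "weight_decay": 0.1,
        "grad_accumulation": 1,
    },
    "dolly2_oasst1_alignment": {
        "global_batch_size": 128,
        "batch_tokens": 128 * 2048,   # 262,144 tokens per step
        "epochs": 3,
        "optimizer": "AdamW",
        "lr": 1e-5,
        "weight_decay": 0.1,
        "grad_accumulation": 1,
    },
}
```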
Evaluation
Community
[Link to discord server]