---
license: other
datasets:
- Rardilit/Panther-dataset_v1
language:
- en
metrics:
- accuracy
- bleu
- code_eval
- chrf
- cer
library_name: transformers
tags:
- LLM
- Panther
- Transformers
- llama
- PyTorch
- Tensorboard
- Text Generation
---
# Model Card for Panther

Rardilit Large Open-access Language Model

![Panther Logo](./logo.jpg)

Version 1.0 / 29.May.2023
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Recommendations](#recommendations)
5. [Training Details](#training-details)
## Model Details
### Model Description
*This section provides information for anyone who wants to know about the model.*
- **Developed by:** Rardilit ([website](https://www.rardilit.web.app))
  - All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
- **Model Type:** Transformer-based Language Model
- **Version:** 1.0.0
- **Languages:** Multiple
- **License:** Panther License v1.0 ([link](https://www.rardilit.web.app/panther-license.html))
- **Release Date Estimate:** 16.May.2023
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
### Intended Use
This model was created to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. The use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
#### **Out-of-scope Uses**
Using the model in high-stakes settings is out of scope for this model. The model is not designed for critical decisions, nor for uses with material consequences on an individual's livelihood or wellbeing. The model can output content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating human rights, or other kinds of malicious activity is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- Deception
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
#### Others Affected (Stakeholders)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
## Bias, Risks, and Limitations
*This section identifies foreseeable harms and misunderstandings.*
The model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain personal information
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
### Recommendations
*This section provides information on warnings and potential mitigations.*
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary.
- Models fine-tuned from the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
## Training Details
This repo contains a low-rank (LoRA) adapter for LLaMA-7b with 4,194,304 trainable parameters,
fit on the [Rardilit/Panther-dataset_v1](https://huggingface.co/datasets/Rardilit/Panther-dataset_v1) dataset of 20k prompts and responses.
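A minimal sketch of loading the adapter with `peft`, assuming a LLaMA-7b base model; both repository ids below are placeholders, not confirmed Hub paths, so replace them with the actual ones:

```python
def load_panther(base_model_id="path/to/llama-7b", adapter_id="Rardilit/Panther_v1"):
    """Load the LLaMA-7b base model and apply the Panther LoRA adapter.

    Both default ids are hypothetical placeholders for illustration.
    """
    import torch
    from transformers import LlamaForCausalLM, LlamaTokenizer
    from peft import PeftModel

    tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
    model = LlamaForCausalLM.from_pretrained(
        base_model_id,
        torch_dtype=torch.float16,  # matches the fp16 training setting below
        device_map="auto",
    )
    # Wrap the base model with the low-rank adapter weights.
    model = PeftModel.from_pretrained(model, adapter_id)
    model.eval()
    return model, tokenizer
```

Keeping the adapter separate from the base weights is the point of LoRA: only the ~4M adapter parameters are distributed, and `peft` merges them onto the frozen base model at load time.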
This version of the weights was trained with the following hyperparameters:
- Epochs: 1 (load from best epoch)
- LORA_R = 8
- LORA_ALPHA = 16
- LORA_DROPOUT= 0.05
- LORA_TARGET_MODULES = ["q_proj", "v_proj"]
- BATCH_SIZE = 300
- MICRO_BATCH_SIZE = 4
- GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE
- LEARNING_RATE = 3e-4
- TRAIN_STEPS = 10
- warmup_steps = 10
- logging_steps = 1
- fp16 = true
- optim = "adamw_torch"
- eval_steps=4
- save_steps=8
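The hyperparameters above can be collected into plain configuration dicts (keys chosen to mirror `peft.LoraConfig` and `transformers.TrainingArguments` kwargs); this is a sketch, since the actual training script is not included in this card. The arithmetic also checks the stated parameter count: with r=8 on the q/v projections of LLaMA-7b (hidden dim 4096, 32 layers), LoRA adds 32 × 2 modules × 2 matrices × 4096 × 8 = 4,194,304 parameters, matching the figure above.

```python
# Sketch of the training configuration listed above.
BATCH_SIZE = 300
MICRO_BATCH_SIZE = 4
GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE  # 75

lora_config = {
    "r": 8,
    "lora_alpha": 16,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "v_proj"],  # attention query/value projections
    "task_type": "CAUSAL_LM",
}

training_args = {
    "num_train_epochs": 1,
    "per_device_train_batch_size": MICRO_BATCH_SIZE,
    "gradient_accumulation_steps": GRADIENT_ACCUMULATION_STEPS,
    "learning_rate": 3e-4,
    "max_steps": 10,
    "warmup_steps": 10,
    "logging_steps": 1,
    "fp16": True,
    "optim": "adamw_torch",
    "eval_steps": 4,
    "save_steps": 8,
}

# LoRA parameter count for LLaMA-7b with the target modules above:
# each adapted module gets two matrices, A (4096 x r) and B (r x 4096).
HIDDEN_DIM, N_LAYERS, R = 4096, 32, 8
n_lora_params = N_LAYERS * 2 * 2 * HIDDEN_DIM * R  # 4,194,304
```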
#### Training Time
Training this model on 1 x T4 (16 GB VRAM) took approx. 45 min.