---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- generator
model-index:
- name: gemma-2b-dolly-qa
results: []
---
# Model Card for Khalsa
<!-- Provide a quick summary of what the model is/does. [Optional] -->
A fine-tuned Gemma model, developed on the Intel Developer Cloud and trained on an Intel Max 1550 GPU.
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
A fine-tuned Gemma model, developed on the Intel Developer Cloud.
- **Developed by:** Manik Sethi, Britney Nguyen, Mario Miranda
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** gemma
- **Parent Model:** gemma-2b
- **Resources for more information:** [Intel Developer Cloud](https://console.cloud.intel.com/training)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is intended for individuals who struggle to understand the information in important documents. More specifically, the target demographic includes immigrants and visa holders with limited English proficiency. When they receive documentation from employers, government agencies, or healthcare providers, the model should be able to answer their questions about it.
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
The user uploads a PDF to the application, which is then parsed by the model. The user can then ask questions about the content of that document.
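The direct-use flow above can be sketched as follows. This is an illustrative outline, not the application's actual code: text extraction from the PDF would be done with a library such as `pypdf`, and the `build_prompt` helper and its wording are assumptions.

```python
# Hypothetical sketch of the direct-use flow: extract text from the uploaded
# PDF (e.g. with a library such as pypdf), then pair the document text with
# the user's question in a single prompt for the model.

def build_prompt(document_text: str, question: str, max_chars: int = 4000) -> str:
    """Truncate the document to fit the context window and frame the question."""
    context = document_text[:max_chars]
    return (
        "Answer the question using only the document below.\n\n"
        f"Document:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# The resulting string would then be fed to the fine-tuned model:
# prompt = build_prompt(extracted_pdf_text, "What is the notice period?")
```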
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
Misuse of the model includes relying on it for legal advice, which it is not intended to provide.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
A current limitation is the small number of languages the model can serve.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
To translate the model's advice into a target language, we suggest first taking the output from the LLM and *then* translating it. Asking the model to do both simultaneously may produce flawed responses.
# Training Details
## Training Data
The model was trained on the [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset. This dataset contains a diverse range of question-answer pairs spanning multiple categories, facilitating comprehensive training. By focusing specifically on the question-answer pairs, the model adapts to provide accurate and relevant responses to various inquiries.
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
The dataset underwent preprocessing steps to extract question-answer pairs relevant to the "Question answering" category. This involved filtering the dataset to ensure that the model is fine-tuned on pertinent data, enhancing its ability to provide accurate responses.
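The filtering step described above might look like the following sketch. In practice the data would come from `datasets.load_dataset("databricks/databricks-dolly-15k")`; here a small in-memory sample stands in, and the exact category labels used to select "Question answering" rows (`open_qa`, `closed_qa`) are assumptions.

```python
# Illustrative sketch of filtering databricks-dolly-15k down to
# question-answering rows. The category names are assumptions about how the
# "Question answering" subset was defined.

QA_CATEGORIES = {"open_qa", "closed_qa"}

def filter_qa_pairs(rows):
    """Keep only rows whose category marks them as question answering."""
    return [
        {"instruction": r["instruction"], "response": r["response"]}
        for r in rows
        if r.get("category") in QA_CATEGORIES
    ]

sample = [
    {"instruction": "When did Virgin Australia start operating?",
     "response": "Virgin Australia commenced services on 31 August 2000.",
     "category": "closed_qa"},
    {"instruction": "Summarize this paragraph.",
     "response": "A short summary.",
     "category": "summarization"},
]
print(len(filter_qa_pairs(sample)))  # → 1, only the QA row survives
```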
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
Training ran for 25 epochs.
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
We fed the following prompts into the model:
<!-- This should link to a Data Card if possible. -->
- "What are the main differences between a vegetarian and a vegan diet?"
- "What are some effective strategies for managing stress and anxiety?"
- "Can you explain the concept of blockchain technology in simple terms?"
- "What are the key factors that influence the price of crude oil in global markets?"
- "When did Virgin Australia start operating?"
## Results
More information needed
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Intel Xeon CPUs with an Intel Max 1550 GPU
- **Hours used:** More information needed
- **Cloud Provider:** Intel Developer Cloud
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
The model was trained on an Intel Max 1550 GPU.
### Software
The model was developed on the Intel Developer Cloud.
# Model Card Authors
Manik Sethi, Britney Nguyen, Mario Miranda
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
More information needed
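In the meantime, a minimal loading sketch follows, assuming the adapter is published under a repo such as `mksethi/gemma-2b-dolly-qa` (the repo ID is an assumption) and that the gated `google/gemma-2b` base weights are accessible with your Hugging Face token. This has not been tested end-to-end against the published weights.

```python
# Minimal sketch: load the gemma-2b base model, attach the PEFT adapter,
# and generate an answer. The adapter repo ID below is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "mksethi/gemma-2b-dolly-qa")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("When did Virgin Australia start operating?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```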
</details>