Model Details
This model is a fine-tuned version of Meta's Llama 2 7B. It was fine-tuned on 2.5K real-world text message samples that were hand-labeled with one of eight classifications.
Fine-tuning followed the Google Colab notebook linked from this blog post.
Model Developers Josiah Bryan
Variations Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
Input Models input text only.
Output Models generate text only.
Model Architecture Llama 2 is an auto-regressive language model that uses an optimized transformer architecture.
Model Dates This model was trained on November 9th, 2023.
Status This is a static model trained on an offline dataset. Future versions of the tuned model will be released as we improve it and test it in real-world usage.
Where to send questions or comments about the model Email josiahbryan@gmail.com with feedback.
Intended Use
Intended Use Cases This model is intended to classify the intent of incoming text messages in valet/parking management applications.
Out-of-scope Uses Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 2 Community License. Use in languages other than English**.
**Note: Developers may fine-tune this model for languages beyond English provided they comply with the Llama 2 Community License and the Acceptable Use Policy.
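As a minimal sketch of how an application might prepare input for this kind of single-label intent classifier: the prompt template and all eight label names below are hypothetical placeholders (the real labels are not published in this card), so treat this as an illustration of the pattern, not the model's actual interface.

```python
# Hypothetical intent labels for a valet/parking assistant.
# The eight real classes used in fine-tuning are not listed in this card.
INTENT_LABELS = [
    "request_car",      # hypothetical
    "eta_question",     # hypothetical
    "payment_question", # hypothetical
    "location_query",   # hypothetical
    "complaint",        # hypothetical
    "greeting",         # hypothetical
    "confirmation",     # hypothetical
    "other",            # hypothetical
]

def build_classification_prompt(message: str, labels=INTENT_LABELS) -> str:
    """Format an instruction-style prompt asking the model to pick exactly
    one label for an incoming text message."""
    label_list = "\n".join(f"- {label}" for label in labels)
    return (
        "Classify the intent of the following text message into exactly one "
        f"of these categories:\n{label_list}\n\n"
        f"Message: {message}\n"
        "Intent:"
    )

prompt = build_classification_prompt("Can you bring my car around in 10 min?")
print(prompt)
```

The generated prompt would then be passed to the fine-tuned model for completion, with the model's output mapped back to one of the known labels.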
Hardware and Software
Training Factors Fine-tuning, annotation, and evaluation were performed on third-party cloud compute.
Training Data
Overview The fine-tuning data includes proprietary text message datasets, with 2.5K human-annotated examples. Neither the pretraining nor the fine-tuning datasets include private user data.
Data Freshness The fine-tuning data has a cutoff of November 9th, 2023.
Evaluation Results
Evaluations in progress.
Ethical Considerations and Limitations
(These considerations/limitations taken from the original Llama 2 model card.)
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide, available at https://ai.meta.com/llama/responsible-use-guide/.