---
license: mit
language:
- en
datasets:
- TRAC-MTRY/traclm-v3-data
- Open-Orca/SlimOrca-Dedup
---
# Model Card for traclm-v3-7b-instruct
An Army-domain finetune of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), trained on a mixture of domain-specific and general-purpose instruction-tuning datasets.
## Model Details
### Model Description
This model is a research project exploring whether a pretrained LLM can acquire tangible domain-specific knowledge of the Army through finetuning.
- **Developed by:** The Research and Analysis Center, Army Futures Command, U.S. Army
- **License:** MIT
- **Model Type:** MistralForCausalLM
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
### Available Quantizations (for running on low-resource hardware):
- [AWQ](https://huggingface.co/TRAC-MTRY/traclm-v3-7b-instruct-AWQ)
- [GPTQ](https://huggingface.co/TRAC-MTRY/traclm-v3-7b-instruct-GPTQ)
- [GGUF](https://huggingface.co/TRAC-MTRY/traclm-v3-7b-instruct-GGUF)
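If you plan to run the GGUF quantization on CPU or other low-resource hardware, a minimal sketch with `llama-cpp-python` is shown below. The GGUF filename and the generation settings are illustrative assumptions, not part of this card; substitute the file you actually download from the GGUF repo.

```python
# Minimal sketch: run the GGUF quantization locally with llama-cpp-python.
# The filename below is hypothetical -- replace it with the file downloaded
# from the TRAC-MTRY/traclm-v3-7b-instruct-GGUF repo.
from llama_cpp import Llama

llm = Llama(
    model_path="traclm-v3-7b-instruct.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,
    chat_format="chatml",  # matches the prompt format described below
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is a warning order?"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```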
### Downstream Use
This model is instruction-tuned and is therefore better at following user instructions than its base model. However, it can still hallucinate severely, so end users should verify all outputs.
### Out-of-Scope Use
The creation of this model constitutes academic research in partnership with the Naval Postgraduate School. The purpose of this research is to inform future DoD experimentation regarding the development and application of domain-specific large language models. Experiments involving direct application of this model to downstream military tasks are encouraged, but extreme caution should be exercised before deploying it in production.
## Prompt Format
This model was finetuned with the ChatML prompt format. It is *highly* recommended that you use the same format for any interactions with the model; failing to do so will significantly degrade performance.
ChatML Format:
```
<|im_start|>system
Provide some context and/or instructions to the model.
<|im_end|>
<|im_start|>user
The user’s message goes here
<|im_end|>
<|im_start|>assistant
```
You can apply the ChatML format to your inputs automatically using the `chat_template` included in the tokenizer. Read [here](https://huggingface.co/docs/transformers/main/en/chat_templating) for additional information.
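As a minimal sketch, the snippet below applies the chat template and generates a response with Hugging Face Transformers. The repo id `TRAC-MTRY/traclm-v3-7b-instruct`, the example messages, and the generation settings are assumptions for illustration, not part of the original card.

```python
# Minimal sketch: apply the ChatML chat_template and generate a response.
# Assumes the model is hosted at TRAC-MTRY/traclm-v3-7b-instruct and that a
# GPU with enough memory for the bf16 weights is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TRAC-MTRY/traclm-v3-7b-instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant for Army staff officers."},
    {"role": "user", "content": "Summarize the purpose of a warning order."},
]

# The tokenizer's chat_template wraps the messages in the ChatML format shown above.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```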
## Training Details
### Training Data
This model was trained on a shuffled mixture of the following datasets:
- General Purpose Instruction Tuning: [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- Domain Specific Instruction Tuning: [TRAC-MTRY/traclm-v3-data](https://huggingface.co/datasets/TRAC-MTRY/traclm-v3-data) **(TBP)**
### Training Procedure
The model was trained using Open Access AI Collective's [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) framework, with Microsoft's [DeepSpeed](https://github.com/microsoft/DeepSpeed) providing model/data parallelism.
### Training Hardware
Training was conducted on a single compute node with 4x NVIDIA A100 GPUs.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 28
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 19
- num_epochs: 3
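For reference, the total train batch size of 16 follows from the per-device batch size and the device count. A minimal sketch of the arithmetic is shown below; a gradient accumulation factor of 1 is an assumption, as it is not stated in this card.

```python
# Minimal sketch: how the total train batch size of 16 is obtained.
train_batch_size = 4              # per-device batch size (from the list above)
num_devices = 4                   # GPUs on the single training node
gradient_accumulation_steps = 1   # assumed; not stated in this card

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)     # 16
```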
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5726 | 0.0 | 1 | 1.6102 |
| 1.2333 | 0.2 | 510 | 1.2477 |
| 1.1485 | 0.4 | 1020 | 1.2010 |
| 1.106 | 0.6 | 1530 | 1.1687 |
| 1.1772 | 0.8 | 2040 | 1.1419 |
| 1.1567 | 1.0 | 2550 | 1.1190 |
| 1.0359 | 1.19 | 3060 | 1.1130 |
| 0.945 | 1.39 | 3570 | 1.0977 |
| 0.9365 | 1.59 | 4080 | 1.0831 |
| 0.9334 | 1.79 | 4590 | 1.0721 |
| 0.8913 | 1.99 | 5100 | 1.0627 |
| 0.804 | 2.18 | 5610 | 1.0922 |
| 0.7892 | 2.38 | 6120 | 1.0888 |
| 0.7757 | 2.58 | 6630 | 1.0873 |
| 0.7797 | 2.78 | 7140 | 1.0864 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Model Card Contact
MAJ Daniel C. Ruiz (daniel.ruiz@nps.edu)