
Model Card for traclm-v3-7b-instruct

An Army-domain finetune of mistralai/Mistral-7B-v0.1, created by instruction-tuning the base model on a merge of domain-specific and general-purpose instruction-tuning datasets.

Model Details

Model Description

This model is a research project aimed at exploring whether a pretrained LLM can acquire tangible, domain-specific knowledge of the Army domain.

  • Developed by: The Research and Analysis Center, Army Futures Command, U.S. Army
  • License: MIT
  • Model Type: MistralForCausalLM
  • Finetuned from model: mistralai/Mistral-7B-v0.1

Available Quantizations (for running on low-resource hardware):
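The links to the quantized builds are not reproduced here. As an illustrative alternative for constrained hardware, the full-precision weights can also be quantized on the fly at load time via the bitsandbytes integration in transformers. The sketch below is not one of the published quantizations and assumes the bitsandbytes and accelerate packages are installed:

```python
# Sketch: loading the full-precision checkpoint in 4-bit via bitsandbytes
# (an illustration only; requires the `bitsandbytes` and `accelerate` packages).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "TRAC-MTRY/traclm-v3-7b-instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit at load time
    bnb_4bit_compute_dtype=torch.bfloat16,  # keep compute in bf16, matching the published checkpoint dtype
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                      # place layers across available devices
)
```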

Downstream Use

This model is instruction-tuned and therefore follows user instructions more reliably than its base model. However, it remains prone to significant hallucination, so end users should verify all outputs.

Out-of-Scope Use

The creation of this model constitutes academic research in partnership with the Naval Postgraduate School. The purpose of this research is to inform future DoD experimentation regarding the development and application of domain-specific large language models. Experiments involving direct application of this model to downstream military tasks are encouraged, but extreme caution should be exercised before any production deployment.

Prompt Format

This model was fine-tuned with the ChatML prompt format. It is highly recommended that you use the same format for all interactions with the model; failing to do so will significantly degrade performance.

ChatML Format:

<|im_start|>system 
Provide some context and/or instructions to the model.
<|im_end|> 
<|im_start|>user 
The user’s message goes here
<|im_end|> 
<|im_start|>assistant 

The ChatML format can be applied automatically to your inputs using the chat_template included in the tokenizer; see the Hugging Face documentation on chat templates for additional information.
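A minimal usage sketch, assuming the Transformers version listed under Framework versions below, the ChatML chat_template shipped with the tokenizer, and enough GPU memory for the bf16 weights:

```python
# Sketch: building a ChatML prompt with the tokenizer's chat template and generating a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TRAC-MTRY/traclm-v3-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "Provide some context and/or instructions to the model."},
    {"role": "user", "content": "The user's message goes here."},
]

# apply_chat_template renders the messages in ChatML and, with
# add_generation_prompt=True, appends the <|im_start|>assistant header
# so the model continues as the assistant.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```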

Training Details

Training Data

This model was trained on a shuffled merge of the following datasets:
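The dataset links are not reproduced here. For illustration only, a shuffled merge of this kind can be assembled with the datasets library; the dataset paths below are placeholders, not the actual training mix:

```python
# Sketch: merging and shuffling instruction-tuning datasets with the `datasets` library.
# Dataset paths are placeholders, not the datasets actually used for training.
from datasets import load_dataset, concatenate_datasets

domain_ds = load_dataset("path/to/army-domain-instructions", split="train")  # placeholder
general_ds = load_dataset("path/to/general-instructions", split="train")     # placeholder

# Concatenation assumes the sources share a common column schema;
# shuffling with a fixed seed keeps the mix reproducible.
merged = concatenate_datasets([domain_ds, general_ds]).shuffle(seed=28)
```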

Training Procedure

The model was trained using Open Access AI Collective's Axolotl framework and Microsoft's DeepSpeed framework for model/data parallelism.

Training Hardware

Training was conducted on a single compute node with 4x NVIDIA A100 GPUs.

Training Hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-06
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 28
  • distributed_type: multi-GPU
  • num_devices: 4
  • total_train_batch_size: 16
  • total_eval_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 19
  • num_epochs: 3
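For readers more familiar with the Hugging Face Trainer than Axolotl, the following is an approximate TrainingArguments equivalent of the configuration above. It is a sketch, not the actual Axolotl/DeepSpeed configuration used for the run:

```python
# Sketch: approximate TrainingArguments mirror of the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="traclm-v3-7b-instruct",  # placeholder output path
    learning_rate=2e-6,
    per_device_train_batch_size=4,       # 4 per device x 4 GPUs = total train batch size 16
    per_device_eval_batch_size=4,
    seed=28,
    lr_scheduler_type="cosine",
    warmup_steps=19,
    num_train_epochs=3,
    bf16=True,                           # checkpoint is published in bf16
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```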

Training Results

| Training Loss | Epoch | Step | Validation Loss |
|---------------|-------|------|-----------------|
| 1.5726        | 0.0   | 1    | 1.6102          |
| 1.2333        | 0.2   | 510  | 1.2477          |
| 1.1485        | 0.4   | 1020 | 1.2010          |
| 1.106         | 0.6   | 1530 | 1.1687          |
| 1.1772        | 0.8   | 2040 | 1.1419          |
| 1.1567        | 1.0   | 2550 | 1.1190          |
| 1.0359        | 1.19  | 3060 | 1.1130          |
| 0.945         | 1.39  | 3570 | 1.0977          |
| 0.9365        | 1.59  | 4080 | 1.0831          |
| 0.9334        | 1.79  | 4590 | 1.0721          |
| 0.8913        | 1.99  | 5100 | 1.0627          |
| 0.804         | 2.18  | 5610 | 1.0922          |
| 0.7892        | 2.38  | 6120 | 1.0888          |
| 0.7757        | 2.58  | 6630 | 1.0873          |
| 0.7797        | 2.78  | 7140 | 1.0864          |

Framework Versions

  • Transformers 4.38.2
  • PyTorch 2.1.2+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.0

Built with Axolotl

Model Card Contact

MAJ Daniel C. Ruiz (daniel.ruiz@nps.edu)
