malmarjeh committed
Commit ae0d743
Parent: 8b46062

Update README.md

Files changed (1)
README.md +2 -2
README.md CHANGED
@@ -60,7 +60,7 @@ This model utilizes the `MistralForCausalLM` architecture with a `LlamaTokenizer`

  ## Training Data

- The model was fine-tuned on the [Bitext Insurance Dataset](https://huggingface.co/datasets/bitext/Bitext-insurance-llm-chatbot-training-dataset) comprising various insurance-related intents, including: buy_insurance_policy, schedule_appointment, check_payments, calculate_insurance_quote, negotiate_settlement, information_home_insurance, and more. In total there are 39 intents, each represented by approximately 1,000 examples.
+ The model was fine-tuned on the [Bitext Insurance Dataset](https://huggingface.co/datasets/bitext/Bitext-insurance-llm-chatbot-training-dataset) comprising various insurance-related intents, including: buy_insurance_policy, schedule_appointment, check_payments, calculate_insurance_quote, negotiate_settlement, and more. In total there are 39 intents, each represented by approximately 1,000 examples.

  This comprehensive training helps the model address a broad spectrum of insurance-related questions effectively. The dataset follows the same structured approach as our dataset published on Hugging Face as [bitext/Bitext-customer-support-llm-chatbot-training-dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset), but with a focus on insurance.

@@ -70,7 +70,7 @@ This comprehensive training helps the model address a broad spectrum of insurance

  - **Optimizer**: AdamW
  - **Learning Rate**: 0.0002 with a cosine learning rate scheduler
- - **Epochs**: 4
+ - **Epochs**: 1
  - **Batch Size**: 4
  - **Gradient Accumulation Steps**: 4
  - **Maximum Sequence Length**: 8192 tokens
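For context, the intent coverage described in the first hunk (39 intents, roughly 1,000 examples each) can be checked directly against the dataset. The sketch below is illustrative only: it assumes the dataset exposes an `intent` column, as the companion bitext/Bitext-customer-support-llm-chatbot-training-dataset does.

```python
# Minimal sketch: verify the intent coverage described in the README.
# Assumes an "intent" column, as in the companion
# bitext/Bitext-customer-support-llm-chatbot-training-dataset.
from collections import Counter

from datasets import load_dataset

dataset = load_dataset(
    "bitext/Bitext-insurance-llm-chatbot-training-dataset", split="train"
)

counts = Counter(dataset["intent"])
print(f"{len(counts)} intents")       # expected: 39
for intent, n in sorted(counts.items()):
    print(f"{intent}: {n} examples")  # expected: ~1000 each
```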
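The second hunk lowers the epoch count from 4 to 1. The README lists hyperparameters but not the training code, so the following is a minimal sketch of a matching configuration, assuming TRL's `SFTTrainer`; the base checkpoint, the output path, and the `max_seq_length` parameter name are assumptions, not taken from the commit.

```python
# Sketch of a fine-tuning configuration matching the README's hyperparameters.
# The training stack (TRL SFTTrainer) and the base checkpoint are assumptions;
# only the hyperparameter values come from the README.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset(
    "bitext/Bitext-insurance-llm-chatbot-training-dataset", split="train"
)

config = SFTConfig(
    output_dir="mistral-insurance-chatbot",  # hypothetical output path
    optim="adamw_torch",                     # Optimizer: AdamW
    learning_rate=2e-4,                      # Learning Rate: 0.0002
    lr_scheduler_type="cosine",              # cosine learning rate scheduler
    num_train_epochs=1,                      # Epochs: 1 (was 4 before this commit)
    per_device_train_batch_size=4,           # Batch Size: 4
    gradient_accumulation_steps=4,           # Gradient Accumulation Steps: 4
    max_seq_length=8192,                     # Maximum Sequence Length: 8192 tokens
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # hypothetical base model
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```

With a per-device batch size of 4 and 4 gradient-accumulation steps, each optimizer step sees an effective batch of 16 sequences per device.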