
pima-diabetes

This model (ahsan-mavros/pima-diabetes) is a fine-tuned version of google/flan-t5-base; the training dataset is not specified in this card. It achieves the following results on the evaluation set (a usage sketch follows the metric list):

  • Loss: 0.1017
  • ROUGE-1: 87.4026
  • ROUGE-2: 76.1364
  • ROUGE-L: 87.2727
  • ROUGE-Lsum: 87.2727
  • Gen Len: 6.8442
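
Since the card does not document the expected input format, the following is only a minimal sketch of loading the checkpoint from the Hub and generating a prediction; the prompt string is a placeholder, not an example drawn from the actual training data.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "ahsan-mavros/pima-diabetes"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Placeholder prompt: the input format used during fine-tuning is not
# documented in this card.
inputs = tokenizer("example input", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```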

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
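
As referenced above, here is a minimal reproduction sketch of this setup using the Seq2SeqTrainer API. The toy dataset, column names, and preprocessing are assumptions for illustration; only the hyperparameters come from this card.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy stand-in data: the real training set is not documented in this card.
train_data = Dataset.from_dict({
    "source": ["example input 1", "example input 2"],
    "target": ["example output 1", "example output 2"],
})

def preprocess(batch):
    enc = tokenizer(batch["source"], truncation=True)
    enc["labels"] = tokenizer(text_target=batch["target"], truncation=True)["input_ids"]
    return enc

tokenized = train_data.map(preprocess, batched=True, remove_columns=train_data.column_names)

# Hyperparameters taken from the list above; the card's Adam settings
# (betas=(0.9, 0.999), epsilon=1e-08) match Trainer's defaults.
args = Seq2SeqTrainingArguments(
    output_dir="pima-diabetes",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",
    predict_with_generate=True,  # generate text at eval time so ROUGE can be computed
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,  # placeholder: use a real held-out split
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```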

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| No log | 1.0 | 39 | 0.1038 | 86.7532 | 75.1623 | 86.7532 | 86.7532 | 7.0 |
| No log | 2.0 | 78 | 0.1191 | 86.7532 | 75.1623 | 86.7532 | 86.7532 | 7.0 |
| No log | 3.0 | 117 | 0.1026 | 86.7532 | 75.1623 | 86.7532 | 86.7532 | 7.0 |
| No log | 4.0 | 156 | 0.1027 | 86.7532 | 75.1623 | 86.7532 | 86.7532 | 7.0 |
| No log | 5.0 | 195 | 0.1026 | 86.7532 | 75.1623 | 86.7532 | 86.7532 | 7.0 |
| No log | 6.0 | 234 | 0.1025 | 86.4935 | 74.6753 | 86.4935 | 86.4935 | 6.9870 |
| No log | 7.0 | 273 | 0.1030 | 86.7532 | 75.1623 | 86.7532 | 86.7532 | 7.0 |
| No log | 8.0 | 312 | 0.1031 | 83.6364 | 69.3182 | 83.6364 | 83.6364 | 6.5325 |
| No log | 9.0 | 351 | 0.1024 | 86.4935 | 74.6753 | 86.4935 | 86.4935 | 6.9610 |
| No log | 10.0 | 390 | 0.1017 | 87.4026 | 76.1364 | 87.2727 | 87.2727 | 6.8442 |

Training loss shows "No log" because the run's 390 total steps never reached the Trainer's default logging interval of 500 steps.
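
The ROUGE numbers above are on a 0–100 scale. For reference, scores of this kind are typically computed with the evaluate library; a minimal sketch follows, where the prediction and reference strings are placeholders.

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder strings; a real evaluation compares model generations
# against held-out reference texts.
scores = rouge.compute(
    predictions=["generated text"],
    references=["reference text"],
)

# evaluate returns fractions in [0, 1]; this card reports them scaled by 100.
print({name: round(value * 100, 4) for name, value in scores.items()})
```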

Framework versions

  • Transformers 4.33.1
  • Pytorch 2.0.0
  • Datasets 2.14.5
  • Tokenizers 0.13.3
