
conditional-detr-resnet-50_fine_tuned_beyond_words

This model is a fine-tuned version of microsoft/conditional-detr-resnet-50 on the loc_beyond_words dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5892
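
The card itself does not ship usage code; as a minimal inference sketch (assuming the checkpoint is published on the Hub as davanstrien/conditional-detr-resnet-50_fine_tuned_beyond_words, that a local image file named page.jpg exists, and that a 0.5 confidence threshold is acceptable), object detection with the standard Transformers API looks roughly like this:

```python
# Hedged sketch: checkpoint id, image path, and threshold are illustrative choices.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

checkpoint = "davanstrien/conditional-detr-resnet-50_fine_tuned_beyond_words"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(checkpoint)

image = Image.open("page.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into (score, label, box) triples in pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.2f} at {box.tolist()}")
```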

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an approximate Trainer configuration is sketched after the list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 200
  • mixed_precision_training: Native AMP
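
These settings map onto the Hugging Face Trainer API. The sketch below is an approximation, not the original training script: the output directory is a placeholder, and the steps-based evaluation cadence is inferred from the 100-step intervals in the results table rather than stated in the card.

```python
# Approximate Trainer configuration matching the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="conditional-detr-resnet-50_fine_tuned_beyond_words",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=200,
    lr_scheduler_type="linear",
    fp16=True,                     # "Native AMP" mixed precision
    evaluation_strategy="steps",   # assumption: eval every 100 steps, as in the table below
    eval_steps=100,
    logging_steps=100,
)
# The Adam betas/epsilon listed above are the Transformers defaults: (0.9, 0.999), eps=1e-8.
```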

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.674 | 0.28 | 100 | 1.7571 |
| 1.4721 | 0.56 | 200 | 1.2737 |
| 1.2557 | 0.84 | 300 | 1.1037 |
| 1.0781 | 1.12 | 400 | 1.0184 |
| 1.0353 | 1.4 | 500 | 0.9988 |
| 1.0324 | 1.69 | 600 | 0.9951 |
| 0.9131 | 1.97 | 700 | 0.9224 |
| 0.8724 | 2.25 | 800 | 0.9692 |
| 0.8129 | 2.53 | 900 | 0.8670 |
| 0.9 | 2.81 | 1000 | 0.8326 |
| 0.7993 | 3.09 | 1100 | 0.7875 |
| 0.7907 | 3.37 | 1200 | 0.7517 |
| 0.8424 | 3.65 | 1300 | 0.9088 |
| 0.7808 | 3.93 | 1400 | 0.8506 |
| 0.7469 | 4.21 | 1500 | 0.7928 |
| 0.7582 | 4.49 | 1600 | 0.7228 |
| 0.7546 | 4.78 | 1700 | 0.7588 |
| 0.7842 | 5.06 | 1800 | 0.7726 |
| 0.775 | 5.34 | 1900 | 0.7676 |
| 0.7263 | 5.62 | 2000 | 0.7164 |
| 0.7209 | 5.9 | 2100 | 0.7061 |
| 0.7259 | 6.18 | 2200 | 0.7579 |
| 0.7701 | 6.46 | 2300 | 0.8184 |
| 0.7391 | 6.74 | 2400 | 0.6684 |
| 0.6834 | 7.02 | 2500 | 0.7042 |
| 0.7098 | 7.3 | 2600 | 0.7166 |
| 0.7498 | 7.58 | 2700 | 0.6752 |
| 0.7056 | 7.87 | 2800 | 0.7064 |
| 0.7004 | 8.15 | 2900 | 0.7090 |
| 0.6964 | 8.43 | 3000 | 0.7318 |
| 0.682 | 8.71 | 3100 | 0.7216 |
| 0.7309 | 8.99 | 3200 | 0.6545 |
| 0.6576 | 9.27 | 3300 | 0.6478 |
| 0.7014 | 9.55 | 3400 | 0.6814 |
| 0.673 | 9.83 | 3500 | 0.6783 |
| 0.6455 | 10.11 | 3600 | 0.7248 |
| 0.7041 | 10.39 | 3700 | 0.7729 |
| 0.6664 | 10.67 | 3800 | 0.6746 |
| 0.6161 | 10.96 | 3900 | 0.6414 |
| 0.6975 | 11.24 | 4000 | 0.6637 |
| 0.6751 | 11.52 | 4100 | 0.6570 |
| 0.6092 | 11.8 | 4200 | 0.6691 |
| 0.6593 | 12.08 | 4300 | 0.6276 |
| 0.6449 | 12.36 | 4400 | 0.6388 |
| 0.6136 | 12.64 | 4500 | 0.6711 |
| 0.6521 | 12.92 | 4600 | 0.6768 |
| 0.6162 | 13.2 | 4700 | 0.6427 |
| 0.7083 | 13.48 | 4800 | 0.6492 |
| 0.6407 | 13.76 | 4900 | 0.6213 |
| 0.6371 | 14.04 | 5000 | 0.6674 |
| 0.626 | 14.33 | 5100 | 0.6185 |
| 0.6442 | 14.61 | 5200 | 0.7180 |
| 0.5981 | 14.89 | 5300 | 0.6441 |
| 0.629 | 15.17 | 5400 | 0.6262 |
| 0.625 | 15.45 | 5500 | 0.6397 |
| 0.6123 | 15.73 | 5600 | 0.6440 |
| 0.6084 | 16.01 | 5700 | 0.6493 |
| 0.6021 | 16.29 | 5800 | 0.6263 |
| 0.6502 | 16.57 | 5900 | 0.6254 |
| 0.6339 | 16.85 | 6000 | 0.7043 |
| 0.5925 | 17.13 | 6100 | 0.8014 |
| 0.6453 | 17.42 | 6200 | 0.6385 |
| 0.6143 | 17.7 | 6300 | 0.6033 |
| 0.6057 | 17.98 | 6400 | 0.6881 |
| 0.6386 | 18.26 | 6500 | 0.6366 |
| 0.5839 | 18.54 | 6600 | 0.6563 |
| 0.6013 | 18.82 | 6700 | 0.5982 |
| 0.5999 | 19.1 | 6800 | 0.6064 |
| 0.6023 | 19.38 | 6900 | 0.5795 |
| 0.5593 | 19.66 | 7000 | 0.6538 |
| 0.6375 | 19.94 | 7100 | 0.6991 |
| 0.6073 | 20.22 | 7200 | 0.7117 |
| 0.596 | 20.51 | 7300 | 0.6034 |
| 0.5987 | 20.79 | 7400 | 0.6489 |
| 0.5922 | 21.07 | 7500 | 0.6216 |
| 0.589 | 21.35 | 7600 | 0.6257 |
| 0.6047 | 21.63 | 7700 | 0.6415 |
| 0.5775 | 21.91 | 7800 | 0.6159 |
| 0.588 | 22.19 | 7900 | 0.6095 |
| 0.5844 | 22.47 | 8000 | 0.6373 |
| 0.5964 | 22.75 | 8100 | 0.6022 |
| 0.5987 | 23.03 | 8200 | 0.6050 |
| 0.5605 | 23.31 | 8300 | 0.6083 |
| 0.5835 | 23.6 | 8400 | 0.7823 |
| 0.5816 | 23.88 | 8500 | 0.6417 |
| 0.5757 | 24.16 | 8600 | 0.6324 |
| 0.5997 | 24.44 | 8700 | 0.6046 |
| 0.5674 | 24.72 | 8800 | 0.6558 |
| 0.5703 | 25.0 | 8900 | 0.5819 |
| 0.5766 | 25.28 | 9000 | 0.6116 |
| 0.5548 | 25.56 | 9100 | 0.5877 |
| 0.564 | 25.84 | 9200 | 0.5672 |
| 0.548 | 26.12 | 9300 | 0.6073 |
| 0.5436 | 26.4 | 9400 | 0.5739 |
| 0.6006 | 26.69 | 9500 | 0.6101 |
| 0.5519 | 26.97 | 9600 | 0.5869 |
| 0.5432 | 27.25 | 9700 | 0.5721 |
| 0.5597 | 27.53 | 9800 | 0.5807 |
| 0.5254 | 27.81 | 9900 | 0.5849 |
| 0.5366 | 28.09 | 10000 | 0.5831 |
| 0.5654 | 28.37 | 10100 | 0.5993 |
| 0.57 | 28.65 | 10200 | 0.5892 |

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.0+cu117
  • Datasets 2.10.1
  • Tokenizers 0.13.2
