
plant-seedlings-model-ConvNet-all-train

This model is a fine-tuned version of facebook/convnext-tiny-224 on a plant-seedlings image-classification dataset loaded with the Hugging Face imagefolder builder. It achieves the following results on the evaluation set:

  • Loss: 0.2653
  • Accuracy: 0.9392

Model description

More information needed

Intended uses & limitations

More information needed
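
Although the card does not yet document intended uses, the checkpoint behaves as a standard image-classification model. Below is a minimal inference sketch; the repository namespace is a placeholder and the image path is illustrative, not taken from this card.

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

# Placeholder hub id: replace <user> with the actual namespace,
# or point to a local directory containing the fine-tuned checkpoint.
model_id = "<user>/plant-seedlings-model-ConvNet-all-train"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("seedling.jpg")  # any RGB photo of a plant seedling
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the top logit back to its class name.
predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)
```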

Training and evaluation data

More information needed
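
The exact data source and splits are not documented. As a hedged sketch only, an imagefolder-style dataset of this kind is typically loaded as below; the directory layout and the 80/20 split are assumptions, not details from this card.

```python
from datasets import load_dataset

# Assumed layout: one sub-directory per seedling class under data_dir.
dataset = load_dataset("imagefolder", data_dir="path/to/plant_seedlings")

# The actual train/validation split is undocumented; 80/20 with seed 42 is an assumption.
splits = dataset["train"].train_test_split(test_size=0.2, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```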

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 0.0002
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
  • mixed_precision_training: Native AMP
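
The training script itself is not included with the card. As a sketch, the listed settings map onto Hugging Face TrainingArguments roughly as follows; the output directory name is illustrative, and arguments not listed above are left at their defaults, which already match the stated Adam betas and epsilon.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="plant-seedlings-model-ConvNet-all-train",  # illustrative name
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # corresponds to "Native AMP" mixed-precision training
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
```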

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2307 | 0.25 | 100 | 0.4912 | 0.8729 |
| 0.0652 | 0.49 | 200 | 0.3280 | 0.9085 |
| 0.1854 | 0.74 | 300 | 0.4850 | 0.8711 |
| 0.1831 | 0.98 | 400 | 0.3827 | 0.8938 |
| 0.1636 | 1.23 | 500 | 0.4071 | 0.9012 |
| 0.0868 | 1.47 | 600 | 0.3980 | 0.8999 |
| 0.2298 | 1.72 | 700 | 0.4855 | 0.8846 |
| 0.2291 | 1.97 | 800 | 0.4019 | 0.8883 |
| 0.2698 | 2.21 | 900 | 0.3855 | 0.8944 |
| 0.0923 | 2.46 | 1000 | 0.3690 | 0.8938 |
| 0.1396 | 2.7 | 1100 | 0.4715 | 0.8760 |
| 0.174 | 2.95 | 1200 | 0.3710 | 0.9006 |
| 0.1009 | 3.19 | 1300 | 0.3481 | 0.9030 |
| 0.1162 | 3.44 | 1400 | 0.3502 | 0.9153 |
| 0.1737 | 3.69 | 1500 | 0.4034 | 0.8999 |
| 0.2478 | 3.93 | 1600 | 0.4053 | 0.8913 |
| 0.1471 | 4.18 | 1700 | 0.3555 | 0.9036 |
| 0.1873 | 4.42 | 1800 | 0.3769 | 0.9122 |
| 0.0615 | 4.67 | 1900 | 0.4147 | 0.8987 |
| 0.1718 | 4.91 | 2000 | 0.2779 | 0.9214 |
| 0.1012 | 5.16 | 2100 | 0.3239 | 0.9159 |
| 0.0967 | 5.41 | 2200 | 0.3290 | 0.9079 |
| 0.0873 | 5.65 | 2300 | 0.4057 | 0.9055 |
| 0.0567 | 5.9 | 2400 | 0.3821 | 0.9018 |
| 0.1356 | 6.14 | 2500 | 0.4183 | 0.8944 |
| 0.168 | 6.39 | 2600 | 0.3755 | 0.9067 |
| 0.1592 | 6.63 | 2700 | 0.3413 | 0.9079 |
| 0.1239 | 6.88 | 2800 | 0.3299 | 0.9091 |
| 0.0382 | 7.13 | 2900 | 0.3391 | 0.9165 |
| 0.1167 | 7.37 | 3000 | 0.4274 | 0.8987 |
| 0.109 | 7.62 | 3100 | 0.3952 | 0.9018 |
| 0.0591 | 7.86 | 3200 | 0.4043 | 0.9122 |
| 0.1407 | 8.11 | 3300 | 0.3325 | 0.9134 |
| 0.054 | 8.35 | 3400 | 0.3333 | 0.9177 |
| 0.0633 | 8.6 | 3500 | 0.3275 | 0.9208 |
| 0.1038 | 8.85 | 3600 | 0.3982 | 0.9042 |
| 0.0435 | 9.09 | 3700 | 0.3656 | 0.9190 |
| 0.1549 | 9.34 | 3800 | 0.3367 | 0.9190 |
| 0.2299 | 9.58 | 3900 | 0.3872 | 0.9134 |
| 0.0375 | 9.83 | 4000 | 0.3206 | 0.9245 |
| 0.0204 | 10.07 | 4100 | 0.3133 | 0.9263 |
| 0.1208 | 10.32 | 4200 | 0.3373 | 0.9196 |
| 0.0617 | 10.57 | 4300 | 0.3045 | 0.9220 |
| 0.1426 | 10.81 | 4400 | 0.2972 | 0.9294 |
| 0.0351 | 11.06 | 4500 | 0.3409 | 0.9147 |
| 0.0311 | 11.3 | 4600 | 0.3003 | 0.9233 |
| 0.1255 | 11.55 | 4700 | 0.3447 | 0.9282 |
| 0.0569 | 11.79 | 4800 | 0.2703 | 0.9331 |
| 0.0918 | 12.04 | 4900 | 0.3170 | 0.9245 |
| 0.0656 | 12.29 | 5000 | 0.3223 | 0.9190 |
| 0.0971 | 12.53 | 5100 | 0.3209 | 0.9196 |
| 0.0742 | 12.78 | 5200 | 0.3030 | 0.9282 |
| 0.0662 | 13.02 | 5300 | 0.2780 | 0.9319 |
| 0.0453 | 13.27 | 5400 | 0.3360 | 0.9227 |
| 0.0869 | 13.51 | 5500 | 0.2417 | 0.9343 |
| 0.1786 | 13.76 | 5600 | 0.3078 | 0.9263 |
| 0.1563 | 14.0 | 5700 | 0.3046 | 0.9312 |
| 0.0584 | 14.25 | 5800 | 0.3011 | 0.9288 |
| 0.0783 | 14.5 | 5900 | 0.2705 | 0.9288 |
| 0.0486 | 14.74 | 6000 | 0.2583 | 0.9288 |
| 0.094 | 14.99 | 6100 | 0.2854 | 0.9282 |
| 0.0852 | 15.23 | 6200 | 0.2693 | 0.9325 |
| 0.0665 | 15.48 | 6300 | 0.2754 | 0.9282 |
| 0.0948 | 15.72 | 6400 | 0.2598 | 0.9349 |
| 0.0368 | 15.97 | 6500 | 0.2875 | 0.9355 |
| 0.0031 | 16.22 | 6600 | 0.2679 | 0.9325 |
| 0.0796 | 16.46 | 6700 | 0.2642 | 0.9300 |
| 0.0903 | 16.71 | 6800 | 0.2977 | 0.9269 |
| 0.0952 | 16.95 | 6900 | 0.2615 | 0.9337 |
| 0.1344 | 17.2 | 7000 | 0.2948 | 0.9251 |
| 0.0854 | 17.44 | 7100 | 0.2748 | 0.9368 |
| 0.0891 | 17.69 | 7200 | 0.2386 | 0.9325 |
| 0.1202 | 17.94 | 7300 | 0.2509 | 0.9355 |
| 0.0832 | 18.18 | 7400 | 0.2406 | 0.9398 |
| 0.0949 | 18.43 | 7500 | 0.2356 | 0.9386 |
| 0.0404 | 18.67 | 7600 | 0.2415 | 0.9386 |
| 0.1008 | 18.92 | 7700 | 0.2582 | 0.9355 |
| 0.092 | 19.16 | 7800 | 0.2724 | 0.9325 |
| 0.0993 | 19.41 | 7900 | 0.2655 | 0.9325 |
| 0.0593 | 19.66 | 8000 | 0.2423 | 0.9386 |
| 0.1011 | 19.9 | 8100 | 0.2653 | 0.9392 |

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.0+cu118
  • Datasets 2.11.0
  • Tokenizers 0.13.3
