
Fine-tuning was run with the following `TrainingArguments`:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='bart-base-wikilarge-newsela-with-domain-adaptation',
    num_train_epochs=20,
    warmup_steps=250,
    per_device_train_batch_size=BATCH_SIZE,  # BATCH_SIZE is set elsewhere in the training script; its value is not stated in this card
    weight_decay=0.01,
    learning_rate=2e-4,
    fp16=True,
    optim="adafactor",
)
```
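For context, here is a minimal sketch of how these arguments could be wired into a `Trainer` to fine-tune `facebook/bart-base` on complex/simple sentence pairs. The toy pairs below are placeholders for the actual WikiLarge + Newsela data (not included in this card), `training_args` refers to the block above (so `BATCH_SIZE` must already be defined; the card does not state its value), and `fp16=True` assumes a CUDA device:

```python
from datasets import Dataset
from transformers import (
    BartForConditionalGeneration,
    BartTokenizerFast,
    DataCollatorForSeq2Seq,
    Trainer,
)

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Toy complex/simple pairs standing in for the WikiLarge + Newsela training data.
pairs = {
    "complex": ["The committee deliberated extensively before reaching a unanimous verdict."],
    "simple": ["The committee talked for a long time and then all agreed."],
}

def preprocess(batch):
    # Tokenize the complex sentences as inputs and the simple sentences as labels.
    model_inputs = tokenizer(batch["complex"], max_length=128, truncation=True)
    labels = tokenizer(text_target=batch["simple"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_dataset = (
    Dataset.from_dict(pairs)
    .map(preprocess, batched=True, remove_columns=["complex", "simple"])
)

trainer = Trainer(
    model=model,
    args=training_args,  # the TrainingArguments defined above
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```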

Training loss:

| Step  | Training Loss |
|-------|---------------|
| 500   | 5.111300 |
| 1000  | 3.064000 |
| 1500  | 2.899200 |
| 2000  | 2.779200 |
| 2500  | 2.710700 |
| 3000  | 2.608300 |
| 3500  | 2.546900 |
| 4000  | 2.491100 |
| 4500  | 2.404400 |
| 5000  | 2.374700 |
| 5500  | 2.324800 |
| 6000  | 2.257300 |
| 6500  | 2.239900 |
| 7000  | 2.173400 |
| 7500  | 2.134500 |
| 8000  | 2.115700 |
| 8500  | 2.046100 |
| 9000  | 2.025600 |
| 9500  | 1.989900 |
| 10000 | 1.953900 |
| 10500 | 1.940900 |
| 11000 | 1.894000 |
| 11500 | 1.872400 |
| 12000 | 1.854300 |
| 12500 | 1.823300 |
| 13000 | 1.811900 |
| 13500 | 1.789700 |
| 14000 | 1.764800 |
| 14500 | 1.753300 |
| 15000 | 1.735000 |
| 15500 | 1.727400 |
| 16000 | 1.719900 |

```
TrainOutput(global_step=16060, training_loss=2.2460614238848278, metrics={'train_runtime': 5541.9227, 'train_samples_per_second': 370.669, 'train_steps_per_second': 2.898, 'total_flos': 0.0, 'train_loss': 2.2460614238848278, 'epoch': 20.0})
```
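The resulting checkpoint can be loaded like any other BART sequence-to-sequence model. The repo id below is assumed from the `output_dir` above; adjust it to the actual Hub path or local directory if it differs:

```python
from transformers import BartForConditionalGeneration, BartTokenizerFast

# Assumed checkpoint path, taken from the output_dir above.
checkpoint = "bart-base-wikilarge-newsela-with-domain-adaptation"

tokenizer = BartTokenizerFast.from_pretrained(checkpoint)
model = BartForConditionalGeneration.from_pretrained(checkpoint)

# Simplify a complex sentence (hypothetical input).
text = "The legislation was subsequently ratified by an overwhelming majority of the assembly."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
output_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```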
