---
license: apache-2.0
language:
  - en
library_name: transformers
---

# Model Card: bart_fine_tuned_model

## Model Name

generate_summaries

## Model Description

This model is a fine-tuned version of facebook/bart-large, adapted for resume summarization. It generates concise, relevant summaries from long resume texts. Fine-tuning specialized the original BART model for summarization on a domain-specific dataset.

## Model Information

- **Base Model:** facebook/bart-large
- **Fine-tuning Dataset:** To be made available in the future.

## Training Parameters

- Evaluation Strategy: epoch
- Learning Rate: 5e-5
- Per Device Train Batch Size: 8
- Per Device Eval Batch Size: 8
- Weight Decay: 0.01
- Save Total Limit: 5
- Number of Training Epochs: 10
- Predict with Generate: True
- Gradient Accumulation Steps: 1
- Optimizer: paged_adamw_32bit
- Learning Rate Scheduler Type: cosine
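The parameters above correspond to keyword arguments of the `transformers` `Seq2SeqTrainingArguments` class. A minimal sketch of how they might be assembled (the `output_dir` value is illustrative, not taken from the model card):

```python
# Keyword arguments mirroring the training parameters listed above.
# In practice these would be unpacked into transformers'
# Seq2SeqTrainingArguments, e.g. Seq2SeqTrainingArguments(**training_kwargs).
training_kwargs = {
    "output_dir": "bart_fine_tuned_model",  # illustrative path, an assumption
    "evaluation_strategy": "epoch",
    "learning_rate": 5e-5,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "weight_decay": 0.01,
    "save_total_limit": 5,
    "num_train_epochs": 10,
    "predict_with_generate": True,
    "gradient_accumulation_steps": 1,
    "optim": "paged_adamw_32bit",
    "lr_scheduler_type": "cosine",
}
```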

## How to Use

1. Install the transformers library:

```bash
pip install transformers
```

2. Import the necessary modules:

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration
```

3. Initialize the model and tokenizer:

```python
model_name = 'GebeyaTalent/generate_summaries'
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
```

4. Prepare the text for summarization:

```python
text = 'Your resume text here'
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding="max_length")
```

5. Generate the summary (passing the attention mask so that padding tokens are ignored during generation):

```python
min_length_threshold = 55
summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    num_beams=4,
    min_length=min_length_threshold,
    max_length=150,
    early_stopping=True,
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```

6. Output the summary:

```python
print("Summary:", summary)
```
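Note that BART-large accepts at most 1024 input tokens, so `truncation=True` in step 4 silently drops the tail of very long resumes. A minimal sketch of a word-based pre-chunker that works around this (a rough approximation; the exact token count comes from the tokenizer, so `max_words` is set conservatively below the limit, and each chunk would be run through steps 4-6 separately):

```python
def chunk_text(text, max_words=700):
    """Split text into word-based chunks as a rough proxy for the
    tokenizer's 1024-token limit (subword tokens outnumber words,
    so max_words is set conservatively below 1024)."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# Each chunk is summarized on its own and the partial summaries
# can then be concatenated or summarized again.
chunks = chunk_text("word " * 1500, max_words=700)  # 3 chunks: 700 + 700 + 100 words
```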

## Model Card Authors

Dereje Hinsermu

## Model Card Contact