
Medical Summary Generation with DistilBART

This project provides a DistilBART model for generating medical summaries from input text. The model is trained on medical data to produce concise, informative summaries.

Table of Contents

  • Introduction
  • Usage
  • Model Details
  • Contact

Introduction

The DistilBART-Med-Summary Generator is built using the Hugging Face Deep Learning Container and is designed to generate medical summaries from input text. This README describes how to use the model and summarizes its architecture and training details.

Usage

To use the model for medical summary generation, follow these steps:

Install the required dependencies:

  • pip install transformers
  • pip install torch
  • pip install datasets

Then load the summarization pipeline and run it on your input text:

from transformers import pipeline

# Download the model from the Hugging Face Hub and build a summarization pipeline
summarizer = pipeline("summarization", model="Mahalingam/DistilBart-Med-Summary")

conversation = '''write the below JSON into normal text
{
  "Sex": "M",
  "ID": 585248,
  "DateOfBirth": "08/10/1995",
  "Age": "28 years",
  "VisitDate": "09/25/2023",
  "LogNumber": 6418481,
  "Historian": "Self",
  "TriageNotes": ["fever"],
  "HistoryOfPresentIllness": {
    "Complaint": [
      "The patient presents with a chief complaint of chills.",
      "The problem is made better by exercise and rest.",
      "The patient also reports change in appetite and chest pain/pressure as abnormal symptoms related to the complaint."
    ]
  }
}
'''

# Generate the summary (the pipeline was assigned to `summarizer` above)
summarizer(conversation)
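
The summarization pipeline returns a list of dictionaries, each with a summary_text key. A minimal sketch of extracting the generated text (the variable name result is illustrative):

result = summarizer(conversation)

# The first (and here only) entry holds the generated summary string
print(result[0]["summary_text"])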

Model Details

  • Model Name: DistilBart-Med-Summary
  • Task: Medical Summary Generation
  • Architecture: DistilBART
  • Training Data: Details about the medical dataset used for training
  • Training Duration: Number of training steps, training time, etc.
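
If you need more control over generation than the pipeline offers, the checkpoint can also be loaded directly with the standard Transformers seq2seq API. A minimal sketch; the max_new_tokens value is an illustrative assumption, not a recommended setting:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Mahalingam/DistilBart-Med-Summary")
model = AutoModelForSeq2SeqLM.from_pretrained("Mahalingam/DistilBart-Med-Summary")

# Tokenize the input, truncating to the model's maximum input length
inputs = tokenizer(conversation, return_tensors="pt", truncation=True)

# Generate a summary; max_new_tokens is an illustrative choice
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))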

Contact

For any inquiries or support related to this model, feel free to contact:

Name: Mahalingam Balasubramanian

Email: mahalingamb.1978@gmail.com
