---
language:
  - en
tags:
  - text-classification
widget:
  - text: The app crashed when I opened it this morning. Can you fix this please?
    example_title: Likely bug report
  - text: Please add a like button!
    example_title: Unlikely bug report
---

# Model Card for Model ID

This model card aims to be a base template for new models. It has been generated using the raw model card template.

## Model Details

### Model Description

- Developed by: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model [optional]: [More Information Needed]

## Model Card: Bug Classification Algorithm

**Purpose:** Classify software bug reports according to their clarity, relevance, and readability, using a revamped dataset of historical bugs.

**Model Type:** Machine learning model (supervised learning)
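
Since the card is tagged `text-classification` and ships widget examples, inference would typically look like the pipeline call below, assuming the checkpoint is transformers-compatible. The repository id is inferred from this repo's name and is an assumption, as is the exact label set returned.

```python
from transformers import pipeline

# Repo id inferred from this repository's name; adjust if the model lives elsewhere.
classifier = pipeline("text-classification", model="Leonhard1337/helloAI")

# The widget examples from the metadata above.
print(classifier("The app crashed when I opened it this morning. Can you fix this please?"))
print(classifier("Please add a like button!"))
```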

### Dataset Information

The historical software bugs dataset is split into training and validation sets: training data consists of approximately 80% of the examples, and validation/testing data comprises the remaining 20%. Each example contains a text description of a software bug along with human annotations specifying whether it was clear, relevant, and readable.

Features extracted (a minimal loading and splitting sketch follows the list below):

1. Text description of the bug
2. Number of lines of code affected by the bug
3. Timestamp of bug submission
4. Version control tags associated with the bug
5. Priority level assigned to the bug
6. Type of software component impacted by the bug
7. Operating system compatibility of the software
8. Programming language used to develop the software
9. Hardware specifications required to run the software
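
The following is a minimal sketch of how such a dataset might be represented and split 80/20; the file name and column names are illustrative assumptions, since the actual dataset schema is not published.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical column names mirroring the feature list above.
FEATURE_COLUMNS = [
    "description",       # text description of the bug
    "lines_affected",    # number of lines of code affected
    "submitted_at",      # timestamp of bug submission
    "vcs_tags",          # version control tags
    "priority",          # assigned priority level
    "component_type",    # software component impacted
    "os_compatibility",  # operating system compatibility
    "language",          # programming language of the software
    "hardware_specs",    # required hardware specifications
]
LABEL_COLUMNS = ["is_clear", "is_relevant", "is_readable"]

# Load the (hypothetical) historical bugs file.
df = pd.read_csv("bugs.csv")

# Approximately 80% training / 20% validation, as described above.
train_df, val_df = train_test_split(df, test_size=0.2, random_state=42)

X_train, y_train = train_df[FEATURE_COLUMNS], train_df[LABEL_COLUMNS]
X_val, y_val = val_df[FEATURE_COLUMNS], val_df[LABEL_COLUMNS]
```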

### Models Trained

- Naive Bayes classifier
- Random forest classifier
- Gradient boosting classifier
- Neural network with convolutional layers

Hyperparameter tuning techniques: cross-validation, grid search, and random search were applied to each model architecture, as sketched below.
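
As an illustration of the tuning setup, here is a minimal cross-validated grid search over one of the listed models (a random forest), using only the text description feature and one of the labels. It builds on the split from the previous sketch; the TF-IDF pipeline and parameter grid are assumptions for illustration, not the exact configuration used.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# TF-IDF over the bug description feeding a random forest; illustrative only.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=20_000)),
    ("clf", RandomForestClassifier(random_state=42)),
])

# Small illustrative grid; the actual search space is not documented.
param_grid = {
    "clf__n_estimators": [100, 300],
    "clf__max_depth": [None, 20],
}

# 5-fold cross-validated grid search on the training split, optimizing F1
# for the (assumed binary) "is_clear" annotation.
search = GridSearchCV(pipeline, param_grid, cv=5, scoring="f1", n_jobs=-1)
search.fit(X_train["description"], y_train["is_clear"])
print(search.best_params_, search.best_score_)
```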

### Metrics Used for Evaluation

- Accuracy: fraction of correctly predicted examples out of all examples.
- Precision: ratio of correct positive predictions to all positive predictions made by the model.
- Recall: ratio of true positives found among actual positives.
- F1 score: harmonic mean of precision and recall, indicating the balance between the two.
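
A minimal sketch of computing these metrics with scikit-learn on the held-out validation split from the earlier sketches; it assumes the tuned model and the binary "is_clear" label introduced there.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Predictions from the tuned model on the held-out validation split.
y_pred = search.predict(X_val["description"])
y_true = y_val["is_clear"]  # assumed binary (0/1) annotation

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
```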