---
language:
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: text-classification
metrics:
- accuracy
---

# ModernBERT-FakeNewsClassifier

## Model Description

**ModernBERT-FakeNewsClassifier** is a fine-tuned version of [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base) for the binary task of fake news detection. It takes a news article, including its title, body text, subject, and publication date, and classifies it as either **real (1)** or **fake (0)**. The model was fine-tuned on a dataset of over 30,000 labeled examples, achieving high accuracy and robustness.

### Key Features

- **Base Model**: ModernBERT, designed for long-context processing (up to 8,192 tokens).
- **Task**: Binary classification for fake news detection.
- **Architecture Highlights**:
  - Rotary Positional Embeddings (RoPE) for long-context support.
  - Local-global alternating attention for memory efficiency.
  - Flash Attention for optimized inference speed.
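
For quick experimentation, the model can be loaded through the Hugging Face Transformers `text-classification` pipeline. This is a minimal inference sketch: the checkpoint id is taken from the citation below, and the mapping of the returned label to fake/real is defined by the repository's `config.json`.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint via the text-classification pipeline
# (checkpoint id taken from the citation URL below).
classifier = pipeline(
    "text-classification",
    model="dakshrathi/ModernBERT-base-FakeNewsClassifier",
)

article = "Breaking: scientists confirm chocolate cures all known diseases."
# Returns the top label and its score, e.g. [{'label': ..., 'score': ...}];
# check the model's config for how labels map to fake (0) / real (1).
print(classifier(article))
```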

## Dataset

The dataset used for fine-tuning comprises over 30,000 examples, with the following features:

- **Title**: The headline of the news article.
- **Text**: The main body of the article.
- **Subject**: The category or topic of the article (e.g., Politics, Health).
- **Date**: The publication date of the article.
- **Label**: A binary label indicating whether the article is fake (`0`) or real (`1`).
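
Because the classifier expects a single text input, the fields above must be combined into one string before tokenization. The sketch below shows one plausible scheme; the field names and the `[SEP]`-joined ordering are assumptions, and the actual preprocessing is in `code.ipynb`.

```python
# Hypothetical preprocessing: merge the dataset's fields into a single input
# string per article. Field names and ordering are assumptions, not the exact
# scheme used in code.ipynb.
def build_input(example: dict) -> str:
    parts = [example["title"], example["text"], example["subject"], example["date"]]
    return " [SEP] ".join(parts)

sample = {
    "title": "Economy grew 3% in the second quarter",
    "text": "Official figures released on Tuesday show...",
    "subject": "Politics",
    "date": "2017-06-20",
    "label": 1,  # 1 = real, 0 = fake
}
print(build_input(sample))
```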

## Notebook: Training and Fine-Tuning

The repository includes the `code.ipynb` notebook, which provides:

- Step-by-step instructions for preprocessing the dataset.
- Code for fine-tuning the ModernBERT model for binary classification.
- Code for evaluating the model using metrics such as accuracy, F1-score, and AUC-ROC.

You can open and run the notebook directly to replicate or customize the training process; a condensed sketch of the setup follows below.
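
This sketch outlines the kind of fine-tuning and evaluation loop the notebook implements, assuming the base checkpoint named above and a recent `transformers` release with ModernBERT support; the hyperparameters are illustrative, not the values used to train this model.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Start from the base checkpoint with a fresh 2-label classification head.
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "answerdotai/ModernBERT-base", num_labels=2
)

def compute_metrics(eval_pred):
    """Computes the metrics listed above: accuracy, F1, and AUC-ROC."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Softmax probability of the positive class (real = 1) for AUC-ROC.
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp[:, 1] / exp.sum(axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds),
        "auc_roc": roc_auc_score(labels, probs),
    }

# Illustrative hyperparameters; the actual values are in code.ipynb.
args = TrainingArguments(
    output_dir="modernbert-fakenews",
    per_device_train_batch_size=16,
    num_train_epochs=2,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=None,  # pass the tokenized train split here
    eval_dataset=None,   # pass the tokenized validation split here
    compute_metrics=compute_metrics,
)
# trainer.train()
```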

## Citation

If you use this model in your research or applications, please cite:

```bibtex
@misc{ModernBERT-FakeNewsClassifier,
  author = {Daksh Rathi},
  title = {ModernBERT-FakeNewsClassifier: A Transformer-Based Model for Fake News Detection},
  year = {2024},
  url = {https://huggingface.co/dakshrathi/ModernBERT-base-FakeNewsClassifier},
}
```