sahilnishad committed on
Commit 97f751e
1 Parent(s): b02b635

Update README.md

Files changed (1):
  1. README.md +21 -29

README.md CHANGED
@@ -1,35 +1,27 @@
- ---
- library_name: transformers
- tags:
- - grammatical-error-detection
- - token-classification
- - nlp
- - bert
- license: mit
- language:
- - en
- base_model:
- - google-bert/bert-base-uncased
- pipeline_tag: token-classification
- ---

- # Model Card for Fine-tuned BERT on FCE Dataset for Grammatical Error Detection

- This model is fine-tuned on the First Certificate in English (FCE) dataset for grammatical error detection at the token level, identifying whether each token is grammatically correct ('c') or incorrect ('i') based on context.

- ### Model Description

- This model card provides details of a 🤗 transformers model pushed to the Hub. The model performs token-level classification to determine grammatical correctness based on context, using the FCE dataset. Each token is labeled as either 'c' (correct) or 'i' (incorrect).
-
- - **Developed by:** [Sahil Nishad](https://www.linkedin.com/in/sahilnishad)
- - **Model type:** BERT-based model for token-level classification
- - **Language:** English
- - **Finetuned from model:** [`bert-base-uncased`](https://huggingface.co/bert-base-uncased)
- - **Repository:** [GitHub](https://github.com/sahilnishad/Fine-Tuning-BERT-for-Token-Level-GED)
-
- ### How to Get Started with the Model
- Use the code below to get started with the model.

  ```python
  from transformers import AutoModelForTokenClassification, BertTokenizer
@@ -49,8 +41,8 @@ def infer(sentence):
  print(infer("Your example sentence here"))
  ```

- ### BibTeX:
-
  ```bibtex
  @misc{sahilnishad_bert_ged_fce_ft,
  author = {Sahil Nishad},
 
+ ---
+ library_name: transformers
+ tags:
+ - grammatical-error-detection
+ - token-classification
+ - nlp
+ - bert
+ license: mit
+ language:
+ - en
+ base_model:
+ - google-bert/bert-base-uncased
+ pipeline_tag: token-classification
+ ---

+ # Model Description
+ Fine-tuning the `bert-base-uncased` model for token-level binary grammatical error detection on the English FCE dataset provided by MultiGED-2023.

+ - **[GitHub](https://github.com/sahilnishad/Fine-Tuning-BERT-for-Token-Level-GED)**
+ - **[Dataset](https://github.com/spraakbanken/multiged-2023)**

+ # Get Started with the Model

  ```python
  from transformers import AutoModelForTokenClassification, BertTokenizer

  print(infer("Your example sentence here"))
  ```

+ ---
+ # BibTeX:
  ```bibtex
  @misc{sahilnishad_bert_ged_fce_ft,
  author = {Sahil Nishad},