amanpatkar committed (verified)
Commit 2735c3f · 1 Parent(s): 164d061
Files changed (1)
  1. README.md +11 -2
README.md CHANGED
@@ -54,9 +54,18 @@ It achieves the following results on the evaluation set:
 
 The distilbert-finetuned-ner model is designed for Named Entity Recognition (NER) tasks. It is based on the DistilBERT architecture, which is a smaller, faster, and lighter version of BERT. DistilBERT retains 97% of BERT's language understanding while being 60% faster and 40% smaller, making it efficient for deployment in production systems.
 
-## Intended uses & limitations
+## Intended Uses & Limitations
+
+### Intended Uses
+- Named Entity Recognition (NER): Extracting entities such as names, locations, organizations, and miscellaneous entities from text.
+- Information Extraction: Automatically identifying and classifying key information in documents.
+- Text Preprocessing: Enhancing text preprocessing for downstream tasks like sentiment analysis and text summarization.
+
+### Limitations
+- Domain Specificity: The model is trained on the CoNLL-2003 dataset, which primarily consists of newswire data. Performance may degrade on text from different domains.
+- Language Limitation: This model is trained on English text. It may not perform well on text in other languages.
+- Precision in Complex Sentences: While the model performs well on standard sentences, complex sentence structures or ambiguous contexts might pose challenges.
 
-More information needed
 
 ## Training and evaluation data
 
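The added section documents what the model is for but the commit includes no usage snippet. Here is a minimal sketch of running it for NER with the Transformers `pipeline`; the repo id `amanpatkar/distilbert-finetuned-ner` is an assumption inferred from the committer name, and the input sentence is purely illustrative.

```python
# Minimal NER sketch (not part of the commit); assumes the model is published
# under the committer's namespace as "amanpatkar/distilbert-finetuned-ner".
from transformers import pipeline

ner = pipeline(
    "token-classification",                       # NER task
    model="amanpatkar/distilbert-finetuned-ner",  # assumed repo id
    aggregation_strategy="simple",                # merge sub-word tokens into whole entities
)

text = "Hugging Face was founded in New York by Clement Delangue."
for entity in ner(text):
    # Each result carries the merged entity span, its label, and a confidence score.
    print(f'{entity["entity_group"]:5} {entity["score"]:.2f} {entity["word"]}')
```

Since the card says the model was fine-tuned on CoNLL-2003, the `entity_group` values here would be that dataset's label set: PER, ORG, LOC, and MISC.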