Update README.md
README.md (changed):

````diff
@@ -33,8 +33,8 @@ inputs = tokenizer(text, return_tensors="pt")
 outputs = model(**inputs)
 predicted_label = "positive" if outputs.logits.argmax().item() == 1 else "negative"
 
-print(f"Predicted sentiment: {predicted_label}")
-
+print(f"Predicted sentiment: {predicted_label}")
+```
 ## Evaluation Metrics
 
 The performance of the fine-tuned Distilled BERT model can be evaluated using various evaluation metrics, such as accuracy, precision, recall, and F1 score. These metrics can be calculated on the test set of the Amazon reviews dataset to assess the model's accuracy and effectiveness in predicting sentiment.
````
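The metrics named in the Evaluation Metrics paragraph can be computed directly from the model's binary predictions. A minimal pure-Python sketch of that calculation — the label lists here are hypothetical examples, not drawn from the Amazon reviews test set:

```python
def sentiment_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels
    (1 = positive, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Hypothetical ground-truth labels and model predictions:
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
print(sentiment_metrics(y_true, y_pred))
```

In practice the same numbers can be obtained with `sklearn.metrics` (`accuracy_score`, `precision_score`, `recall_score`, `f1_score`) applied to the collected `predicted_label` outputs over the test set.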