---
datasets:
  - nguha/legalbench
---

These models were trained on the `cuad_audit_rights` task from the LegalBench benchmark (`nguha/legalbench`). The zipped folder contains five models, one for each of the five folds the model was trained on. Per-fold evaluation results are listed in the table below, followed by a usage sketch.

| Model | Accuracy | Precision | Recall | F1 Score |
|-------|----------|-----------|--------|----------|
| 0     | 0.9796   | 1.0000    | 0.9592 | 0.9792   |
| 1     | 0.9898   | 0.9800    | 1.0000 | 0.9899   |
| 2     | 0.9898   | 1.0000    | 0.9796 | 0.9897   |
| 3     | 0.9898   | 1.0000    | 0.9796 | 0.9897   |
| 4     | 0.9796   | 0.9608    | 1.0000 | 0.9800   |
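
Below is a minimal usage sketch. It assumes each fold unzips to a standard Hugging Face Transformers sequence-classification checkpoint; the directory name `model_0` and the example clause are hypothetical and should be replaced with the actual unzipped folder path and your own input text.

```python
# Usage sketch (assumptions): each unzipped fold is assumed to be a standard
# Transformers sequence-classification checkpoint; "model_0" is a hypothetical path.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_dir = "model_0"  # hypothetical path to one unzipped fold
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)

# Example contract clause (illustrative only)
clause = "The Auditor may inspect the Supplier's records upon thirty days' written notice."
inputs = tokenizer(clause, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

prediction = logits.argmax(dim=-1).item()  # predicted class index for the clause
print(prediction)
```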