lvwerra and julien-c committed
Commit 4c8cf84
1 Parent(s): d7dc735

Tiny fix (#3)


- Tiny fix (ef119c1e2c09204be94e8600049d049cf4245fbb)


Co-authored-by: Julien Chaumond <julien-c@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ tags:
 
 It has three types of evaluations:
 - **Metric**: measures the performance of a model on a given dataset, usually by comparing the model's predictions to some ground truth labels -- these are covered in this space.
- - **Comparison**: used useful to compare the performance of two or more models on a single test dataset., e.g. by comparing their predictions to ground truth labels and computing their agreement -- covered in the [Evaluate Comparison](https://huggingface.co/spaces/evaluate-comparison) Spaces.
+ - **Comparison**: used to compare the performance of two or more models on a single test dataset., e.g. by comparing their predictions to ground truth labels and computing their agreement -- covered in the [Evaluate Comparison](https://huggingface.co/spaces/evaluate-comparison) Spaces.
 - **Measurement**: for gaining more insights on datasets and model predictions based on their properties and characteristics -- covered in the [Evaluate Measurement](https://huggingface.co/evaluate-measurement) Spaces.
 
 All three types of evaluation supported by the 🤗 Evaluate library are meant to be mutually complementary, and help our community carry out more mindful and responsible evaluation!
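
For context, a minimal sketch of how the three evaluation types described in the README are loaded and run with the 🤗 Evaluate library. This is not part of the commit; it assumes the library's documented `evaluate.load` API, and the specific module names (`accuracy`, `exact_match`, `word_length`) are illustrative examples:

```python
# Minimal sketch of the three evaluation types, assuming the `evaluate`
# library's load() API (pip install evaluate).
import evaluate

# Metric: compare one model's predictions to ground truth labels.
accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0]))

# Comparison: contrast the predictions of two models on the same test set.
exact_match = evaluate.load("exact_match", module_type="comparison")
print(exact_match.compute(predictions1=[0, 1, 1, 0], predictions2=[0, 1, 0, 0]))

# Measurement: inspect properties of the data itself, no model needed.
word_length = evaluate.load("word_length", module_type="measurement")
print(word_length.compute(data=["hello world", "mindful and responsible evaluation"]))
```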