lvwerra committed
Commit d7dc735
1 Parent(s): 42ec372

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -12,6 +12,7 @@ tags:
 
 🤗 Evaluate provides access to a wide range of evaluation tools. It covers a range of modalities such as text, computer vision, and audio, as well as tools to evaluate models or datasets.
 
+
 It has three types of evaluations:
 - **Metric**: measures the performance of a model on a given dataset, usually by comparing the model's predictions to some ground truth labels -- these are covered in this Space.
 - **Comparison**: useful for comparing the performance of two or more models on a single test dataset, e.g. by comparing their predictions to ground truth labels and computing their agreement -- covered in the [Evaluate Comparison](https://huggingface.co/spaces/evaluate-comparison) Space.
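
For context, a minimal sketch of these two evaluation types with the `evaluate` library. Loading the `accuracy` metric via `evaluate.load` is the library's standard usage; the `mcnemar` comparison module and its `predictions1`/`predictions2` argument names are assumptions worth verifying against the Hub:

```python
import evaluate

# Metric: score one model's predictions against ground truth labels.
accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0]))
# e.g. {'accuracy': 0.75} -- 3 of 4 predictions match the references

# Comparison: quantify how two models' predictions relate on one test set
# (the `mcnemar` module and its argument names are assumptions to verify).
mcnemar = evaluate.load("mcnemar", module_type="comparison")
print(mcnemar.compute(predictions1=[0, 1, 1, 0],
                      predictions2=[1, 1, 1, 0],
                      references=[0, 1, 0, 0]))
```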