mdocekal committed
Commit 1c7b8ac
1 Parent(s): 1566eee

readme updated

Files changed (1):
  1. README.md +19 -17
README.md CHANGED

````diff
@@ -58,26 +58,28 @@ It uses the same definition as in previous case, but it works with multiset of l
 }
 
 ### Inputs
-*List all input arguments in the format below*
-- **input_field** *(type): Definition of input, with explanation if necessary. State any default value(s).*
+- **predictions** *(list[Union[int,str]]): list of predictions to score. Each predictions should be a list of predicted labels*
+- **references** *(list[Union[int,str]]): list of reference for each prediction. Each reference should be a list of reference labels*
 
 ### Output Values
 
-*Explain what this metric outputs and provide an example of what the metric output looks like. Modules should return a dictionary with one or multiple key-value pairs, e.g. {"bleu" : 6.02}*
-
-*State the range of possible values that the metric's output can take, as well as what in that range is considered good. For example: "This metric can take on any value between 0 and 100, inclusive. Higher scores are better."*
-
-#### Values from Popular Papers
-*Give examples, preferrably with links to leaderboards or publications, to papers that have reported this metric, along with the values they have reported.*
-
-### Examples
-*Give code examples of the metric being used. Try to include examples that clear up any potential ambiguity left from the metric description above. If possible, provide a range of examples that show both typical and atypical results, as well as examples where a variety of input parameters are passed.*
-
-## Limitations and Bias
-*Note any known limitations or biases that the metric has, with links and references if possible.*
+This metric outputs a dictionary, containing:
+- precision
+- recall
+- accuracy
+- fscore
 
 ## Citation
-*Cite the source where this metric was introduced.*
 
-## Further References
-*Add any useful further references.*
+```bibtex
+@article{Zhang2014ARO,
+  title={A Review on Multi-Label Learning Algorithms},
+  author={Min-Ling Zhang and Zhi-Hua Zhou},
+  journal={IEEE Transactions on Knowledge and Data Engineering},
+  year={2014},
+  volume={26},
+  pages={1819-1837},
+  url={https://api.semanticscholar.org/CorpusID:1008003}
+}
+```
+
````
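The added README sections describe inputs as parallel lists of label lists and an output dictionary with `precision`, `recall`, `accuracy`, and `fscore`. A minimal sketch of example-based multi-label scoring consistent with that interface; this is an illustration only, not the repository's actual implementation, and the function name `multilabel_scores` and the example-averaged definitions are assumptions:

```python
# Illustrative sketch (NOT the metric's real code): example-based
# multi-label scores over multisets of labels, averaged per example.
from collections import Counter


def multilabel_scores(predictions, references):
    """Score parallel lists of label lists, as in the README's Inputs.

    Returns a dict with the four keys listed under Output Values.
    Assumes `predictions` and `references` are non-empty and equal length.
    """
    precision = recall = accuracy = fscore = 0.0
    for pred, ref in zip(predictions, references):
        pred_c, ref_c = Counter(pred), Counter(ref)
        overlap = sum((pred_c & ref_c).values())  # multiset intersection
        union = sum((pred_c | ref_c).values())    # multiset union
        p = overlap / len(pred) if pred else 0.0
        r = overlap / len(ref) if ref else 0.0
        precision += p
        recall += r
        accuracy += overlap / union if union else 1.0
        fscore += 2 * p * r / (p + r) if p + r else 0.0
    n = len(predictions)
    return {
        "precision": precision / n,
        "recall": recall / n,
        "accuracy": accuracy / n,
        "fscore": fscore / n,
    }
```

For example, `multilabel_scores([["a", "b"], ["c"]], [["a"], ["c"]])` scores the first pair at precision 0.5 / recall 1.0 and the second pair perfectly, then averages over the two examples.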