oroszgy committed
Commit 4f4e049
1 Parent(s): e31b9be

Update spacy pipeline to 3.8.0

README.md CHANGED
@@ -14,112 +14,60 @@ model-index:
  metrics:
  - name: NER Precision
  type: precision
- value: 0.8714565876
  - name: NER Recall
  type: recall
- value: 0.8593530239
  - name: NER F Score
  type: f_score
- value: 0.8653624856
  - task:
  name: TAG
  type: token-classification
  metrics:
  - name: TAG (XPOS) Accuracy
  type: accuracy
- value: 0.9688501842
  - task:
  name: POS
  type: token-classification
  metrics:
  - name: POS (UPOS) Accuracy
  type: accuracy
- value: 0.9670319154
  - task:
  name: MORPH
  type: token-classification
  metrics:
  - name: Morph (UFeats) Accuracy
  type: accuracy
- value: 0.9362618432
  - task:
  name: LEMMA
  type: token-classification
  metrics:
  - name: Lemma Accuracy
  type: accuracy
- value: 0.9759831595
  - task:
  name: UNLABELED_DEPENDENCIES
  type: token-classification
  metrics:
  - name: Unlabeled Attachment Score (UAS)
  type: f_score
- value: 0.8332400672
  - task:
  name: LABELED_DEPENDENCIES
  type: token-classification
  metrics:
  - name: Labeled Attachment Score (LAS)
  type: f_score
- value: 0.76922216
  - task:
  name: SENTS
  type: token-classification
  metrics:
  - name: Sentences F-Score
  type: f_score
- value: 0.9821428571
  ---
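For reference, the NER F score recorded in the metadata above is the harmonic mean of the listed NER precision and recall; a minimal check using only the reported values:

```python
# Harmonic mean of the NER precision and recall reported in the metadata above.
precision = 0.8714565876
recall = 0.8593530239

f_score = 2 * precision * recall / (precision + recall)
print(f_score)  # ~0.8653624856, matching the reported NER F Score
```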
- Core Hungarian model for HuSpaCy. Components: tok2vec, senter, tagger, morphologizer, lemmatizer, parser, ner
-
- | Feature | Description |
- | --- | --- |
- | **Name** | `hu_core_news_lg` |
- | **Version** | `3.7.0` |
- | **spaCy** | `>=3.7.0,<3.8.0` |
- | **Default Pipeline** | `tok2vec`, `senter`, `tagger`, `morphologizer`, `lookup_lemmatizer`, `trainable_lemmatizer`, `parser`, `ner` |
- | **Components** | `tok2vec`, `senter`, `tagger`, `morphologizer`, `lookup_lemmatizer`, `trainable_lemmatizer`, `parser`, `ner` |
- | **Vectors** | -1 keys, 200000 unique vectors (300 dimensions) |
- | **Sources** | [UD Hungarian Szeged](https://universaldependencies.org/treebanks/hu_szeged/index.html) (Richárd Farkas, Katalin Simkó, Zsolt Szántó, Viktor Varga, Veronika Vincze (MTA-SZTE Research Group on Artificial Intelligence))<br>[NYTK-NerKor Corpus](https://github.com/nytud/NYTK-NerKor) (Eszter Simon, Noémi Vadász (Department of Language Technology and Applied Linguistics))<br>[Szeged NER Corpus](https://rgai.inf.u-szeged.hu/node/130) (György Szarvas, Richárd Farkas, László Felföldi, András Kocsor, János Csirik (MTA-SZTE Research Group on Artificial Intelligence))<br>[Hungarian lg Floret vectors](https://huggingface.co/huspacy/hu_vectors_web_lg) (Szeged AI) |
- | **License** | `cc-by-sa-4.0` |
- | **Author** | [SzegedAI, MILAB](https://github.com/huspacy/huspacy) |
-
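A minimal usage sketch, assuming the `hu_core_news_lg` package described above is installed in the current environment (for example from a HuSpaCy release): the pipeline is loaded with the standard `spacy.load` call, and the component names correspond to the Default Pipeline row in the table. The example sentence is illustrative only.

```python
import spacy

# Load the installed pipeline package (assumes hu_core_news_lg is installed).
nlp = spacy.load("hu_core_news_lg")

# Components as listed in the Default Pipeline row above.
print(nlp.pipe_names)
# ['tok2vec', 'senter', 'tagger', 'morphologizer', 'lookup_lemmatizer',
#  'trainable_lemmatizer', 'parser', 'ner']

doc = nlp("A budapesti konferencián Kovács Péter tartott előadást.")
for token in doc:
    print(token.text, token.lemma_, token.pos_, str(token.morph), token.dep_)
for ent in doc.ents:
    print(ent.text, ent.label_)
```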
- ### Label Scheme
-
- <details>
-
- <summary>View label scheme (1209 labels for 4 components)</summary>
-
- | Component | Labels |
- | --- | --- |
- | **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` |
- | **`morphologizer`** | `Definite=Def\|POS=DET\|PronType=Art`, `Case=Ine\|Number=Sing\|POS=NOUN`, `POS=ADV`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=NOUN`, `Definite=Ind\|POS=DET\|PronType=Tot`, `Case=Ade\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `POS=PUNCT`, `Case=Nom\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|POS=DET\|PronType=Ind`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADP`, `POS=CCONJ`, `Case=Del\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Sbl\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, `Case=Del\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=PROPN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=NOUN`, `Case=Sup\|Number=Sing\|POS=PROPN`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=ADV`, `Case=Sup\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Cau\|Number=Plur\|POS=NOUN`, `Case=Cau\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Tra\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Number=Plur\|POS=NOUN`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ins\|Number=Sing\|POS=NOUN`, `POS=ADV\|PronType=Neg`, `Case=Ine\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `POS=SCONJ`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=NOUN`, `Case=Dat\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Degree=Sup\|Number=Sing\|POS=ADJ`, `POS=ADV\|PronType=Dem`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=ADJ`, `POS=ADV\|PronType=Int`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, 
`Case=Sbl\|Number=Sing\|POS=PROPN`, `Case=Sbl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PART`, `Case=Sup\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `POS=ADV\|PronType=Tot`, `Case=Ill\|Definite=Ind\|POS=DET\|PronType=Ind`, `Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ess\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Acc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|POS=ADJ\|VerbForm=PartFut`, `Case=Ine\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Ind\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=NOUN`, `Case=Del\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Tra\|Number=Sing\|POS=NOUN`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Plur\|POS=NOUN`, `Case=Ins\|Number=Plur\|POS=NOUN`, `Case=Sbl\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=All\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=PROPN`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Number=Sing\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, `Number=Plur\|POS=VERB\|Person=3\|VerbForm=Inf\|Voice=Act`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `POS=ADV\|PronType=Rel`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Ill\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PROPN`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Definite=Def\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ter\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `POS=ADV\|VerbForm=Conv`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Aspect=Iter\|Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Iter\|Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dis\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ade\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Cau\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=PROPN`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Abs\|Number=Sing\|POS=NOUN`, `Case=Ade\|Number=Sing\|POS=PROPN`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Case=Del\|Number=Sing\|POS=PROPN`, `Case=Sbl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Loc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Ind\|POS=DET\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ter\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `POS=X`, `Definite=Def\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Del\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Tra\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Degree=Pos\|POS=ADV\|PronType=Dem`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|Reflex=Yes`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Cnd,Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Definite=Def\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Iter\|Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Ine\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Cnd,Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ess\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Case=Sbl\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ess\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3\|VerbForm=PartPast`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Definite=Ind\|POS=DET\|PronType=Neg`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ter\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Def\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, 
`Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Cau\|Number=Sing\|POS=PROPN`, `Case=Abs\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Number=Sing\|POS=NOUN`, `Case=Ter\|Number=Plur\|POS=NOUN`, `Case=Tem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=INTJ`, `Case=Ine\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Number=Plur\|POS=VERB\|Person=1\|VerbForm=Inf\|Voice=Act`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=PROPN`, `Case=Ter\|Number=Sing\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Sbl\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Definite=Def\|POS=DET\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Acc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Definite=Ind\|Mood=Imp,Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Cau\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=2\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Sbl\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Abs\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Def\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Aspect=Iter\|Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ter\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Tem\|Number=Sing\|POS=NOUN`, 
`Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `POS=ADV\|PronType=Ind`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|POS=DET\|PronType=Int`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abs\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Del\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PROPN`, `Case=Abl\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Abs\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Definite=Def\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ela\|Number=Sing\|POS=PROPN`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Sbl\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Imp,Pot\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|POS=DET\|PronType=Tot`, `Definite=Def\|POS=DET\|PronType=Neg`, `Case=Ins\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Cau`, `Case=Sbl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Tra\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ess\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Sup\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPast`, 
`Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Degree=Cmp\|POS=ADV`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ela\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ins\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartFut`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=PartPast`, `Degree=Sup\|POS=ADV`, `Case=Acc\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ade\|Number=Plur\|POS=NOUN`, `Case=Acc\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=PartPres`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=All\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Cau\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psed]=Sing\|POS=ADJ`, `Case=Nom\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Ine\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Mood=Pot\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ade\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Definite=Def\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Ela\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Sbl\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Cau`, `Case=Ade\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=PartPres`, `Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Number=Plur\|POS=ADV\|Person=1\|PronType=PrsPron`, `POS=ADV\|PronType=v`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=ADV\|Person=3\|PronType=PrsPron`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Tem\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, 
`Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=ADV\|Person=1\|PronType=PrsPron`, `Case=Ter\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Sing\|POS=VERB\|Person=1\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=ADV\|Person=3\|PronType=PrsPron`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ter\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Sbl\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Del\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|NumType=Dist\|Number=Sing\|POS=NUM`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Acc\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=ADV\|Person=2\|PronType=PrsPron`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Cmp\|Number=Plur\|POS=ADJ`, 
`Case=Nom\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Del\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Sbl\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=All\|Number=Plur\|POS=PROPN`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Acc\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ine\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ade\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ins\|Number=Plur\|POS=PROPN`, `Case=Nom\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Definite=Def\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Degree=Pos\|POS=ADV\|PronType=Ind`, `Case=Ela\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|Voice=Act`, `Case=Ade\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Sup\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|POS=PROPN`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ins\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Ill\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, 
`Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=ADV\|Person=2\|PronType=PrsPron`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADV\|PronType=Dem`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Del\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=Sbl\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Tem\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Tem\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|NumType[sem]=Result\|Number=Sing\|POS=NUM`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Acc\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Tot`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Del\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Del\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Dat\|Degree=Cmp\|Number=Sing\|POS=ADJ`, 
`Definite=2\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Com\|Number=Sing\|POS=NOUN`, `Case=Tra\|Number=Plur\|POS=NOUN`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Case=Ade\|Number=Plur\|POS=PROPN`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Definite=Def\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Sbl\|NumType[sem]=Quotient\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Tem\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=1`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Gen\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Definite=Def\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Tem\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Tra\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Abs\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=All\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Ind`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Cas=1\|Number=Sing\|POS=PROPN`, 
`Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Sbl\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Sbl\|NumType[sem]=Result\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Ind\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Tot`, `Definite=Ind\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Cau\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Tot`, `Case=Abl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Tra\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Cau\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Sup\|Number=Plur\|POS=PROPN`, `Case=Ess\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Definite=Def\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dis\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abs\|Number=Plur\|POS=NOUN`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Tot`, `Case=Ine\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Tra\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, 
`Case=Ins\|Degree=Cmp\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sbl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ter\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Sup\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Sbl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Definite=Ind\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Gen\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=All\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Definite=2\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Abl\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Abs\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Case=Ade\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Del\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Number=Sing\|POS=VERB\|Person=2\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Cau\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ela\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sup\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, 
`Case=Cau\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Sbl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ter\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Tra\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Ind\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Tem\|Number=Plur\|POS=NOUN`, `Case=Abs\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ins\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Acc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=All\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Ade\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Tot`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Cau\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ess\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Cau\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Cas=6\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Tra\|NumType=Card\|Number=Sing\|POS=NUM`, `Number=Plur\|POS=VERB\|Person=2\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Cas=6\|Number=Sing\|POS=NOUN`, `Case=Ins\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Int`, 
`Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Del\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Tra\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Sbl\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Abl\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=All\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Tra\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|PronType=Dem`, `Case=Nom\|Degree=Cmp\|Number=Plur\|Number[psed]=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Cau\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Ind`, `Case=All\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Tem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Cau\|Number=Plur\|POS=PROPN`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Del\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Definite=Ind\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Definite=Def\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Sup\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Tra\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, 
`Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Tra\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ter\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ter\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|Voice=Act`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Cau\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Ins\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Del\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Tem\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Del\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ter\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sup\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Abs\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Sup\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Cau\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|NumType=Ord\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Sup\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Dat\|NumType=Ord\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=All\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Ine\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Ela\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Ade\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=All\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Dat\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=w`, `Case=Gen\|Number=Sing\|POS=SYM\|Type=w`, `Case=Abl\|Number=Sing\|POS=SYM\|Type=w`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ade\|Degree=Pos\|Number=Plur\|POS=ADJ`, 
`Case=All\|Number=Sing\|POS=SYM\|Type=w`, `Case=Tra\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psed]=Sing\|POS=ADJ`, `Case=Sup\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Sup\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Nom\|NumType[sem]=Quotient\|Number=Sing\|POS=NUM`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Ins\|Number=Sing\|Number[psed]=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Ine\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Abs\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=o`, `Case=Gen\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Sup\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|NumType[sem]=Signed\|Number=Sing\|POS=NUM`, `Case=Com\|Number=Sing\|POS=PROPN`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psed]=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=1\|Reflexive=Yes`, `Case=Nom\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ins\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Gen\|NumType=Dist\|Number=Sing\|POS=NUM`, `Case=Nom\|NumType[sem]=Formula\|Number=Sing\|POS=NUM`, `Case=Del\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Rel`, `Case=Ine\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=o`, `Case=Ins\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ela\|Number=Sing\|POS=SYM\|Type=o`, `Case=Dat\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=All\|Number=Plur\|Number[psed]=Sing\|POS=SYM\|Type=w`, `Case=Ade\|Number=Sing\|POS=SYM\|Type=w`, `Case=Sbl\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ade\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ill\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Sup\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Sup\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=ADJ`, 
`Case=Gen\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ins\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Ela\|Number=Sing\|POS=SYM\|Type=w`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=p`, `Case=Abl\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|NumType[sem]=Measure\|Number=Sing\|POS=NUM`, `Case=Abs\|Number=Sing\|POS=PROPN`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psed]=Plur\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=SYM\|Type=m`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=m`, `Case=Sup\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ine\|Number=Plur\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=SYM\|Type=o`, `Case=Ins\|Number=Sing\|POS=SYM\|Type=o`, `Case=Ins\|Number=Sing\|POS=SYM\|Type=w`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Number=Sing\|Number[psed]=Plur\|POS=NOUN`, `Case=Gen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Sbl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Abl\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Abs\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Abl\|NumType[sem]=Time\|Number=Sing\|POS=NUM`, `Case=Gen\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Abs\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Sup\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Sup\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Abs\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Acc\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Acc\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ter\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Acc\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=Ter\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ade\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ins\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=Ins\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Gen\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Dat\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Sbl\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=Ine\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, `Case=All\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ade\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Nom\|NumType[sem]=Percent\|Number=Sing\|POS=NUM`, 
`Case=All\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Abl\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ter\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Acc\|NumType=Card\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ter\|NumType[sem]=Formula\|Number=Sing\|POS=NUM`, `Case=Sbl\|NumType[sem]=Percent\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Del\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Cau\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ins\|NumType=Ord\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|NumType=Frac\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ine\|NumType=Frac\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Sup\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Tra\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Ine\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Tra\|Number=Sing\|Number[psed]=Sing\|POS=NOUN`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Gen\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Tem\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Dat\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Sbl\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=All\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=All\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Sbl\|Number=Plur\|POS=PROPN`, `Case=Tra\|Number=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Sup\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Dat\|Number=Sing\|POS=SYM\|Type=w`, `Case=Ill\|Number=Plur\|POS=PROPN`, `Case=Loc\|Number=Sing\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Degree=Pos\|Number=Plur\|Number[psed]=Sing\|POS=ADJ`, `Case=Abl\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=All\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Ade\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=SYM\|Type=w`, `Case=Cau\|NumType=Frac\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Abs\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Sbl\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Tem\|Number=Sing\|POS=PROPN`, `Case=Del\|NumType[sem]=Dot\|Number=Sing\|POS=NUM`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Tot`, `Case=Acc\|Degree=Sup\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Number=Plur\|Number[psed]=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, 
`Case=Acc\|Number=Plur\|Number[psed]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Del\|Number=Plur\|Number[psed]=Sing\|POS=NOUN`, `Case=Nom\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Dat\|Number=Plur\|POS=PROPN`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Sbl\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ter\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Sup\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Tot`, `Case=Gen\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rcp`, `Definite=Ind\|Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Voice=Act`, `Case=Tra\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ins\|NumType=Card\|Number=Plur\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Case=Del\|Number=Sing\|POS=PRON\|Person=2\|Reflexive=Yes`, `Case=Sbl\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=1`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Ind`, `Case=All\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Sbl\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ill\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Ine\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=3`, `Case=Del\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Sing\|Number[psed]=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2\|PronType=Tot`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Int`, `Case=Ine\|Number=Sing\|Number[psed]=Sing\|POS=PROPN`, `Case=Cau\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Del\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Cau\|Number=Sing\|POS=PRON\|Person=3\|Reflexive=Yes`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person[psor]=2`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ine\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Definite=2\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ela\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=SYM\|Type=p`, `Case=Abl\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Ine\|Number=Plur\|POS=PROPN`, `Case=Sbl\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Tot`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3\|Poss=Yes\|PronType=Prs`, `Case=Ter\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, 
`Case=All\|Number=Sing\|Number[psed]=Sing\|POS=PRON\|Person=3\|PronType=Tot` |
- | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `advmod:locy`, `advmod:mode`, `advmod:que`, `advmod:tfrom`, `advmod:tlocy`, `advmod:to`, `advmod:tto`, `amod:att`, `appos`, `aux`, `case`, `cc`, `ccomp`, `ccomp:obj`, `ccomp:obl`, `ccomp:pred`, `compound`, `compound:preverb`, `conj`, `cop`, `csubj`, `dep`, `det`, `flat:name`, `iobj`, `list`, `mark`, `nmod`, `nmod:att`, `nmod:obl`, `nsubj`, `nummod`, `obj`, `obj:lvc`, `obl`, `orphan`, `parataxis`, `punct`, `xcomp` |
- | **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
-
- </details>
-
- ### Accuracy
-
- | Type | Score |
- | --- | --- |
- | `TOKEN_ACC` | 99.99 |
- | `TOKEN_P` | 99.86 |
- | `TOKEN_R` | 99.93 |
- | `TOKEN_F` | 99.89 |
- | `SENTS_P` | 98.43 |
- | `SENTS_R` | 98.00 |
- | `SENTS_F` | 98.21 |
- | `TAG_ACC` | 96.89 |
- | `POS_ACC` | 96.70 |
- | `MORPH_ACC` | 93.63 |
- | `MORPH_MICRO_P` | 97.03 |
- | `MORPH_MICRO_R` | 95.98 |
- | `MORPH_MICRO_F` | 96.50 |
- | `LEMMA_ACC` | 97.60 |
- | `DEP_UAS` | 83.32 |
- | `DEP_LAS` | 76.92 |
- | `ENTS_P` | 87.15 |
- | `ENTS_R` | 85.94 |
- | `ENTS_F` | 86.54 |
 
  metrics:
  - name: NER Precision
  type: precision
+ value: 0.8701321586
  - name: NER Recall
  type: recall
+ value: 0.8681434599
  - name: NER F Score
  type: f_score
+ value: 0.8691366717
  - task:
  name: TAG
  type: token-classification
  metrics:
  - name: TAG (XPOS) Accuracy
  type: accuracy
+ value: 0.9677481099
  - task:
  name: POS
  type: token-classification
  metrics:
  - name: POS (UPOS) Accuracy
  type: accuracy
+ value: 0.966025457
  - task:
  name: MORPH
  type: token-classification
  metrics:
  - name: Morph (UFeats) Accuracy
  type: accuracy
+ value: 0.9340606757
  - task:
  name: LEMMA
  type: token-classification
  metrics:
  - name: Lemma Accuracy
  type: accuracy
+ value: 0.9761745288
  - task:
  name: UNLABELED_DEPENDENCIES
  type: token-classification
  metrics:
  - name: Unlabeled Attachment Score (UAS)
  type: f_score
+ value: 0.8434612153
  - task:
  name: LABELED_DEPENDENCIES
  type: token-classification
  metrics:
  - name: Labeled Attachment Score (LAS)
  type: f_score
+ value: 0.78125
  - task:
  name: SENTS
  type: token-classification
  metrics:
  - name: Sentences F-Score
  type: f_score
+ value: 0.9866071429
  ---
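For a quick sanity check of the rebuilt package, here is a minimal usage sketch. It assumes the wheel produced by this commit (or the published `hu_core_news_lg` package) is already installed in the current environment; the example sentence is arbitrary.

```python
import spacy

# Loading the installed package also registers the custom component factories
# shipped with it (see the component modules later in this commit), so the
# saved pipeline should deserialize without any extra setup.
nlp = spacy.load("hu_core_news_lg")

doc = nlp("A HuSpaCy egy Szegeden fejlesztett magyar nyelvfeldolgozó eszköz.")
print([(tok.text, tok.lemma_, tok.pos_) for tok in doc])
print([(ent.text, ent.label_) for ent in doc.ents])
```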
 
config.cfg CHANGED
@@ -1,8 +1,8 @@
  [paths]
- parser_model = "models/hu_core_news_lg-parser-3.7.0/model-best"
- ner_model = "models/hu_core_news_lg-ner-3.7.0/model-best"
- lemmatizer_lookups = "models/hu_core_news_lg-lookup-lemmatizer-3.7.0"
- tagger_model = "models/hu_core_news_lg-tagger-3.7.0/model-best"
  train = null
  dev = null
  vectors = null

  [paths]
+ parser_model = "models/hu_core_news_lg-parser-3.8.0/model-best"
+ ner_model = "models/hu_core_news_lg-ner-3.8.0/model-best"
+ lemmatizer_lookups = "models/hu_core_news_lg-lookup-lemmatizer-3.8.0"
+ tagger_model = "models/hu_core_news_lg-tagger-3.8.0/model-best"
  train = null
  dev = null
  vectors = null
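For context, the `[paths]` values above are resolved when the pipeline config is loaded or assembled. Below is a hedged sketch of overriding one of them programmatically with spaCy's config loader; the override value simply mirrors a path from this diff, and the snippet is illustrative rather than part of the commit.

```python
from spacy import util

# Load config.cfg and override a source-model path without editing the file;
# interpolate=True resolves ${paths.*} references in the rest of the config.
config = util.load_config(
    "config.cfg",
    overrides={"paths.tagger_model": "models/hu_core_news_lg-tagger-3.8.0/model-best"},
    interpolate=True,
)
print(config["paths"])
```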
edit_tree_lemmatizer.py CHANGED
@@ -1,465 +1,465 @@
1
+ from functools import lru_cache
2
+
3
+ from typing import cast, Any, Callable, Dict, Iterable, List, Optional
4
+ from typing import Sequence, Tuple, Union
5
+ from collections import Counter
6
+ from copy import deepcopy
7
+ from itertools import islice
8
+ import numpy as np
9
+
10
+ import srsly
11
+ from thinc.api import Config, Model, SequenceCategoricalCrossentropy, NumpyOps
12
+ from thinc.types import Floats2d, Ints2d
13
+
14
+ from spacy.pipeline._edit_tree_internals.edit_trees import EditTrees
15
+ from spacy.pipeline._edit_tree_internals.schemas import validate_edit_tree
16
+ from spacy.pipeline.lemmatizer import lemmatizer_score
17
+ from spacy.pipeline.trainable_pipe import TrainablePipe
18
+ from spacy.errors import Errors
19
+ from spacy.language import Language
20
+ from spacy.tokens import Doc, Token
21
+ from spacy.training import Example, validate_examples, validate_get_examples
22
+ from spacy.vocab import Vocab
23
+ from spacy import util
24
+
25
+
26
+ TOP_K_GUARDRAIL = 20
27
+
28
+
29
+ default_model_config = """
30
+ [model]
31
+ @architectures = "spacy.Tagger.v2"
32
+
33
+ [model.tok2vec]
34
+ @architectures = "spacy.HashEmbedCNN.v2"
35
+ pretrained_vectors = null
36
+ width = 96
37
+ depth = 4
38
+ embed_size = 2000
39
+ window_size = 1
40
+ maxout_pieces = 3
41
+ subword_features = true
42
+ """
43
+ DEFAULT_EDIT_TREE_LEMMATIZER_MODEL = Config().from_str(default_model_config)["model"]
44
+
45
+
46
+ @Language.factory(
47
+ "trainable_lemmatizer_v2",
48
+ assigns=["token.lemma"],
49
+ requires=[],
50
+ default_config={
51
+ "model": DEFAULT_EDIT_TREE_LEMMATIZER_MODEL,
52
+ "backoff": "orth",
53
+ "min_tree_freq": 3,
54
+ "overwrite": False,
55
+ "top_k": 1,
56
+ "overwrite_labels": True,
57
+ "scorer": {"@scorers": "spacy.lemmatizer_scorer.v1"},
58
+ },
59
+ default_score_weights={"lemma_acc": 1.0},
60
+ )
61
+ def make_edit_tree_lemmatizer(
62
+ nlp: Language,
63
+ name: str,
64
+ model: Model,
65
+ backoff: Optional[str],
66
+ min_tree_freq: int,
67
+ overwrite: bool,
68
+ top_k: int,
69
+ overwrite_labels: bool,
70
+ scorer: Optional[Callable],
71
+ ):
72
+ """Construct an EditTreeLemmatizer component."""
73
+ return EditTreeLemmatizer(
74
+ nlp.vocab,
75
+ model,
76
+ name,
77
+ backoff=backoff,
78
+ min_tree_freq=min_tree_freq,
79
+ overwrite=overwrite,
80
+ top_k=top_k,
81
+ overwrite_labels=overwrite_labels,
82
+ scorer=scorer,
83
+ )
84
+
85
+
86
+ # _f = open("lemmatizer.log", "w")
87
+ # def debug(*args):
88
+ # _f.write(" ".join(args) + "\n")
89
+ def debug(*args):
90
+ pass
91
+
92
+
93
+ class EditTreeLemmatizer(TrainablePipe):
94
+ """
95
+ Lemmatizer that lemmatizes each word using a predicted edit tree.
96
+ """
97
+
98
+ def __init__(
99
+ self,
100
+ vocab: Vocab,
101
+ model: Model,
102
+ name: str = "trainable_lemmatizer",
103
+ *,
104
+ backoff: Optional[str] = "orth",
105
+ min_tree_freq: int = 3,
106
+ overwrite: bool = False,
107
+ top_k: int = 1,
108
+ overwrite_labels,
109
+ scorer: Optional[Callable] = lemmatizer_score,
110
+ ):
111
+ """
112
+ Construct an edit tree lemmatizer.
113
+
114
+ backoff (Optional[str]): backoff to use when the predicted edit trees
115
+ are not applicable. Must be an attribute of Token or None (leave the
116
+ lemma unset).
117
+ min_tree_freq (int): prune trees that are applied less than this
118
+ frequency in the training data.
119
+ overwrite (bool): overwrite existing lemma annotations.
120
+ top_k (int): try to apply at most the k most probable edit trees.
121
+ """
122
+ self.vocab = vocab
123
+ self.model = model
124
+ self.name = name
125
+ self.backoff = backoff
126
+ self.min_tree_freq = min_tree_freq
127
+ self.overwrite = overwrite
128
+ self.top_k = top_k
129
+ self.overwrite_labels = overwrite_labels
130
+
131
+ self.trees = EditTrees(self.vocab.strings)
132
+ self.tree2label: Dict[int, int] = {}
133
+
134
+ self.cfg: Dict[str, Any] = {"labels": []}
135
+ self.scorer = scorer
136
+ self.numpy_ops = NumpyOps()
137
+
138
+ def get_loss(
139
+ self, examples: Iterable[Example], scores: List[Floats2d]
140
+ ) -> Tuple[float, List[Floats2d]]:
141
+ validate_examples(examples, "EditTreeLemmatizer.get_loss")
142
+ loss_func = SequenceCategoricalCrossentropy(normalize=False, missing_value=-1)
143
+
144
+ truths = []
145
+ for eg in examples:
146
+ eg_truths = []
147
+ for (predicted, gold_lemma, gold_pos, gold_sent_start) in zip(
148
+ eg.predicted,
149
+ eg.get_aligned("LEMMA", as_string=True),
150
+ eg.get_aligned("POS", as_string=True),
151
+ eg.get_aligned_sent_starts(),
152
+ ):
153
+ if gold_lemma is None:
154
+ label = -1
155
+ else:
156
+ form = self._get_true_cased_form(
157
+ predicted.text, gold_sent_start, gold_pos
158
+ )
159
+ tree_id = self.trees.add(form, gold_lemma)
160
+ # debug(f"@get_loss: {predicted}/{gold_pos}[{gold_sent_start}]->{form}|{gold_lemma}[{tree_id}]")
161
+ label = self.tree2label.get(tree_id, 0)
162
+ eg_truths.append(label)
163
+
164
+ truths.append(eg_truths)
165
+
166
+ d_scores, loss = loss_func(scores, truths)
167
+ if self.model.ops.xp.isnan(loss):
168
+ raise ValueError(Errors.E910.format(name=self.name))
169
+
170
+ return float(loss), d_scores
171
+
172
+ def predict(self, docs: Iterable[Doc]) -> List[Ints2d]:
173
+ if self.top_k == 1:
174
+ scores2guesses = self._scores2guesses_top_k_equals_1
175
+ elif self.top_k <= TOP_K_GUARDRAIL:
176
+ scores2guesses = self._scores2guesses_top_k_greater_1
177
+ else:
178
+ scores2guesses = self._scores2guesses_top_k_guardrail
179
+ # The behaviour of *_scores2guesses_top_k_greater_1()* is efficient for values
180
+ # of *top_k>1* that are likely to be useful when the edit tree lemmatizer is used
181
+ # for its principal purpose of lemmatizing tokens. However, the code could also
182
+ # be used for other purposes, and with very large values of *top_k* the method
183
+ # becomes inefficient. In such cases, *_scores2guesses_top_k_guardrail()* is used
184
+ # instead.
185
+ n_docs = len(list(docs))
186
+ if not any(len(doc) for doc in docs):
187
+ # Handle cases where there are no tokens in any docs.
188
+ n_labels = len(self.cfg["labels"])
189
+ guesses: List[Ints2d] = [self.model.ops.alloc2i(0, n_labels) for _ in docs]
190
+ assert len(guesses) == n_docs
191
+ return guesses
192
+ scores = self.model.predict(docs)
193
+ assert len(scores) == n_docs
194
+ guesses = scores2guesses(docs, scores)
195
+ assert len(guesses) == n_docs
196
+ return guesses
197
+
198
+ def _scores2guesses_top_k_equals_1(self, docs, scores):
199
+ guesses = []
200
+ for doc, doc_scores in zip(docs, scores):
201
+ doc_guesses = doc_scores.argmax(axis=1)
202
+ doc_guesses = self.numpy_ops.asarray(doc_guesses)
203
+
204
+ doc_compat_guesses = []
205
+ for i, token in enumerate(doc):
206
+ tree_id = self.cfg["labels"][doc_guesses[i]]
207
+ form: str = self._get_true_cased_form_of_token(token)
208
+ if self.trees.apply(tree_id, form) is not None:
209
+ doc_compat_guesses.append(tree_id)
210
+ else:
211
+ doc_compat_guesses.append(-1)
212
+ guesses.append(np.array(doc_compat_guesses))
213
+
214
+ return guesses
215
+
216
+ def _scores2guesses_top_k_greater_1(self, docs, scores):
217
+ guesses = []
218
+ top_k = min(self.top_k, len(self.labels))
219
+ for doc, doc_scores in zip(docs, scores):
220
+ doc_scores = self.numpy_ops.asarray(doc_scores)
221
+ doc_compat_guesses = []
222
+ for i, token in enumerate(doc):
223
+ for _ in range(top_k):
224
+ candidate = int(doc_scores[i].argmax())
225
+ candidate_tree_id = self.cfg["labels"][candidate]
226
+ form: str = self._get_true_cased_form_of_token(token)
227
+ if self.trees.apply(candidate_tree_id, form) is not None:
228
+ doc_compat_guesses.append(candidate_tree_id)
229
+ break
230
+ doc_scores[i, candidate] = np.finfo(np.float32).min
231
+ else:
232
+ doc_compat_guesses.append(-1)
233
+ guesses.append(np.array(doc_compat_guesses))
234
+
235
+ return guesses
236
+
237
+ def _scores2guesses_top_k_guardrail(self, docs, scores):
238
+ guesses = []
239
+ for doc, doc_scores in zip(docs, scores):
240
+ doc_guesses = np.argsort(doc_scores)[..., : -self.top_k - 1 : -1]
241
+ doc_guesses = self.numpy_ops.asarray(doc_guesses)
242
+
243
+ doc_compat_guesses = []
244
+ for token, candidates in zip(doc, doc_guesses):
245
+ tree_id = -1
246
+ for candidate in candidates:
247
+ candidate_tree_id = self.cfg["labels"][candidate]
248
+
249
+ form: str = self._get_true_cased_form_of_token(token)
250
+
251
+ if self.trees.apply(candidate_tree_id, form) is not None:
252
+ tree_id = candidate_tree_id
253
+ break
254
+ doc_compat_guesses.append(tree_id)
255
+
256
+ guesses.append(np.array(doc_compat_guesses))
257
+
258
+ return guesses
259
+
260
+ def set_annotations(self, docs: Iterable[Doc], batch_tree_ids):
261
+ for i, doc in enumerate(docs):
262
+ doc_tree_ids = batch_tree_ids[i]
263
+ if hasattr(doc_tree_ids, "get"):
264
+ doc_tree_ids = doc_tree_ids.get()
265
+ for j, tree_id in enumerate(doc_tree_ids):
266
+ if self.overwrite or doc[j].lemma == 0:
267
+ # If no applicable tree could be found during prediction,
268
+ # the special identifier -1 is used. Otherwise the tree
269
+ # is guaranteed to be applicable.
270
+ if tree_id == -1:
271
+ if self.backoff is not None:
272
+ doc[j].lemma = getattr(doc[j], self.backoff)
273
+ else:
274
+ form = self._get_true_cased_form_of_token(doc[j])
275
+ lemma = self.trees.apply(tree_id, form) or form
276
+ # debug(f"@set_annotations: {doc[j]}/{doc[j].pos_}[{doc[j].is_sent_start}]->{form}|{lemma}[{tree_id}]")
277
+ doc[j].lemma_ = lemma
278
+
279
+ @property
280
+ def labels(self) -> Tuple[int, ...]:
281
+ """Returns the labels currently added to the component."""
282
+ return tuple(self.cfg["labels"])
283
+
284
+ @property
285
+ def hide_labels(self) -> bool:
286
+ return True
287
+
288
+ @property
289
+ def label_data(self) -> Dict:
290
+ trees = []
291
+ for tree_id in range(len(self.trees)):
292
+ tree = self.trees[tree_id]
293
+ if "orig" in tree:
294
+ tree["orig"] = self.vocab.strings[tree["orig"]]
295
+ if "subst" in tree:
296
+ tree["subst"] = self.vocab.strings[tree["subst"]]
297
+ trees.append(tree)
298
+ return dict(trees=trees, labels=tuple(self.cfg["labels"]))
299
+
300
+ def initialize(
301
+ self,
302
+ get_examples: Callable[[], Iterable[Example]],
303
+ *,
304
+ nlp: Optional[Language] = None,
305
+ labels: Optional[Dict] = None,
306
+ ):
307
+ validate_get_examples(get_examples, "EditTreeLemmatizer.initialize")
308
+
309
+ if self.overwrite_labels:
310
+ if labels is None:
311
+ self._labels_from_data(get_examples)
312
+ else:
313
+ self._add_labels(labels)
314
+
315
+ # Sample for the model.
316
+ doc_sample = []
317
+ label_sample = []
318
+ for example in islice(get_examples(), 10):
319
+ doc_sample.append(example.x)
320
+ gold_labels: List[List[float]] = []
321
+ for token in example.reference:
322
+ if token.lemma == 0:
323
+ gold_label = None
324
+ else:
325
+ gold_label = self._pair2label(token.text, token.lemma_)
326
+
327
+ gold_labels.append(
328
+ [
329
+ 1.0 if label == gold_label else 0.0
330
+ for label in self.cfg["labels"]
331
+ ]
332
+ )
333
+
334
+ gold_labels = cast(Floats2d, gold_labels)
335
+ label_sample.append(self.model.ops.asarray(gold_labels, dtype="float32"))
336
+
337
+ self._require_labels()
338
+ assert len(doc_sample) > 0, Errors.E923.format(name=self.name)
339
+ assert len(label_sample) > 0, Errors.E923.format(name=self.name)
340
+
341
+ self.model.initialize(X=doc_sample, Y=label_sample)
342
+
343
+ def from_bytes(self, bytes_data, *, exclude=tuple()):
344
+ deserializers = {
345
+ "cfg": lambda b: self.cfg.update(srsly.json_loads(b)),
346
+ "model": lambda b: self.model.from_bytes(b),
347
+ "vocab": lambda b: self.vocab.from_bytes(b, exclude=exclude),
348
+ "trees": lambda b: self.trees.from_bytes(b),
349
+ }
350
+
351
+ util.from_bytes(bytes_data, deserializers, exclude)
352
+
353
+ return self
354
+
355
+ def to_bytes(self, *, exclude=tuple()):
356
+ serializers = {
357
+ "cfg": lambda: srsly.json_dumps(self.cfg),
358
+ "model": lambda: self.model.to_bytes(),
359
+ "vocab": lambda: self.vocab.to_bytes(exclude=exclude),
360
+ "trees": lambda: self.trees.to_bytes(),
361
+ }
362
+
363
+ return util.to_bytes(serializers, exclude)
364
+
365
+ def to_disk(self, path, exclude=tuple()):
366
+ path = util.ensure_path(path)
367
+ serializers = {
368
+ "cfg": lambda p: srsly.write_json(p, self.cfg),
369
+ "model": lambda p: self.model.to_disk(p),
370
+ "vocab": lambda p: self.vocab.to_disk(p, exclude=exclude),
371
+ "trees": lambda p: self.trees.to_disk(p),
372
+ }
373
+ util.to_disk(path, serializers, exclude)
374
+
375
+ def from_disk(self, path, exclude=tuple()):
376
+ def load_model(p):
377
+ try:
378
+ with open(p, "rb") as mfile:
379
+ self.model.from_bytes(mfile.read())
380
+ except AttributeError:
381
+ raise ValueError(Errors.E149) from None
382
+
383
+ deserializers = {
384
+ "cfg": lambda p: self.cfg.update(srsly.read_json(p)),
385
+ "model": load_model,
386
+ "vocab": lambda p: self.vocab.from_disk(p, exclude=exclude),
387
+ "trees": lambda p: self.trees.from_disk(p),
388
+ }
389
+
390
+ util.from_disk(path, deserializers, exclude)
391
+ return self
392
+
393
+ def _add_labels(self, labels: Dict):
394
+ if "labels" not in labels:
395
+ raise ValueError(Errors.E857.format(name="labels"))
396
+ if "trees" not in labels:
397
+ raise ValueError(Errors.E857.format(name="trees"))
398
+
399
+ self.cfg["labels"] = list(labels["labels"])
400
+ trees = []
401
+ for tree in labels["trees"]:
402
+ errors = validate_edit_tree(tree)
403
+ if errors:
404
+ raise ValueError(Errors.E1026.format(errors="\n".join(errors)))
405
+
406
+ tree = dict(tree)
407
+ if "orig" in tree:
408
+ tree["orig"] = self.vocab.strings[tree["orig"]]
409
+ if "orig" in tree:
410
+ tree["subst"] = self.vocab.strings[tree["subst"]]
411
+
412
+ trees.append(tree)
413
+
414
+ self.trees.from_json(trees)
415
+
416
+ for label, tree in enumerate(self.labels):
417
+ self.tree2label[tree] = label
418
+
419
+ def _labels_from_data(self, get_examples: Callable[[], Iterable[Example]]):
420
+ # Count corpus tree frequencies in ad-hoc storage to avoid cluttering
421
+ # the final pipe/string store.
422
+ vocab = Vocab()
423
+ trees = EditTrees(vocab.strings)
424
+ tree_freqs: Counter = Counter()
425
+ repr_pairs: Dict = {}
426
+ for example in get_examples():
427
+ for token in example.reference:
428
+ if token.lemma != 0:
429
+ form = self._get_true_cased_form_of_token(token)
430
+ # debug("_labels_from_data", str(token) + "->" + form, token.lemma_)
431
+ tree_id = trees.add(form, token.lemma_)
432
+ tree_freqs[tree_id] += 1
433
+ repr_pairs[tree_id] = (form, token.lemma_)
434
+
435
+ # Construct trees that make the frequency cut-off using representative
436
+ # form - token pairs.
437
+ for tree_id, freq in tree_freqs.items():
438
+ if freq >= self.min_tree_freq:
439
+ form, lemma = repr_pairs[tree_id]
440
+ self._pair2label(form, lemma, add_label=True)
441
+
442
+ @lru_cache()
443
+ def _get_true_cased_form(self, token: str, is_sent_start: bool, pos: str) -> str:
444
+ if is_sent_start and pos != "PROPN":
445
+ return token.lower()
446
+ else:
447
+ return token
448
+
449
+ def _get_true_cased_form_of_token(self, token: Token) -> str:
450
+ return self._get_true_cased_form(token.text, token.is_sent_start, token.pos_)
451
+
452
+ def _pair2label(self, form, lemma, add_label=False):
453
+ """
454
+ Look up the edit tree identifier for a form/label pair. If the edit
455
+ tree is unknown and "add_label" is set, the edit tree will be added to
456
+ the labels.
457
+ """
458
+ tree_id = self.trees.add(form, lemma)
459
+ if tree_id not in self.tree2label:
460
+ if not add_label:
461
+ return None
462
+
463
+ self.tree2label[tree_id] = len(self.cfg["labels"])
464
+ self.cfg["labels"].append(tree_id)
465
+ return self.tree2label[tree_id]
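To illustrate how the factory defined above can be used outside the packaged pipeline, a small hypothetical sketch follows. The shipped model wires this component up through its own config instead, and the `edit_tree_lemmatizer` import assumes the module is importable from the working directory.

```python
from spacy.lang.hu import Hungarian

import edit_tree_lemmatizer  # noqa: F401  -- importing registers "trainable_lemmatizer_v2"

nlp = Hungarian()
# top_k > 1 lets the component fall back to the next most probable edit tree
# when the best-scoring one is not applicable to the token's form.
nlp.add_pipe("trainable_lemmatizer_v2", config={"top_k": 3, "backoff": "orth"})

# The component must still be initialized and trained on Example objects
# (nlp.initialize(...), nlp.update(...)) before it produces useful lemmas.
```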
hu_core_news_lg-any-py3-none-any.whl CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:347c1c54a8de2254f6ec40a1563a940400a34ba76a39261f2a980a59311e9402
- size 401417225

  version https://git-lfs.github.com/spec/v1
+ oid sha256:803471467921b9db7e77c48ac7bdf9a57c08949427a9dd957a1655cbbdf65adc
+ size 401509282
lemma_postprocessing.py CHANGED
@@ -1,113 +1,113 @@
1
+ """
2
+ This module contains various rule-based components aiming to improve on baseline lemmatization tools.
3
+ """
4
+
5
+ import re
6
+ from typing import List, Callable
7
+
8
+ from spacy.lang.hu import Hungarian
9
+ from spacy.pipeline import Pipe
10
+ from spacy.tokens import Token
11
+ from spacy.tokens.doc import Doc
12
+
13
+
14
+ @Hungarian.component(
15
+ "lemma_case_smoother",
16
+ assigns=["token.lemma"],
17
+ requires=["token.lemma", "token.pos"],
18
+ )
19
+ def lemma_case_smoother(doc: Doc) -> Doc:
20
+ """Smooth lemma casing by POS.
21
+
22
+ DEPRECATED: This is not needed anymore, as the lemmatizer is now case-insensitive.
23
+
24
+ Args:
25
+ doc (Doc): Input document.
26
+
27
+ Returns:
28
+ Doc: Output document.
29
+ """
30
+ for token in doc:
31
+ if token.is_sent_start and token.tag_ != "PROPN":
32
+ token.lemma_ = token.lemma_.lower()
33
+
34
+ return doc
35
+
36
+
37
+ class LemmaSmoother(Pipe):
38
+ """Smooths lemma by fixing common errors of the edit-tree lemmatizer."""
39
+
40
+ _DATE_PATTERN = re.compile(r"(\d+)-j?[éá]?n?a?(t[őó]l)?")
41
+ _NUMBER_PATTERN = re.compile(r"(\d+([-,/_.:]?(._)?\d+)*%?)")
42
+
43
+ # noinspection PyUnusedLocal
44
+ @staticmethod
45
+ @Hungarian.factory("lemma_smoother", assigns=["token.lemma"], requires=["token.lemma", "token.pos"])
46
+ def create_lemma_smoother(nlp: Hungarian, name: str) -> "LemmaSmoother":
47
+ return LemmaSmoother()
48
+
49
+ def __call__(self, doc: Doc) -> Doc:
50
+ rules: List[Callable] = [
51
+ self._remove_exclamation_marks,
52
+ self._remove_question_marks,
53
+ self._remove_date_suffixes,
54
+ self._remove_suffix_after_numbers,
55
+ ]
56
+
57
+ for token in doc:
58
+ for rule in rules:
59
+ rule(token)
60
+
61
+ return doc
62
+
63
+ @classmethod
64
+ def _remove_exclamation_marks(cls, token: Token) -> None:
65
+ """Removes exclamation marks from the lemma.
66
+
67
+ Args:
68
+ token (Token): The original token.
69
+ """
70
+
71
+ if "!" != token.lemma_:
72
+ exclamation_mark_index = token.lemma_.find("!")
73
+ if exclamation_mark_index != -1:
74
+ token.lemma_ = token.lemma_[:exclamation_mark_index]
75
+
76
+ @classmethod
77
+ def _remove_question_marks(cls, token: Token) -> None:
78
+ """Removes question marks from the lemma.
79
+
80
+ Args:
81
+ token (Token): The original token.
82
+ """
83
+
84
+ if "?" != token.lemma_:
85
+ question_mark_index = token.lemma_.find("?")
86
+ if question_mark_index != -1:
87
+ token.lemma_ = token.lemma_[:question_mark_index]
88
+
89
+ @classmethod
90
+ def _remove_date_suffixes(cls, token: Token) -> None:
91
+ """Fixes the suffixes of dates.
92
+
93
+ Args:
94
+ token (Token): The original token.
95
+ """
96
+
97
+ if token.pos_ == "NOUN":
98
+ match = cls._DATE_PATTERN.match(token.lemma_)
99
+ if match is not None:
100
+ token.lemma_ = match.group(1) + "."
101
+
102
+ @classmethod
103
+ def _remove_suffix_after_numbers(cls, token: Token) -> None:
104
+ """Removes suffixes after numbers.
105
+
106
+ Args:
107
+ token (str): The original token.
108
+ """
109
+
110
+ if token.pos_ == "NUM":
111
+ match = cls._NUMBER_PATTERN.match(token.text)
112
+ if match is not None:
113
+ token.lemma_ = match.group(0)
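
For context, the two components above are registered on the `Hungarian` language class, so they can be added to a loaded pipeline by name. A minimal sketch, assuming the packaged `hu_core_news_lg` model is installed (the sample sentence and pipe placement are illustrative):

```python
# Minimal sketch: attach the rule-based smoother to a loaded pipeline.
# Assumes hu_core_news_lg is installed; loading the package runs this module,
# which registers the "lemma_smoother" factory on the Hungarian class.
import spacy

nlp = spacy.load("hu_core_news_lg")

if "lemma_smoother" not in nlp.pipe_names:
    # "trainable_lemmatizer" is assumed to be the pipe name of the edit-tree
    # lemmatizer in this pipeline; the smoother post-processes its output.
    nlp.add_pipe("lemma_smoother", after="trainable_lemmatizer")

doc = nlp("A bevétel 12%-kal nőtt 2024-ben.")
print([(token.text, token.lemma_, token.pos_) for token in doc])
```
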
lookup_lemmatizer.py CHANGED
@@ -1,132 +1,132 @@
1
- import re
2
- from collections import defaultdict
3
- from operator import itemgetter
4
- from pathlib import Path
5
- from re import Pattern
6
- from typing import Optional, Callable, Iterable, Dict, Tuple
7
-
8
- from spacy.lang.hu import Hungarian
9
- from spacy.language import Language
10
- from spacy.lookups import Lookups, Table
11
- from spacy.pipeline import Pipe
12
- from spacy.pipeline.lemmatizer import lemmatizer_score
13
- from spacy.tokens import Token
14
- from spacy.tokens.doc import Doc
15
-
16
- # noinspection PyUnresolvedReferences
17
- from spacy.training.example import Example
18
- from spacy.util import ensure_path
19
-
20
-
21
- class LookupLemmatizer(Pipe):
22
- """
23
- LookupLemmatizer learns `(token, pos, morph. feat) -> lemma` mappings during training and applies them at prediction
24
- time.
25
- """
26
-
27
- _number_pattern: Pattern = re.compile(r"\d")
28
-
29
- # noinspection PyUnusedLocal
30
- @staticmethod
31
- @Hungarian.factory(
32
- "lookup_lemmatizer",
33
- assigns=["token.lemma"],
34
- requires=["token.pos"],
35
- default_config={"scorer": {"@scorers": "spacy.lemmatizer_scorer.v1"}, "source": ""},
36
- )
37
- def create(nlp: Language, name: str, scorer: Optional[Callable], source: str) -> "LookupLemmatizer":
38
- return LookupLemmatizer(None, source, scorer)
39
-
40
- def train(self, sentences: Iterable[Iterable[Tuple[str, str, str, str]]], min_occurrences: int = 1) -> None:
41
- """
42
-
43
- Args:
44
- sentences (Iterable[Iterable[Tuple[str, str, str, str]]]): Sentences to learn the mappings from
45
- min_occurrences (int): mappings occurring fewer times than this threshold are not learned
46
-
47
- """
48
-
49
- # Lookup table which maps (form, upos + morph. feats) to (lemma -> frequency),
50
- # e.g. `{ ("alma", "NOUN"): { "alma" : 99, "alom": 1} }`
51
- lemma_lookup_table: Dict[Tuple[str, str], Dict[str, int]] = defaultdict(lambda: defaultdict(int))
52
-
53
- for sentence in sentences:
54
- for token, pos, feats, lemma in sentence:
55
- token = self.__mask_numbers(token)
56
- lemma = self.__mask_numbers(lemma)
57
- feats_str = ("|" + feats) if feats else ""
58
- key = (token, pos + feats_str)
59
- lemma_lookup_table[key][lemma] += 1
60
- lemma_lookup_table = dict(lemma_lookup_table)
61
-
62
- self._lookups = Lookups()
63
- table = Table(name="lemma_lookups")
64
-
65
- lemma_freq: Dict[str, int]
66
- for (form, pos), lemma_freq in dict(lemma_lookup_table).items():
67
- most_freq_lemma, freq = sorted(lemma_freq.items(), key=itemgetter(1), reverse=True)[0]
68
- if freq >= min_occurrences:
69
- if form not in table:
70
- # lemma by pos
71
- table[form]: Dict[str, str] = dict()
72
- table[form][pos] = most_freq_lemma
73
-
74
- self._lookups.set_table(name=f"lemma_lookups", table=table)
75
-
76
- def __init__(
77
- self,
78
- lookups: Optional[Lookups] = None,
79
- source: Optional[str] = None,
80
- scorer: Optional[Callable] = lemmatizer_score,
81
- ):
82
- self._lookups: Optional[Lookups] = lookups
83
- self.scorer = scorer
84
- self.source = source
85
-
86
- def __call__(self, doc: Doc) -> Doc:
87
- assert self._lookups is not None, "Lookup table should be initialized first"
88
-
89
- token: Token
90
- for token in doc:
91
- lemma_lookup_table = self._lookups.get_table(f"lemma_lookups")
92
- masked_token = self.__mask_numbers(token.text)
93
-
94
- if masked_token in lemma_lookup_table:
95
- lemma_by_pos: Dict[str, str] = lemma_lookup_table[masked_token]
96
- feats_str = ("|" + str(token.morph)) if str(token.morph) else ""
97
- key = token.pos_ + feats_str
98
- if key in lemma_by_pos:
99
- if masked_token != token.text:
100
- # If the token contains numbers, we need to replace the numbers in the lemma as well
101
- token.lemma_ = self.__replace_numbers(lemma_by_pos[key], token.text)
102
- pass
103
- else:
104
- token.lemma_ = lemma_by_pos[key]
105
- return doc
106
-
107
- # noinspection PyUnusedLocal
108
- def to_disk(self, path, exclude=tuple()):
109
- assert self._lookups is not None, "Lookup table should be initialized first"
110
-
111
- path: Path = ensure_path(path)
112
- path.mkdir(exist_ok=True)
113
- self._lookups.to_disk(path)
114
-
115
- # noinspection PyUnusedLocal
116
- def from_disk(self, path, exclude=tuple()) -> "LookupLemmatizer":
117
- path: Path = ensure_path(path)
118
- lookups = Lookups()
119
- self._lookups = lookups.from_disk(path=path)
120
- return self
121
-
122
- def initialize(self, get_examples: Callable[[], Iterable[Example]], *, nlp: Language = None) -> None:
123
- lookups = Lookups()
124
- self._lookups = lookups.from_disk(path=self.source)
125
-
126
- @classmethod
127
- def __mask_numbers(cls, token: str) -> str:
128
- return cls._number_pattern.sub("0", token)
129
-
130
- @classmethod
131
- def __replace_numbers(cls, lemma: str, token: str) -> str:
132
- return cls._number_pattern.sub(lambda match: token[match.start()], lemma)
 
1
+ import re
2
+ from collections import defaultdict
3
+ from operator import itemgetter
4
+ from pathlib import Path
5
+ from re import Pattern
6
+ from typing import Optional, Callable, Iterable, Dict, Tuple
7
+
8
+ from spacy.lang.hu import Hungarian
9
+ from spacy.language import Language
10
+ from spacy.lookups import Lookups, Table
11
+ from spacy.pipeline import Pipe
12
+ from spacy.pipeline.lemmatizer import lemmatizer_score
13
+ from spacy.tokens import Token
14
+ from spacy.tokens.doc import Doc
15
+
16
+ # noinspection PyUnresolvedReferences
17
+ from spacy.training.example import Example
18
+ from spacy.util import ensure_path
19
+
20
+
21
+ class LookupLemmatizer(Pipe):
22
+ """
23
+ LookupLemmatizer learns `(token, pos, morph. feat) -> lemma` mappings during training and applies them at prediction
24
+ time.
25
+ """
26
+
27
+ _number_pattern: Pattern = re.compile(r"\d")
28
+
29
+ # noinspection PyUnusedLocal
30
+ @staticmethod
31
+ @Hungarian.factory(
32
+ "lookup_lemmatizer",
33
+ assigns=["token.lemma"],
34
+ requires=["token.pos"],
35
+ default_config={"scorer": {"@scorers": "spacy.lemmatizer_scorer.v1"}, "source": ""},
36
+ )
37
+ def create(nlp: Language, name: str, scorer: Optional[Callable], source: str) -> "LookupLemmatizer":
38
+ return LookupLemmatizer(None, source, scorer)
39
+
40
+ def train(self, sentences: Iterable[Iterable[Tuple[str, str, str, str]]], min_occurrences: int = 1) -> None:
41
+ """
42
+
43
+ Args:
44
+ sentences (Iterable[Iterable[Tuple[str, str, str, str]]]): Sentences to learn the mappings from
45
+ min_occurrences (int): mappings occurring fewer times than this threshold are not learned
46
+
47
+ """
48
+
49
+ # Lookup table which maps (form, upos + morph. feats) to (lemma -> frequency),
50
+ # e.g. `{ ("alma", "NOUN"): { "alma" : 99, "alom": 1} }`
51
+ lemma_lookup_table: Dict[Tuple[str, str], Dict[str, int]] = defaultdict(lambda: defaultdict(int))
52
+
53
+ for sentence in sentences:
54
+ for token, pos, feats, lemma in sentence:
55
+ token = self.__mask_numbers(token)
56
+ lemma = self.__mask_numbers(lemma)
57
+ feats_str = ("|" + feats) if feats else ""
58
+ key = (token, pos + feats_str)
59
+ lemma_lookup_table[key][lemma] += 1
60
+ lemma_lookup_table = dict(lemma_lookup_table)
61
+
62
+ self._lookups = Lookups()
63
+ table = Table(name="lemma_lookups")
64
+
65
+ lemma_freq: Dict[str, int]
66
+ for (form, pos), lemma_freq in dict(lemma_lookup_table).items():
67
+ most_freq_lemma, freq = sorted(lemma_freq.items(), key=itemgetter(1), reverse=True)[0]
68
+ if freq >= min_occurrences:
69
+ if form not in table:
70
+ # lemma by pos
71
+ table[form]: Dict[str, str] = dict()
72
+ table[form][pos] = most_freq_lemma
73
+
74
+ self._lookups.set_table(name=f"lemma_lookups", table=table)
75
+
76
+ def __init__(
77
+ self,
78
+ lookups: Optional[Lookups] = None,
79
+ source: Optional[str] = None,
80
+ scorer: Optional[Callable] = lemmatizer_score,
81
+ ):
82
+ self._lookups: Optional[Lookups] = lookups
83
+ self.scorer = scorer
84
+ self.source = source
85
+
86
+ def __call__(self, doc: Doc) -> Doc:
87
+ assert self._lookups is not None, "Lookup table should be initialized first"
88
+
89
+ token: Token
90
+ for token in doc:
91
+ lemma_lookup_table = self._lookups.get_table(f"lemma_lookups")
92
+ masked_token = self.__mask_numbers(token.text)
93
+
94
+ if masked_token in lemma_lookup_table:
95
+ lemma_by_pos: Dict[str, str] = lemma_lookup_table[masked_token]
96
+ feats_str = ("|" + str(token.morph)) if str(token.morph) else ""
97
+ key = token.pos_ + feats_str
98
+ if key in lemma_by_pos:
99
+ if masked_token != token.text:
100
+ # If the token contains numbers, we need to replace the numbers in the lemma as well
101
+ token.lemma_ = self.__replace_numbers(lemma_by_pos[key], token.text)
102
+ pass
103
+ else:
104
+ token.lemma_ = lemma_by_pos[key]
105
+ return doc
106
+
107
+ # noinspection PyUnusedLocal
108
+ def to_disk(self, path, exclude=tuple()):
109
+ assert self._lookups is not None, "Lookup table should be initialized first"
110
+
111
+ path: Path = ensure_path(path)
112
+ path.mkdir(exist_ok=True)
113
+ self._lookups.to_disk(path)
114
+
115
+ # noinspection PyUnusedLocal
116
+ def from_disk(self, path, exclude=tuple()) -> "LookupLemmatizer":
117
+ path: Path = ensure_path(path)
118
+ lookups = Lookups()
119
+ self._lookups = lookups.from_disk(path=path)
120
+ return self
121
+
122
+ def initialize(self, get_examples: Callable[[], Iterable[Example]], *, nlp: Language = None) -> None:
123
+ lookups = Lookups()
124
+ self._lookups = lookups.from_disk(path=self.source)
125
+
126
+ @classmethod
127
+ def __mask_numbers(cls, token: str) -> str:
128
+ return cls._number_pattern.sub("0", token)
129
+
130
+ @classmethod
131
+ def __replace_numbers(cls, lemma: str, token: str) -> str:
132
+ return cls._number_pattern.sub(lambda match: token[match.start()], lemma)
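
`LookupLemmatizer` can also be trained and serialized on its own. Below is a minimal sketch under the assumption that the module above is importable as `lookup_lemmatizer`; the toy sentence and the output path are illustrative, and `train()` expects sentences of `(token, pos, feats, lemma)` tuples as documented above:

```python
# Hypothetical standalone use of the LookupLemmatizer defined above.
# Assumes this file is importable as `lookup_lemmatizer`.
from lookup_lemmatizer import LookupLemmatizer

# train() expects sentences of (token, UPOS, morph. feats, lemma) tuples.
sentences = [
    [
        ("Az", "DET", "Definite=Def|PronType=Art", "az"),
        ("almát", "NOUN", "Case=Acc|Number=Sing", "alma"),
        ("megette", "VERB", "Definite=Def|Mood=Ind|Number=Sing|Person=3|Tense=Past|VerbForm=Fin|Voice=Act", "megeszik"),
        (".", "PUNCT", "", "."),
    ]
]

lemmatizer = LookupLemmatizer()
lemmatizer.train(sentences, min_occurrences=1)

# Serialize the learned table; the resulting directory can then be passed as the
# `source` config value of the "lookup_lemmatizer" factory, which initialize()
# reads when the pipeline is initialized.
lemmatizer.to_disk("lookup_lemmatizer_data")
```
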
meta.json CHANGED
@@ -1,14 +1,14 @@
1
  {
2
  "lang":"hu",
3
  "name":"core_news_lg",
4
- "version":"3.7.0",
5
  "description":"Core Hungarian model for HuSpaCy. Components: tok2vec, senter, tagger, morphologizer, lemmatizer, parser, ner",
6
  "author":"SzegedAI, MILAB",
7
  "email":"gyorgy@orosz.link",
8
  "url":"https://github.com/huspacy/huspacy",
9
  "license":"cc-by-sa-4.0",
10
- "spacy_version":">=3.7.0,<3.8.0",
11
- "spacy_git_version":"a89eae928",
12
  "vectors":{
13
  "width":300,
14
  "vectors":200000,
@@ -1268,80 +1268,85 @@
1268
  "token_p":0.998565417,
1269
  "token_r":0.9993300153,
1270
  "token_f":0.9989475698,
1271
- "sents_p":0.9843400447,
1272
- "sents_r":0.9799554566,
1273
- "sents_f":0.9821428571,
1274
- "tag_acc":0.9688501842,
1275
- "pos_acc":0.9670319154,
1276
- "morph_acc":0.9362618432,
1277
- "morph_micro_p":0.9703275697,
1278
- "morph_micro_r":0.9598195101,
1279
- "morph_micro_f":0.9650449361,
1280
  "morph_per_feat":{
1281
  "Definite":{
1282
- "p":0.9637448371,
1283
- "r":0.979934671,
1284
- "f":0.9717723276
1285
  },
1286
  "PronType":{
1287
- "p":0.9735973597,
1288
- "r":0.9768211921,
1289
- "f":0.9752066116
1290
  },
1291
  "Case":{
1292
- "p":0.975014991,
1293
- "r":0.9638411381,
1294
- "f":0.9693958665
1295
  },
1296
  "Degree":{
1297
- "p":0.927871772,
1298
- "r":0.8668885191,
1299
- "f":0.896344086
1300
  },
1301
  "Number":{
1302
- "p":0.9868154158,
1303
- "r":0.978381096,
1304
- "f":0.9825801565
1305
  },
1306
  "Mood":{
1307
- "p":0.9385290889,
1308
- "r":0.9478935698,
1309
- "f":0.943188086
1310
  },
1311
  "Person":{
1312
- "p":0.9567747298,
1313
- "r":0.9465460526,
1314
- "f":0.9516329062
1315
  },
1316
  "Tense":{
1317
- "p":0.9648737651,
1318
- "r":0.9712707182,
1319
- "f":0.968061674
1320
  },
1321
  "VerbForm":{
1322
- "p":0.9514802632,
1323
- "r":0.9278267843,
1324
- "f":0.9395046691
1325
  },
1326
  "Voice":{
1327
- "p":0.9635258359,
1328
  "r":0.972392638,
1329
- "f":0.9679389313
1330
  },
1331
  "Number[psor]":{
1332
- "p":0.9881129272,
1333
- "r":0.9472934473,
1334
- "f":0.9672727273
1335
  },
1336
  "Person[psor]":{
1337
- "p":0.9881129272,
1338
- "r":0.9486447932,
1339
- "f":0.9679767103
1340
  },
1341
  "NumType":{
1342
- "p":0.946835443,
1343
- "r":0.912195122,
1344
- "f":0.9291925466
 
 
 
 
 
1345
  },
1346
  "Reflex":{
1347
  "p":1.0,
@@ -1357,121 +1362,116 @@
1357
  "p":0.0,
1358
  "r":0.0,
1359
  "f":0.0
1360
- },
1361
- "Poss":{
1362
- "p":1.0,
1363
- "r":1.0,
1364
- "f":1.0
1365
  }
1366
  },
1367
- "lemma_acc":0.9759831595,
1368
- "dep_uas":0.8332400672,
1369
- "dep_las":0.76922216,
1370
  "dep_las_per_type":{
1371
  "det":{
1372
- "p":0.8675344564,
1373
- "r":0.9020700637,
1374
- "f":0.8844652615
1375
  },
1376
  "amod:att":{
1377
- "p":0.8616666667,
1378
- "r":0.8454619787,
1379
- "f":0.8534874123
1380
  },
1381
  "nsubj":{
1382
- "p":0.7601880878,
1383
- "r":0.7578125,
1384
- "f":0.7589984351
1385
  },
1386
  "advmod:mode":{
1387
- "p":0.6306532663,
1388
- "r":0.6151960784,
1389
- "f":0.6228287841
1390
  },
1391
  "nmod:att":{
1392
- "p":0.7532051282,
1393
- "r":0.7966101695,
1394
- "f":0.7742998353
1395
  },
1396
  "obl":{
1397
- "p":0.7955357143,
1398
- "r":0.801980198,
1399
- "f":0.7987449574
1400
  },
1401
  "obj":{
1402
- "p":0.8738938053,
1403
  "r":0.8876404494,
1404
- "f":0.8807134894
1405
  },
1406
  "root":{
1407
- "p":0.8344519016,
1408
- "r":0.8307349666,
1409
- "f":0.8325892857
1410
  },
1411
  "cc":{
1412
- "p":0.7109207709,
1413
- "r":0.6989473684,
1414
- "f":0.7048832272
1415
  },
1416
  "conj":{
1417
- "p":0.520661157,
1418
- "r":0.525,
1419
- "f":0.5228215768
1420
  },
1421
  "advmod":{
1422
- "p":0.8383838384,
1423
- "r":0.8736842105,
1424
- "f":0.8556701031
1425
  },
1426
  "flat:name":{
1427
- "p":0.9074074074,
1428
- "r":0.9158878505,
1429
- "f":0.911627907
1430
  },
1431
  "appos":{
1432
- "p":0.4242424242,
1433
- "r":0.2978723404,
1434
- "f":0.35
1435
  },
1436
  "advcl":{
1437
- "p":0.3048780488,
1438
- "r":0.2551020408,
1439
- "f":0.2777777778
1440
  },
1441
  "advmod:tlocy":{
1442
- "p":0.6784313725,
1443
- "r":0.752173913,
1444
- "f":0.7134020619
1445
  },
1446
  "ccomp:obj":{
1447
- "p":0.2941176471,
1448
  "r":0.4545454545,
1449
- "f":0.3571428571
1450
  },
1451
  "mark":{
1452
- "p":0.8553459119,
1453
- "r":0.8607594937,
1454
- "f":0.858044164
1455
  },
1456
  "compound:preverb":{
1457
- "p":0.9369369369,
1458
- "r":0.9541284404,
1459
- "f":0.9454545455
1460
  },
1461
  "advmod:locy":{
1462
- "p":0.8571428571,
1463
- "r":0.5625,
1464
- "f":0.679245283
1465
  },
1466
  "cop":{
1467
- "p":0.8387096774,
1468
- "r":0.6341463415,
1469
- "f":0.7222222222
1470
  },
1471
  "nmod:obl":{
1472
- "p":0.3428571429,
1473
- "r":0.3,
1474
- "f":0.32
1475
  },
1476
  "advmod:to":{
1477
  "p":0.0,
@@ -1480,63 +1480,58 @@
1480
  },
1481
  "obj:lvc":{
1482
  "p":0.5,
1483
- "r":0.25,
1484
- "f":0.3333333333
1485
  },
1486
  "ccomp:obl":{
1487
- "p":0.5625,
1488
- "r":0.5625,
1489
- "f":0.5625
1490
  },
1491
  "iobj":{
1492
- "p":0.25,
1493
- "r":0.2666666667,
1494
- "f":0.2580645161
1495
  },
1496
  "case":{
1497
- "p":0.9387755102,
1498
- "r":0.9387755102,
1499
- "f":0.9387755102
1500
  },
1501
  "csubj":{
1502
- "p":0.5789473684,
1503
- "r":0.2972972973,
1504
- "f":0.3928571429
1505
  },
1506
  "parataxis":{
1507
- "p":0.1612903226,
1508
- "r":0.0684931507,
1509
- "f":0.0961538462
1510
  },
1511
  "xcomp":{
1512
- "p":0.88,
1513
- "r":0.8918918919,
1514
- "f":0.8859060403
1515
  },
1516
  "nummod":{
1517
- "p":0.5841584158,
1518
- "r":0.6344086022,
1519
- "f":0.6082474227
1520
- },
1521
- "dep":{
1522
- "p":0.0,
1523
- "r":0.0,
1524
- "f":0.0
1525
  },
1526
  "acl":{
1527
- "p":0.4571428571,
1528
- "r":0.4444444444,
1529
- "f":0.4507042254
1530
  },
1531
  "advmod:tto":{
1532
- "p":0.6,
1533
  "r":0.3,
1534
- "f":0.4
1535
  },
1536
  "nmod":{
1537
- "p":0.5,
1538
- "r":0.1818181818,
1539
- "f":0.2666666667
1540
  },
1541
  "aux":{
1542
  "p":0.9090909091,
@@ -1554,9 +1549,9 @@
1554
  "f":0.0
1555
  },
1556
  "compound":{
1557
- "p":1.0,
1558
  "r":0.975,
1559
- "f":0.9873417722
1560
  },
1561
  "obl:lvc":{
1562
  "p":0.0,
@@ -1568,10 +1563,10 @@
1568
  "r":0.0,
1569
  "f":0.0
1570
  },
1571
- "ccomp":{
1572
- "p":0.1428571429,
1573
- "r":0.0769230769,
1574
- "f":0.1
1575
  },
1576
  "nsubj:lvc":{
1577
  "p":0.0,
@@ -1579,9 +1574,14 @@
1579
  "f":0.0
1580
  },
1581
  "list":{
1582
- "p":0.2,
1583
  "r":0.1666666667,
1584
- "f":0.1818181818
 
 
 
 
 
1585
  },
1586
  "advmod:que":{
1587
  "p":1.0,
@@ -1594,32 +1594,32 @@
1594
  "f":0.0
1595
  }
1596
  },
1597
- "ents_p":0.8714565876,
1598
- "ents_r":0.8593530239,
1599
- "ents_f":0.8653624856,
1600
  "ents_per_type":{
1601
  "ORG":{
1602
- "p":0.8937644342,
1603
- "r":0.8970792768,
1604
- "f":0.8954187876
1605
  },
1606
  "PER":{
1607
- "p":0.9060036386,
1608
- "r":0.8924731183,
1609
- "f":0.8991874812
1610
  },
1611
  "LOC":{
1612
- "p":0.8752196837,
1613
- "r":0.8645833333,
1614
- "f":0.8698689956
1615
  },
1616
  "MISC":{
1617
- "p":0.704718417,
1618
- "r":0.6567375887,
1619
- "f":0.6798825257
1620
  }
1621
  },
1622
- "speed":847.9348577021
1623
  },
1624
  "sources":[
1625
  {
@@ -1648,6 +1648,6 @@
1648
  }
1649
  ],
1650
  "requirements":[
1651
-
1652
  ]
1653
  }
 
1
  {
2
  "lang":"hu",
3
  "name":"core_news_lg",
4
+ "version":"3.8.0",
5
  "description":"Core Hungarian model for HuSpaCy. Components: tok2vec, senter, tagger, morphologizer, lemmatizer, parser, ner",
6
  "author":"SzegedAI, MILAB",
7
  "email":"gyorgy@orosz.link",
8
  "url":"https://github.com/huspacy/huspacy",
9
  "license":"cc-by-sa-4.0",
10
+ "spacy_version":">=3.8.0,<3.9.0",
11
+ "spacy_git_version":"63f1b53",
12
  "vectors":{
13
  "width":300,
14
  "vectors":200000,
 
1268
  "token_p":0.998565417,
1269
  "token_r":0.9993300153,
1270
  "token_f":0.9989475698,
1271
+ "sents_p":0.9888143177,
1272
+ "sents_r":0.9844097996,
1273
+ "sents_f":0.9866071429,
1274
+ "tag_acc":0.9677481099,
1275
+ "pos_acc":0.966025457,
1276
+ "morph_acc":0.9340606757,
1277
+ "morph_micro_p":0.9686143428,
1278
+ "morph_micro_r":0.9588740868,
1279
+ "morph_micro_f":0.9637196044,
1280
  "morph_per_feat":{
1281
  "Definite":{
1282
+ "p":0.9615032081,
1283
+ "r":0.9790013999,
1284
+ "f":0.9701734104
1285
  },
1286
  "PronType":{
1287
+ "p":0.9768083932,
1288
+ "r":0.9762693157,
1289
+ "f":0.97653878
1290
  },
1291
  "Case":{
1292
+ "p":0.9734,
1293
+ "r":0.9616676546,
1294
+ "f":0.9674982606
1295
  },
1296
  "Degree":{
1297
+ "p":0.9304426378,
1298
+ "r":0.8569051581,
1299
+ "f":0.8921611087
1300
  },
1301
  "Number":{
1302
+ "p":0.9864910503,
1303
+ "r":0.9790514496,
1304
+ "f":0.9827571705
1305
  },
1306
  "Mood":{
1307
+ "p":0.9259259259,
1308
+ "r":0.9423503326,
1309
+ "f":0.9340659341
1310
  },
1311
  "Person":{
1312
+ "p":0.9537190083,
1313
+ "r":0.9490131579,
1314
+ "f":0.9513602638
1315
  },
1316
  "Tense":{
1317
+ "p":0.9607843137,
1318
+ "r":0.9745856354,
1319
+ "f":0.9676357652
1320
  },
1321
  "VerbForm":{
1322
+ "p":0.9570247934,
1323
+ "r":0.9286287089,
1324
+ "f":0.9426129426
1325
  },
1326
  "Voice":{
1327
+ "p":0.9567404427,
1328
  "r":0.972392638,
1329
+ "f":0.9645030426
1330
  },
1331
  "Number[psor]":{
1332
+ "p":0.9866071429,
1333
+ "r":0.9444444444,
1334
+ "f":0.9650655022
1335
  },
1336
  "Person[psor]":{
1337
+ "p":0.9866071429,
1338
+ "r":0.9457917261,
1339
+ "f":0.9657683904
1340
  },
1341
  "NumType":{
1342
+ "p":0.9215686275,
1343
+ "r":0.9170731707,
1344
+ "f":0.9193154034
1345
+ },
1346
+ "Poss":{
1347
+ "p":0.6,
1348
+ "r":1.0,
1349
+ "f":0.75
1350
  },
1351
  "Reflex":{
1352
  "p":1.0,
 
1362
  "p":0.0,
1363
  "r":0.0,
1364
  "f":0.0
 
 
 
 
 
1365
  }
1366
  },
1367
+ "lemma_acc":0.9761745288,
1368
+ "dep_uas":0.8434612153,
1369
+ "dep_las":0.78125,
1370
  "dep_las_per_type":{
1371
  "det":{
1372
+ "p":0.8722741433,
1373
+ "r":0.8917197452,
1374
+ "f":0.8818897638
1375
  },
1376
  "amod:att":{
1377
+ "p":0.855500821,
1378
+ "r":0.8520032706,
1379
+ "f":0.8537484637
1380
  },
1381
  "nsubj":{
1382
+ "p":0.7853810264,
1383
+ "r":0.7890625,
1384
+ "f":0.7872174591
1385
  },
1386
  "advmod:mode":{
1387
+ "p":0.6590909091,
1388
+ "r":0.6397058824,
1389
+ "f":0.6492537313
1390
  },
1391
  "nmod:att":{
1392
+ "p":0.8043844857,
1393
+ "r":0.8084745763,
1394
+ "f":0.8064243449
1395
  },
1396
  "obl":{
1397
+ "p":0.8206521739,
1398
+ "r":0.8154815482,
1399
+ "f":0.8180586907
1400
  },
1401
  "obj":{
1402
+ "p":0.8816964286,
1403
  "r":0.8876404494,
1404
+ "f":0.8846584546
1405
  },
1406
  "root":{
1407
+ "p":0.8299776286,
1408
+ "r":0.8262806236,
1409
+ "f":0.828125
1410
  },
1411
  "cc":{
1412
+ "p":0.7293868922,
1413
+ "r":0.7263157895,
1414
+ "f":0.7278481013
1415
  },
1416
  "conj":{
1417
+ "p":0.5326530612,
1418
+ "r":0.54375,
1419
+ "f":0.5381443299
1420
  },
1421
  "advmod":{
1422
+ "p":0.8265306122,
1423
+ "r":0.8526315789,
1424
+ "f":0.8393782383
1425
  },
1426
  "flat:name":{
1427
+ "p":0.9082568807,
1428
+ "r":0.9252336449,
1429
+ "f":0.9166666667
1430
  },
1431
  "appos":{
1432
+ "p":0.52,
1433
+ "r":0.414893617,
1434
+ "f":0.4615384615
1435
  },
1436
  "advcl":{
1437
+ "p":0.3595505618,
1438
+ "r":0.3265306122,
1439
+ "f":0.3422459893
1440
  },
1441
  "advmod:tlocy":{
1442
+ "p":0.7153846154,
1443
+ "r":0.8086956522,
1444
+ "f":0.7591836735
1445
  },
1446
  "ccomp:obj":{
1447
+ "p":0.3,
1448
  "r":0.4545454545,
1449
+ "f":0.3614457831
1450
  },
1451
  "mark":{
1452
+ "p":0.86875,
1453
+ "r":0.8797468354,
1454
+ "f":0.8742138365
1455
  },
1456
  "compound:preverb":{
1457
+ "p":0.9130434783,
1458
+ "r":0.9633027523,
1459
+ "f":0.9375
1460
  },
1461
  "advmod:locy":{
1462
+ "p":0.7142857143,
1463
+ "r":0.625,
1464
+ "f":0.6666666667
1465
  },
1466
  "cop":{
1467
+ "p":0.8333333333,
1468
+ "r":0.6097560976,
1469
+ "f":0.7042253521
1470
  },
1471
  "nmod:obl":{
1472
+ "p":0.34375,
1473
+ "r":0.275,
1474
+ "f":0.3055555556
1475
  },
1476
  "advmod:to":{
1477
  "p":0.0,
 
1480
  },
1481
  "obj:lvc":{
1482
  "p":0.5,
1483
+ "r":0.0833333333,
1484
+ "f":0.1428571429
1485
  },
1486
  "ccomp:obl":{
1487
+ "p":0.48,
1488
+ "r":0.375,
1489
+ "f":0.4210526316
1490
  },
1491
  "iobj":{
1492
+ "p":0.2142857143,
1493
+ "r":0.2,
1494
+ "f":0.2068965517
1495
  },
1496
  "case":{
1497
+ "p":0.9489795918,
1498
+ "r":0.9489795918,
1499
+ "f":0.9489795918
1500
  },
1501
  "csubj":{
1502
+ "p":0.5454545455,
1503
+ "r":0.4864864865,
1504
+ "f":0.5142857143
1505
  },
1506
  "parataxis":{
1507
+ "p":0.2142857143,
1508
+ "r":0.0821917808,
1509
+ "f":0.1188118812
1510
  },
1511
  "xcomp":{
1512
+ "p":0.8101265823,
1513
+ "r":0.8648648649,
1514
+ "f":0.8366013072
1515
  },
1516
  "nummod":{
1517
+ "p":0.5480769231,
1518
+ "r":0.6129032258,
1519
+ "f":0.578680203
 
 
 
 
 
1520
  },
1521
  "acl":{
1522
+ "p":0.5303030303,
1523
+ "r":0.4861111111,
1524
+ "f":0.5072463768
1525
  },
1526
  "advmod:tto":{
1527
+ "p":0.375,
1528
  "r":0.3,
1529
+ "f":0.3333333333
1530
  },
1531
  "nmod":{
1532
+ "p":0.4285714286,
1533
+ "r":0.2727272727,
1534
+ "f":0.3333333333
1535
  },
1536
  "aux":{
1537
  "p":0.9090909091,
 
1549
  "f":0.0
1550
  },
1551
  "compound":{
1552
+ "p":0.975,
1553
  "r":0.975,
1554
+ "f":0.975
1555
  },
1556
  "obl:lvc":{
1557
  "p":0.0,
 
1563
  "r":0.0,
1564
  "f":0.0
1565
  },
1566
+ "dep":{
1567
+ "p":0.0,
1568
+ "r":0.0,
1569
+ "f":0.0
1570
  },
1571
  "nsubj:lvc":{
1572
  "p":0.0,
 
1574
  "f":0.0
1575
  },
1576
  "list":{
1577
+ "p":0.1428571429,
1578
  "r":0.1666666667,
1579
+ "f":0.1538461538
1580
+ },
1581
+ "ccomp":{
1582
+ "p":0.0,
1583
+ "r":0.0,
1584
+ "f":0.0
1585
  },
1586
  "advmod:que":{
1587
  "p":1.0,
 
1594
  "f":0.0
1595
  }
1596
  },
1597
+ "ents_p":0.8701321586,
1598
+ "ents_r":0.8681434599,
1599
+ "ents_f":0.8691366717,
1600
  "ents_per_type":{
1601
  "ORG":{
1602
+ "p":0.8923992674,
1603
+ "r":0.9035697728,
1604
+ "f":0.8979497812
1605
  },
1606
  "PER":{
1607
+ "p":0.9004739336,
1608
+ "r":0.908004779,
1609
+ "f":0.9042236764
1610
  },
1611
  "LOC":{
1612
+ "p":0.8812056738,
1613
+ "r":0.8628472222,
1614
+ "f":0.8719298246
1615
  },
1616
  "MISC":{
1617
+ "p":0.7037037037,
1618
+ "r":0.6737588652,
1619
+ "f":0.6884057971
1620
  }
1621
  },
1622
+ "speed":1409.5289955649
1623
  },
1624
  "sources":[
1625
  {
 
1648
  }
1649
  ],
1650
  "requirements":[
1651
+ "spacy>=3.8.0,<3.9.0"
1652
  ]
1653
  }
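
Besides the metric updates, the user-visible changes in `meta.json` are the version bump to `3.8.0`, the `spacy_version` range `>=3.8.0,<3.9.0`, and the explicit `spacy>=3.8.0,<3.9.0` requirement. A quick post-install sanity check (a sketch; it assumes the accuracy block sits under the standard spaCy v3 `performance` key of the meta):

```python
import spacy

nlp = spacy.load("hu_core_news_lg")

print(nlp.meta["version"])        # expected: "3.8.0"
print(nlp.meta["spacy_version"])  # expected: ">=3.8.0,<3.9.0"

perf = nlp.meta.get("performance", {})
print(perf.get("ents_f"), perf.get("dep_las"), perf.get("lemma_acc"))
```
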
morphologizer/model CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:f8477031b27757039c769236aa89771872ed12822b8810e5a03dfabf40ca44ad
3
  size 1379030
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0297b30e16b64642420d46cb1180f5e7f11ba69421922fd206570de752c01f67
3
  size 1379030
ner/model CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:634cc27a221554080043d68a07c43bb33ef13ad708f97b2c22ee4da23f28b912
3
  size 56989063
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1c316bfafb91dcc9634c806eccb84b4cafd9d2a7b5aa12f4acedfc444b363141
3
  size 56989063
parser/model CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:8863a06cb048c1357c38cabab69fbd5601ba7bdf4327c7f66f5eca205e3de939
3
  size 26010735
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b62e3eba5bfb10458cdd596c1e50f554ded1605bbe3f60a6b6bbbc13e0eac4a2
3
  size 26010735
senter/model CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:9593cebc9f9da0bed0238d78a79e655d748c8d6160a632f94ab5c1f5d72078a8
3
  size 2845
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:93af000003faff9592468ba09740b6c53ca04560527285ac7c951303a86ee6c4
3
  size 2845
tagger/model CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:3a3563372a99627811d878af0987510b6be0c34e2e0afe06000cdfe42d3f3630
3
  size 20905
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b1b7bd4a129f2b582a6ad1cdae8a7556563d30b0807c4eb29c505e9868540995
3
  size 20905
tok2vec/model CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:8afba52c64b92a2f547e531ba840bdbe9231445a72d0c607d68af0843276faf3
3
  size 56806299
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a2ae415cd358b5ba9ecac0b0cb6a409b338645ed60749ae33e788a0d2bee39f
3
  size 56806299
trainable_lemmatizer/model CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:6a87904a5122ace3902bd320111362a1bfe5fcba5c9c7ad97769ff57986e3e1e
3
  size 61638320
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fc8ea0a05a3c14769fb9c7b4790d8f709a98ac8675e21bd67ccbd3efff03c3b7
3
  size 61638320
vocab/strings.json CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:b6c047cc74f05b93094fc1c4859327b07c8be39fa6bd1620dfc4269c06ef1e88
3
- size 6388756
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4d1c46f60c878efbe3d59b67a33dcd72eac431a79a0ea3b19538bf7de0b93acd
3
+ size 6388980
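
The remaining files are Git LFS pointers for the binary weights: only the `oid sha256:...` lines change, plus the `size` for `vocab/strings.json`. A small sketch for checking a downloaded artifact against its pointer; the path is illustrative, while the hash and size are the values recorded above for `vocab/strings.json`:

```python
import hashlib
from pathlib import Path


def matches_lfs_pointer(path: str, oid: str, size: int) -> bool:
    """Check a resolved (non-pointer) file against its Git LFS oid and size."""
    data = Path(path).read_bytes()
    return len(data) == size and hashlib.sha256(data).hexdigest() == oid


print(
    matches_lfs_pointer(
        "vocab/strings.json",  # path to the resolved file, not the pointer
        "4d1c46f60c878efbe3d59b67a33dcd72eac431a79a0ea3b19538bf7de0b93acd",
        6388980,
    )
)
```
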