KoichiYasuoka committed
Commit f2c95da
1 Parent(s): b3b201f

dependency-parsing

Files changed (1): README.md (+13 -1)
README.md CHANGED
@@ -6,6 +6,7 @@ tags:
  - "token-classification"
  - "pos"
  - "wikipedia"
+ - "dependency-parsing"
  datasets:
  - "universal_dependencies"
  license: "apache-2.0"
@@ -18,7 +19,7 @@ widget:
 
  ## Model Description
 
- This is a BERT model pre-trained on Thai Wikipedia texts for POS-tagging, derived from [bert-base-th-cased](https://huggingface.co/Geotrend/bert-base-th-cased). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
+ This is a BERT model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-th-cased](https://huggingface.co/Geotrend/bert-base-th-cased). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
 
  ## How to Use
 
@@ -29,3 +30,14 @@ tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-thai-upos")
  model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-thai-upos")
  ```
 
+ or
+
+ ```py
+ import esupar
+ nlp=esupar.load("KoichiYasuoka/bert-base-thai-upos")
+ ```
+
+ ## See Also
+
+ [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
+
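For context, a minimal sketch of how the transformers loading code shown in the diff can be used for UPOS tagging. Only the two `from_pretrained` calls come from the README; the `TokenClassificationPipeline` wrapper and the example sentence are illustrative assumptions.

```py
# Minimal sketch: tag a Thai sentence with UPOS labels.
# Only the two from_pretrained() calls come from the README;
# the pipeline wrapper and the example sentence are assumptions.
from transformers import (
    AutoTokenizer,
    AutoModelForTokenClassification,
    TokenClassificationPipeline,
)

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-thai-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-thai-upos")
nlp = TokenClassificationPipeline(model=model, tokenizer=tokenizer)

# Each entry pairs a subword token with its predicted UPOS label.
for t in nlp("หลายหัวดีกว่าหัวเดียว"):  # arbitrary Thai example sentence
    print(t["word"], t["entity"])
```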
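Likewise, a sketch of the esupar path this commit adds; `esupar.load` is taken from the diff, while the parse-and-print step follows esupar's usual pattern and is an assumption here.

```py
# Sketch assuming esupar is installed (pip install esupar).
import esupar

nlp = esupar.load("KoichiYasuoka/bert-base-thai-upos")  # line from the README
doc = nlp("หลายหัวดีกว่าหัวเดียว")  # arbitrary Thai example sentence
print(doc)  # tokenized, POS-tagged, dependency-parsed output (CoNLL-U style, assumed)
```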