Deepesh Chaudhari committed on
Commit d7b5e6e
1 Parent(s): 2b5e415

Initial commit
README.md ADDED
@@ -0,0 +1,71 @@
+ ---
+ language: zh
+ ---
+
+ # Bert-base-chinese
+
+ ## Table of Contents
+ - [Model Details](#model-details)
+ - [Uses](#uses)
+ - [Risks, Limitations and Biases](#risks-limitations-and-biases)
+ - [Training](#training)
+ - [Evaluation](#evaluation)
+ - [How to Get Started With the Model](#how-to-get-started-with-the-model)
+
+ # Model Details
+ - **Model Description:**
+ This model has been pre-trained on Chinese text; during training, random input masking is applied independently to word pieces, as in the original BERT paper.
+
+ - **Developed by:** HuggingFace team
+ - **Model Type:** Fill-Mask
+ - **Language(s):** Chinese
+ - **License:** [More Information Needed]
+ - **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.
+
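The random input masking mentioned above can be sketched in plain Python. This is an illustrative sketch of the standard BERT recipe (mask 15% of positions; of those, 80% become `[MASK]`, 10% a random token, 10% unchanged), not code from this repository; the token list and tiny vocabulary are made up.

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15, rng=None):
    """BERT-style masking: each position is independently selected
    with probability mask_prob; selected positions get 80% [MASK],
    10% a random vocabulary token, 10% the original token."""
    rng = rng or random.Random(0)
    masked, labels = list(tokens), [None] * len(tokens)
    for i in range(len(tokens)):
        if rng.random() < mask_prob:
            labels[i] = tokens[i]        # the model must predict this piece
            roll = rng.random()
            if roll < 0.8:
                masked[i] = "[MASK]"
            elif roll < 0.9:
                masked[i] = rng.choice(vocab)
            # else: keep the original token unchanged
    return masked, labels

pieces = ["今", "天", "天", "气", "很", "好"]
masked, labels = mask_tokens(pieces, vocab=["我", "你", "好"], rng=random.Random(42))
```

Positions with a `None` label are left untouched; the loss is computed only at the selected positions.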
+ ## Uses
+
+ #### Direct Use
+
+ This model can be used for masked language modeling.
+
+ ## Risks, Limitations and Biases
+ **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
+
+ ## Training
+
+ #### Training Procedure
+ * **type_vocab_size:** 2
+ * **vocab_size:** 21128
+ * **num_hidden_layers:** 12
+
+ #### Training Data
+ [More Information Needed]
+
+ ## Evaluation
+
+ #### Results
+
+ [More Information Needed]
+
+ ## How to Get Started With the Model
+ ```python
+ from transformers import AutoTokenizer, AutoModelForMaskedLM
+
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
+ model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
+ ```
bert-base-chinese ADDED
@@ -0,0 +1 @@
+ Subproject commit 38fda776740d17609554e879e3ac7b9837bdb5ee
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+ "architectures": [
+ "BertForMaskedLM"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "directionality": "bidi",
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "pad_token_id": 0,
+ "pooler_fc_size": 768,
+ "pooler_num_attention_heads": 12,
+ "pooler_num_fc_layers": 3,
+ "pooler_size_per_head": 128,
+ "pooler_type": "first_token_transform",
+ "type_vocab_size": 2,
+ "vocab_size": 21128
+ }
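As a sanity check on the config values above, a back-of-the-envelope parameter count can be derived from them. This is a sketch using the standard BERT architecture formulas (embedding tables, per-layer attention and feed-forward weights, pooler), not figures from the commit itself; it excludes the MLM head.

```python
# Values taken from config.json above.
cfg = {"vocab_size": 21128, "hidden_size": 768, "num_hidden_layers": 12,
       "intermediate_size": 3072, "max_position_embeddings": 512,
       "type_vocab_size": 2}
h, ff = cfg["hidden_size"], cfg["intermediate_size"]

# Embeddings: word + position + token-type tables, plus one LayerNorm.
emb = (cfg["vocab_size"] + cfg["max_position_embeddings"]
       + cfg["type_vocab_size"]) * h + 2 * h

# Each encoder layer: Q/K/V/output projections (weights + biases),
# the two feed-forward matrices, and two LayerNorms.
per_layer = 4 * (h * h + h) + (h * ff + ff) + (ff * h + h) + 2 * (2 * h)

total = emb + cfg["num_hidden_layers"] * per_layer + (h * h + h)  # + pooler
print(f"~{total / 1e6:.0f}M parameters")  # roughly 102M for this config
```

The smaller vocabulary (21,128 entries, mostly single Chinese characters) is why this model is lighter than English bert-base-uncased, whose vocabulary is about 30,000 entries.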
flax_model.msgpack ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76df8425215fb9ede22e0393e356f82a99d84e79f078cd141afbbf9277460c8e
+ size 409168515
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8a693db616eaf647ed2bfe531e1fa446637358fc108a8bf04e8d4db17e837ee9
+ size 411577189
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:612acd33db45677c3d6ba70615336619dc65cddf1ecf9d39a22dd1934af4aff2
+ size 478309336
tokenizer.json ADDED
The diff for this file is too large to render.
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ {
+ "do_lower_case": false
+ }
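With `do_lower_case` set to false, input text is looked up in the vocabulary without lowercasing; the lookup itself follows the greedy longest-match WordPiece scheme that BERT tokenizers use, where a `##` prefix marks a piece that continues a word. A minimal sketch with a made-up vocabulary (not code from this repository):

```python
def wordpiece(word, vocab, unk="[UNK]"):
    """Greedy longest-match WordPiece: repeatedly take the longest
    vocabulary entry matching at the current position; non-initial
    pieces are looked up with a '##' prefix."""
    pieces, start = [], 0
    while start < len(word):
        end, cur = len(word), None
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                cur = sub
                break
            end -= 1
        if cur is None:
            return [unk]   # no piece matches: the whole word is unknown
        pieces.append(cur)
        start = end
    return pieces

vocab = {"un", "##aff", "##able", "aff"}
print(wordpiece("unaffable", vocab))  # ['un', '##aff', '##able']
```

For Chinese input the real tokenizer additionally splits around each CJK character first, so most Chinese text ends up as single-character tokens from the 21,128-entry vocabulary.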
vocab.txt ADDED
The diff for this file is too large to render.