Update README.md
README.md
CHANGED
@@ -40,7 +40,8 @@ We used a batch size of 32, a maximum sequence length of 128, and a learning rat
 ## Uses
 
 Load the model via the transformers library:
+
 from transformers import AutoTokenizer, AutoModel
-tokenizer = AutoTokenizer.from_pretrained("
-model = AutoModel.from_pretrained("
+tokenizer = AutoTokenizer.from_pretrained("nazyrova/clinicalBERT")
+model = AutoModel.from_pretrained("nazyrova/clinicalBERT")
 
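For context, a minimal sketch of how the snippet added in this commit might be used to produce sentence embeddings. The model id "nazyrova/clinicalBERT" and the 128-token limit come from the README diff above; the example sentence and the mean-pooling step are illustrative assumptions, not part of the model card.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Model id taken from the diff above; assumes the checkpoint is available on the Hub.
tokenizer = AutoTokenizer.from_pretrained("nazyrova/clinicalBERT")
model = AutoModel.from_pretrained("nazyrova/clinicalBERT")

# Hypothetical clinical sentence; max_length matches the 128-token setting
# mentioned earlier in the README.
text = "Patient presents with shortness of breath and chest pain."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden state into a single sentence embedding
# (pooling choice is an assumption, not specified by the model card).
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # e.g. torch.Size([1, 768]) for a BERT-base model
```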