omarmomen committed on
Commit
a39e347
1 Parent(s): 00dfdcf

Create README.md

---
license: mit
datasets:
- omarmomen/babylm_10M
language:
- en
metrics:
- perplexity
library_name: transformers
---
# Model Card for omarmomen/roberta_base_32k_final

This model is part of the experiments in the paper "Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building" (https://aclanthology.org/2023.conll-babylm.29/), published at the BabyLM workshop at CoNLL 2023.

**omarmomen/roberta_base_32k_final** is a baseline RobertaModel.

The model is pretrained on the BabyLM 10M dataset using a custom pretrained RobertaTokenizer (https://huggingface.co/omarmomen/babylm_tokenizer_32k).
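The card lists perplexity as the evaluation metric. As a reminder of how that number is derived, perplexity is the exponential of the mean per-token negative log-likelihood. A minimal sketch with hypothetical loss values (illustrative only, not results from the paper):

```python
import math

# Hypothetical per-token negative log-likelihoods in nats,
# e.g. as produced by a masked-LM cross-entropy loss.
# These numbers are made up for illustration.
token_nlls = [2.1, 3.0, 1.5, 2.4]

# Perplexity = exp(mean negative log-likelihood).
mean_nll = sum(token_nlls) / len(token_nlls)
perplexity = math.exp(mean_nll)

print(f"mean NLL: {mean_nll:.4f}, perplexity: {perplexity:.4f}")
```

Lower is better: a perplexity of `k` means the model is, on average, as uncertain as if it were choosing uniformly among `k` tokens.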