wubingheng JingzeShi committed on
Commit
cc09b7a
1 Parent(s): bca8d24

Update README.md (#1)


- Update README.md (1605733814bc6072e88543a51b980aac347ec9e2)


Co-authored-by: Jingze Shi <JingzeShi@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +6 -3
README.md CHANGED
```diff
@@ -7,12 +7,15 @@ datasets:
 metrics:
 - code_eval
 base_model:
-- JingzeShi/Doge-76M
-pipeline_tag: text-generation
+- JingzeShi/Doge-197M
+pipeline_tag: question-answering
 library_name: transformers
 ---
 
-## **basic model : Doge 197M**
+## **Doge 197M for Medical QA**
 
+This model is a fine-tuned version of [JingzeShi/Doge-197M](https://huggingface.co/JingzeShi/Doge-197M).
+It has been trained using [TRL](https://github.com/huggingface/trl).
+
 Doge is an ongoing research project where we aim to train a series of small language models to further explore whether the Transformer framework allows for more complex feedforward network structures, enabling the model to have fewer cache states and larger knowledge capacity.
 
```
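
For context on what the updated card describes, loading the fine-tuned checkpoint would follow the usual `transformers` text-generation pattern. A minimal sketch, assuming the model keeps the standard API and, like the Doge base models, ships custom modeling code (hence `trust_remote_code=True`); the repo id below is the base model from the diff, since this repo's own id is not shown on the page, and the prompt is illustrative:

```python
# Minimal usage sketch. Assumptions: standard transformers text-generation
# API, and trust_remote_code=True for Doge's custom architecture.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model from the diff; substitute this repo's fine-tuned id.
model_id = "JingzeShi/Doge-197M"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Question: What are common symptoms of anemia?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```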
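
The new card also notes the model was trained using TRL. A hedged sketch of what such a supervised fine-tune typically looks like with `trl.SFTTrainer`; the dataset, output directory, and TRL version (>= 0.12, which uses `processing_class`) are assumptions, as the commit does not show the actual training setup:

```python
# Hypothetical TRL SFT setup; dataset and output_dir are stand-ins,
# not this repo's actual training configuration.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model = AutoModelForCausalLM.from_pretrained("JingzeShi/Doge-197M", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("JingzeShi/Doge-197M", trust_remote_code=True)

# Placeholder text dataset; the README's `datasets:` entries are not
# visible in this diff.
dataset = load_dataset("stanfordnlp/imdb", split="train")

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,  # older TRL versions use tokenizer= instead
    train_dataset=dataset,
    args=SFTConfig(output_dir="doge-197m-medical-qa"),
)
trainer.train()
```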