wassemgtk committed
Commit f91439a
Parent: af3875b

Update README.md

Files changed (1): README.md (+26 −1)

README.md CHANGED
@@ -5,12 +5,21 @@ language:
 tags:
 - InstructGPT
 - hf
+datasets:
+- Writer/palmyra-data-index
 ---
 
 
 
 # InstructPalmyra-20b
 
+- **Developed by:** [https://writer.com/](https://writer.com/);
+- **Model type:** Causal decoder-only;
+- **Language(s) (NLP):** English;
+- **License:** Apache 2.0;
+- **Finetuned from model:** [Palmyra-20B](https://huggingface.co/Writer/palmyra-large).
+
+
 <style>
 img {
 display: inline;
@@ -82,6 +91,22 @@ InstructPalmyra's core functionality is to take a string of text and predict the
 
 InstructPalmyra was trained on Writer’s custom data. As with all language models, it is difficult to predict how InstructPalmyra will respond to specific prompts, and offensive content may appear unexpectedly. We recommend that the outputs be curated or filtered by humans before they are released, both to censor undesirable content and to improve the quality of the results.
 
+## Uses
+
+
+### Out-of-Scope Use
+
+Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
+
+## Bias, Risks, and Limitations
+
+InstructPalmyra-20b is mostly trained on English data and will not generalize appropriately to other languages. Furthermore, because it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
+
+### Recommendations
+
+We recommend that users of InstructPalmyra-20b develop guardrails and take appropriate precautions for any production use.
+
+
 
 ## Citation and Related Information
 
@@ -93,7 +118,7 @@ To cite this model:
 title = {{InstructPalmyra-20b : Instruct tuned Palmyra-Large model}},
 howpublished = {\url{https://dev.writer.com}},
 year = 2023,
-month = July
+month = August
 }
 ```
 [![Model architecture](https://img.shields.io/badge/Model%20Arch-Transformer%20Decoder-green)](#model-architecture)|[![Model size](https://img.shields.io/badge/Params-20B-green)](#model-architecture)|[![Language](https://img.shields.io/badge/Language-en--US-lightgrey#model-badge)](#datasets)|![AUR license](https://img.shields.io/badge/license-Apache%202-blue)
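
The card above describes a causal decoder-only model whose core functionality is next-token prediction. A minimal usage sketch follows, assuming the Hugging Face `transformers` API, the repo id `Writer/InstructPalmyra-20b`, and an Alpaca-style prompt template — none of these details are specified in this commit, so treat them as placeholders to verify against the model card.

```python
# Hedged sketch: querying InstructPalmyra-20b via Hugging Face transformers.
# The repo id and the prompt template below are ASSUMPTIONS, not taken from
# this commit; check the published model card before relying on them.

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in an Alpaca-style template (assumed format)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )


def generate_response(instruction: str, max_new_tokens: int = 128) -> str:
    """Load the 20B checkpoint and complete the prompt (requires large GPU memory)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Writer/InstructPalmyra-20b"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens,
                            do_sample=True, top_p=0.9)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```

Per the card's recommendations, any such output should be curated or filtered by humans before release.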