Transformers
PyTorch
English
trl
rlhf
ybelkada and yjernite (HF staff) committed
Commit
57a70bc
1 Parent(s): 0496b06

change of license + expanded intended use and limitations (#2)

- change of license + expanded intended use and limitations (31af57431d8f2745599c72e2c460352027713957)


Co-authored-by: Yacine Jernite <yjernite@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +14 -3
README.md CHANGED
@@ -1,11 +1,13 @@
 ---
-license: apache-2.0
+license: bigscience-openrail-m
 language:
 - en
 tags:
 - trl
 - transformers
 - reinforcement-learning
+datasets:
+- lvwerra/stack-exchange-paired
 ---
 
 ![pull_figure](https://huggingface.co/datasets/trl-internal-testing/example-images/resolve/main/images/stack-llama.png)
@@ -25,10 +27,19 @@ Answer: <Response>
 ```
 
 ## Intended Uses & Limitations
-**Llama-se-rl** is intended for use in generating responses to questions related to the Stack Exchange dataset. It is suitable for generating answers to questions in the domains covered by the dataset, such as programming, mathematics, and physics. However, the model may not perform well on questions outside these domains or on questions requiring highly specific or technical knowledge.
+The **Llama-se-rl** model was trained for long-form QA using [Stack Exchange](https://stackexchange.com) data, which is released under a [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license and covers topics such as programming, mathematics, and physics.
+It is intended to demonstrate a Large Language Model's ability to follow a target behavior (in this case, generating answers to a question that would have been rated more highly on SE).
+It is not intended to replace human expertise, and answers should be validated through the use of external sources.
+Further research is also needed to attribute model generations to sources in the training data, especially in cases where the model may copy answers from the training data *verbatim*.
 
 ## Limitations and Bias
-The **Llama-se-rl** model inherits limitations and biases from the Llama model and also those contained in the Stack Exchange dataset. The Stack Exchange dataset may contain biases in terms of the topics it covers and the users who contribute to it. It may not include all possible domains, and the quality of answers may vary. Additionally, the model may generate answers that are incorrect or misleading due to biases in the training data or the inherent limitations of the Llama architecture.
+The **Llama-se-rl** model inherits limitations and biases from the Llama model and also those contained in the Stack Exchange dataset.
+In particular, per the [latest developer survey for Stack Overflow](https://survey.stackoverflow.co/2022/),
+which constitutes a significant part of the Stack Exchange data,
+most users who answered the survey identified themselves as [White or European, men, between 25 and 34 years old, and based in the US (with a significant part of respondents from India)](https://survey.stackoverflow.co/2022/#developer-profile-demographics).
+While this demographic information likely varies by topic, disparities between the data contributors and the direct and indirect users of the technology should inform developers in assessing what constitutes an appropriate use case.
+
+Additionally, the model may generate answers that are incorrect or misleading due to the inherent limitations of the Llama architecture.
 
 ## BibTeX entry and citation info
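The hunk context above (`Answer: <Response>`) references the Question/Answer prompt layout the model card documents. As a minimal sketch, assuming that template (the exact wording of the template is inferred from the card's snippet, not confirmed here):

```python
def build_prompt(question: str) -> str:
    """Format a question into the Question/Answer prompt layout
    the model card describes.

    Assumption: the template 'Question: <Query>\n\nAnswer: <Response>'
    is inferred from the card's snippet; verify against the card
    before relying on it.
    """
    return f"Question: {question}\n\nAnswer: "


# Usage: the model is expected to continue the text after "Answer: ".
prompt = build_prompt("How do I reverse a list in Python?")
print(prompt)
```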