neovalle committed on
Commit b524063
1 Parent(s): 27ccacf

Update README.md

Files changed (1)
  1. README.md +64 -2
README.md CHANGED
@@ -3,7 +3,69 @@ tags:
  - autotrain
  - text-generation
  widget:
- - text: "I love AutoTrain because "
+ - text: "Tell me about bees "
  ---
 
- # Model Trained Using AutoTrain
+ # Model Trained Using AutoTrain
+ ---
+ tags:
+ - autotrain
+ - text-generation
+ widget:
+ - text: 'Tell me about bees.'
+ library_name: transformers
+ pipeline_tag: text-generation
+ ---
+
+ # Model Card for neovalle/H4rmoniousBreeze
+
+ ## Model Details
+
+ ### Model Description
+
+ This model is a version of HuggingFaceH4/zephyr-7b-beta fine-tuned via the AutoTrain reward model, using the H4rmony dataset, which aims
+ to better align the model with ecological values by applying ecolinguistics principles.
+
+ - **Developed by:** Jorge Vallego
+ - **Funded by:** Neovalle Ltd.
+ - **Shared by:** airesearch@neovalle.co.uk
+ - **Model type:** mistral
+ - **Language(s) (NLP):** Primarily English
+ - **License:** MIT
+ - **Finetuned from model:** HuggingFaceH4/zephyr-7b-beta
+
+
+ ## Uses
+
+ Intended as a proof of concept to show the effects of the H4rmony dataset.
+
+ ### Direct Use
+
+ For testing purposes, to gain insights that help with the continuous improvement of the H4rmony dataset.
+
+ ### Downstream Use
+
+ Use in downstream applications is not recommended, as this model is under testing for a specific task only (ecological alignment).
+
+ ### Out-of-Scope Use
+
+ Not meant to be used for anything other than testing and evaluation of the H4rmony dataset and ecological alignment.
+
+ ## Bias, Risks, and Limitations
+
+ This model might produce biased completions inherited from the base model, as well as others unintentionally introduced during fine-tuning.
+
+ ## How to Get Started with the Model
+
+ The model can be loaded and run in a Colab instance with high RAM.
+ Code to load the base and fine-tuned models and compare their outputs:
+
+ https://github.com/Neovalle/H4rmony/blob/main/H4rmoniousBreeze.ipynb
+
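The linked notebook carries the exact comparison code. As a minimal sketch only (the model id comes from this card; the Zephyr-style chat markers are assumed from the base model HuggingFaceH4/zephyr-7b-beta, and the generation settings are illustrative, not the notebook's), loading and querying the fine-tuned model with `transformers` might look like:

```python
MODEL_ID = "neovalle/H4rmoniousBreeze"  # model id from this card


def build_prompt(question: str) -> str:
    """Zephyr-style chat prompt; format assumed from the base model's card."""
    return f"<|user|>\n{question}</s>\n<|assistant|>\n"


def generate(question: str, max_new_tokens: int = 100) -> str:
    """Download and run the ~7B model; needs a high-RAM instance."""
    from transformers import pipeline  # heavyweight, so imported lazily

    generator = pipeline("text-generation", model=MODEL_ID)
    out = generator(build_prompt(question), max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]


# Example usage (downloads several GB of weights):
# print(generate("Tell me about bees."))
```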
+ ## Training Details
+
+ Fine-tuned with the AutoTrain reward model.
+
+ ### Training Data
+
+ H4rmony dataset - https://huggingface.co/datasets/neovalle/H4rmony