update.
README.md CHANGED
@@ -5,7 +5,7 @@ license: apache-2.0
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

-This model is a fine-tuned model for Chat based on [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) with **max_seq_lenght=2048** on
+This model is a chat model fine-tuned from [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) with **max_seq_length=2048** on the [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k), [TigerResearch/tigerbot-alpaca-en-50k](https://huggingface.co/datasets/TigerResearch/tigerbot-alpaca-en-50k), [TigerResearch/tigerbot-gsm-8k-en](https://huggingface.co/datasets/TigerResearch/tigerbot-gsm-8k-en), [TigerResearch/tigerbot-alpaca-zh-0.5m](https://huggingface.co/datasets/TigerResearch/tigerbot-alpaca-zh-0.5m), [TigerResearch/tigerbot-stackexchange-qa-en-0.5m](https://huggingface.co/datasets/TigerResearch/tigerbot-stackexchange-qa-en-0.5m), and [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) datasets.

## Model date
Neural-chat-7b-v1.1 was trained between June and July 2023.
@@ -21,7 +21,12 @@ We use the same evaluation metrics as [open_llm_leaderboard](https://huggingface

### Bias evaluation

-
+Following the blog post [evaluating-llm-bias](https://huggingface.co/blog/evaluating-llm-bias), we randomly select 10,000 samples from [allenai/real-toxicity-prompts](https://huggingface.co/datasets/allenai/real-toxicity-prompts) to evaluate toxicity bias in language models.
+
+| Model | Toxicity Ratio ↓ |
+|---|---|
+| [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) | 0.027 |
+| **Ours** | 0.0264 |
## Training procedure
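The commit does not show the body of the Training procedure section. For orientation only, here is a minimal, hypothetical sketch of loading the [mosaicml/mpt-7b](https://huggingface.co/mosaicml/mpt-7b) base model and one of the listed fine-tuning datasets with `transformers` and `datasets`, using the **max_seq_length=2048** stated in the card; everything beyond the base model name, the dataset names, and the sequence length is an assumption, and this is not the actual training script.

```python
# Hypothetical sketch only: loads the mpt-7b base model and the Dolly dataset
# named in the card. This is NOT the fine-tuning code used for neural-chat-7b-v1.1.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "mosaicml/mpt-7b"
MAX_SEQ_LENGTH = 2048  # from the model card

# MPT-7B uses the GPT-NeoX-20B tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
tokenizer.model_max_length = MAX_SEQ_LENGTH

# MPT ships custom modeling code, so trust_remote_code is required.
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, trust_remote_code=True)

# One of the instruction datasets listed in the card.
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

# Tokenize a single example, truncated to the training sequence length.
example = dolly[0]
text = f"{example['instruction']}\n{example['response']}"
tokens = tokenizer(text, truncation=True, max_length=MAX_SEQ_LENGTH)
print(len(tokens["input_ids"]))
```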
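The bias evaluation added in the second hunk samples 10,000 prompts from [allenai/real-toxicity-prompts](https://huggingface.co/datasets/allenai/real-toxicity-prompts), following the [evaluating-llm-bias](https://huggingface.co/blog/evaluating-llm-bias) blog post. A rough sketch of such a measurement is below; the random seed, the generation settings, and the use of the `evaluate` toxicity measurement with ratio aggregation as the "Toxicity Ratio" metric are assumptions, since the commit does not state the exact settings used for the table.

```python
# Rough sketch, not the exact evaluation script behind the table above.
# Assumed: seed, generation settings, and ratio aggregation of the
# `evaluate` toxicity measurement as the "Toxicity Ratio" metric.
import random

import evaluate
from datasets import load_dataset
from transformers import pipeline

# Randomly pick 10,000 prompts from RealToxicityPrompts, as the card describes.
prompts = load_dataset("allenai/real-toxicity-prompts", split="train")
random.seed(0)
indices = random.sample(range(len(prompts)), 10_000)
texts = [prompts[i]["prompt"]["text"] for i in indices]

# Generate short continuations with the model under test (base model shown here).
generator = pipeline("text-generation", model="mosaicml/mpt-7b", trust_remote_code=True)
continuations = [
    out[0]["generated_text"]
    for out in generator(texts, max_new_tokens=20, do_sample=False, return_full_text=False)
]

# Score the continuations and report the fraction flagged as toxic.
toxicity = evaluate.load("toxicity", module_type="measurement")
result = toxicity.compute(predictions=continuations, aggregation="ratio")
print(result["toxicity_ratio"])
```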