---
license: gpl-3.0
language:
- en
- zh

inference: false
---

# Ziya-LLaMA-13B-v1.1

- Main Page: [Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)

(Due to the license restrictions on the LLaMA weights, we cannot release the complete model weights directly. Users need to follow the [usage instructions](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1) to merge the published delta weights with the original LLaMA weights themselves.)
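
The merge follows the usual delta-weight pattern: download the published delta, add it element-wise to the original LLaMA-13B parameters, and save the result. Below is a minimal sketch of that step, assuming the delta checkpoint stores per-parameter differences with identical tensor shapes; all paths are hypothetical, and the authoritative script and arguments are in the linked usage instructions.

```
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

base_path = "path/to/llama-13b"              # original LLaMA-13B weights (hypothetical path)
delta_path = "path/to/ziya-llama-13b-delta"  # downloaded delta weights (hypothetical path)
target_path = "path/to/ziya-llama-13b"       # output directory for the merged model (hypothetical path)

base = LlamaForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
delta = LlamaForCausalLM.from_pretrained(delta_path, torch_dtype=torch.float16)

# Assumed convention: merged parameter = base parameter + delta parameter.
delta_state = delta.state_dict()
with torch.no_grad():
    for name, param in base.state_dict().items():
        param.add_(delta_state[name])

base.save_pretrained(target_path)
# Reuse the tokenizer shipped with the delta checkpoint (assumption).
LlamaTokenizer.from_pretrained(delta_path).save_pretrained(target_path)
```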

# Ziya Series Models

- [Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1)
- [Ziya-LLaMA-7B-Reward](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-7B-Reward)
- [Ziya-LLaMA-13B-Pretrain-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1)
- [Ziya-BLIP2-14B-Visual-v1](https://huggingface.co/IDEA-CCNL/Ziya-BLIP2-14B-Visual-v1)

## Brief Introduction

We have further optimized the Ziya-LLaMA-13B-v1 model and released the open-source version Ziya-LLaMA-13B-v1.1. By adjusting the proportion of fine-tuning data and adopting a better reinforcement learning strategy, this version achieves clear improvements in question-answering accuracy, mathematical ability, and safety, as shown in detail in the figure below.
<img src="https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1.1/resolve/main/pk.png" width=1000 height=600>

## Software Dependencies
```
pip install torch==1.12.1 tokenizers==0.13.3 git+https://github.com/huggingface/transformers
```
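
With the dependencies installed and the weights merged, the model can be loaded for inference. The sketch below assumes the merged checkpoint from the step above (hypothetical path) and the `<human>:`/`<bot>:` dialogue format used by the Ziya series; the generation parameters are illustrative, not prescribed.

```
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

ckpt = "path/to/ziya-llama-13b"  # merged weights from the step above (hypothetical path)

model = LlamaForCausalLM.from_pretrained(ckpt, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(ckpt, use_fast=False)

# Assumed dialogue format for the Ziya series: "<human>:...\n<bot>:".
query = "Help me write a travel plan for Xi'an."
prompt = "<human>:" + query.strip() + "\n<bot>:"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
generate_ids = model.generate(
    input_ids,
    max_new_tokens=512,   # illustrative sampling settings; tune to taste
    do_sample=True,
    top_p=0.85,
    temperature=1.0,
)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0])
```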