language:
- en
- zh
pipeline_tag: text2text-generation
---

KwaiAgents ([GitHub](https://github.com/KwaiKEG/KwaiAgents)) is a series of Agent-related works open-sourced by [KwaiKEG](https://github.com/KwaiKEG) from [Kuaishou Technology](https://www.kuaishou.com/en). The open-sourced content includes:

1. **KAgentSys-Lite**: An experimental Agent loop implemented with open-source search engines, browsers, and time, calendar, weather, and other tools; compared with the system in the paper, it lacks only the memory mechanism and some search capabilities.
2. **KAgentLMs**: A series of large language models with Agent capabilities such as planning, reflection, and tool use, acquired through the Meta-agent tuning proposed in the paper.
3. **KAgentInstruct**: Instruction-tuning data generated by the Meta-agent in the paper.
4. **KAgentBench**: Over 3,000 human-edited instances for the automated evaluation of Agent capabilities, covering planning, tool use, reflection, concluding, and profiling.

## User Guide

### Direct usage

For usage, you can refer to the tutorial of [baichuan-inc/Baichuan2-13B-Base](https://github.com/baichuan-inc/Baichuan2):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model (trust_remote_code is required for Baichuan2).
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-13B-Base", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-13B-Base", device_map="auto", trust_remote_code=True)

# Few-shot prompt mapping Chinese poem titles to their poets; the model should
# complete the second line with the poet of 夜雨寄北 (Li Shangyin).
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```

### AgentLMs as service

We recommend using [vLLM](https://github.com/vllm-project/vllm) and [FastChat](https://github.com/lm-sys/FastChat) to deploy the model inference service. First, you need to install the corresponding packages (for detailed usage, please refer to the documentation of the two projects):
```bash
pip install "fschat[model_worker,webui]"
pip install vllm==0.2.0
pip install transformers==4.33.2
```
To deploy KAgentLMs, first start the controller in one terminal:
```bash
python -m fastchat.serve.controller
```
Second, use the following command in another terminal to deploy a single-GPU inference service:
```bash
python -m fastchat.serve.vllm_worker --model-path $model_path --trust-remote-code
```
where `$model_path` is the local path of the downloaded model. If your GPU does not support bfloat16, you can add `--dtype half` to the command line.

Third, start the REST API server in a third terminal:
```bash
python -m fastchat.serve.openai_api_server --host localhost --port 8888
```

Finally, you can use `curl` to invoke the model with the same calling format as the OpenAI API. Here's an example:
```bash
curl http://localhost:8888/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "kagentlms_baichuan2_13b_mat", "messages": [{"role": "user", "content": "Who is Andy Lau"}]}'
```
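
If you prefer Python over `curl`, the same endpoint can be called programmatically. Below is a minimal sketch using the third-party `requests` library (an assumption; any HTTP client works), with the controller, worker, and API server from the steps above running locally:

```python
import requests

# Query the locally deployed, OpenAI-compatible FastChat API server.
response = requests.post(
    "http://localhost:8888/v1/chat/completions",
    json={
        "model": "kagentlms_baichuan2_13b_mat",
        "messages": [{"role": "user", "content": "Who is Andy Lau"}],
    },
    timeout=120,  # generation on a 13B model can take a while
)
response.raise_for_status()

# The response mirrors the OpenAI chat completion schema.
print(response.json()["choices"][0]["message"]["content"])
```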