KwaiAgents (GitHub) is a series of agent-related works open-sourced by KwaiKEG from Kuaishou Technology. The open-sourced content includes:
- KAgentSys-Lite: An experimental agent loop built on open-source search engines, browsers, and time, calendar, and weather tools; compared with the system in the paper, it lacks only the memory mechanism and some search capabilities.
- KAgentLMs: A series of large language models with agent capabilities such as planning, reflection, and tool use, obtained through the Meta-agent tuning proposed in the paper.
- KAgentInstruct: Instruction-tuning data generated by the Meta-agent in the paper.
- KAgentBench: Over 3,000 human-edited, automated evaluation examples for testing agent capabilities, covering planning, tool use, reflection, concluding, and profiling.
## User Guide

### Direct usage
For a usage tutorial, you can refer to QwenLM/Qwen.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig

# Load the tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("kwaikeg/kagentlms_qwen_7b_mat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "kwaikeg/kagentlms_qwen_7b_mat",
    device_map="auto",
    trust_remote_code=True
).eval()

# Single-turn chat; "你好" means "Hello".
response, history = model.chat(tokenizer, "你好", history=None)
print(response)
```
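The Qwen-style `chat` interface returns the updated conversation history, so you can pass `history` back in for a follow-up turn. A minimal sketch (the follow-up question is only illustrative):

```python
# Continue the conversation by reusing the returned history.
response, history = model.chat(tokenizer, "Who is Andy Lau?", history=history)
print(response)
```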
### AgentLMs as service

#### Serving by vLLM (GPU)
We recommend using vLLM and FastChat to deploy the model inference service. First, install the corresponding packages (for detailed usage, please refer to the documentation of the two projects):
```bash
pip install vllm
pip install "fschat[model_worker,webui]"
```
To deploy KAgentLMs, first start the controller in one terminal:

```bash
python -m fastchat.serve.controller
```
Second, run the following command in another terminal to deploy a single-GPU inference service:

```bash
python -m fastchat.serve.vllm_worker --model-path $model_path --trust-remote-code
```
Here `$model_path` is the local path of the downloaded model. If the GPU does not support bfloat16, you can add `--dtype half` to the command line.
Third, start the REST API server in a third terminal:

```bash
python -m fastchat.serve.openai_api_server --host localhost --port 8888
```
Finally, you can invoke the model with `curl` using the OpenAI-compatible calling format. Here's an example:

```bash
curl http://localhost:8888/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "kagentlms_qwen_7b_mat", "messages": [{"role": "user", "content": "Who is Andy Lau"}]}'
```
#### Serving by llama.cpp (CPU)
llama-cpp-python offers a web server that aims to act as a drop-in replacement for the OpenAI API, allowing you to use llama.cpp-compatible models with any OpenAI-compatible client (language libraries, services, etc.). The converted model can be found at kwaikeg/kagentlms_qwen_7b_mat_gguf.
To install the server package and get started:
pip install "llama-cpp-python[server]"
python3 -m llama_cpp.server --model kagentlms_qwen_7b_mat_gguf/ggml-model-q4_0.gguf --chat_format chatml --port 8888
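Since this server exposes the same OpenAI-compatible `/v1/chat/completions` route, the request format from the vLLM section works here as well. A minimal sketch with plain HTTP (the model name is only illustrative; llama-cpp-python serves whichever GGUF file was loaded above):

```python
import requests

# Send an OpenAI-style chat completion request to the local llama.cpp server.
resp = requests.post(
    "http://localhost:8888/v1/chat/completions",
    json={
        "model": "kagentlms_qwen_7b_mat",
        "messages": [{"role": "user", "content": "Who is Andy Lau"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```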
## Citation
```bibtex
@article{pan2023kwaiagents,
  author  = {Haojie Pan and
             Zepeng Zhai and
             Hao Yuan and
             Yaojia Lv and
             Ruiji Fu and
             Ming Liu and
             Zhongyuan Wang and
             Bing Qin},
  title   = {KwaiAgents: Generalized Information-seeking Agent System with Large Language Models},
  journal = {CoRR},
  volume  = {abs/2312.04889},
  year    = {2023}
}
```