+ <h1 align="center"> WKM </h1>
2
+ <h3 align="center"> Agent Planning with World Knowledge Model </h3>
3
+
4
+ <p align="center">
5
+ <a href="https://arxiv.org/abs/2405.14205">📄arXiv</a> •
6
+ <a href="https://www.zjukg.org/project/WKM/">🌐Web</a> •
7
+ <a href="https://x.com/omarsar0/status/1793851075411296761">𝕏 Blog</a>
8
+ </p>
9
+
10
+ [![Awesome](https://awesome.re/badge.svg)](https://github.com/zjunlp/WKM)
11
+ [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
12
+ ![](https://img.shields.io/github/last-commit/zjunlp/WKM?color=green)

## Table of Contents

- 🌻[Acknowledgement](#acknowledgement)
- 🌟[Overview](#overview)
- 🔧[Installation](#installation)
- 📚[World Knowledge Build](#world-knowledge-build)
- 📉[Model Training](#model-training)
- 🧐[Evaluation](#evaluation)
- 🚩[Citation](#citation)

---

## 🌻Acknowledgement

Our training code is adapted from [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), and our inference code is built on [ETO](https://github.com/Yifan-Song793/ETO). The baselines are implemented with [ReAct](https://github.com/ysymyth/ReAct), [Reflexion](https://github.com/noahshinn/reflexion), [NAT](https://github.com/reason-wang/nat), and [ETO](https://github.com/Yifan-Song793/ETO). We serve open models to LangChain via [FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/langchain_integration.md). Thanks for their great contributions!

## 🌟Overview

Recent endeavors toward directly using large language models (LLMs) as agent models to execute interactive planning tasks have shown commendable results. Despite their achievements, however, these agents still struggle with blind trial-and-error in global planning and with generating hallucinatory actions in local planning, owing to their poor understanding of the "real" physical world. Imitating humans' world knowledge model, which provides global prior knowledge before a task and maintains local dynamic knowledge during the task, we introduce a parametric World Knowledge Model (***WKM***) to facilitate agent planning. Concretely, we steer the agent model to self-synthesize knowledge from both expert and sampled trajectories. We then develop ***WKM*** to provide prior task knowledge that guides global planning and dynamic state knowledge that assists local planning. Experimental results on three complex real-world simulated datasets with three state-of-the-art open-source LLMs, Mistral-7B, Gemma-7B, and Llama-3-8B, demonstrate that our method achieves superior performance compared to various strong baselines. Further analysis illustrates that ***WKM*** can effectively alleviate the blind trial-and-error and hallucinatory action issues, providing strong support for the agent's understanding of the world.
Other interesting findings include:
1) our instance-level task knowledge generalizes better to unseen tasks;
2) a weak WKM can guide strong agent model planning;
3) unified WKM training has promising potential for further development.

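To make the two knowledge types concrete: task knowledge is generated once before acting (global), while state knowledge is looked up at every step (local). Below is a minimal sketch of that loop; the `env`, `agent`, and `world_model` interfaces are hypothetical stand-ins for illustration, not the repository's actual API.

```python
# Minimal sketch of WKM-guided planning. `env`, `agent`, and
# `world_model` are hypothetical stand-ins, not the repository's API.

def plan_with_wkm(task, env, agent, world_model, max_steps=20):
    # Global planning: prior task knowledge is generated once, up front.
    task_knowledge = world_model.generate_task_knowledge(task)

    observation = env.reset(task)
    for _ in range(max_steps):
        # Local planning: dynamic state knowledge for the current step,
        # looked up in the state knowledge cache built offline.
        state_knowledge = world_model.retrieve_state_knowledge(observation)

        # The agent conditions on both kinds of knowledge when acting.
        action = agent.act(task, task_knowledge, state_knowledge, observation)
        observation, done = env.step(action)
        if done:
            break
    return observation
```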

## 🔧Installation

```bash
git clone https://github.com/zjunlp/WKM
cd WKM
pip install -r requirements.txt
```

## 📚World Knowledge Build

To build the task knowledge:
```sh
python world_knowledge_build.py \
    --dataset_path your/rejected_and_chosen/data/pair \
    --task your/task \
    --gen task_knowledge \
    --model_name your/model/name \
    --output_path your/output/path
```

To build the state knowledge:
```sh
python world_knowledge_build.py \
    --dataset_path your/rejected_and_chosen/data/pair \
    --task your/task \
    --gen state_knowledge \
    --model_name your/model/name \
    --output_path your/output/path
```
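
Both invocations share the same contrastive recipe: the model compares a chosen (expert) trajectory with a rejected (sampled) one and verbalizes what separates them. The sketch below illustrates that idea, assuming a generic `llm_generate(prompt) -> str` completion function; the actual prompts and formats in `world_knowledge_build.py` will differ.

```python
# Rough sketch of the contrastive self-synthesis behind
# world_knowledge_build.py. `llm_generate` is a hypothetical
# completion function (prompt in, text out); the repository's
# actual prompts differ.

def synthesize_task_knowledge(llm_generate, task, chosen_steps, rejected_steps):
    prompt = (
        f"Task: {task}\n"
        "Successful trajectory:\n" + "\n".join(chosen_steps) + "\n"
        "Failed trajectory:\n" + "\n".join(rejected_steps) + "\n"
        "Compare the two trajectories and summarize general task knowledge "
        "(global guidelines) explaining why the first one succeeds."
    )
    return llm_generate(prompt)

def synthesize_state_knowledge(llm_generate, task, chosen_steps, step_idx):
    # State knowledge is local: summarize what is known about the world
    # after a given step of the successful trajectory.
    history = "\n".join(chosen_steps[: step_idx + 1])
    prompt = (
        f"Task: {task}\nTrajectory so far:\n{history}\n"
        "Summarize the current environment state as concise state knowledge "
        "for choosing the next action."
    )
    return llm_generate(prompt)
```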

After you obtain the task knowledge and state knowledge, convert the data into the training format:
```sh
python train_data_process.py \
    --task alfworld \
    --file_path your/path \
    --mode model_type \
    --output_path your/output/path
```
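
For orientation, LLaMA-Factory consumes alpaca-style records with `instruction`/`input`/`output` fields; the sketch below shows what a converted sample could look like. The key names and field mapping are illustrative assumptions, not the exact schema emitted by `train_data_process.py`.

```python
import json

# Illustrative conversion into LLaMA-Factory's alpaca-style format
# (instruction / input / output). The sample keys here are hypothetical;
# the exact fields produced by train_data_process.py may differ.

def to_alpaca_records(samples):
    records = []
    for s in samples:
        records.append({
            "instruction": s["task_description"],   # hypothetical key
            "input": s["observation_history"],      # hypothetical key
            "output": s["task_knowledge"],          # target the WKM learns
        })
    return records

if __name__ == "__main__":
    samples = [{
        "task_description": "put a clean mug on the desk",
        "observation_history": "You are in the middle of a room...",
        "task_knowledge": "First locate the mug, clean it at the sink, ...",
    }]
    with open("train_data_for_agent.json", "w") as f:
        json.dump(to_alpaca_records(samples), f, indent=2)
```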

Then use the state knowledge training data to build the state knowledge cache base:
```sh
python state_base_build.py \
    --state_file_path your/state/knowledge/path \
    --state_action_pair_path path/to/store/state_action/pair \
    --vector_cache_path path/to/store/vector/cache
```
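
The cache base pairs each state knowledge string with the action that followed it, plus an embedding used for nearest-neighbor lookup during local planning. Here is a minimal sketch with `sentence-transformers` as the embedder; this is an assumption, and the repository may use a different embedding model and storage format.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed embedder; the repository may use a different model.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def build_state_cache(pairs):
    """pairs: list of {"state_knowledge": str, "action": str} dicts."""
    vectors = embedder.encode([p["state_knowledge"] for p in pairs])
    return np.asarray(vectors)

def retrieve_actions(query_state, pairs, vectors, top_k=1):
    # Cosine similarity between the current state knowledge and every
    # cached entry; return the actions paired with the closest matches.
    q = embedder.encode([query_state])[0]
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    best = np.argsort(-sims)[:top_k]
    return [pairs[i]["action"] for i in best]
```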

## 📉Model Training

Use [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to train the agent model and the world knowledge model:
```sh
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch \
    --config_file ./examples/accelerate/single_config.yaml \
    src/train_bash.py \
    --ddp_timeout 180000000 \
    --stage sft \
    --do_train \
    --model_name_or_path /base/model/path \
    --dataset_dir ./data \
    --dataset train_data_for_agent \
    --template model_template \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir ../lora/peft_model_name \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 2 \
    --lr_scheduler_type cosine \
    --logging_steps 1 \
    --save_steps 1000 \
    --learning_rate 1e-4 \
    --num_train_epochs 3 \
    --plot_loss \
    --fp16 \
    --cutoff_len 2048 \
    --save_safetensors False \
    --overwrite_output_dir \
    --train_on_prompt False
```
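
Since FastChat (used for evaluation below) serves full checkpoints, the LoRA adapter typically needs to be merged into the base model first. Below is a minimal sketch with 🤗 `transformers` and `peft`; the paths are placeholders matching the commands in this README, and LLaMA-Factory also ships its own export utility you can use instead.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "/base/model/path"            # placeholder: your base model
adapter_path = "../lora/peft_model_name"  # placeholder: LoRA output_dir above
merged_path = "/path/peft/agent_model"    # placeholder: path served by FastChat

base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_path)
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
model.save_pretrained(merged_path)
AutoTokenizer.from_pretrained(base_path).save_pretrained(merged_path)
```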

## 🧐Evaluation

To evaluate a task, first launch local API servers for the two models with FastChat. Note that FastChat model workers register with a controller, so make sure one is running (typically `python -m fastchat.serve.controller`) before starting the workers.
```sh
cd ./src/eval
# agent_model api server
python -u -m fastchat.serve.model_worker \
    --model-path /path/peft/agent_model \
    --port 21020 \
    --worker-address http://localhost:21020 \
    --max-gpu-memory 31GiB \
    --dtype float16
# world_knowledge_model api server
python -u -m fastchat.serve.model_worker \
    --model-path /path/peft/world_model \
    --port 21021 \
    --worker-address http://localhost:21021 \
    --max-gpu-memory 31GiB \
    --dtype float16
```

Then evaluate the task:
```sh
python -m eval_agent.eto_multi_main_probs \
    --agent_config fastchat \
    --agent_model_name agent_model \
    --world_model_name world_model \
    --exp_config alfworld \
    --exp_name eval \
    --split test
```

## 🚩Citation

Please cite our repository if you use WKM in your work. Thanks!

```bibtex
@article{DBLP:journals/corr/abs-2405-14205,
  author       = {Shuofei Qiao and
                  Runnan Fang and
                  Ningyu Zhang and
                  Yuqi Zhu and
                  Xiang Chen and
                  Shumin Deng and
                  Yong Jiang and
                  Pengjun Xie and
                  Fei Huang and
                  Huajun Chen},
  title        = {Agent Planning with World Knowledge Model},
  journal      = {CoRR},
  volume       = {abs/2405.14205},
  year         = {2024},
  url          = {https://doi.org/10.48550/arXiv.2405.14205},
  doi          = {10.48550/ARXIV.2405.14205},
  eprinttype   = {arXiv},
  eprint       = {2405.14205},
  timestamp    = {Wed, 19 Jun 2024 08:52:49 +0200},
  biburl       = {https://dblp.org/rec/journals/corr/abs-2405-14205.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}
```

## 🎉Contributors

<a href="https://github.com/zjunlp/WKM/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=zjunlp/WKM" /></a>

We will provide long-term maintenance to fix bugs and resolve issues, so if you run into any problems, please open an issue.