---
license: apache-2.0
---
|
|
|
|
|
## Interactive Evolution: A Neural-Symbolic Self-Training Framework for Large Language Models |
|
|
|
Paper Link: https://arxiv.org/abs/2406.11736 |
|
|
|
Code Repo: https://github.com/xufangzhi/ENVISIONS |
|
|
|
|
|
|
|
## 🔥 News
|
|
|
- 🔥🔥🔥 We have released the final checkpoints after self-training!
|
|
|
|
|
## Note |
|
The self-training process is based on the LLaMA2-Chat model series and is powered by ENVISIONS. The work is still under review.
|
|
|
|
|
## Prompt for Zero-shot Evaluation |
|
|
|
```markdown
Write Python code to solve the question.
The question is: <question>
The solution code is:
```
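
As a usage sketch, the prompt above can be filled in and passed to a released checkpoint. The snippet below is a minimal illustration, assuming the checkpoint loads with the Hugging Face `transformers` library; the model path is a placeholder (substitute the actual checkpoint repo id or a local directory), and the question and generation settings are only examples.

```python
# Minimal zero-shot evaluation sketch (assumption: the checkpoint is a
# standard causal LM loadable via transformers; the path below is a
# placeholder, not a real repo id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/self-trained-checkpoint"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, torch_dtype=torch.float16, device_map="auto"
)

# Fill a concrete question into the zero-shot prompt template above.
question = "What is the sum of the first 10 positive integers?"
prompt = (
    "Write Python code to solve the question.\n"
    f"The question is: {question}\n"
    "The solution code is:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens (the solution code).
generated = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```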
|
|
|
|
|
## Citation |
|
If you find this work helpful, please kindly cite the paper.
|
```bibtex
@misc{xu2024interactive,
      title={Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models},
      author={Fangzhi Xu and Qiushi Sun and Kanzhi Cheng and Jun Liu and Yu Qiao and Zhiyong Wu},
      year={2024},
      eprint={2406.11736},
      archivePrefix={arXiv},
}
```
|
|
|
|