---
library_name: clot
license: mit
datasets:
- zhongshsh/CLoT-Oogiri-GO
---

<p align="center">
<img src="logo.png" width="550" height="150">
</p>

## Creative Leap-of-Thought (CLoT)

This repository contains the **checkpoint** for "Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation" [[paper]](https://arxiv.org/abs/2312.02439).

## Introduction

To the best of our knowledge, we are the first to explore in depth the Leap-of-Thought (LoT) ability of multimodal large language models (LLMs): challenging LLMs to **think outside the box**, a non-sequential thinking skill that is just as crucial as popular sequential thinking abilities such as Chain-of-Thought-based methods. In this study, we examine the LLM's LoT ability through the lens of a humor-generation game called Oogiri (大喜利). The Oogiri game is an ideal platform for exploring LoT, as it compels participants to think outside the box and give unexpected, humorous responses to multimodal prompts: image-to-text (I2T), text-to-text (T2T), and image-and-text-to-text (IT2T).

🤣👉**Click the [[project page]](https://zhongshsh.github.io/CLoT/) for funny examples**👈.

## Quickstart 🤗

We provide a simple Chinese example of zero-shot inference with CLoT; only a few lines of code are needed, as shown below.

```python
from transformers import AutoTokenizer
from transformers.generation import GenerationConfig
from peft import AutoPeftModelForCausalLM

mpath = "zhongshsh/CLoT-cn"

# Load the tokenizer, generation config, and the PEFT (LoRA) checkpoint.
tokenizer = AutoTokenizer.from_pretrained(mpath, trust_remote_code=True)
generation_config = GenerationConfig.from_pretrained(mpath, trust_remote_code=True)
model = AutoPeftModelForCausalLM.from_pretrained(
    mpath,
    device_map="cuda",
    trust_remote_code=True,
).eval()

# Build a multimodal (image + text) query. The Chinese prompt says:
# "You are a humor expert, very familiar with memes and catchphrases from the
# internet and everyday life, and keen to bring people joy through humor.
# Let's think outside the box. \nUser: Read the image carefully and write an
# unexpected and funny sentence."
query = tokenizer.from_list_format([
    {'image': 'https://i.postimg.cc/Fz0bVzpm/test.png'},
    {'text': '你是一个搞笑专家,非常了解网络和现实生活当中的各种梗和流行语,热衷于用幽默感给大家带来欢乐。让我们打破常规思维思考问题。\n用户:请仔细阅读图片,写出一个令人感到意外且搞笑的句子。'},
])
response, history = model.chat(tokenizer, query=query, history=None, generation_config=generation_config)
print(response)
```
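
For a text-only (T2T) joke rather than an image-grounded one, the same chat interface can likely be reused. Below is a minimal sketch under the assumption that `tokenizer.from_list_format` and `model.chat` from the quickstart also accept queries without an `image` entry; the prompts here are hypothetical examples, not from the original repository.

```python
# Text-only (T2T) variant, reusing `model`, `tokenizer`, and `generation_config`
# from the quickstart above (assumption: the chat interface accepts text-only queries).
# Hypothetical prompt: "Let's think outside the box. \nUser: Write an unexpected
# and funny one-line caption for 'Monday morning'."
query = tokenizer.from_list_format([
    {'text': '让我们打破常规思维思考问题。\n用户:请为"周一早晨"写一个令人意外且搞笑的一句话注解。'},
])
response, history = model.chat(tokenizer, query=query, history=None, generation_config=generation_config)
print(response)

# Passing `history` back in should continue the same conversation for a follow-up turn.
response, history = model.chat(tokenizer, query='再搞笑一点。', history=history, generation_config=generation_config)  # "Make it funnier."
print(response)
```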

## Notice

We strongly advise users against spreading, or allowing others to spread, harmful content such as hate speech, violence, pornography, and fraudulent materials.

## Citation

```bibtex
@misc{zhong2023clot,
  title={Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation},
  author={Zhong, Shanshan and Huang, Zhongzhan and Gao, Shanghua and Wen, Wushao and Lin, Liang and Zitnik, Marinka and Zhou, Pan},
  journal={arXiv preprint arXiv:2312.02439},
  year={2023}
}
```