mrzjy committed
Commit dfb7050 · verified · 1 Parent(s): e927cef

Update README.md

Files changed (1): README.md +76 -2
README.md CHANGED
@@ -16,9 +16,17 @@ size_categories:
 
 A total of 92,421 story dialogue lines (with character role labels) plus narration, covering Honkai Impact 3rd from "主线1黄昏、少女、战舰" through "主线第二部03间章:一个梦游者的苦痛"
 
- This dataset was obtained from the [honkai_impact_3rd_game_playthrough](https://huggingface.co/datasets/mrzjy/honkai_impact_3rd_game_playthrough) video dataset via an OCR text recognition -> VLM structured parsing -> post-processing pipeline.
- Any character role that could not be recognized is recorded as "\<unknown\>"
+ Starting from the [honkai_impact_3rd_game_playthrough](https://huggingface.co/datasets/mrzjy/honkai_impact_3rd_game_playthrough) video dataset, this dataset is produced by an AI pipeline that extracts a structured textual story corpus.
+
+ The AI pipeline is outlined as follows:
+
+ 1. Download the videos part by part (using [BBDown](https://github.com/nilaoda/BBDown) to download the [BiliBili Honkai Impact 3rd story videos](https://www.bilibili.com/video/BV12W411h76f/))
+ 2. Split each video into frames (one frame per second)
+ 3. Run OCR text detection frame by frame (using [Paddle-OCR](https://github.com/PaddlePaddle/PaddleOCR)); a frame-and-OCR sketch follows this list
+ 4. Run structured VLM parsing frame by frame (using [MiniCPM-V-2_6](https://huggingface.co/openbmb/MiniCPM-V-2_6), with the frame image + OCR result as input)
+ 5. Rule-based post-processing
+   - Normalize VLM outputs (e.g., denoise and discard malformed outputs)
+   - Deduplicate and merge information across consecutive frames using edit distance and similar measures (e.g., since a dialogue line takes time to play out, a frame showing a half-finished line is merged into the later frame where the line is complete); a toy merge sketch follows below
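+
+ To make steps 2 and 3 concrete, here is a minimal illustrative sketch (assuming local `.mp4` files, `ffmpeg` on PATH, and the PaddleOCR Python API; the paths and function names are hypothetical, not the actual pipeline code):
+
+ ```
+ import subprocess
+ from pathlib import Path
+
+ from paddleocr import PaddleOCR  # API details vary by PaddleOCR version
+
+ def extract_frames(video_path, out_dir):
+     """Step 2: sample one frame per second into out_dir as numbered PNGs."""
+     Path(out_dir).mkdir(parents=True, exist_ok=True)
+     subprocess.run(
+         ["ffmpeg", "-i", video_path, "-vf", "fps=1", f"{out_dir}/frame_%06d.png"],
+         check=True,
+     )
+
+ # Step 3: OCR every extracted frame (the game UI text is Chinese)
+ ocr_engine = PaddleOCR(lang="ch")
+ extract_frames("playthrough_p01.mp4", "frames/p01")
+ ocr_results = {p.name: ocr_engine.ocr(str(p)) for p in sorted(Path("frames/p01").glob("*.png"))}
+ ```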
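+
+ And for the merge rule in step 5, a toy sketch of collapsing consecutive frames whose text is a prefix of, or nearly identical to, the next frame's text (the threshold and field names are assumptions, not the actual post-processing code):
+
+ ```
+ from difflib import SequenceMatcher
+
+ def merge_consecutive(frames, threshold=0.9):
+     """Collapse runs of frames that show progressively completed versions of one line."""
+     merged = []
+     for frame in frames:
+         if merged:
+             prev, cur = merged[-1]["content"], frame["content"]
+             similar = SequenceMatcher(None, prev, cur).ratio() >= threshold
+             # a half-displayed line is usually a prefix of the completed line
+             if cur.startswith(prev) or similar:
+                 merged[-1] = frame  # keep the later, more complete frame
+                 continue
+         merged.append(frame)
+     return merged
+ ```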
 
 Since everything comes out of an AI pipeline, some errors (recognition mistakes and the like) are inevitable, but the data quality is still fairly decent
 
@@ -121,4 +129,70 @@ You can see that sometimes the role names are not consistent (e.g., 德丽莎女
 萝莎莉娅 331
 长光 302
 羽兔 293
+ ```
+
+ # VLM Prompt
+
+ ```
+ PROMPT = """This is an image of RPG game. Given associated OCR result, please help us identify the existence of story narrations and dialogues and extract them in structured format.
+ This is the associated OCR results:
+ ```ocr
+ {ocr}
+ ```
+
+ There are two types of story content you should extract:
+
+ - Narration: single line or paragraph of narration, telling the story background and plots
+ - Dialogue: dialogue contents spoken by a character. The speaker character name and spoken content must co-appear in the image.
+
+ Note:
+
+ - Be strict with OCR texts, you are NOT allowed to fabricate contents that are not captured by OCR results.
+ - The OCR often separate multiline texts, and it's your task to concatenate consecutive lines if necessary.
+ - There might be noisy textual contents (e.g., advertisement, UI elements, combos, etc.), which are not our interest.
+ - There might be texts indicating state/environment information (e.g., location, time, source, state, etc), you can extract them as well in environment field.
+
+ Please output your response in JSON structure in one of the 3 following ways:
+
+ 1. In case of no desired content (neither dialogue nor narration), output a JSON dict whose type is null.
+
+ ```json
+ {{"type": null}}
+ ```
+
+ 2. In case of dialogue
+
+ ```json
+ {{
+   "type": "dialogue",
+   "role": "<speaker name>",
+   "content": "<spoken content>",
+   "state": "<state/environment info, null if there isn't any>"
+ }}
+ ```
+
+ 3. In case of narration
+
+ ```json
+ {{
+   "type": "narration",
+   "content": "<narrative content>"
+ }}
+ ```"""
+ ```
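+
+ The snippet below calls a `format_template` helper that is not shown in this repo; a plausible one-liner (an assumption, not the actual helper) simply fills the frame's OCR lines into `PROMPT`. Note that the doubled braces `{{ }}` in the template exist so that `.format` only substitutes `{ocr}`:
+
+ ```
+ def format_template(ocr_result):
+     # Hypothetical helper: join the frame's OCR text lines and fill them into PROMPT
+     return PROMPT.format(ocr="\n".join(ocr_result))
+ ```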
+
+ # VLM code snippet
+
+ ```
+ # Model loading shown for completeness (flags such as attn_implementation omitted; adjust to your setup)
+ import torch
+ from PIL import Image
+ from tqdm import tqdm
+ from transformers import AutoModel, AutoTokenizer
+
+ model = AutoModel.from_pretrained("openbmb/MiniCPM-V-2_6", trust_remote_code=True, torch_dtype=torch.bfloat16).eval().cuda()
+ tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-V-2_6", trust_remote_code=True)
+
+ # generate: `batches` yields lists of dicts with "frame_path" and "ocr" keys
+ for batch in tqdm(batches):
+     # one single-turn conversation per frame: the frame image plus the formatted OCR prompt
+     msgs = [
+         [{"role": "user", "content": [Image.open(b["frame_path"]), format_template(b["ocr"])]}]
+         for b in batch
+     ]
+     # msgs is a list of conversations, following MiniCPM-V-2_6's batched chat usage
+     outputs = model.chat(
+         image=None,
+         msgs=msgs,
+         tokenizer=tokenizer
+     )
 ```
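+
+ Downstream, each VLM reply has to be parsed back into one of the three JSON forms defined in the prompt. A hedged parsing sketch (the fence-stripping and fallback behavior are assumptions, not the dataset's exact post-processing):
+
+ ```
+ import json
+ import re
+
+ def parse_vlm_output(text):
+     """Extract the first JSON object from a VLM reply; return None on failure."""
+     # the model may wrap its JSON in a ```json fence, or answer with bare JSON
+     match = re.search(r"```json\s*(.*?)```", text, re.DOTALL)
+     payload = match.group(1) if match else text
+     try:
+         data = json.loads(payload)
+     except json.JSONDecodeError:
+         return None  # malformed outputs are dropped during post-processing
+     if not isinstance(data, dict):
+         return None
+     # keep only the documented shapes: null / dialogue / narration
+     return data if data.get("type") in (None, "dialogue", "narration") else None
+ ```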