matybohacek committed on
Commit
2b6c3d0
1 Parent(s): 6ba9650

Update README.md

Files changed (1): README.md (+23, −48)
README.md CHANGED
@@ -120,17 +120,9 @@ task_categories:
  </style>
 
 
- <img src="tbd" style="width: 100%">
 
- The DeepAction dataset contains over 3,000 videos generated by seven text-to-video AI models, as well as real matched videos. These videos show people performing ordinary actions such as walking, running, and cooking. The AI models used to generate these videos include, in alphabetic order, AnimateDiff, CogVideoX5B, Lumiere, Pexels, RunwayML, StableDiffusion, Veo (pre-release version), and VideoPoet.
-
- <br>
-
- <br>
-
- # Licensing
-
- TBD, will be provided by pcounsel
 
  <br>
 
 
@@ -138,91 +130,74 @@ TBD, will be provided by pcounsel
 
  To get started, log into Hugging Face in your CLI environment, and run:
 
  from datasets import load_dataset
- dataset = load_dataset("TBD_DATASET_ID", trust_remote_code=True)
-
- <br>
 
  <br>
 
  ## Data
 
- The data is structured into eight folders, corresponding to different text-to-video AI models. Each folder has 100 subfolders containing AI-generated videos. These subfolders correspond to action classes; all videos in a given subfolder were generated using the same prompt (see the list of prompts here).
 
  <table class="video-table">
    <tr>
      <td style="width: 50%;">
        <video src="https://data.matsworld.io/ucbresearch/example-real-scripted.mp4" controls></video>
-       <p style="text-align: center;"><b>Real: </b> Scripted</p>
      </td>
      <td style="width: 50%;">
        <video src="https://data.matsworld.io/ucbresearch/example-real-unscripted.mp4" controls></video>
-       <p style="text-align: center;"><b>Real: </b> Unscripted</p>
      </td>
    </tr>
    <tr>
      <td style="width: 50%;">
        <video src="https://data.matsworld.io/ucbresearch/example-real-hand-movement.mp4" controls></video>
-       <p style="text-align: center;"><b>Real: </b> Hand movement</p>
      </td>
      <td style="width: 50%;">
        <video src="https://data.matsworld.io/ucbresearch/example-real-head-movement.mp4" controls></video>
-       <p style="text-align: center;"><b>Real: </b> Head movement</p>
      </td>
    </tr>
-
-
    <tr>
      <td style="width: 50%;">
        <video src="https://data.matsworld.io/ucbresearch/example-fake-wav2lip.mp4" controls></video>
-       <p style="text-align: center;"><b>Fake: </b> Wav2Lip <i>with real voice</i></p>
      </td>
      <td style="width: 50%;">
        <video src="http://data.matsworld.io/ucbresearch/example-fake-wav2lip-ai-voice.mp4" controls></video>
-       <p style="text-align: center;"><b>Fake: </b> Wav2Lip <i>with fake voice</i></p>
      </td>
    </tr>
    <tr>
      <td style="width: 50%;">
        <video src="https://data.matsworld.io/ucbresearch/example-fake-retalking.mp4" controls></video>
-       <p style="text-align: center;"><b>Fake: </b> ReTalking <i>with real voice</i></p>
      </td>
      <td style="width: 50%;">
        <video src="http://data.matsworld.io/ucbresearch/example-fake-retalking-ai-voice.mp4" controls></video>
-       <p style="text-align: center;"><b>Fake: </b> ReTalking <i>with fake voice</i></p>
-     </td>
-   </tr>
-   <tr>
-     <td style="width: 50%;">
-       <video src="https://data.matsworld.io/ucbresearch/example-fake-facefusion.mp4" controls></video>
-       <p style="text-align: center;"><b>Fake: </b> Face Fusion</p>
-     </td>
-     <td style="width: 50%;">
-       <video src="https://data.matsworld.io/ucbresearch/example-fake-facefusion-gan.mp4" controls></video>
-       <p style="text-align: center;"><b>Fake: </b> Face Fusion + GAN</p>
-     </td>
-   </tr>
-   <tr>
-     <td style="width: 50%;">
-       <video src="https://data.matsworld.io/ucbresearch/example-fake-facefusion-live.mp4" style="width: 100%;" controls></video>
-       <p style="text-align: center;"><b>Fake: </b> Face Fusion Live</p>
-     </td>
-     <td style="width: 50%;">
-       <p></p>
      </td>
    </tr>
  </table>
 
  ## Misc
 
- Please use the following citation to refer to our work:
 
  ```bib
  TBD
  ```
 
- Matyas Bohacek, Google* and Stanford University
- Hany Farid, University of California, Berkeley
-
- This work was done during the first author's (MB) internship at Google.
 
 </style>
 
 
+ <img src="https://data.matsworld.io/ucbresearch/deepaction.gif" style="width: 100%">
 
+ The DeepAction dataset contains over 3,000 videos generated by seven text-to-video AI models, along with matched real videos. The videos show people performing ordinary actions such as walking, running, and cooking. The seven models are, in alphabetical order, AnimateDiff, CogVideoX5B, Lumiere, RunwayML, StableDiffusion, Veo (pre-release version), and VideoPoet; the matched real videos were sourced from Pexels. Refer to <a href=''>our pre-print</a> for details.
 
 <br>
 
 
 
 To get started, log into Hugging Face in your CLI environment and run:
 
+ ```python
 from datasets import load_dataset
+ dataset = load_dataset("faridlab/deepaction_v1", trust_remote_code=True)
+ ```
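For reference, a minimal end-to-end sketch of the same flow (this assumes the `datasets` and `huggingface_hub` packages are installed and that your account has access to the dataset; the printed structure is whatever the repository's loading script defines):

```python
from huggingface_hub import login
from datasets import load_dataset

# Authenticate first; running `huggingface-cli login` in a shell works too.
login()

# trust_remote_code=True lets the dataset's custom loading script run.
dataset = load_dataset("faridlab/deepaction_v1", trust_remote_code=True)

# Inspect the splits and features before iterating over videos.
print(dataset)
```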
 
 <br>
 
 ## Data
 
+ The data is organized into eight folders: one per text-to-video AI model, plus one for the matched real (Pexels) videos. Each folder contains 100 subfolders, one per action class; all videos in a given subfolder were generated from the same prompt (see the list of prompts <a href=''>here</a>).
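To mirror that folder layout on disk, one option is to download the repository as plain files with `huggingface_hub` and walk it. A sketch, assuming the videos are stored as .mp4 files under the layout described above (note that the full download is large):

```python
from pathlib import Path
from huggingface_hub import snapshot_download

# Fetch the dataset repository as plain files, preserving its folder layout.
root = Path(snapshot_download(repo_id="faridlab/deepaction_v1", repo_type="dataset"))

# One top-level folder per video source, with action-class subfolders inside.
for source_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    classes = [d for d in source_dir.iterdir() if d.is_dir()]
    n_videos = sum(len(list(d.glob("*.mp4"))) for d in classes)
    print(f"{source_dir.name}: {len(classes)} action classes, {n_videos} videos")
```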
 
 <table class="video-table">
   <tr>
     <td style="width: 50%;">
       <video src="https://data.matsworld.io/ucbresearch/example-real-scripted.mp4" controls></video>
+      <p style="text-align: center;">Real</p>
     </td>
     <td style="width: 50%;">
       <video src="https://data.matsworld.io/ucbresearch/example-real-unscripted.mp4" controls></video>
+      <p style="text-align: center;">AnimateDiff</p>
     </td>
   </tr>
   <tr>
     <td style="width: 50%;">
       <video src="https://data.matsworld.io/ucbresearch/example-real-hand-movement.mp4" controls></video>
+      <p style="text-align: center;">CogVideoX5B</p>
     </td>
     <td style="width: 50%;">
       <video src="https://data.matsworld.io/ucbresearch/example-real-head-movement.mp4" controls></video>
+      <p style="text-align: center;">Lumiere</p>
     </td>
   </tr>
   <tr>
     <td style="width: 50%;">
       <video src="https://data.matsworld.io/ucbresearch/example-fake-wav2lip.mp4" controls></video>
+      <p style="text-align: center;">RunwayML</p>
     </td>
     <td style="width: 50%;">
       <video src="http://data.matsworld.io/ucbresearch/example-fake-wav2lip-ai-voice.mp4" controls></video>
+      <p style="text-align: center;">StableDiffusion</p>
     </td>
   </tr>
   <tr>
     <td style="width: 50%;">
       <video src="https://data.matsworld.io/ucbresearch/example-fake-retalking.mp4" controls></video>
+      <p style="text-align: center;">Veo (pre-release version)</p>
     </td>
     <td style="width: 50%;">
       <video src="http://data.matsworld.io/ucbresearch/example-fake-retalking-ai-voice.mp4" controls></video>
+      <p style="text-align: center;">VideoPoet</p>
     </td>
   </tr>
 </table>
 
+ <br>
+
+ ## Licensing
+
+ TBD, will be provided by pcounsel
+
+ <br>
 
 ## Misc
 
+ Please use the following citation when using this dataset:
 
 ```bib
 TBD
 ```
 
+ This work was done during the first author's (Matyas Bohacek) internship at Google.