jamescalam committed · Commit 8dca835 · Parent: b88170c

Update README.md

Files changed (1): README.md (+36 −1)
README.md CHANGED
- visual-question-answering
---

The YouTube transcriptions dataset contains technical tutorials (currently only from [YouTube.com/c/JamesBriggs](https://www.youtube.com/c/jamesbriggs)) transcribed using [OpenAI's Whisper](https://huggingface.co/openai/whisper-large) (large). Each row represents a roughly sentence-length chunk of text alongside the video URL and timestamp.

Note that each item in the dataset contains only a short chunk of text. For most use cases you will likely need to merge multiple rows to create more substantial chunks of text. If you need to do that, this code snippet will help:

```python
from datasets import load_dataset

# first download the dataset
data = load_dataset(
    'jamescalam/youtube-transcriptions',
    split='train'
)

new_data = []  # this will store the merged rows

window = 6  # number of sentences to combine
stride = 3  # number of sentences to 'stride' over, used to create overlap

for i in range(0, len(data), stride):
    i_end = min(len(data) - 1, i + window)
    if data[i]['title'] != data[i_end]['title']:
        # skip this entry, as it spans the end of one video
        # and the start of another
        continue
    # create a larger text chunk from rows i..i_end (inclusive,
    # so the text matches the 'start' and 'end' timestamps below)
    text = ' '.join(data[i:i_end + 1]['text'])
    # add to the adjusted data list
    new_data.append({
        'start': data[i]['start'],
        'end': data[i_end]['end'],
        'title': data[i]['title'],
        'text': text,
        'id': data[i]['id'],
        'url': data[i]['url'],
        'published': data[i]['published']
    })
```
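
To see the overlap that `window` and `stride` produce without downloading the dataset, here is a minimal sketch of the same sliding-window logic on a made-up list of sentence strings (the `sentences` list is purely illustrative, not real transcription data):

```python
# toy illustration of the window/stride merging above, using plain
# strings in place of dataset rows (sentence texts are made up)
sentences = [f"sentence {n}" for n in range(10)]

window = 6  # number of sentences to combine
stride = 3  # step between chunk starts, creating overlap

chunks = []
for i in range(0, len(sentences), stride):
    i_end = min(len(sentences) - 1, i + window)
    # join sentences i..i_end inclusive into one chunk
    chunks.append(' '.join(sentences[i:i_end + 1]))
```

Because `stride` is smaller than `window`, consecutive chunks share several sentences, which helps avoid cutting a thought in half at a chunk boundary.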