patrickvonplaten committed
Commit 8d88236
1 Parent(s): 0b362f5
Update README.md

README.md CHANGED
@@ -1,96 +1,60 @@
- ```
- $ git checkout add_wav2vec
- ```
-
- from transformers import AutoModelForMaskedLM
-
- model = AutoModelForMaskedLM.from_pretrained("patrickvonplaten/wav2vec2-base-960h")
-
- ```python
- from
- logits = model(
- #
- ```
-
- Finally, let's decode the prediction. Let's create a simple CTC decoder:
-
- ```python
- import numpy as np
- from itertools import groupby
-
- class Decoder:
-     def __init__(self, json_dict):
-         self.dict = json_dict
-         # index -> token lookup table built from the vocabulary keys
-         self.look_up = np.asarray(list(self.dict.keys()))
-
-     def decode(self, ids):
-         # map predicted ids to their tokens
-         converted_tokens = self.look_up[ids]
-         # collapse repeated tokens (CTC-style)
-         fused_tokens = [tok[0] for tok in groupby(converted_tokens)]
-         # drop "<s>" tokens and turn the "|" word delimiter into spaces
-         output = ' '.join(''.join(''.join(fused_tokens).split("<s>")).split("|"))
-         return output
- ```
-
- and instantiate it with the corresponding dict.
-
- ```python
- # hard-coded json dict taken from: https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt
- json_dict = {"<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3, "|": 4, "E": 5, "T": 6, "A": 7, "O": 8, "N": 9, "I": 10, "H": 11, "S": 12, "R": 13, "D": 14, "L": 15, "U": 16, "M": 17, "W": 18, "C": 19, "F": 20, "G": 21, "Y": 22, "P": 23, "B": 24, "V": 25, "K": 26, "'": 27, "X": 28, "J": 29, "Q": 30, "Z": 31}
-
- decoder = Decoder(json_dict=json_dict)
- ```
-
- and decode the result:
-
- ```python
- pred_trans = decoder.decode(pred_ids)
-
- print("Prediction:\n", pred_trans)
- print("\n" + 50 * "=" + "\n")
- print("Correct result:\n", expected_trans)
- ```
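-
- For intuition, here is what this decoder does on a hand-picked sequence of ids (made up for illustration, not real model output):
-
- ```python
- # toy ids chosen by hand from json_dict above: H H E <s> | Y O O U
- toy_ids = np.array([11, 11, 5, 0, 4, 22, 8, 8, 16])
-
- # repeats are collapsed, "<s>" is dropped, "|" becomes a space
- print(decoder.decode(toy_ids))  # HE YOU
- ```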
-
- 🎉
+ ---
+ language: en
+ datasets:
+ - librispeech_asr
+ tags:
+ - speech
+ license: apache-2.0
+ ---
+
+ # Wav2Vec2-Base-960h
+
+ [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
+
+ This is the base model, pretrained and fine-tuned on 960 hours of Librispeech.
+
+ [Paper](https://arxiv.org/abs/2006.11477)
+
+ Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
+
+ **Abstract**
+
+ We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
+
+ The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
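+
+ As a rough illustration of the contrastive task described in the abstract, here is a minimal sketch. It assumes simplified, made-up shapes and uses the other masked time steps as distractors; it is not the authors' implementation.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def toy_contrastive_loss(context, quantized, mask, temperature=0.1):
+     # context, quantized: (T, D) features for one utterance; mask: (T,) bool
+     c = F.normalize(context[mask], dim=-1)    # model outputs at masked steps
+     q = F.normalize(quantized[mask], dim=-1)  # their quantized targets
+     # similarity of every masked step to every target; the matching target
+     # (the diagonal) is the positive, the remaining ones act as distractors
+     logits = c @ q.T / temperature
+     return F.cross_entropy(logits, torch.arange(c.size(0)))
+
+ loss = toy_contrastive_loss(torch.randn(50, 256), torch.randn(50, 256), torch.rand(50) > 0.5)
+ ```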
+
+ # Usage
+
+ The model can be used as follows to transcribe speech input:
+
+ ```python
+ from transformers import Wav2Vec2Tokenizer, Wav2Vec2ForMaskedLM
+ from datasets import load_dataset
+ import soundfile as sf
+ import torch
+
+ # load model and tokenizer
+ tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
+ model = Wav2Vec2ForMaskedLM.from_pretrained("facebook/wav2vec2-base-960h")
+
+ # define function to read in sound file
+ def map_to_array(batch):
+     speech, _ = sf.read(batch["file"])
+     batch["speech"] = speech
+     return batch
+
+ # load dummy dataset and read sound files
+ ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
+ ds = ds.map(map_to_array)
+
+ # tokenize the first two samples
+ input_values = tokenizer(ds["speech"][:2], return_tensors="pt", padding="longest").input_values  # batch size 2
+
+ # retrieve logits
+ logits = model(input_values).logits
+
+ # take argmax and decode
+ predicted_ids = torch.argmax(logits, dim=-1)
+ transcription = tokenizer.batch_decode(predicted_ids)
+ ```
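+
+ To eyeball the result against the references, something like the following should work (a sketch; it assumes the dummy dataset exposes the reference transcript under a "text" column):
+
+ ```python
+ # compare predictions with the reference transcripts
+ # ("text" column assumed to hold the references)
+ for pred, ref in zip(transcription, ds["text"][:2]):
+     print("Prediction:", pred)
+     print("Reference: ", ref)
+ ```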