Titouan committed
Commit a1a126c
1 Parent(s): d517e2f

Update README.md

Files changed (1): README.md (+3 -4)
README.md CHANGED
@@ -4,7 +4,6 @@ thumbnail:
 pipeline_tag: automatic-speech-recognition
 tags:
 - CTC
-- Attention
 - pytorch
 - speechbrain
 - Transformer
@@ -19,7 +18,7 @@ metrics:
 <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
 <br/><br/>
 
-# wav2vec 2.0 with CTC/Attention trained on CommonVoice English (No LM)
+# wav2vec 2.0 with CTC trained on CommonVoice English (No LM)
 
 This repository provides all the necessary tools to perform automatic speech
 recognition from an end-to-end system pretrained on CommonVoice (English Language) within
@@ -37,8 +36,8 @@ The performance of the model is the following:
 This ASR system is composed of 2 different but linked blocks:
 - Tokenizer (unigram) that transforms words into subword units and trained with
 the train transcriptions (train.tsv) of CommonVoice (EN).
-- Acoustic model (wav2vec2.0 + CTC/Attention). A pretrained wav2vec 2.0 model ([wav2vec2-lv60-large](https://huggingface.co/facebook/wav2vec2-large-lv60)) is combined with two DNN layers and finetuned on CommonVoice En.
-The obtained final acoustic representation is given to the CTC and attention decoders.
+- Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([wav2vec2-lv60-large](https://huggingface.co/facebook/wav2vec2-large-lv60)) is combined with two DNN layers and finetuned on CommonVoice En.
+The obtained final acoustic representation is given to the CTC decoder.
 
 The system is trained with recordings sampled at 16kHz (single channel).
 The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
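
For context on the *transcribe_file* call mentioned in the diff, a minimal usage sketch with SpeechBrain's pretrained interface might look like the following. The repository id `speechbrain/asr-wav2vec2-commonvoice-en`, the save directory, and the audio path are assumptions for illustration, not taken from this commit.

```python
# Minimal sketch: transcribing a file with a pretrained wav2vec 2.0 + CTC model.
# The source repo id, savedir, and audio path below are placeholders/assumptions.
from speechbrain.pretrained import EncoderASR

asr_model = EncoderASR.from_hparams(
    source="speechbrain/asr-wav2vec2-commonvoice-en",          # assumed repo id
    savedir="pretrained_models/asr-wav2vec2-commonvoice-en",   # local cache dir
)

# transcribe_file normalizes the audio (resampling to 16 kHz, mono channel
# selection) if needed, as the README describes, before running the
# wav2vec 2.0 encoder and CTC decoding.
print(asr_model.transcribe_file("example.wav"))  # placeholder path
```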