smajumdar94 committed
Commit 65f7754 • Parent(s): 9849e8a
Update README.md

README.md CHANGED
@@ -211,7 +211,7 @@ This model provides transcribed speech as a string for a given audio sample.
 
 ## Model Architecture
 
-FastConformer is an optimized version of the Conformer model
+FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with joint Transducer and CTC decoder loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) and about Hybrid Transducer-CTC training here: [Hybrid Transducer-CTC](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#hybrid-transducer-ctc).
 
 ## Training
 
@@ -266,7 +266,7 @@ Although this model isn’t supported yet by Riva, the [list of supported models
 Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
 
 ## References
-[1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
+[1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)
 
 [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
 
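For context on the hybrid setup the new Model Architecture paragraph describes, the following is a minimal usage sketch (not part of this commit), assuming the checkpoint is a NeMo hybrid Transducer-CTC FastConformer model; the model identifier `stt_en_fastconformer_hybrid_large_pc` and the audio path are placeholders, not names confirmed by this diff.

```python
# Minimal sketch, assuming a NeMo hybrid Transducer-CTC FastConformer checkpoint.
# The model name and audio path below are placeholders for illustration only.
import nemo.collections.asr as nemo_asr

# Load a hybrid FastConformer model (joint Transducer + CTC decoders, as the
# updated Model Architecture paragraph describes).
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="stt_en_fastconformer_hybrid_large_pc"  # placeholder identifier
)

# The Transducer (RNNT) decoder is used by default; hybrid models also expose
# the auxiliary CTC decoder, which can be selected instead:
asr_model.change_decoding_strategy(decoder_type="ctc")

# Transcribe a 16 kHz mono WAV file (illustrative path); the exact return
# format of transcribe() can vary across NeMo versions.
print(asr_model.transcribe(["sample.wav"])[0])
```

Because both decoders share the same FastConformer encoder, switching between Transducer and CTC decoding requires no additional model download.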