reach-vb and ylacombe committed on
Commit 4a7cf16
1 Parent(s): ef45120

Update README.md with hyperlinks and more descriptions on the difference with small (#1)


- Update README.md with hyperlinks and more descriptions on the difference with small (04ff434517b58c93ffde8dfa2c4dc49e8e0daf17)


Co-authored-by: Yoach Lacombe <ylacombe@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -13,8 +13,9 @@ SeamlessM4T covers:
 - ⌨️ 96 Languages for text input/output
 - 🗣️ 35 languages for speech output.
 
-Apart from [SeamlessM4T-LARGE (2.3B)](https://huggingface.co/facebook/seamless-m4t-large) and [SeamlessM4T-MEDIUM (1.2B)](https://huggingface.co/facebook/seamless-m4t-medium) models, we are also developing a small model (281M) targeting for on-device inference.
-This folder contains an example to run an exported small model covering most tasks (ASR/S2TT/S2ST). The model could be executed on popular mobile devices with Pytorch Mobile (https://pytorch.org/mobile/home/).
+Apart from [SeamlessM4T-LARGE (2.3B)](https://huggingface.co/facebook/seamless-m4t-large) and [SeamlessM4T-MEDIUM (1.2B)](https://huggingface.co/facebook/seamless-m4t-medium) models, we are also developing a small model (281M) targeting on-device inference. [This folder](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t) contains an example to run an exported small model covering ASR and S2TT. The model can be executed on popular mobile devices with PyTorch Mobile (https://pytorch.org/mobile/home/).
+
+Refer to [UnitY-Small](https://huggingface.co/facebook/seamless-m4t-unity-small) if you also wish to cover speech-to-speech translation (S2ST) in addition to ASR and S2TT tasks.
 
 ## Overview
 
@@ -23,7 +24,7 @@ This folder contains an example to run an exported small model covering most tas
 | [UnitY-Small](https://huggingface.co/facebook/seamless-m4t-unity-small/resolve/main/unity_on_device.ptl) | 862MB | S2ST, S2TT, ASR |eng, fra, hin, por, spa|
 | [UnitY-Small-S2T](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t/resolve/main/unity_on_device_s2t.ptl) | 637MB | S2TT, ASR |eng, fra, hin, por, spa|
 
-UnitY-Small-S2T is a pruned version of UnitY-Small without 2nd pass unit decoding.
+[UnitY-Small-S2T](https://huggingface.co/facebook/seamless-m4t-unity-small-s2t) is a pruned version of [UnitY-Small](https://huggingface.co/facebook/seamless-m4t-unity-small) without 2nd pass unit decoding. Unlike [UnitY-Small](https://huggingface.co/facebook/seamless-m4t-unity-small), it can only be used for ASR and S2TT tasks.
 
 ## Inference
 To use exported model, users don't need seamless_communication or fairseq2 dependency.
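
For context on that `## Inference` line: the exported `.ptl` file is a self-contained TorchScript export, so it should be loadable with plain `torch`. The snippet below is a minimal sketch and is not part of this commit; the audio path, the `tgt_lang` keyword argument, and the assumption that the module takes a raw waveform tensor are illustrative guesses about the exported module's interface.

```python
# Hypothetical sketch: run the exported S2T model with only torch/torchaudio installed,
# i.e. without the seamless_communication or fairseq2 packages.
# Assumes unity_on_device_s2t.ptl has been downloaded from the model repo and that the
# exported module accepts a waveform tensor plus a tgt_lang keyword (an assumption).
import torch
import torchaudio

AUDIO_PATH = "input.wav"   # 16 kHz mono speech clip (assumption)
TGT_LANG = "eng"           # one of the covered languages: eng, fra, hin, por, spa

# Load the input audio as a (channels, samples) float tensor.
waveform, sample_rate = torchaudio.load(AUDIO_PATH)

# The .ptl export is a TorchScript module, so torch.jit.load is enough here.
s2t_model = torch.jit.load("unity_on_device_s2t.ptl")

with torch.no_grad():
    # S2TT when TGT_LANG differs from the spoken language, ASR when it matches.
    text = s2t_model(waveform, tgt_lang=TGT_LANG)

print(text)
```

On Android or iOS the same `.ptl` file would instead be loaded through the PyTorch Mobile lite interpreter, which is what the link to https://pytorch.org/mobile/home/ refers to.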