Librarian Bot: Add base_model information to model
This pull request enriches your model's metadata by adding [`LeBenchmark/wav2vec2-FR-7K-large`](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large) as a `base_model` field in the YAML block of your model's `README.md`.
How did we find this information? We performed a regular expression match on your `README.md` file to determine the connection.
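As an illustration, the kind of match involved can be sketched as follows. This is a hypothetical example, not the bot's actual code: the `MODEL_LINK` pattern and the sample `readme` text are assumptions for demonstration.

```python
import re

# Hypothetical sketch: scan the README body for a Hugging Face Hub model
# link of the form https://huggingface.co/<org>/<model> and treat the
# first match as the base model.
MODEL_LINK = re.compile(r"https://huggingface\.co/([\w.-]+/[\w.-]+)")

readme = (
    "This model is a fine-tuned version of "
    "[LeBenchmark/wav2vec2-FR-7K-large]"
    "(https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large)."
)

match = MODEL_LINK.search(readme)
base_model = match.group(1) if match else None
print(base_model)  # LeBenchmark/wav2vec2-FR-7K-large
```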
**Why add this?** Enhancing your model's metadata in this way:
- **Boosts Discoverability** - It becomes straightforward to trace the relationships between various models on the Hugging Face Hub.
- **Highlights Impact** - It showcases the contributions and influences different models have within the community.
For a hands-on example of how such metadata can play a pivotal role in mapping model connections, take a look at [librarian-bots/base_model_explorer](https://huggingface.co/spaces/librarian-bots/base_model_explorer).
This PR comes courtesy of [Librarian Bot](https://huggingface.co/librarian-bot). If you have any feedback, queries, or need assistance, please don't hesitate to reach out to [@davanstrien](https://huggingface.co/davanstrien). Your input is invaluable to us!
```diff
@@ -1,8 +1,7 @@
 ---
-license: apache-2.0
 language: fr
+license: apache-2.0
 library_name: transformers
-thumbnail: null
 tags:
 - automatic-speech-recognition
 - hf-asr-leaderboard
@@ -17,93 +16,94 @@ datasets:
 - gigant/african_accented_french
 metrics:
 - wer
+base_model: LeBenchmark/wav2vec2-FR-7K-large
 model-index:
 - name: Fine-tuned wav2vec2-FR-7K-large model for ASR in French
   results:
   - task:
-      name: Automatic Speech Recognition
       type: automatic-speech-recognition
+      name: Automatic Speech Recognition
     dataset:
       name: Common Voice 11.0
       type: mozilla-foundation/common_voice_11_0
       args: fr
     metrics:
-    - name: Test WER
-      type: wer
+    - type: wer
       value: 11.44
-    - name: Test WER (+LM)
-      type: wer
+      name: Test WER
+    - type: wer
       value: 9.66
+      name: Test WER (+LM)
   - task:
-      name: Automatic Speech Recognition
       type: automatic-speech-recognition
+      name: Automatic Speech Recognition
     dataset:
       name: Multilingual LibriSpeech (MLS)
       type: facebook/multilingual_librispeech
       args: french
     metrics:
-    - name: Test WER
-      type: wer
+    - type: wer
       value: 5.93
-    - name: Test WER (+LM)
-      type: wer
+      name: Test WER
+    - type: wer
       value: 5.13
+      name: Test WER (+LM)
   - task:
-      name: Automatic Speech Recognition
       type: automatic-speech-recognition
+      name: Automatic Speech Recognition
     dataset:
       name: VoxPopuli
       type: facebook/voxpopuli
       args: fr
     metrics:
-    - name: Test WER
-      type: wer
+    - type: wer
       value: 9.33
-    - name: Test WER (+LM)
-      type: wer
+      name: Test WER
+    - type: wer
       value: 8.51
+      name: Test WER (+LM)
   - task:
-      name: Automatic Speech Recognition
       type: automatic-speech-recognition
+      name: Automatic Speech Recognition
     dataset:
       name: African Accented French
       type: gigant/african_accented_french
       args: fr
     metrics:
-    - name: Test WER
-      type: wer
+    - type: wer
       value: 16.22
-    - name: Test WER (+LM)
-      type: wer
+      name: Test WER
+    - type: wer
       value: 15.39
+      name: Test WER (+LM)
   - task:
-      name: Automatic Speech Recognition
       type: automatic-speech-recognition
+      name: Automatic Speech Recognition
     dataset:
       name: Robust Speech Event - Dev Data
       type: speech-recognition-community-v2/dev_data
       args: fr
     metrics:
-    - name: Test WER
-      type: wer
+    - type: wer
       value: 16.56
-    - name: Test WER (+LM)
-      type: wer
+      name: Test WER
+    - type: wer
       value: 12.96
+      name: Test WER (+LM)
   - task:
-      name: Automatic Speech Recognition
       type: automatic-speech-recognition
+      name: Automatic Speech Recognition
     dataset:
       name: Fleurs
       type: google/fleurs
       args: fr_fr
     metrics:
-    - name: Test WER
-      type: wer
-      value: 10.1
-    - name: Test WER (+LM)
-      type: wer
+    - type: wer
+      value: 10.1
+      name: Test WER
+    - type: wer
       value: 8.84
+      name: Test WER (+LM)
 ---
 
 # Fine-tuned wav2vec2-FR-7K-large model for ASR in French
```
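With the field in place, `base_model` can be read straight out of the README's YAML front matter. Below is a minimal stdlib-only sketch under the assumption of a simple flat `key: value` front matter; the line-by-line parser here is an illustration only, and a real reader would use a proper YAML parser (e.g. PyYAML) or `huggingface_hub`'s model-card utilities.

```python
# Sample README text with front matter; the simple parser below handles
# only top-level flat `key: value` lines, which is enough to show how the
# new field is consumed.
readme = """---
language: fr
license: apache-2.0
library_name: transformers
base_model: LeBenchmark/wav2vec2-FR-7K-large
---

# Fine-tuned wav2vec2-FR-7K-large model for ASR in French
"""

# Split off the block between the first two `---` delimiters.
_, front_matter, _ = readme.split("---", 2)

metadata = {}
for line in front_matter.strip().splitlines():
    # Skip list items and indented (nested) lines; keep flat keys only.
    if ":" in line and not line.startswith(("-", " ")):
        key, _, value = line.partition(":")
        metadata[key.strip()] = value.strip()

print(metadata["base_model"])  # LeBenchmark/wav2vec2-FR-7K-large
```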