Update Readme

README.md CHANGED

@@ -7,11 +7,6 @@ tags:
 - convnext-audio
 - audioset
 inference: false
-extra_gated_prompt: "The collected information will help acquire a better knowledge of who is using our audio event tools. If relevant, please cite our Interspeech 2023 paper (Bibtex below)."
-extra_gated_fields:
-  Company/university: text
-  Website: text
-  I plan to use this model for (task, type of audio data, etc): text
 ---
 
 **ConvNeXt-Tiny-AT** is an audio tagging CNN model, trained on **AudioSet** (balanced+unbalanced subsets). It reached 0.471 mAP on the test set [(Paper)](https://www.isca-speech.org/archive/interspeech_2023/pellegrini23_interspeech.html).
@@ -36,10 +31,6 @@ pip install git+https://github.com/topel/audioset-convnext-inf@pip-install
 Below is an example of how to instantiate our model convnext_tiny_471mAP.pth
 
 ```python
-# 1. visit hf.co/topel/ConvNeXt-Tiny-AT and accept user conditions
-# 2. visit hf.co/settings/tokens to create an access token
-# 3. instantiate pretrained model
-
 import os
 import numpy as np
 import torch
@@ -48,7 +39,7 @@ import torchaudio
 from audioset_convnext_inf.pytorch.convnext import ConvNeXt
 from audioset_convnext_inf.utils.utilities import read_audioset_label_tags
 
-model = ConvNeXt.from_pretrained("topel/ConvNeXt-Tiny-AT",
+model = ConvNeXt.from_pretrained("topel/ConvNeXt-Tiny-AT", map_location='cpu')
 
 print(
     "# params:",
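The substantive code change above passes `map_location='cpu'` so the checkpoint's tensors are loaded onto CPU even when the weights were saved from a GPU. Below is a minimal sketch of that pattern with a plain PyTorch module; the `audioset_convnext_inf` package and its checkpoint are not assumed to be available here, so `torch.nn.Linear` stands in for the ConvNeXt model and a local `checkpoint.pth` path is hypothetical:

```python
import torch

# Stand-in model; the README instead calls ConvNeXt.from_pretrained(...).
model = torch.nn.Linear(in_features=10, out_features=5)

# Save and reload the state dict. map_location='cpu' forces every tensor
# onto CPU at load time, which is what the diff adds to from_pretrained
# so the model loads on machines without a GPU.
torch.save(model.state_dict(), "checkpoint.pth")
state = torch.load("checkpoint.pth", map_location="cpu")
model.load_state_dict(state)

# The README's snippet continues with a parameter-count print; for this
# stand-in Linear layer that is 10*5 weights + 5 biases.
n_params = sum(p.numel() for p in model.parameters())
print("# params:", n_params)
```

The same `sum(p.numel() for p in model.parameters())` expression works for the real ConvNeXt model once it is instantiated.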