---
datasets:
- agkphysics/AudioSet
- openslr/librispeech_asr
pipeline_tag: audio-classification
license: bsd-3-clause
tags:
- audio-classification
---

# Self-Supervised Audio Spectrogram Transformer (pretrained on AudioSet/LibriSpeech)

Self-Supervised Audio Spectrogram Transformer (SSAST) model with an uninitialized classifier head. It was introduced in the paper [SSAST: Self-Supervised Audio Spectrogram Transformer](https://arxiv.org/pdf/2110.09784) by Gong et al. and first released in [this repository](https://github.com/YuanGongND/ssast).

Disclaimer: The team releasing Audio Spectrogram Transformer did not write a model card for this model.

## Model description

The Audio Spectrogram Transformer is equivalent to [ViT](https://huggingface.co/docs/transformers/model_doc/vit), but applied to audio. Audio is first turned into an image (a spectrogram), after which a Vision Transformer is applied. The model obtains state-of-the-art results on several audio classification benchmarks.
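
As a concrete illustration of this pipeline, the sketch below converts a raw waveform into the log-mel spectrogram "image" that the Vision Transformer consumes. It assumes the generic `ASTFeatureExtractor` from the `transformers` library with its default AudioSet-style settings; this checkpoint may ship its own preprocessing configuration.

```python
import numpy as np
from transformers import ASTFeatureExtractor

# Default AudioSet-style settings: 128 mel bins, inputs padded/truncated to 1024 frames
feature_extractor = ASTFeatureExtractor()

# One second of 16 kHz audio; random noise stands in for a real recording
waveform = np.random.randn(16_000).astype(np.float32)

inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")

# The "image" the Vision Transformer consumes: (batch, time_frames, mel_bins)
print(inputs["input_values"].shape)  # torch.Size([1, 1024, 128])
```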

## Usage

The model is pretrained on a large amount of audio (AudioSet and LibriSpeech). Because the classifier head is uninitialized, fine-tune it on a downstream audio classification task before use.
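
Below is a minimal fine-tuning sketch, assuming the checkpoint can be loaded with the AST classes in the `transformers` library; `"<this-checkpoint>"` is a placeholder for this repository's id, and the label count and dummy batch are illustrative only.

```python
import numpy as np
import torch
from transformers import ASTFeatureExtractor, ASTForAudioClassification

checkpoint = "<this-checkpoint>"  # placeholder: replace with this repository's id
feature_extractor = ASTFeatureExtractor.from_pretrained(checkpoint)

# num_labels is an example; ignore_mismatched_sizes lets a fresh classifier head be initialized
model = ASTForAudioClassification.from_pretrained(
    checkpoint,
    num_labels=10,
    ignore_mismatched_sizes=True,
)

# Dummy batch: two one-second 16 kHz waveforms with example integer labels
waveforms = [np.random.randn(16_000).astype(np.float32) for _ in range(2)]
labels = torch.tensor([3, 7])

inputs = feature_extractor(waveforms, sampling_rate=16_000, return_tensors="pt")
outputs = model(input_values=inputs["input_values"], labels=labels)

# Cross-entropy loss over the (untrained) classifier head; pair with an
# optimizer and a real labelled dataset for actual fine-tuning
outputs.loss.backward()
```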