|
--- |
|
license: mit |
|
datasets: |
|
- proj-persona/PersonaHub |
|
language: |
|
- hi |
|
- en |
|
metrics: |
|
- accuracy |
|
library_name: adapter-transformers |
|
--- |
|
|
|
# Amphion Text-to-Audio Pretrained Models |
|
|
|
We provide the following pretrained checkpoints: 
|
|
|
Two [AudioLDM](https://github.com/open-mmlab/Amphion/tree/main/egs/tta/audioldm) pretrained checkpoints with corresponding [AutoencoderKL](https://github.com/open-mmlab/Amphion/tree/main/egs/tta/autoencoderkl) checkpoints trained on AudioCaps. |
|
|
|
|
|
|
|
## Quick Start |
|
|
|
To use the pretrained models, run the following commands: 
|
|
|
### Step 1: Download the checkpoint 
|
```bash |
|
git lfs install |
|
git clone https://huggingface.co/amphion/text_to_audio |
|
``` |
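
Alternatively, if you prefer not to use Git LFS, the same repository can be downloaded with the Hugging Face CLI. This is a minimal sketch that assumes the `huggingface_hub` package is installed; `text_to_audio` is simply the local target directory:

```bash
# Requires the huggingface_hub package, which provides the huggingface-cli tool
pip install -U huggingface_hub

# Download the amphion/text_to_audio repo into a local text_to_audio/ directory
huggingface-cli download amphion/text_to_audio --local-dir text_to_audio
```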
|
|
|
### Step 2: Clone Amphion's source code from GitHub 
|
```bash |
|
git clone https://github.com/open-mmlab/Amphion.git |
|
``` |
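
If you have not installed Amphion's dependencies yet, the repository provides an environment setup script. The snippet below is a sketch that assumes the script is named `env.sh` at the repository root; check the upstream README for the recommended installation steps before running it:

```bash
# Assumption: env.sh at the Amphion root installs the Python dependencies;
# see the Amphion README for the currently recommended setup.
cd Amphion
sh env.sh
cd ..
```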
|
|
|
### Step 3: Specify the checkpoint path 
|
Create a soft link so that Amphion can find the checkpoints downloaded in Step 1: 
|
|
|
```bash |
|
cd Amphion |
|
mkdir -p ckpts |
|
# Assumption: the text_to_audio repo downloaded in Step 1 sits three directories
# above ckpts/; adjust the relative path to wherever you actually cloned it.
ln -s ../../../text_to_audio ckpts/tta 
|
``` |
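
You can verify that the link resolves to the downloaded checkpoints; an error here usually means the relative path above needs to be adjusted:

```bash
# Lists the contents behind the symlink; a wrong relative path will print an
# error instead of the checkpoint folders.
ls -lL ckpts/tta
```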
|
|
|
### Step 4: Inference 
|
|
|
You can follow the inference part of [this recipe](https://github.com/open-mmlab/Amphion/tree/main/egs/tta/RECIPE.md) to generate audio from text. |
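The exact inference command is defined by the recipe's shell scripts. The sketch below is illustrative only; both the script name (`run_inference.sh`) and the `--text` argument are assumptions to be checked against the recipe:

```bash
# Illustrative only: run from the Amphion root; the actual script name and flags
# are documented in egs/tta/RECIPE.md.
sh egs/tta/audioldm/run_inference.sh --text "A dog is barking near a busy road"
```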
|
|
|
We also provide an online [demo](https://huggingface.co/spaces/amphion/Text-to-Audio); feel free to try it! 