|
--- |
|
language: en |
|
tags: |
|
- pythae |
|
- reproducibility |
|
license: apache-2.0 |
|
--- |
|
|
|
## Downloading this model from the Hub
|
This model was trained with pythae. It can be downloaded or reloaded using the `load_from_hf_hub` method:
|
```python |
|
>>> from pythae.models import AutoModel |
|
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_aae") |
|
``` |
|
## Reproducibility |
|
This trained model reproduces the results reported in Table 1 of [1].
|
|
|
| Model | Dataset | Metric | Obtained value | Reference value | |
|
|:---:|:---:|:---:|:---:|:---:| |
|
| AAE | CELEBA 64 | FID | 43.3 | 42 | |
|
|
|
[1] I. Tolstikhin, O. Bousquet, S. Gelly, and B. Schölkopf. Wasserstein auto-encoders. In 6th International Conference on Learning Representations (ICLR 2018), 2018.