Full credits to: [A_K_Nain](https://twitter.com/A_K_Nain)
## Wasserstein GAN (WGAN) with Gradient Penalty (GP)
Original WGAN paper: [Paper](https://arxiv.org/abs/1701.07875)

Wasserstein GANs with Gradient Penalty: [Paper](https://arxiv.org/abs/1704.00028)
The original Wasserstein GAN leverages the Wasserstein distance to produce a value function that has better theoretical properties than the value function used in the original GAN paper. WGAN requires that the discriminator (a.k.a. the critic) lie within the space of 1-Lipschitz functions. The authors proposed the idea of weight clipping to achieve this constraint. Though weight clipping works, it can be a problematic way to enforce the 1-Lipschitz constraint and can cause undesirable behavior, e.g. a very deep WGAN critic often fails to converge.
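As a rough TensorFlow sketch (not taken from this example's code), the original weight-clipping step simply clamps every critic weight into a small interval after each critic update; the `critic` model and the clipping threshold below are assumptions:

```python
import tensorflow as tf

CLIP_VALUE = 0.01  # assumed clipping threshold, as used in the original WGAN paper

def clip_critic_weights(critic: tf.keras.Model) -> None:
    # Clamp every trainable weight of the critic into [-CLIP_VALUE, CLIP_VALUE]
    # after each optimizer step, to (crudely) enforce the 1-Lipschitz constraint.
    for var in critic.trainable_variables:
        var.assign(tf.clip_by_value(var, -CLIP_VALUE, CLIP_VALUE))
```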
The WGAN-GP method proposes an alternative to weight clipping to ensure smooth training. Instead of clipping the weights, the authors proposed a "gradient penalty" by adding a loss term that keeps the L2 norm of the discriminator gradients close to 1.
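The penalty term can be computed along the following lines; this is a minimal TensorFlow sketch, assuming 4-D image batches and a `critic` model, rather than the exact code used in this example:

```python
import tensorflow as tf

def gradient_penalty(critic: tf.keras.Model,
                     real_images: tf.Tensor,
                     fake_images: tf.Tensor) -> tf.Tensor:
    """Penalize deviations of the critic's gradient norm from 1,
    evaluated at random interpolations between real and fake samples."""
    batch_size = tf.shape(real_images)[0]
    # One random interpolation coefficient per sample.
    alpha = tf.random.uniform([batch_size, 1, 1, 1], 0.0, 1.0)
    interpolated = real_images + alpha * (fake_images - real_images)

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        critic_scores = critic(interpolated, training=True)

    # Gradients of the critic's output with respect to the interpolated images.
    grads = tape.gradient(critic_scores, interpolated)
    # Per-sample L2 norm over all non-batch dimensions (assumes image inputs).
    norms = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
    # Mean squared distance of the gradient norms from 1.
    return tf.reduce_mean((norms - 1.0) ** 2)
```

In the full critic loss, this penalty is scaled by a coefficient (the WGAN-GP paper uses λ = 10) and added to the Wasserstein loss.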