Generator vs Discriminator · Nash equilibrium · mode collapse · training dynamics
[Figure: loss curves for G and D over training steps]
[Figure: real vs. generated distributions]
GAN objective: min_G max_D E_{x~p_data}[log D(x)] + E_{z~p_z}[log(1 - D(G(z)))]. At the Nash equilibrium, the generator's distribution matches the real data distribution, D outputs 0.5 everywhere (it can do no better than chance), and the objective value is -log 4. Mode collapse: G maps many different z to a small set of outputs that currently fool D; D then learns to reject those outputs, G jumps to another narrow region, and training oscillates instead of converging. The Wasserstein GAN loss and gradient penalties (as in WGAN-GP) help stabilize training.
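The equilibrium claims can be checked numerically. A minimal sketch, using a toy discrete distribution (the distribution values here are illustrative, not from the original): for a fixed generator, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)), so when p_g = p_data, D* is 0.5 everywhere and the objective evaluates to -log 4.

```python
import numpy as np

# Toy discrete data distribution over 4 outcomes (illustrative values).
p_data = np.array([0.1, 0.4, 0.3, 0.2])
p_g = p_data.copy()  # generator has matched the real distribution

# Optimal discriminator for a fixed generator:
# D*(x) = p_data(x) / (p_data(x) + p_g(x))
d_star = p_data / (p_data + p_g)
print(d_star)  # 0.5 for every outcome: D can't beat chance

# Objective value at this point:
# E_{x~p_data}[log D*(x)] + E_{x~p_g}[log(1 - D*(x))]
value = np.sum(p_data * np.log(d_star)) + np.sum(p_g * np.log(1 - d_star))
print(value)  # log(0.5) + log(0.5) = -log 4 ≈ -1.386
```

Perturbing p_g away from p_data pushes d_star away from 0.5 and raises the inner max above -log 4, which is why -log 4 is the value at the equilibrium.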