📚 https://arxiv.org/abs/2006.11239
🏆 Published in NeurIPS 2020

✅ Day 1 – Abstract & Introduction


📌 Background & Motivation

  1. GANs led image synthesis at the time, but adversarial training is unstable and prone to mode collapse.
  2. VAEs and normalizing flows train stably, yet their sample quality lagged behind GANs.
  3. Diffusion probabilistic models (Sohl-Dickstein et al., 2015) were well-defined and easy to train, but had not yet demonstrated competitive samples.

📌 Core Idea

  1. A fixed forward process gradually adds Gaussian noise to data over T steps until only near-pure noise remains.
  2. A learned reverse Markov chain removes that noise step by step, so sampling becomes progressive denoising.
  3. With the right parameterization, training reduces to predicting the added noise ε with a plain MSE loss.

📌 Main Contributions

  1. Produces high-quality synthesis, rivaling or beating GANs.
  2. Stable training without adversarial tricks.
  3. Simple MSE objective → the network predicts the added noise ε directly.
  4. Shows a theoretical link between diffusion models, denoising score matching, and Langevin dynamics.
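The simplified MSE objective in point 3 can be sketched numerically. A minimal sketch with NumPy, assuming the paper's standard linear noise schedule; `dummy_model` is a hypothetical stand-in for the actual U-Net noise predictor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule beta_1..beta_T, as used in the paper
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)          # abar_t = prod_{s<=t} alpha_s

def training_loss(x0, eps_model):
    """One step of the simplified objective:
    L_simple = E[ || eps - eps_theta(x_t, t) ||^2 ]."""
    t = rng.integers(0, T)                        # t ~ Uniform{1..T}
    eps = rng.standard_normal(x0.shape)           # target noise
    # Closed-form forward process: x_t = sqrt(abar_t) x0 + sqrt(1-abar_t) eps
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    eps_pred = eps_model(x_t, t)                  # network's noise prediction
    return np.mean((eps - eps_pred) ** 2)         # plain MSE, no adversarial term

# Hypothetical stand-in for the noise-prediction network
dummy_model = lambda x_t, t: np.zeros_like(x_t)
x0 = rng.standard_normal((4, 8))                  # toy "images"
loss = training_loss(x0, dummy_model)
```

Note how the whole training step is one noising operation plus one regression target, which is where the stability claim in point 2 comes from.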

📌 Early Results

  1. Unconditional CIFAR-10: Inception score of 9.46 and a then state-of-the-art FID of 3.17.
  2. 256×256 LSUN samples with quality comparable to ProgressiveGAN.

📌 Key Takeaways (Day 1)

  1. Diffusion models recast generation as iterative noise removal.
  2. Avoid major drawbacks of GANs/VAEs with stable training.
  3. Achieve SOTA results with a simple and interpretable framework.
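Takeaway 1, generation as noise removal, corresponds to the paper's ancestral sampling loop (its Algorithm 2). A minimal sketch under the same linear-schedule assumption, again with a hypothetical stand-in network:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def reverse_step(x_t, t, eps_model):
    """One denoising step:
    x_{t-1} = (x_t - beta_t/sqrt(1-abar_t) * eps_theta) / sqrt(alpha_t) + sigma_t z,
    with the paper's simple choice sigma_t^2 = beta_t."""
    eps_pred = eps_model(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_pred) / np.sqrt(alphas[t])
    z = rng.standard_normal(x_t.shape) if t > 0 else 0.0  # no noise at the final step
    return mean + np.sqrt(betas[t]) * z

# Hypothetical stand-in for the trained noise predictor
dummy_model = lambda x, t: np.zeros_like(x)
x = rng.standard_normal((4, 8))      # start from pure Gaussian noise
for t in reversed(range(T)):         # denoise step by step: t = T-1, ..., 0
    x = reverse_step(x, t, dummy_model)
```

With a trained network in place of `dummy_model`, this loop is the entire sampler: no discriminator, no rejection step, just T applications of the learned denoiser.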

🧠 Final Thoughts (Day 1)

Day 1 shows how diffusion models emerged as a clean, stable alternative to adversarial or variational approaches.
The elegance of turning generation into progressive denoising laid the foundation for their rapid adoption in vision tasks.