Diffusion Model Studies I: Denoising Diffusion Probabilistic Models (DDPM)

Author: 引线小白 - Permanent link to this article: https://www.limoncc.com/post/1c60669bbe56769f/
License: This blog is published under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).

1. Basic Introduction

Diffusion models have achieved spectacular results, so it is essential to understand their mechanism from first principles. We begin by fixing notation: for an observed variable $\bm{x}$, whereas a VAE introduces a single latent variable $\bm{z}$, a diffusion model introduces a family of latent variables $\displaystyle \mathcal{D}_T^\bm{z}=\{\bm{z}_t\}_{t=1}^T$, and we assume that $\mathcal{D}_T^\bm{z}$ has the Markov property. Our model is therefore not $\displaystyle p(\bm{x})=\int p(\bm{x}\mid \bm{z})p(\bm{z})d\bm{z}$ but rather

$$\begin{align}
p(\bm{x})&=\int p(\bm{x}\mid \mathcal{D}_T^\bm{z})d \mathcal{D}_T^\bm{z}\\
&=\int p(\bm{x}\mid \bm{z}_1)p(\bm{z}_1\mid \bm{z}_2)\cdots p(\bm{z}_{T-1}\mid \bm{z}_T)p(\bm{z}_T)d \mathcal{D}_T^\bm{z}\\
&=\int p(\bm{x}\mid \bm{z}_1)p(\bm{z}_T)\prod_{t=2}^{T}p(\bm{z}_{t-1}\mid \bm{z}_{t})d \mathcal{D}_T^\bm{z}
\end{align}$$

Let us explain the motivation. We imagine that God generates an image as follows: first roll the dice (yes, we again assume God plays dice), then paint the image gradually (first color blocks, then shapes, then details, much like the impasto technique in painting). This corresponds to the decoding process; run in reverse, it is the encoding process.

$$\begin{align}
\text{encode}: &\bm{x}\rightarrow\bm{z}_1\rightarrow\bm{z}_2\rightarrow\cdots\rightarrow\bm{z}_{T-1}\rightarrow\bm{z}_T=\bm{z}\\
\text{decode}: &\bm{z}=\bm{z}_{T}\rightarrow\bm{z}_{T-1}\rightarrow\bm{z}_{T-2}\rightarrow\cdots\rightarrow\bm{z}_{1}\rightarrow\bm{x}
\end{align}$$

In other words, there are three key distributions:
1. the encoding distribution $\displaystyle p(\bm{z}\mid\bm{x})$
2. the generative (decoding) distribution $\displaystyle p(\bm{x}\mid\bm{z})$
3. the prior distribution $\displaystyle p(\bm{z})$

2. The Loss Function

Because the encoding distribution is complex, we generally work with a tractable distribution $\displaystyle q(\bm{z}\mid \bm{x})$. We then push the assumed model distribution as close as possible to the true data distribution:

$$\begin{align}
\mathbb{KL}\big[q(\bm{x},\mathcal{D}_T^\bm{z})|p(\bm{x},\mathcal{D}_T^\bm{z})\big]
&=\int q(\bm{x},\mathcal{D}_T^\bm{z})\log \frac{q(\bm{x},\mathcal{D}_T^\bm{z})}{p(\bm{x},\mathcal{D}_T^\bm{z})}d \mathcal{D}_T^\bm{z}d\bm{x}\\
&=\int q(\mathcal{D}_T^\bm{z}\mid \bm{x})q(\bm{x})\log \frac{q(\mathcal{D}_T^\bm{z}\mid\bm{x})q(\bm{x})}{p(\bm{x},\mathcal{D}_T^\bm{z})}d \mathcal{D}_T^\bm{z}d\bm{x}\\
&=\mathbb{E}_{\bm{x}\sim q(\bm{x})}\bigg[\int q(\mathcal{D}_T^\bm{z}\mid \bm{x})\log q(\bm{x})d \mathcal{D}_T^\bm{z}-\int q(\mathcal{D}_T^\bm{z}\mid \bm{x}) \log\frac{p(\bm{x},\mathcal{D}_T^\bm{z})}{q(\mathcal{D}_T^\bm{z}\mid \bm{x})}d \mathcal{D}_T^\bm{z} \bigg]\\
&=-\mathbb{H}_{q(\bm{x})}\big[\bm{x}\big]+\mathbb{E}_{\bm{x}\sim q(\bm{x})}\bigg[-\int q(\mathcal{D}_T^\bm{z}\mid \bm{x}) \log\frac{p(\bm{x},\mathcal{D}_T^\bm{z})}{q(\mathcal{D}_T^\bm{z}\mid \bm{x})}d \mathcal{D}_T^\bm{z} \bigg]\\
&\leqslant \mathbb{E}_{\bm{x}\sim q(\bm{x})}\bigg[-\int q(\mathcal{D}_T^\bm{z}\mid \bm{x}) \log\frac{p(\bm{x},\mathcal{D}_T^\bm{z})}{q(\mathcal{D}_T^\bm{z}\mid \bm{x})}d \mathcal{D}_T^\bm{z} \bigg]
\end{align}$$

Let $\displaystyle f(\bm{\theta})=-\int q(\mathcal{D}_T^\bm{z}\mid \bm{x}) \log\frac{p(\bm{x},\mathcal{D}_T^\bm{z})}{q(\mathcal{D}_T^\bm{z}\mid \bm{x})}d \mathcal{D}_T^\bm{z}$. Then we have

$$\begin{align}
f(\bm{\theta})
&=-\int q(\mathcal{D}_T^\bm{z}\mid \bm{x}) \log\frac{p(\bm{x},\mathcal{D}_T^\bm{z})}{q(\mathcal{D}_T^\bm{z}\mid \bm{x})}d \mathcal{D}_T^\bm{z}\\
&=-\int q(\mathcal{D}_T^\bm{z}\mid \bm{x})\log \frac{p(\bm{x}\mid \bm{z}_1)p(\bm{z}_T)\prod_{t=2}^{T}p(\bm{z}_{t-1}\mid \bm{z}_t)}{q(\bm{z}_1\mid \bm{x})\prod_{t=2}^{T}q(\bm{z}_t\mid \bm{z}_{t-1})}d \mathcal{D}_T^\bm{z}\\
&=-\mathbb{E}_{q(\mathcal{D}_{T}^\bm{z}\mid \bm{x})}\bigg[\log \frac{p(\bm{x}\mid \bm{z}_1)p(\bm{z}_T)\prod_{t=2}^{T}p(\bm{z}_{t-1}\mid \bm{z}_t)}{q(\bm{z}_1\mid \bm{x})\prod_{t=2}^{T}q(\bm{z}_t\mid \bm{z}_{t-1})}\bigg]
\end{align}$$

Noting that, by the Markov property and Bayes' theorem,
$$\begin{align}
q(\bm{z}_t\mid \bm{z}_{t-1})
=q(\bm{z}_t\mid \bm{z}_{t-1},\bm{x})
=\frac{q(\bm{z}_{t-1}\mid \bm{z}_t,\bm{x})q(\bm{z}_t\mid \bm{x})}{q(\bm{z}_{t-1}\mid \bm{x})}
\end{align}$$

we therefore have
$$\begin{align}
f(\bm{\theta})
&=-\mathbb{E}_{q(\mathcal{D}_{T}^\bm{z}\mid \bm{x})}\bigg[\log \frac{p(\bm{x}\mid \bm{z}_1)p(\bm{z}_T)\prod_{t=2}^{T}p(\bm{z}_{t-1}\mid \bm{z}_t)}{q(\bm{z}_1\mid \bm{x})\prod_{t=2}^{T}q(\bm{z}_t\mid \bm{z}_{t-1})}\bigg]\\
&=-\mathbb{E}_{q(\mathcal{D}_{T}^\bm{z}\mid \bm{x})}\bigg[\log \frac{p(\bm{x}\mid \bm{z}_1)p(\bm{z}_T)\prod_{t=2}^{T}p(\bm{z}_{t-1}\mid \bm{z}_t)q(\bm{z}_{t-1}\mid \bm{x})}{q(\bm{z}_1\mid \bm{x})\prod_{t=2}^{T}q(\bm{z}_{t-1}\mid \bm{z}_t,\bm{x})q(\bm{z}_t\mid \bm{x})}\bigg]\\
&=-\mathbb{E}_{q(\mathcal{D}_{T}^\bm{z}\mid \bm{x})}\bigg[\log p(\bm{x}\mid \bm{z}_1)
+\log \frac{p(\bm{z}_T)\cancel{\prod_{t=2}^{T}q(\bm{z}_{t-1}\mid \bm{x})}}{q(\bm{z}_T\mid \bm{x})\cancel{\prod_{t=1}^{T-1}q(\bm{z}_t\mid \bm{x})}}
+\sum_{t=2}^{T}\log \frac{p(\bm{z}_{t-1}\mid \bm{z}_t)}{q(\bm{z}_{t-1}\mid \bm{z}_t,\bm{x})}\bigg]\\
&=-\mathbb{E}_{q(\mathcal{D}_{T}^\bm{z}\mid \bm{x})}\bigg[\log p(\bm{x}\mid \bm{z}_1)
+\log \frac{p(\bm{z}_T)}{q(\bm{z}_T\mid \bm{x})}
+\sum_{t=2}^{T}\log \frac{p(\bm{z}_{t-1}\mid \bm{z}_t)}{q(\bm{z}_{t-1}\mid \bm{z}_t,\bm{x})}\bigg]\\
&=\underbrace {\mathbb{KL}\big[q(\bm{z}_T\mid \bm{x})|p(\bm{z}_T)\big]}_{\mathcal{L}_{T}}
+\underbrace{\sum_{t=2}^T\mathbb{KL}\big[q(\bm{z}_{t-1}\mid \bm{z}_{t},\bm{x})|p(\bm{z}_{t-1}\mid \bm{z}_{t})\big]}_{\mathcal{L}_{1:T-1}}
+\underbrace{\mathbb{E}_{q(\mathcal{D}_{T}^\bm{z}\mid \bm{x})}\big[-\log p(\bm{x}\mid \bm{z}_1)\big]}_{\mathcal{L}_0}
\end{align}$$

Hence the loss function:
$$\begin{align}
\mathcal{L}
= \mathbb{E}_{\bm{x}\sim q(\bm{x})}\bigg[\mathbb{KL}\big[q(\bm{z}_T\mid \bm{x})|p(\bm{z}_T)\big]
+\sum_{t=2}^T\mathbb{KL}\big[q(\bm{z}_{t-1}\mid \bm{z}_{t},\bm{x})|p(\bm{z}_{t-1}\mid \bm{z}_{t})\big]
+\mathbb{E}_{q(\mathcal{D}_{T}^\bm{z}\mid \bm{x})}\big[-\log p(\bm{x}\mid \bm{z}_1)\big]
\bigg]
\end{align}$$

One point deserves emphasis: we derived this loss function from the Markov assumption, yet a careful look at its structure shows that the Markov assumption is not actually necessary. This leaves room for optimization in later models.

3. Details of Model Construction

3.1 Implementing the Encoding Model

We first examine the loss term $\displaystyle \mathcal{L}_T$. Encoding is essentially a process of destruction: we hope to obtain pure noise by gradually injecting noise into a clean image.
$$\begin{align}
\bm{z}_t=\alpha_t\bm{z}_{t-1}+\beta_t\bm{\epsilon}_t
\end{align}$$where $\displaystyle \bm{\epsilon}_t\sim \mathcal{N}(\bm{0},\bm{I})$ and we write $\bm{z}_0=\bm{x}$, so that $\displaystyle \{\bm{x}\}\cup\{\bm{z}_t\}_{t=1}^T=\{\bm{z}_t\}_{t=0}^T$.
until, in the end, only noise remains:
$$\begin{align}
p(\bm{z}_T)= \mathcal{N}(\bm{0},\bm{I})
\end{align}$$

This gives the single-step transition
$$\begin{align}
q(\bm{z}_t\mid \bm{z}_{t-1})
= \mathcal{N}(\alpha_t\bm{z}_{t-1},\beta_t^2\bm{I}),
\qquad
q(\bm{z}_t\mid \bm{x})
=\int q(\mathcal{D}_t^\bm{z}\mid \bm{x})\,d\mathcal{D}_{-t}
\end{align}$$
where $\mathcal{D}_{-t}$ collects all latent variables other than $\bm{z}_t$.
For $\displaystyle q(\bm{z}_T\mid \bm{x})$, unrolling the recursion all the way down to $\displaystyle \bm{x}$ gives

$$\begin{align}
\bm{z}_T
&=\alpha_T\bm{z}_{T-1}+\beta_T\bm{\epsilon}_T
=\alpha_T\big[\alpha_{T-1}\bm{z}_{T-2}+\beta_{T-1}\bm{\epsilon}_{T-1}\big]+\beta_T\bm{\epsilon}_T\\
&=\alpha_T\alpha_{T-1}\big[\alpha_{T-2}\bm{z}_{T-3}+\beta_{T-2}\bm{\epsilon}_{T-2}\big]+\alpha_T\beta_{T-1}\bm{\epsilon}_{T-1}+\beta_T\bm{\epsilon}_T\\
&=\prod_{t=1}^{T}\alpha_t\bm{x}
+\bigg[\sum_{t=1}^{T-1}\beta_t\prod_{\tau=t+1}^{T}\alpha_{\tau}\bm{\epsilon}_t+\beta_T\bm{\epsilon}_T\bigg]
\end{align}$$
Examining the mean and covariance:
$$\begin{align}
\mathbb{E}[\bm{z}_T]
&=\prod_{t=1}^{T}\alpha_t\bm{x}\\
\mathbb{Cov}[\bm{z}_T]
&=\bigg[\sum_{t=1}^{T-1}\beta_t^2\prod_{\tau=t+1}^{T}\alpha_{\tau}^2+\beta_T^2\bigg]\cdot\bm{I}
\end{align}$$

We want the final $\displaystyle \bm{z}_T \sim \mathcal{N}(\bm{0},\bm{I})$. If we impose $\displaystyle \alpha_t^2+\beta_t^2=1$, then
$$\begin{align}
\sum_{t=1}^{T-1}\beta_t^2\prod_{\tau=t+1}^{T}\alpha_{\tau}^2+\beta_T^2
&=\sum_{t=1}^{T-1}(1-\alpha_t^2)\prod_{\tau=t+1}^{T}\alpha_{\tau}^2+\beta_T^2\\
&=\sum_{t=1}^{T-1}\big[\prod_{\tau=t+1}^{T}\alpha_{\tau}^2-\prod_{\tau=t}^{T}\alpha_{\tau}^2\big]+\beta_T^2\\
&=-\prod_{t=1}^{T}\alpha_t^2+\alpha_T^2+\beta_T^2\\
&=1 -\prod_{t=1}^{T}\alpha_t^2
\end{align}$$

Let $\displaystyle \bar{\alpha}_T=\prod_{t=1}^{T}\alpha_t$ and $\displaystyle \bar{\beta}_T =\sqrt{1-\prod_{t=1}^{T}\alpha_t^2}$; we then have:

$$\begin{align}
q(\bm{z}_T\mid \bm{x})= \mathcal{N}(\bar{\alpha}_T\bm{x},\bar{\beta}_T^2\bm{I})= \mathcal{N}\bigg(\prod_{t=1}^{T}\alpha_t\bm{x},\bigg[1-\prod_{t=1}^{T}\alpha_t^2\bigg]\bm{I}\bigg)
\end{align}$$

We need only design a schedule $\displaystyle \alpha_t$ such that $\displaystyle \lim_{T\to +\infty}\prod_{t=1}^T\alpha_t=0$. Then, with no learned parameters at all, using only the necessary Gaussian assumptions, we achieve our goal $\displaystyle \mathbb{KL}\big[q(\bm{z}_T\mid \bm{x})|p(\bm{z}_T)\big]\approx 0$ for large $T$. The first term of the loss is therefore effectively zero.
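
The sketch below checks this numerically in PyTorch. The linear grid for $\beta_t^2$ (from $10^{-4}$ to $0.02$, mirroring the DDPM paper's linear $\beta$ schedule) and the scalar toy data are assumptions of the sketch, not part of the derivation:

```python
import torch

T = 1000
# Blog notation: alpha_t^2 + beta_t^2 = 1, so choosing beta_t^2 fixes alpha_t.
beta_sq = torch.linspace(1e-4, 0.02, T)      # assumed linear schedule
alpha = torch.sqrt(1.0 - beta_sq)            # alpha_t
beta = torch.sqrt(beta_sq)                   # beta_t
bar_alpha = torch.cumprod(alpha, dim=0)      # \bar{alpha}_t
bar_beta = torch.sqrt(1.0 - bar_alpha ** 2)  # \bar{beta}_t

# Simulate z_t = alpha_t z_{t-1} + beta_t eps_t over many independent chains.
x = torch.full((100_000,), 2.0)              # a toy one-dimensional "image"
z = x.clone()
for t in range(T):
    z = alpha[t] * z + beta[t] * torch.randn_like(z)

# The marginal should match N(bar_alpha_T * x, bar_beta_T^2), i.e. nearly N(0, 1).
print(z.mean().item(), (bar_alpha[-1] * 2.0).item())  # both close to 0
print(z.std().item(), bar_beta[-1].item())            # both close to 1
```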

3.2 Implementing the Generative Model
3.2.1 $\displaystyle\mathcal{L}_{1:T-1}$

Consider the reverse conditional $\displaystyle q(\bm{z}_{t-1}\mid \bm{z}_t,\bm{x})$. By Bayes' theorem:
$$\begin{align}
q(\bm{z}_{t-1}\mid \bm{z}_t,\bm{x})
&=\frac{q(\bm{z}_t\mid \bm{z}_{t-1},\bm{x})q(\bm{z}_{t-1}\mid \bm{x})}{q(\bm{z}_t \mid \bm{x})}\\
&=\frac{\mathcal{N}(\alpha_t\bm{z}_{t-1},\beta_t^2\bm{I})
\mathcal{N}(\bar{\alpha}_{t-1}\bm{x},\bar{\beta}_{t-1}^2\bm{I})}{\mathcal{N}(\bar{\alpha}_{t}\bm{x},\bar{\beta}_{t}^2\bm{I})}\\
& \propto \exp \sum_{i=1}^k\bigg[
-\frac{1}{2}\bigg[ \frac{(z_t-\alpha_tz_{t-1})^2}{\beta_t^2}
+\frac{(z_{t-1}-\bar{\alpha}_{t-1}x)^2}{\bar{\beta}_{t-1}^2}
-\frac{(z_t-\bar{\alpha}_tx)^2}{\bar{\beta}_t^2}\bigg]\bigg]\\
&\propto \exp \sum_{i=1}^k\bigg[-\frac{1}{2}\bigg[
\bigg(\frac{\alpha_t^2}{\beta_t^2}+\frac{1}{\bar{\beta}_{t-1}^2}\bigg)z_{t-1}^2-
2\bigg(\frac{\alpha_t}{\beta_t^2}z_t+\frac{\bar{\alpha}_{t-1}}{\bar{\beta}_{t-1}^2}x\bigg)z_{t-1}
\bigg]\bigg]
\end{align}$$
Matching this with the Gaussian form by completing the square, $\displaystyle ax^2-2bx=a(x-\frac{b}{a})^2+C$, we obtain the following.

The mean is $$\begin{align}
\mathbb{E}[\bm{z}_{t-1}\mid \bm{z}_t,\bm{x}]
&=\bigg(\frac{\alpha_t}{\beta_t^2}\bm{z}_t+\frac{\bar{\alpha}_{t-1}}{\bar{\beta}_{t-1}^2}\bm{x}\bigg)/\bigg(\frac{\alpha_t^2}{\beta_t^2}+\frac{1}{\bar{\beta}_{t-1}^2}\bigg)
=\frac{\bar{\beta}_{t-1}^2}{\bar{\beta}_{t}^2}\alpha_t\bm{z}_t
+\frac{\beta_t^2}{\bar{\beta}_{t}^2}\bar{\alpha}_{t-1}\bm{x}
\end{align}$$

and the variance is
$$\begin{align}
\mathbb{Cov}[\bm{z}_{t-1}\mid \bm{z}_t,\bm{x}]
&=1/\bigg(\frac{\alpha_t^2}{\beta_t^2}+\frac{1}{\bar{\beta}_{t-1}^2}\bigg)\bm{I}
=1/\bigg(\frac{\alpha_t^2(1-\bar{\alpha}_{t-1}^2)+\beta_t^2}{\beta_t^2(1-\bar{\alpha}_{t-1}^2)}\bigg)\bm{I}
=\frac{1-\bar{\alpha}_{t-1}^2}{1-\bar{\alpha}_{t}^2}\beta_t^2\bm{I}
=\frac{\bar{\beta}_{t-1}^2}{\bar{\beta}_{t}^2}\beta_t^2\bm{I}
\end{align}$$

In other words:
$$\begin{align}
q(\bm{z}_{t-1}\mid \bm{z}_t,\bm{x}) =
\mathcal{N}\Bigg(\frac{\bar{\beta}_{t-1}^2}{\bar{\beta}_{t}^2}\alpha_t\bm{z}_t
+\frac{\beta_t^2}{\bar{\beta}_{t}^2}\bar{\alpha}_{t-1}\bm{x},\frac{\bar{\beta}_{t-1}^2}{\bar{\beta}_{t}^2}\beta_t^2\bm{I}\Bigg)
\end{align}$$
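
As a small illustration, here is a hypothetical helper in the notation above; it assumes schedule arrays like those in the Section 3.1 sketch (0-indexed), with the step $t$ being 1-indexed and $t\geqslant 2$:

```python
import torch

def q_posterior(z_t, x, t, alpha, beta, bar_alpha, bar_beta):
    """Mean and standard deviation of q(z_{t-1} | z_t, x), from the closed
    form above. t is the 1-indexed diffusion step (t >= 2); i = t - 1 indexes
    the 0-indexed schedule arrays."""
    i = t - 1
    coef_z = (bar_beta[i - 1] ** 2 / bar_beta[i] ** 2) * alpha[i]  # weight on z_t
    coef_x = (beta[i] ** 2 / bar_beta[i] ** 2) * bar_alpha[i - 1]  # weight on x
    mean = coef_z * z_t + coef_x * x
    std = (bar_beta[i - 1] / bar_beta[i]) * beta[i]
    return mean, std
```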

In the encoding (noising) process we have $\displaystyle q(\bm{z}_t\mid \bm{x})= \mathcal{N}(\bar{\alpha}_t\bm{x},\bar{\beta}_t^2\bm{I})= \mathcal{N}\bigg(\prod_{\tau=1}^{t}\alpha_\tau\bm{x},\bigg[1-\prod_{\tau=1}^{t}\alpha_\tau^2\bigg]\bm{I}\bigg)$. Let us rewrite this in reparameterized form:

$$\begin{align}
\bm{z}_t=\bar{\alpha}_t\bm{x}+\bar{\beta}_t\bm{\epsilon}
\Rightarrow \bm{x}=\frac{1}{\bar{\alpha}_t}(\bm{z}_t-\bar{\beta}_t\bm{\epsilon})
\end{align}$$Substituting this into the mean above and simplifying:
$$\begin{align}
\mathbb{E}[\bm{z}_{t-1}\mid \bm{z}_t,\bm{x}]
&=\frac{\bar{\beta}_{t-1}^2}{\bar{\beta}_{t}^2}\alpha_t\bm{z}_t
+\frac{\beta_t^2}{\bar{\beta}_{t}^2}\bar{\alpha}_{t-1}\bm{x}
=\frac{\bar{\beta}_{t-1}^2}{\bar{\beta}_{t}^2}\alpha_t\bm{z}_t
+\frac{\beta_t^2}{\bar{\beta}_{t}^2}\bar{\alpha}_{t-1}\frac{1}{\bar{\alpha}_t}(\bm{z}_t-\bar{\beta}_t\bm{\epsilon})\\
&=\bigg[\frac{\bar{\beta}_{t-1}^2}{\bar{\beta}_{t}^2}\alpha_t+
\frac{\beta_t^2}{\bar{\beta}_{t}^2}\frac{\bar{\alpha}_{t-1}}{\bar{\alpha}_t}\bigg]\bm{z}_t-\frac{\beta_t^2}{\bar{\beta}_{t}^2}\frac{\bar{\alpha}_{t-1}\bar{\beta}_t}{\bar{\alpha}_t}\bm{\epsilon}\\
&=\bigg[\frac{1-\bar{\alpha}_{t-1}^2}{\bar{\beta}_{t}^2}\alpha_t
+\frac{\beta_t^2}{\bar{\beta}_{t}^2}\frac{1}{\alpha_t}\bigg]\bm{z}_t
-\frac{\beta_t^2}{\bar{\beta}_{t}}\frac{1}{\alpha_t}\bm{\epsilon}\\
&=\bigg[\frac{\alpha_t^2+\beta_t^2-\bar{\alpha}_{t-1}^2\alpha_t^2}{\bar{\beta}_{t}^2\alpha_t}\bigg]\bm{z}_t-\frac{\beta_t^2}{\bar{\beta}_{t}}\frac{1}{\alpha_t}\bm{\epsilon}\\
&=\bigg[\frac{1-\bar{\alpha}_{t}^2}{\bar{\beta}_{t}^2\alpha_t}\bigg]\bm{z}_t-\frac{\beta_t^2}{\bar{\beta}_{t}}\frac{1}{\alpha_t}\bm{\epsilon}\\
&=\frac{1}{\alpha_t}\bm{z}_t-\frac{\beta_t^2}{\bar{\beta}_{t}}\frac{1}{\alpha_t}\bm{\epsilon}
\end{align}$$

We know the closed-form KL divergence between univariate Gaussians: $\displaystyle \mathbb{KL}\bigg[\mathcal{N}(\mu_1, \sigma_1^2) \big| \mathcal{N}(\mu_2, \sigma_2^2)\bigg]=\frac{(\mu_1-\mu_2)^2+\sigma_1^2}{2 \sigma_2^2}+\log \frac{\sigma_2}{\sigma_1}-\frac{1}{2}$. Meanwhile, $\displaystyle p(\bm{z}_{t-1}\mid \bm{z}_{t})=\mathcal{N}\big(\bm{\mu}_{\bm{\theta}}[\bm{z}_t,t],\bm{\Sigma}_{\bm{\theta}}[\bm{z}_t,t]\big)$ is our actual generative distribution, where $\displaystyle \bm{\mu}_{\bm{\theta}}[\bm{z}_t,t]$ is the neural network we must learn; we use a neural network for the simple reason that image data is far too complex for anything else. That leaves $\displaystyle \bm{\Sigma}_{\bm{\theta}}[\bm{z}_t,t]$. Note that:

$$\begin{align}
q(\bm{z}_{t-1}\mid \bm{z}_t,\bm{x}) =
\mathcal{N}\Bigg(\frac{\bar{\beta}_{t-1}^2}{\bar{\beta}_{t}^2}\alpha_t\bm{z}_t
+\frac{\beta_t^2}{\bar{\beta}_{t}^2}\bar{\alpha}_{t-1}\bm{x},\frac{\bar{\beta}_{t-1}^2}{\bar{\beta}_{t}^2}\beta_t^2\bm{I}\Bigg)
\end{align}$$
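
As a quick sanity check, the closed-form Gaussian KL quoted above can be compared against PyTorch's built-in implementation (a sketch; the sample values are arbitrary):

```python
import torch
from torch.distributions import Normal, kl_divergence

def kl_gauss(mu1, sigma1, mu2, sigma2):
    # KL[N(mu1, sigma1^2) || N(mu2, sigma2^2)], the closed form quoted above
    return ((mu1 - mu2) ** 2 + sigma1 ** 2) / (2 * sigma2 ** 2) \
        + torch.log(sigma2 / sigma1) - 0.5

mu1, s1, mu2, s2 = torch.tensor([0.3, 1.2, -0.5, 0.7])
print(kl_gauss(mu1, s1, mu2, s2).item())
print(kl_divergence(Normal(mu1, s1), Normal(mu2, s2)).item())  # should agree
```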

Looking at the loss term $\displaystyle \mathbb{KL}\big[q(\bm{z}_{t-1}\mid \bm{z}_{t},\bm{x})|p(\bm{z}_{t-1}\mid \bm{z}_{t})\big]$, the simplest way to minimize it is to set $\displaystyle \bm{\Sigma}_{\bm{\theta}}[\bm{z}_t,t]=\sigma_t^2 \bm{I}$ with $\displaystyle \sigma_t^2=\frac{\bar{\beta}_{t-1}^2}{\bar{\beta}_{t}^2}\beta_t^2$ (this is what the paper's authors do). In fact $\displaystyle \sigma_t^2$ can take other values, and models proposed afterwards explore various alternatives. We then have

$$\begin{align}
\sum_{t=2}^T\mathbb{E}_{\bm{x}}\bigg[\mathbb{KL}\big[q(\bm{z}_{t-1}\mid \bm{z}_{t},\bm{x})|p(\bm{z}_{t-1}\mid \bm{z}_{t})\big]\bigg]
&=\sum_{t=2}^T\mathbb{E}_{t,\bm{x},\bm{\epsilon}}\Bigg[\frac{1}{2\sigma_t^2}\bigg|\frac{1}{\alpha_t}\bm{z}_t-\frac{\beta_t^2}{\bar{\beta}_{t}}\frac{1}{\alpha_t}\bm{\epsilon}-\bm{\mu}_{\bm{\theta}}[\bm{z}_t,t]\bigg|^2\Bigg]+Const
\end{align}$$

Reparameterize the network as $\displaystyle \bm{\mu}_{\bm{\theta}}[\bm{z}_t,t]=\bigg[\frac{1}{\alpha_t}\bm{z}_t-\frac{\beta_t^2}{\bar{\beta}_{t}}\frac{1}{\alpha_t}\bm{\epsilon}_{\bm{\theta}}[\bm{z}_t,t]\bigg]$; this then simplifies to:

$$\begin{align}
\mathcal{L}_{1:T-1}
=\sum_{t=2}^T\mathbb{E}_{t,\bm{x},\bm{\epsilon}}\Bigg[\mathbb{KL}\big[q(\bm{z}_{t-1}\mid \bm{z}_{t},\bm{x})|p(\bm{z}_{t-1}\mid \bm{z}_{t})\big]\Bigg]
&= \sum_{t=2}^T\mathbb{E}_{t,\bm{x},\bm{\epsilon}}\Bigg[\frac{\beta_t^4}{2\sigma_t^2\alpha_t^2\bar{\beta}_{t}^2}\bigg|\bm{\epsilon}-\bm{\epsilon}_{\bm{\theta}}[\bm{z}_t,t]\bigg|^2\Bigg]+Const
\end{align}$$
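
In code, the weighted per-step objective might look like the following sketch; the function name and batch layout are assumptions, with `t` a 1-indexed batch of steps of shape `(B,)` and 0-indexed schedule arrays:

```python
import torch

def weighted_eps_loss(eps_pred, eps, t, alpha, beta, bar_beta, sigma):
    # weight beta_t^4 / (2 sigma_t^2 alpha_t^2 bar_beta_t^2) on |eps - eps_theta|^2
    i = t - 1                                   # 0-indexed schedule lookup
    w = beta[i] ** 4 / (2 * sigma[i] ** 2 * alpha[i] ** 2 * bar_beta[i] ** 2)
    sq_err = ((eps - eps_pred) ** 2).reshape(eps.shape[0], -1).mean(dim=1)
    return (w * sq_err).mean()
```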

3.2.2 $\displaystyle\mathcal{L}_0$

Next we examine the final term $\displaystyle \mathcal{L}_0=\mathbb{E}_{q(\mathcal{D}_{T}^\bm{z}\mid \bm{x})}\big[-\log p(\bm{x}\mid \bm{z}_1)\big]$

$$\begin{align}
\mathcal{L}_0
&=\mathbb{E}_{\bm{x}}\bigg[\mathbb{E}_{q(\mathcal{D}_{T}^\bm{z}\mid \bm{x})}\big[-\log p(\bm{x}\mid \bm{z}_1)\big]\bigg]
=-\mathbb{E}_{\bm{x}}\bigg[\int q(\mathcal{D}_{T}^\bm{z}\mid \bm{x})\log p(\bm{x}\mid \bm{z}_1)d\mathcal{D}_{T}^\bm{z}\bigg]\\
&=-\mathbb{E}_{\bm{x}}\bigg[\int q(\bm{z}_1\mid \bm{x})\log p(\bm{x}\mid \bm{z}_1)d\bm{z}_1\bigg]
\end{align}$$

Since $\displaystyle \bm{z}_1=\alpha_1\bm{x}+\beta_1\bm{\epsilon}_1$ and $\displaystyle p(\bm{x}\mid \bm{z}_1)=\mathcal{N}\big(\bm{\mu}_{\bm{\theta}}[\bm{z}_1,1],\bm{\Sigma}_{\bm{\theta}}[\bm{z}_1,1]=\sigma_1^2\bm{I}\big)$, with $\displaystyle \bm{\mu}_{\bm{\theta}}[\bm{z}_1,1]=\bigg[\frac{1}{\alpha_1}\bm{z}_1-\frac{\beta_1^2}{\bar{\beta}_{1}}\frac{1}{\alpha_1}\bm{\epsilon}_{\bm{\theta}}[\bm{z}_1,1]\bigg]$, we have:

$$\begin{align}
\mathcal{L}_0
& =\mathbb{E}_{\bm{x},\bm{\epsilon}_1}\Bigg[ \frac{1}{2\sigma_1^2}\bigg|\bm{x}-\bm{\mu}_{\bm{\theta}}[\bm{z}_1,1]\bigg|^2\Bigg]+Const\\
& = \mathbb{E}_{\bm{x},\bm{\epsilon}_1}\Bigg[\frac{1}{2\sigma_1^2}\bigg|\bm{x}-\frac{1}{\alpha_1}\big[\alpha_1\bm{x}+\beta_1\bm{\epsilon}_1\big]+\frac{\beta_1^2}{\bar{\beta}_{1}}\frac{1}{\alpha_1}\bm{\epsilon}_{\bm{\theta}}[\bm{z}_1,1]\bigg|^2\Bigg]+Const\\
& = \mathbb{E}_{\bm{x},\bm{\epsilon}_1}\Bigg[\frac{1}{2\sigma_1^2}\bigg|-\frac{\beta_1}{\alpha_1}\bm{\epsilon}_1+\frac{\beta_1^2}{\bar{\beta}_{1}}\frac{1}{\alpha_1}\bm{\epsilon}_{\bm{\theta}}[\bm{z}_1,1]\bigg|^2\Bigg]+Const\\
& = \mathbb{E}_{\bm{x},\bm{\epsilon}_1}\Bigg[\frac{\beta_1^4}{2\sigma_1^2\alpha_1^2\bar{\beta}_1^2}\bigg|\bm{\epsilon}_1-\bm{\epsilon}_{\bm{\theta}}[\bar{\alpha}_1\bm{x}+\bar{\beta}_1\bm{\epsilon}_1,1]\bigg|^2\Bigg]+Const
\end{align}$$

3.2.3 $\displaystyle\mathcal{L}_{0:T-1}$

For $\displaystyle\mathcal{L}_{0:T-1}$, let $\displaystyle \mathcal{L}_{t-1}=\mathbb{E}_{t,\bm{x},\bm{\epsilon}}\bigg[\frac{\beta_t^4}{2\sigma_t^2\alpha_t^2\bar{\beta}_{t}^2}\bigg|\bm{\epsilon}-\bm{\epsilon}_{\bm{\theta}}[\bm{z}_t,t]\bigg|^2\bigg]+Const$. Dropping the weights, which do not depend on the parameters, we obtain a unified simplified expression for $\displaystyle\mathcal{L}_{0:T-1}$:

$$\begin{align}
\mathcal{L}_{t-1}=\mathbb{E}_{t,\bm{x},\bm{\epsilon}}\bigg[\bigg|\bm{\epsilon}-\bm{\epsilon}_{\bm{\theta}}[\bar{\alpha}_t\bm{x}+\bar{\beta}_t\bm{\epsilon},t]\bigg|^2\bigg]+Const
\end{align}$$

The full (weighted) loss function is thus:

$$\begin{align}
\mathcal{L}=\sum_{t=1}^T\mathbb{E}_{t,\bm{x},\bm{\epsilon}}\Bigg[\frac{\beta_t^4}{2\sigma_t^2\alpha_t^2\bar{\beta}_{t}^2}\bigg|\bm{\epsilon}-\bm{\epsilon}_{\bm{\theta}}[\bm{z}_t,t]\bigg|^2\Bigg]+Const
\end{align}$$

This yields the training algorithm:
$$\begin{align}
\begin{array}{l}
\hline \text{Algorithm 1 Training} \\
\hline 1: \text{repeat} \\
2: \quad \bm{x} \sim q\left(\bm{x}\right) \\
3: \quad t \sim \mathrm{Uniform}(\{1, \ldots, T\}) \\
4: \quad \bm{\epsilon} \sim \mathcal{N}(\bm{0}, \bm{I}) \\
5: \quad \text{Take gradient descent step on} \\
\qquad\qquad \nabla_{\bm{\theta}}|\bm{\epsilon}-\bm{\epsilon}_{\bm{\theta}}[\bar{\alpha}_t\bm{x}+\bar{\beta}_t\bm{\epsilon},t]|^2 \\
6:\text{until converged} \\
\hline
\end{array}
\end{align}$$
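
Below is a toy end-to-end instantiation of Algorithm 1 in PyTorch, using the simplified objective. The scalar data distribution, the two-layer MLP standing in for the paper's U-Net, and all hyperparameters are assumptions of this sketch:

```python
import torch
import torch.nn as nn

T = 1000
beta_sq = torch.linspace(1e-4, 0.02, T)          # assumed schedule, as before
alpha, beta = torch.sqrt(1 - beta_sq), torch.sqrt(beta_sq)
bar_alpha = torch.cumprod(alpha, dim=0)
bar_beta = torch.sqrt(1 - bar_alpha ** 2)

# eps_model takes (z_t, t/T) concatenated and predicts the noise eps.
eps_model = nn.Sequential(nn.Linear(2, 64), nn.SiLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(eps_model.parameters(), lr=1e-3)

for step in range(2000):                          # "repeat ... until converged"
    x = torch.randn(128, 1) * 0.5 + 2.0           # x ~ q(x): toy data N(2, 0.5^2)
    t = torch.randint(1, T + 1, (128, 1))         # t ~ Uniform({1, ..., T})
    eps = torch.randn_like(x)                     # eps ~ N(0, I)
    z_t = bar_alpha[t - 1] * x + bar_beta[t - 1] * eps
    eps_hat = eps_model(torch.cat([z_t, t / T], dim=1))
    loss = ((eps - eps_hat) ** 2).mean()          # simplified objective
    opt.zero_grad(); loss.backward(); opt.step()
```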

And for sampling:
$$\begin{align}
\begin{array}{l}
\hline \text{Algorithm 2 Sampling} \\
\hline 1: \bm{z}_T \sim \mathcal{N}(\bm{0}, \bm{I}) \\
2: \text{for}\; t=T, \cdots, 1 \;\mathbf{do} \\
3: \quad\bm{\xi} \sim \mathcal{N}(\bm{0}, \bm{I}) \text{ if }t>1, \text{ else } \bm{\xi}=\bm{0} \\
4: \quad \bm{z}_{t-1}=\frac{1}{\alpha_t}\bm{z}_t-\frac{\beta_t^2}{\bar{\beta}_{t}}\frac{1}{\alpha_t}\bm{\epsilon}_{\bm{\theta}}[\bm{z}_t,t]+\sigma_t \bm{\xi}\\
5: \text{end for} \\
6: \text{return }\bm{x}=\bm{z}_0
\end{array}
\end{align}$$
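
And Algorithm 2 with the toy model above, continuing the same sketch (`eps_model` and the schedule arrays are the hypothetical objects defined in the training snippet):

```python
# Ancestral sampling z_T -> z_{T-1} -> ... -> z_0 with the trained toy model.
with torch.no_grad():
    z = torch.randn(5000, 1)                      # z_T ~ N(0, I)
    for t in range(T, 0, -1):
        t_in = torch.full_like(z, t / T)
        eps_hat = eps_model(torch.cat([z, t_in], dim=1))
        mean = z / alpha[t - 1] \
            - (beta[t - 1] ** 2 / (bar_beta[t - 1] * alpha[t - 1])) * eps_hat
        if t > 1:                                 # xi = 0 at the final step
            sigma_t = (bar_beta[t - 2] / bar_beta[t - 1]) * beta[t - 1]
            z = mean + sigma_t * torch.randn_like(z)
        else:
            z = mean
    print(z.mean().item(), z.std().item())        # should approach (2.0, 0.5)
```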

4. Commentary

1. Some key motivations in the derivation seem to go unexplained, and certain ideas appear to come out of thin air. $\displaystyle \alpha_t^2+\beta_t^2=1$ is perhaps defensible, but $\displaystyle \sigma_t^2=\frac{\bar{\beta}_{t-1}^2}{\bar{\beta}_{t}^2}\beta_t^2$ feels under-motivated: many values are admissible, and there is no compelling reason to pick this one.

2. In the reverse generation step, substituting $\displaystyle \bm{z}_t=\bar{\alpha}_t\bm{x}+\bar{\beta}_t\bm{\epsilon}\Rightarrow \bm{x}=\frac{1}{\bar{\alpha}_t}(\bm{z}_t-\bar{\beta}_t\bm{\epsilon})$ into $\displaystyle \mathbb{E}[\bm{z}_{t-1}\mid \bm{z}_t,\bm{x}]$ serves precisely to eliminate $\displaystyle \bm{x}$, so that the step $\displaystyle \bm{z}_{t}\to\bm{z}_{t-1}$ no longer depends on $\displaystyle \bm{x}$. This derivation leans on the Markov assumption, which clearly restricts the space of distributions; dropping it would give us considerably more freedom.

3. The final result is simple, so a more direct derivation surely exists; the reasoning in the DDPM paper still feels roundabout and non-obvious. In the next post we will reinterpret this model through Yang Song's SDE framework.

4. DDPM has been followed by a series of improvements, which we will also cover one by one. Stay tuned.

Ho, J., Jain, A., & Abbeel, P. (2020, December 16). Denoising Diffusion Probabilistic Models. arXiv. http://arxiv.org/abs/2006.11239. Accessed 22 March 2023
Kingma, D. P., & Welling, M. (2022, December 10). Auto-Encoding Variational Bayes. arXiv. http://arxiv.org/abs/1312.6114. Accessed 18 April 2023


Copyright Notice
The 柠檬CC blog, created and maintained by 引线小白, is licensed under the CC BY-NC-ND 4.0 International License.
This article first appeared on 柠檬CC [ https://www.limoncc.com ]. All rights reserved.
Permanent link: https://www.limoncc.com/post/1c60669bbe56769f/
To cite this article, please use:
引线小白. (May. 16, 2023). 《扩散模型研究一:去噪扩散概率模型DDPM》[Blog post]. Retrieved from https://www.limoncc.com/post/1c60669bbe56769f
@online{limoncc-1c60669bbe56769f,
title={扩散模型研究一:去噪扩散概率模型DDPM},
author={引线小白},
year={2023},
month={May},
date={16},
url={\url{https://www.limoncc.com/post/1c60669bbe56769f}},
}
