EM Algorithm and Mixture Models, Part 1: The EM Algorithm

Author: 引线小白. Permanent link: https://www.limoncc.com/post/baeffd0a1d876b77/
License: this blog is published under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

1. The EM Algorithm
1.1 Problem Statement

Consider an i.i.d. dataset $\displaystyle \mathcal{D}^\bm{x}=\{\bm{x}_i\}_{i=1}^N$ and introduce a latent variable $\displaystyle \bm{z}\in \mathcal{C}$. This gives us the complete data set: $\displaystyle \mathcal{D}^{+}=\{\bm{x}_i,\bm{z}_i\}_{i=1}^N$. $\displaystyle \mathcal{D}^{+}$ is still i.i.d., and we call $\displaystyle \mathcal{D}^\bm{x}$ the incomplete data set. We then have:
$$\begin{align}
\ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)=\sum_{i=1}^N\ln p\big(\bm{x}_i\mid \bm{\theta}\big)=\sum_{i=1}^N\ln \bigg[\int_{\mathcal{C}} p\big(\bm{x}_i,\bm{z}_i\mid \bm{\theta}\big)\mathrm{d}\bm{z}_i\bigg]
\end{align}$$In dataset notation:
$$\begin{align}
\ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)=\ln p\big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)=\ln \bigg[\int_{\mathcal{C}^N} p\big(\mathcal{D}^\bm{x},\mathcal{D}^\bm{z}\mid \bm{\theta}\big)\mathrm{d}\mathcal{D}^\bm{z}\bigg]
\end{align}$$For the discrete case, we simply replace the integrals with sums.

1.2 The Pain Point: A Sum Inside the Logarithm

For the incomplete data set, i.e., the data we actually observe, the log-likelihood $\displaystyle \ln p\big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)$ usually has an unwieldy form. In particular, when we try to maximize it over the parameters, a sum (or integral) sits inside the logarithm: $\displaystyle \ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)=\sum_{i=1}^N\ln p\big(\bm{x}_i\mid \bm{\theta}\big)=\sum_{i=1}^N\ln \big[\sum_{\bm{z}_i\in\mathcal{C}} p\big(\bm{x}_i,\bm{z}_i\mid \bm{\theta}\big)\big]$. The logarithm no longer acts on the joint density directly, so the optimization does not decouple; this is a genuinely painful problem.
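To make the pain point concrete, here is a minimal numerical sketch, assuming a hypothetical two-component one-dimensional Gaussian mixture (the concrete mixture is an illustration, not specified in the text above). The `logsumexp` call is exactly the sum-inside-the-log that blocks a closed-form maximization:

```python
import numpy as np
from scipy.special import logsumexp

def incomplete_loglik(x, pi, mu, sigma):
    """l(D^x | theta): for each x_i a sum over z_i sits *inside* the log."""
    # ln p(x_i, z_i=k | theta) = ln pi_k + ln N(x_i | mu_k, sigma_k), shape (N, K)
    log_joint = (np.log(pi)
                 - 0.5 * np.log(2 * np.pi * sigma**2)
                 - 0.5 * ((x[:, None] - mu) / sigma) ** 2)
    # ln p(x_i | theta) = ln sum_k p(x_i, z_i=k | theta): the problematic log-sum
    return logsumexp(log_joint, axis=1).sum()

x = np.array([0.1, 2.3, -1.0])
print(incomplete_loglik(x, pi=np.array([0.4, 0.6]),
                        mu=np.array([0.0, 2.0]), sigma=np.array([1.0, 1.0])))
```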

1.3 The Simplicity of the Complete Data Set

Once the latent variables are included, maximizing $\displaystyle \ell\big(\mathcal{D}^+\mid\bm{\theta}\big)$ is far easier than maximizing $\displaystyle \ell(\mathcal{D}^\bm{x}\mid \bm{\theta})$:
$$\begin{align}
\ell\big(\mathcal{D}^+\mid\bm{\theta}\big)
&=\ln\bigg[\prod_{i=1}^N \bigg(p\big(\bm{z}_i\mid \bm{\pi}\big)p\big(\bm{x}_i\mid \bm{z}_i,\bm{\theta}\big)\bigg)\bigg]=\ln\bigg[\prod_{i=1}^N p\big(\bm{z}_i\mid \bm{\pi}\big)\times \prod_{i=1}^Np\big(\bm{x}_i\mid \bm{z}_i,\bm{\theta}\big)\bigg]\\
&=\ln\bigg[\prod_{i=1}^N p\big(\bm{z}_i\mid \bm{\pi}\big)\bigg]+\ln\bigg[\prod_{i=1}^Np\big(\bm{x}_i\mid \bm{z}_i,\bm{\theta}\big)\bigg]\\
&=\sum_{i=1}^N\ln p\big(\bm{z}_i\mid \bm{\pi}\big)+\sum_{i=1}^N \ln p\big(\bm{x}_i\mid \bm{z}_i,\bm{\theta}\big)
\end{align}$$The sum inside the logarithm vanishes naturally, leaving a form we know how to handle.
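Continuing the hypothetical mixture from the previous sketch: if the assignments $\bm{z}_i$ were observed, the complete-data log-likelihood would be two plain sums, with no log-sum anywhere:

```python
import numpy as np

def complete_loglik(x, z, pi, mu, sigma):
    """l(D^+ | theta): two ordinary sums, one per factor of the joint."""
    prior_term = np.log(pi[z]).sum()                     # sum_i ln p(z_i | pi)
    lik_term = (-0.5 * np.log(2 * np.pi * sigma[z]**2)   # sum_i ln N(x_i | mu_{z_i}, sigma_{z_i})
                - 0.5 * ((x - mu[z]) / sigma[z]) ** 2).sum()
    return prior_term + lik_term

x = np.array([0.1, 2.3, -1.0])
z = np.array([0, 1, 0])  # hypothetical observed assignments
print(complete_loglik(x, z, pi=np.array([0.4, 0.6]),
                      mu=np.array([0.0, 2.0]), sigma=np.array([1.0, 1.0])))
```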

1.4 The Latent-Variable Posterior and a Property of Data Points

In mixture models we work with the posterior distribution of the latent variable:
$$\begin{align}
p\big(\bm{z}\mid \bm{x},\bm{\theta}\big)=\frac{p\big(\bm{x},\bm{z}\mid \bm{\theta}\big)}{p\big(\bm{x}\mid \bm{\theta}\big)}=\frac{p\big(\bm{x},\bm{z}\mid \bm{\theta}\big)}{\displaystyle\int_{\mathcal{C}}p\big(\bm{x},\bm{z}\mid \bm{\theta}\big)\mathrm{d}\bm{z}}
\end{align}$$Rearranging this relation lets us state an important lemma:

[Lemma: decomposition of the data-point log-density]
The log-density of an incomplete data point can be decomposed as
$$\begin{align}
\ln p\big(\bm{x}_i\mid \bm{\theta}\big)=\mathrm{E}_{q_i}\big[\ln p\big(\bm{x}_i,\bm{z}_i\mid \bm{\theta}\big)\big]+\mathrm{H}\big[q_i,p_i\big]
\end{align}$$
where $\displaystyle q_i=p\big(\bm{z}_i\mid \bm{x}_i,\bm{\theta}^{old}\big)$ and $\displaystyle p_i=p\big(\bm{z}_i\mid \bm{x}_i,\bm{\theta}\big)$ (these symbols are collected in Section 1.5). The two terms are:
1. the expectation of the complete-data-point log-density under the latent-variable posterior;
2. the cross-entropy between the latent-variable posterior $q_i$ and the current posterior $p_i$.
Proof:
$$\begin{align}
p\big(\bm{x}_i\mid \bm{\theta}\big)=\frac{p\big(\bm{x}_i,\bm{z}_i\mid \bm{\theta}\big)}{p\big(\bm{z}_i\mid \bm{x}_i,\bm{\theta}\big)}\iff
\ln p\big(\bm{x}_i\mid \bm{\theta}\big)=\ln p\big(\bm{x}_i,\bm{z}_i\mid \bm{\theta}\big)-\ln p\big(\bm{z}_i\mid \bm{x}_i,\bm{\theta}\big)
\end{align}$$
Now take the expectation with respect to the posterior $\displaystyle q_i=p\big(\bm{z}_i\mid \bm{x}_i,\bm{\theta}^{old}\big)$:
$$\begin{align}
\mathrm{E}_{q_i}\big\{\ln p\big(\bm{x}_i\mid \bm{\theta}\big)\big\}=\mathrm{E}_{q_i}\big\{\ln p\big(\bm{x}_i,\bm{z}_i\mid \bm{\theta}\big)\big\}-\mathrm{E}_{q_i}\big\{\ln p\big(\bm{z}_i\mid \bm{x}_i,\bm{\theta}\big)\big\}
\end{align}$$
Since $\displaystyle \ln p\big(\bm{x}_i\mid \bm{\theta}\big)$ does not depend on $\displaystyle \bm{z}_i$, its expectation under $q_i$ is itself, so:
$$\begin{align}
\ln p\big(\bm{x}_i\mid \bm{\theta}\big)&=\int_{\mathcal{C}} \big[\ln p\big(\bm{x}_i,\bm{z}_i\mid \bm{\theta}\big)\times q_i\big]\mathrm{d}\bm{z}_i-\int_{\mathcal{C}} \big[\ln p\big(\bm{z}_i\mid \bm{x}_i,\bm{\theta}\big)\times q_i\big]\mathrm{d}\bm{z}_i\\
&=\mathrm{E}_{q_i}\big[\ln p\big(\bm{x}_i,\bm{z}_i\mid \bm{\theta}\big)\big]+\mathrm{H}\big[q_i,p_i\big]
\end{align}$$which completes the proof.
This identity is the basic latent-variable transformation trick of the EM algorithm, and it plays a crucial role in what follows.
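Note that the identity holds for any distribution $q_i$ over $\bm{z}_i$, because the underlying equality $\ln p\big(\bm{x}_i\mid\bm{\theta}\big)=\ln p\big(\bm{x}_i,\bm{z}_i\mid\bm{\theta}\big)-\ln p\big(\bm{z}_i\mid\bm{x}_i,\bm{\theta}\big)$ holds pointwise. Here is a quick numerical check for one data point of the hypothetical mixture from above, taking $q_i$ at a different (old) parameter value:

```python
import numpy as np
from scipy.special import logsumexp

pi, sigma = np.array([0.4, 0.6]), np.array([1.0, 1.0])
mu, mu_old = np.array([0.0, 2.0]), np.array([0.5, 1.5])
x_i = 1.5

def log_joint(mu_):  # ln p(x_i, z_i=k | theta), one entry per component k
    return (np.log(pi) - 0.5 * np.log(2 * np.pi * sigma**2)
            - 0.5 * ((x_i - mu_) / sigma) ** 2)

lj = log_joint(mu)
log_px = logsumexp(lj)                     # ln p(x_i | theta)
log_p_i = lj - log_px                      # ln p_i = ln p(z_i | x_i, theta)
lj_old = log_joint(mu_old)
q_i = np.exp(lj_old - logsumexp(lj_old))   # q_i = p(z_i | x_i, theta_old)

expected = q_i @ lj                        # E_{q_i}[ln p(x_i, z_i | theta)]
cross_entropy = -(q_i @ log_p_i)           # H[q_i, p_i]
assert np.isclose(log_px, expected + cross_entropy)
```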

1.5 Notation for the Latent Variables

For brevity we define the following symbols; they help make clear how the EM algorithm works.
$\displaystyle q_i=p\big(\bm{z}_i\mid \bm{x}_i,\bm{\theta}^{old}\big)$, $\displaystyle p_i=p\big(\bm{z}_i\mid \bm{x}_i,\bm{\theta}\big)$
$\displaystyle q_i^t=p\big(\bm{z}_i\mid \bm{x}_i,\bm{\theta}_t\big)$
$\displaystyle q=p\big(\mathcal{D}^\bm{z}\mid \mathcal{D}^\bm{x},\bm{\theta}^{old}\big)$, $\displaystyle p=p\big(\mathcal{D}^\bm{z}\mid \mathcal{D}^\bm{x},\bm{\theta}\big)$
$\displaystyle q^t=p\big(\mathcal{D}^\bm{z}\mid \mathcal{D}^\bm{x},\bm{\theta}_t\big)$
In other words, the latent-variable posterior can be updated repeatedly through the parameters $\displaystyle \bm{\theta}_t$. In fact we obtain a sequence of posteriors, and this sequence converges:
$$\begin{align}
q_i^1\to q_i^2\to \cdots\to q_i^t\to q_i^{t+1}\to\cdots\to q_i^{end}
\end{align}$$

1.6 An Equivalent Transformation of the Incomplete-Data Log-Likelihood

[Lemma: equivalent transformation of the incomplete-data log-likelihood]
The incomplete-data log-likelihood can be decomposed as
$$\begin{align}
\ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)=\mathrm{E}_{q}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]+\mathrm{H}\big[q,p\big]=\mathrm{E}_{q}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]+\mathrm{H}\big[q\big]+\mathrm{KL}\big[q\parallel p\big]
\end{align}$$The two terms are:
1. the expectation of the complete-data log-likelihood under the latent-variable posterior;
2. the cross-entropy between the posteriors $q$ and $p$, which further splits into the entropy $\mathrm{H}[q]$ plus the KL divergence $\mathrm{KL}[q\parallel p]$.

Proof: by the data-point decomposition lemma and the notation above, we have:
$$\begin{align}
p \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)
=\frac{p \big(\mathcal{D}^+\mid \bm{\theta}\big)}{p \big(\mathcal{D}^\bm{z}\mid \mathcal{D}^\bm{x}, \bm{\theta}\big)}\iff
\ln p \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)
=\ln p \big(\mathcal{D}^+\mid \bm{\theta}\big)-\ln p \big(\mathcal{D}^\bm{z}\mid \mathcal{D}^\bm{x}, \bm{\theta}\big)
\end{align}$$

Taking the expectation with respect to the posterior over the latent data set, $\displaystyle q=p\big(\mathcal{D}^\bm{z}\mid \mathcal{D}^\bm{x}, \bm{\theta}^{old}\big)$, gives:
$$\begin{align}
\mathrm{E}_{q}\left[\ln p \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)\right]
=\mathrm{E}_{q}\left[\ln p \big(\mathcal{D}^+\mid \bm{\theta}\big)\right]-\mathrm{E}_{q} \left[\ln p \big(\mathcal{D}^\bm{z}\mid \mathcal{D}^\bm{x}, \bm{\theta}\big)\right]
\end{align}$$
Since the left-hand side does not depend on $\displaystyle \mathcal{D}^\bm{z}$, we obtain:
$$\begin{align}
\ln p \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)
&=\mathrm{E}_{q}\left[\ln p \big(\mathcal{D}^+\mid \bm{\theta}\big)\right]-\mathrm{E}_{q} \left[\ln p \big(\mathcal{D}^\bm{z}\mid \mathcal{D}^\bm{x}, \bm{\theta}\big)\right]\\
&=\mathrm{E}_{q}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]+\mathrm{H}\big[q,p\big]\\
&=\mathrm{E}_{q}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]+\mathrm{H}\big[q\big]+\mathrm{KL}\big[q\parallel p\big]
\end{align}$$
Note that we used the product rule and the properties of entropy to lift the data-point identity to the whole data set; this reflects the decomposability that the i.i.d. assumption provides.
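As a sanity check, the dataset-level identity can be verified numerically on the hypothetical mixture from earlier, with $q$ the posterior at $\bm{\theta}^{old}$ and $p$ the posterior at $\bm{\theta}$ (a sketch, not part of the original derivation):

```python
import numpy as np
from scipy.special import logsumexp

def log_joint(x, pi, mu, sigma):
    """ln p(x_i, z_i=k | theta) as an (N, K) matrix."""
    return (np.log(pi) - 0.5 * np.log(2 * np.pi * sigma**2)
            - 0.5 * ((x[:, None] - mu) / sigma) ** 2)

x = np.array([0.1, 2.3, -1.0])
pi, sigma = np.array([0.4, 0.6]), np.array([1.0, 1.0])
lj = log_joint(x, pi, np.array([0.0, 2.0]), sigma)        # at theta
lj_old = log_joint(x, pi, np.array([0.5, 1.5]), sigma)    # at theta_old

log_p = lj - logsumexp(lj, axis=1, keepdims=True)               # ln p
q = np.exp(lj_old - logsumexp(lj_old, axis=1, keepdims=True))   # q

ell = logsumexp(lj, axis=1).sum()          # l(D^x | theta)
e_complete = (q * lj).sum()                # E_q[l(D^+ | theta)]
entropy = -(q * np.log(q)).sum()           # H[q]
kl = (q * (np.log(q) - log_p)).sum()       # KL[q || p]
assert np.isclose(ell, e_complete + entropy + kl)
assert np.isclose(ell, e_complete - (q * log_p).sum())   # the H[q, p] form
```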

1.7 The EM Convergence Theorem

[EM Convergence Theorem]
There exists a convergent parameter $\bm{\theta}_{end}$ at which maximizing the incomplete-data log-likelihood is equivalent to maximizing the expected complete-data log-likelihood.
$$\begin{align}
\exists \,\bm{\theta}_{end}\to\max\ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)\iff\max \mathrm{E}_{q}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]
\end{align}$$

Proof:
1. We introduce an auxiliary function $\displaystyle \mathcal{Q}\big(\bm{\theta},q\big)\triangleq\mathrm{E}_{q}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]+\mathrm{H}\big[q\big]$, which gives
$$\begin{align}
\ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)
&=\mathcal{Q}\big(\bm{\theta},q\big)+\mathrm{KL}\big[q\parallel p\big]\\
&\geqslant\mathcal{Q}\big(\bm{\theta},q\big)=\mathrm{E}_{q}\big[\ln p\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]+\mathrm{H}\big[q\big]\\
&\geqslant\mathrm{E}_{q}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]
\end{align}$$The first inequality uses $\mathrm{KL}\big[q\parallel p\big]\geqslant0$ and the second uses $\mathrm{H}\big[q\big]\geqslant0$ (which holds for discrete latent variables). Now let us shift our view of the auxiliary function $\displaystyle \mathcal{Q}$ slightly:
$$\begin{align}
\mathcal{Q}\big({\bm{\theta},\bm{\theta}}_t\big)=\mathrm{E}_{q^t}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]+\mathrm{H}\big[q^t\big]
\end{align}$$

We then have $\displaystyle \ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)\geqslant\mathcal{Q}\big({\bm{\theta},\bm{\theta}}_t\big)\geqslant\mathrm{E}_{q^t}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]$; that is, $\displaystyle \mathrm{E}_{q^t}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]$ is a lower bound on $\displaystyle \ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)$.

2. We now construct $\displaystyle \bm{\theta}_{end}$ and prove the equivalence of the two maximizations.
Let $\displaystyle \bm{\theta}_{t+1}=\mathop{\mathrm{argmax}}_{\bm{\theta}} \mathrm{E}_{q^t}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]$, so that $\displaystyle \mathrm{E}_{q^t}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}_{t+1}\big)\big]-\mathrm{E}_{q^t}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}_t\big)\big]\geqslant0$. Moreover $\displaystyle \ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)\geqslant\mathcal{Q}\big(\bm{\theta},\bm{\theta}_t\big)$ for every $\bm{\theta}$, with equality at $\bm{\theta}=\bm{\theta}_t$, since there $q^t=p$ and the KL term vanishes. Hence
$$\begin{align}
&\ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}_{t+1}\big)-\ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}_t\big)\\
&\geqslant\mathcal{Q}\big({\bm{\theta}_{t+1},\bm{\theta}}_t\big)-\mathcal{Q}\big({\bm{\theta}_t,\bm{\theta}}_t\big)\\
&=\mathrm{E}_{q^t}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}_{t+1}\big)\big]-\mathrm{E}_{q^t}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}_t\big)\big]\geqslant0
\end{align}$$
where the entropy terms $\mathrm{H}\big[q^t\big]$ cancel in the difference of the $\mathcal{Q}$'s.
Iterating over $\displaystyle \bm{\theta}$ therefore gives $\displaystyle \ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}_{t+1}\big)\geqslant\ell\big(\mathcal{D}^\bm{x}\mid \bm{\theta}_{t}\big)$: the log-likelihood increases monotonically, and once the iteration converges after some step $t$ we have reached a maximum of $\displaystyle \ell(\mathcal{D}^\bm{x}\mid \bm{\theta})$ (in general a local one).
This proves the claim.

1.8 The EM Algorithm

Summarizing the results above, we can now write down the EM algorithm.


Algorithm: EM

1 Initialize $\displaystyle \bm{\theta}_0=\bm{a}$
2 $\displaystyle p\big(\mathcal{D}^\bm{z}\mid \bm{\pi}\big)=\prod_{i=1}^Np \big(\bm{z}_i\mid \bm{\pi}\big)$
3 $\displaystyle \text{while } \mathrm{E}_{q^t}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}_{t+1}\big)\big]\text{ has not converged}$ :
$\displaystyle
\quad\begin{array}{|lc}
\text{ E step}\\
\hline
q\big(\mathcal{D}^\bm{z}\mid \mathcal{D}^\bm{x},\bm{\theta}_t\big)=\frac{p\big(\mathcal{D}^\bm{x},\mathcal{D}^\bm{z}\mid \bm{\theta}_t\big)}{\displaystyle\int_{\mathcal{C}^N}p\big(\mathcal{D}^\bm{x},\mathcal{D}^\bm{z}\mid \bm{\theta}_t\big)\mathrm{d}\mathcal{D}^\bm{z}}\\
p\big(\mathcal{D}^\bm{x},\mathcal{D}^\bm{z}\mid \bm{\theta}\big)=p\big(\mathcal{D}^\bm{x}\mid \mathcal{D}^\bm{z},\bm{\theta}\big)\,p\big(\mathcal{D}^\bm{z}\mid \bm{\pi}\big)=\prod_{i=1}^Np\big(\bm{x}_i\mid \bm{z}_i,\bm{\theta}\big)\times \prod_{i=1}^Np\big(\bm{z}_i\mid \bm{\pi}\big)\\
\mathrm{E}_{q^t}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]
=\int_{\mathcal{C}^N} q\big(\mathcal{D}^\bm{z}\mid \mathcal{D}^\bm{x},\bm{\theta}_t\big)\ln p\big(\mathcal{D}^\bm{x},\mathcal{D}^\bm{z}\mid \bm{\theta}\big)\mathrm{d}\mathcal{D}^\bm{z}\\
\text{ M step}\\
\hline
\bm{\theta}_{t+1}=\mathop{\mathrm{argmax}}_{\bm{\theta}} \mathrm{E}_{q^t}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]\\
\end{array}\\$
4 Return $\displaystyle \bm{\theta}_{end}$
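To make the abstract algorithm concrete, here is a minimal runnable sketch for the hypothetical two-component one-dimensional Gaussian mixture used throughout. The closed-form M-step updates are the standard GMM formulas, assumed here rather than derived in this post; the first assert also checks the monotone increase guaranteed by the convergence theorem:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(3.0, 1.0, 300)])

# Step 1: theta_0 = a, an arbitrary initialization
pi, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

def log_joint(pi, mu, sigma):
    """ln p(x_i, z_i=k | theta), shape (N, K)."""
    return (np.log(pi) - 0.5 * np.log(2 * np.pi * sigma**2)
            - 0.5 * ((x[:, None] - mu) / sigma) ** 2)

prev = -np.inf
for t in range(200):
    lj = log_joint(pi, mu, sigma)
    ell = logsumexp(lj, axis=1).sum()   # l(D^x | theta_t)
    assert ell >= prev - 1e-9           # monotonicity from the convergence theorem
    if ell - prev < 1e-8:               # Step 3: convergence test
        break
    prev = ell
    # E step: responsibilities q^t(z_i = k) = p(z_i = k | x_i, theta_t)
    r = np.exp(lj - logsumexp(lj, axis=1, keepdims=True))
    # M step: theta_{t+1} = argmax_theta E_{q^t}[l(D^+ | theta)] (standard GMM updates)
    Nk = r.sum(axis=0)
    pi = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)

print(pi, mu, sigma)  # theta_end: should be close to the generating parameters
```

Each iteration makes the bound $\ell\geqslant\mathcal{Q}(\bm{\theta},\bm{\theta}_t)$ tight at $\bm{\theta}_t$ (E step) and then maximizes the bound (M step), which is exactly the alternation in the algorithm box above.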


2. Remarks

1. The key to the EM algorithm is the introduction of latent variables. By examining their prior, their posterior, and the complete-data distribution, we obtain the equivalent transformation of the incomplete-data log-likelihood:
$$\begin{align}
\ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)=\mathrm{E}_{q}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]+\mathrm{H}\big[q,p\big]=\mathrm{E}_{q}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]+\mathrm{H}\big[q\big]+\mathrm{KL}\big[q\parallel p\big]
\end{align}$$It is precisely this identity that gives us the [EM Convergence Theorem]:
$$\begin{align}
\exists \,\bm{\theta}_{end}\to\max\ell \big(\mathcal{D}^\bm{x}\mid \bm{\theta}\big)\iff\max \mathrm{E}_{q}\big[\ell\big(\mathcal{D}^+\mid \bm{\theta}\big)\big]
\end{align}$$


Copyright Notice
The 柠檬CC blog, created and maintained by 引线小白, is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
This article first appeared on 柠檬CC [ https://www.limoncc.com ]. All rights reserved; infringement will be pursued.
Permanent link: https://www.limoncc.com/post/baeffd0a1d876b77/
To cite this article, please use:
引线小白. (Mar. 13, 2017). 《EM算法和混合模型一:EM算法》[Blog post]. Retrieved from https://www.limoncc.com/post/baeffd0a1d876b77
@online{limoncc-baeffd0a1d876b77,
title={EM算法和混合模型一:EM算法},
author={引线小白},
year={2017},
month={Mar},
date={13},
url={\url{https://www.limoncc.com/post/baeffd0a1d876b77}},
}
