ML| EM

What's EM

The EM algorithm is used to find the maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either there are missing values among the data, or the model can be formulated more simply by assuming the existence of additional unobserved data points.

The motivation is as follows. If we know the value of the parameters $\boldsymbol\theta$, we can usually find the value of the latent variables $\mathbf{Z}$ by maximizing the log-likelihood over all possible values of $\mathbf{Z}$, either simply by iterating over $\mathbf{Z}$ or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables $\mathbf{Z}$, we can find an estimate of the parameters $\boldsymbol\theta$ fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both $\boldsymbol\theta$ and $\mathbf{Z}$ are unknown:

  1. First, initialize the parameters $\boldsymbol\theta$ to some random values.
  2. Compute the best value for $\mathbf{Z}$ given these parameter values.
  3. Then, use the just-computed values of $\mathbf{Z}$ to compute a better estimate for the parameters $\boldsymbol\theta$. Parameters associated with a particular value of $\mathbf{Z}$ will use only those data points whose associated latent variable has that value.
  4. Iterate steps 2 and 3 until convergence.

The algorithm as just described monotonically approaches a local minimum of the cost function, and is commonly called hard EM. The k-means algorithm is an example of this class of algorithms.
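To make the hard-EM loop above concrete, here is a minimal k-means-style sketch in Python with NumPy. The function name `hard_em_kmeans`, the data array `X`, and the iteration cap are illustrative choices added here, not part of any particular library; the comments map each block back to the numbered steps.

```python
import numpy as np

def hard_em_kmeans(X, K, n_iter=100, seed=0):
    """Hard EM in the k-means style: alternate hard assignments of Z (step 2)
    and per-cluster re-estimation of the parameters (step 3)."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize the parameters (here, cluster means) from random data points.
    means = X[rng.choice(len(X), size=K, replace=False)].astype(float)
    labels = np.full(len(X), -1)
    for it in range(n_iter):
        # Step 2: the best Z for each point is the index of its nearest mean.
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # Step 4: assignments stopped changing, so we have converged.
        labels = new_labels
        # Step 3: each mean uses only the points whose latent value selects it.
        for k in range(K):
            if np.any(labels == k):
                means[k] = X[labels == k].mean(axis=0)
    return means, labels
```

Replacing the hard `argmin` assignment with posterior probabilities turns this into the soft version described next.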

However, we can do somewhat better. Rather than making a hard choice for $\mathbf{Z}$ given the current parameter values and averaging only over the set of data points associated with a particular value of $\mathbf{Z}$, we can instead determine the probability of each possible value of $\mathbf{Z}$ for each data point, and then use the probabilities associated with a particular value of $\mathbf{Z}$ to compute a weighted average over the entire set of data points. The resulting algorithm is commonly called soft EM, and is the type of algorithm normally associated with EM.
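As an illustration added here (using mixture-model notation with mixing weights $\pi_k$, which do not appear in the text above), the "probability of each possible value of $\mathbf{Z}$" is the posterior responsibility, and the weighted average replaces hard per-cluster counts:

$\gamma_{ik} = p(z_i = k \mid x_i, \boldsymbol\theta) = \frac{\pi_k\, p(x_i \mid \boldsymbol\theta_k)}{\sum_{j=1}^{K} \pi_j\, p(x_i \mid \boldsymbol\theta_j)}, \qquad \mu_k = \frac{\sum_i \gamma_{ik}\, x_i}{\sum_i \gamma_{ik}}$

Hard EM is recovered by rounding each $\gamma_{ik}$ to 0 or 1.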

With its ability to handle missing data and unobserved variables, EM has also become a useful tool for pricing a portfolio and managing its risk.

Algorithm

Given a statistical model consisting of a set $\mathbf{X}$ of observed data, a set of unobserved latent data or missing values $\mathbf{Z}$, and a vector of unknown parameters $\boldsymbol\theta$, along with a likelihood function $L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}|\boldsymbol\theta)$, the maximum likelihood estimate (MLE) of the unknown parameters is determined by the marginal likelihood of the observed data

$L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}|\boldsymbol\theta) = \sum_{\mathbf{Z}} p(\mathbf{X},\mathbf{Z}|\boldsymbol\theta) $
However, this quantity is often intractable: for example, if $\mathbf{Z}$ is a sequence of events, the number of possible values grows exponentially with the sequence length, making exact calculation of the sum extremely difficult.
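Even in the simpler i.i.d. case (an assumption added here for illustration), direct maximization is awkward because the marginal log-likelihood places a sum over latent values inside the logarithm:

$\log L(\boldsymbol\theta; \mathbf{X}) = \sum_{i=1}^{N} \log \sum_{z_i} p(x_i, z_i|\boldsymbol\theta)$

Setting its gradient to zero couples the parameters of all latent values and generally has no closed-form solution; in the sequence case mentioned above, the outer sum over $\mathbf{Z}$ itself has exponentially many terms.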

The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying the following two steps:

1. Expectation step (E step): Calculate the expected value of the log likelihood function, with respect to the conditional distribution of $\mathbf{Z}$ given $\mathbf{X}$ under the current estimate of the parameters $\boldsymbol\theta^{(t)}$:
$Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}|\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L (\boldsymbol\theta;\mathbf{X},\mathbf{Z}) \right] \,$
2. Maximization step (M step): Find the parameters that maximize this quantity:
$\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) \, $
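Putting the two steps together, here is a minimal sketch of soft EM for a one-dimensional mixture of $K$ Gaussians in Python with NumPy. The model choice, the function name `em_gmm`, and the stopping tolerance are illustrative assumptions added here, not part of the general algorithm.

```python
import numpy as np

def em_gmm(x, K=2, n_iter=200, tol=1e-8, seed=0):
    """Soft EM for a 1-D Gaussian mixture: theta = (weights, means, variances)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initialize theta^(0) to rough starting values.
    weights = np.full(K, 1.0 / K)
    means = rng.choice(x, size=K, replace=False).astype(float)
    variances = np.full(K, x.var())
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E step: responsibilities gamma[i, k] = p(z_i = k | x_i, theta^(t)).
        dens = (weights / np.sqrt(2 * np.pi * variances)
                * np.exp(-(x[:, None] - means) ** 2 / (2 * variances)))
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M step: theta^(t+1) = argmax_theta Q(theta | theta^(t)); for a Gaussian
        # mixture this reduces to responsibility-weighted averages.
        nk = gamma.sum(axis=0)
        weights = nk / n
        means = (gamma * x[:, None]).sum(axis=0) / nk
        variances = (gamma * (x[:, None] - means) ** 2).sum(axis=0) / nk
        # Track the observed-data log-likelihood at theta^(t) to detect convergence.
        ll = np.log(dens.sum(axis=1)).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return weights, means, variances

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
    print(em_gmm(x))  # weights, means, variances close to the generating values
```

The tracked log-likelihood never decreases from one iteration to the next, but because EM finds only a local maximum of the marginal likelihood, different random initializations can converge to different answers.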
Note that in typical models to which EM is applied:

  • The observed data points $\mathbf{X}$ may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). There may in fact be a vector of observations associated with each data point.
  • The missing values (aka latent variables) $\mathbf{Z}$ are discrete, drawn from a fixed number of values, and there is one latent variable per observed data point.
  • The parameters are continuous, and are of two kinds: Parameters that are associated with all data points, and parameters associated with a particular value of a latent variable (i.e. associated with all data points whose corresponding latent variable has a particular value).
