2018 10-708 (CMU) Probabilistic Graphical Models {Lecture 5} [Algorithms for Exact Inference]

Exact inference is no longer at the frontier of research, but its results are still in common use today.

Setup: X_{k+1}, ..., X_n are known (observed as evidence);

to compute the probability of the remaining variables X_1, ..., X_k given that evidence, we have to do inference.
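In symbols (the standard conditional-probability query; the particular variable split is the one assumed above):

P(X_1, ..., X_k | x_{k+1}, ..., x_n) = P(X_1, ..., X_k, x_{k+1}, ..., x_n) / P(x_{k+1}, ..., x_n)

The denominator is the joint summed over all values of X_1, ..., X_k; that marginalization is where the computational cost lives.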

Recent research focuses on approximate inference techniques.

Approximate inference comes in two flavors:

1) optimization-based (e.g., variational methods)

2) sampling-based (e.g., Monte Carlo methods)

Compare the computational complexity of computing a marginal over n variables, each taking K values:

Naive way (sum the joint over every configuration): O(K^n)

Chain rule / variable elimination (push each sum inward): O(n * K^2)

(The in-class example used n = 4.)
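To make the gap concrete, here is a minimal sketch (mine, not from the lecture) that computes P(X_n) on a Markov chain X_1 -> X_2 -> ... -> X_n both ways; the factor names and shapes are illustrative assumptions:

```python
import numpy as np
from itertools import product

K, n = 3, 4                                   # K states per variable, n variables
rng = np.random.default_rng(0)

# p0[i] = P(X_1 = i); T[t][i, j] = P(X_{t+2} = j | X_{t+1} = i)
p0 = rng.dirichlet(np.ones(K))
T = [rng.dirichlet(np.ones(K), size=K) for _ in range(n - 1)]

def joint(xs):
    """P(X_1 = xs[0], ..., X_n = xs[-1]) from the chain-rule factorization."""
    p = p0[xs[0]]
    for t in range(n - 1):
        p *= T[t][xs[t], xs[t + 1]]
    return p

# Naive: sum the joint over all K^n configurations.
naive = np.zeros(K)
for xs in product(range(K), repeat=n):
    naive[xs[-1]] += joint(xs)

# Variable elimination: push each sum inward; n-1 matrix-vector
# products at K^2 operations each, i.e. O(n * K^2) total.
msg = p0
for t in range(n - 1):
    msg = msg @ T[t]                          # sums out one variable per step

print(np.allclose(naive, msg))                # True: same answer, far cheaper
```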

Chain-rule derivation: factor the joint with the chain rule, then obtain the desired marginal by marginalizing out the rest of the variables, one at a time.

P(a) P(b) P(c|b) ...... P(h|e,f) => eliminate in the order a, b, c, d, e, f, g, h (the elimination sequence)

Eliminating h introduces an intermediate term m_h(e, f), which makes e and f dependent in the reduced problem.

Other elimination steps introduce no new dependency at all (the intermediate term involves no new pair of variables).
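A worked version of the step above, using only the factor P(h|e,f) shown in the factorization (whether the sum is trivial depends on evidence, which is an assumption here):

m_h(e, f) = sum_h P(h | e, f)

If h is unobserved and appears in no other factor, this sum is 1; if h is observed as evidence h̄, the step instead leaves m_h(e, f) = P(h̄ | e, f), a genuine new factor over {e, f}, which is how e and f become coupled. By contrast, a step like sum_g P(g | e) = 1 (P(g|e) is only an illustration of one of the elided factors) leaves nothing behind and introduces no dependency.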

Different elimination sequences lead to different computational complexity.

The cost depends on how large the newly created clique is.

If one elimination step connects every remaining vertex, you are in trouble: that step creates a factor exponential in the number of remaining variables.
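A minimal sketch (mine, not from the lecture) of why the order matters: simulate elimination on an undirected graph, where removing a vertex connects all of its remaining neighbors, and track the largest clique created. The graph and orders are illustrative:

```python
from itertools import combinations

def max_clique_created(edges, order):
    """Simulate variable elimination; return the size of the largest clique formed."""
    adj = {v: set() for v in order}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    worst = 0
    for v in order:
        nbrs = adj.pop(v)                      # eliminate v
        worst = max(worst, len(nbrs) + 1)      # clique: v plus its neighbors
        for a, b in combinations(nbrs, 2):     # fill-in: connect the neighbors
            adj[a].add(b); adj[b].add(a)
        for u in nbrs:
            adj[u].discard(v)
    return worst

# Star graph: center 'c' joined to four leaves.
edges = [('c', 'l1'), ('c', 'l2'), ('c', 'l3'), ('c', 'l4')]
print(max_clique_created(edges, ['l1', 'l2', 'l3', 'l4', 'c']))  # 2: leaves first
print(max_clique_created(edges, ['c', 'l1', 'l2', 'l3', 'l4']))  # 5: center first
```

Eliminating the hub first is exactly the "connect every vertex" trouble case: the induced clique contains all remaining variables.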

If there is a loop in the graph, you can't view message passing as going from one variable to another; instead, messages pass from clique to clique (the junction-tree view).
