Chapter 1 Summary: MLE (Maximum-Likelihood Estimation) and Bayesian Approach
Christopher M. Bishop, PRML, Chapter 1 Introduction
1. Notation and Logical Relations
- Training data: input values $\mathbf{x} = (x_1, \dots, x_N)^{\mathrm{T}}$ and their corresponding target values $\mathbf{t} = (t_1, \dots, t_N)^{\mathrm{T}}$. For simplicity, written as $\{\mathbf{x}, \mathbf{t}\}$.
- Goal of making predictions: to be able to make predictions for the target variable $t$ given some new value of the input variable $x$.
- Assumption of the predictive distribution over $t$: we shall assume that, given the value of $x$, the corresponding value of $t$ has a Gaussian distribution with a mean equal to the value $y(x, \mathbf{w})$ of the polynomial curve given by (1.1). Thus we have (1.60) $p(t \mid x, \mathbf{w}, \beta) = \mathcal{N}\left(t \mid y(x, \mathbf{w}), \beta^{-1}\right)$, where the precision parameter $\beta$ corresponds to the inverse variance of the noise.
- Likelihood function of the i.i.d. training data $\{\mathbf{x}, \mathbf{t}\}$ (1.61): $p(\mathbf{t} \mid \mathbf{x}, \mathbf{w}, \beta) = \prod_{n=1}^{N} \mathcal{N}\left(t_n \mid y(x_n, \mathbf{w}), \beta^{-1}\right)$
- MLE of parameters $\mathbf{w}$ and $\beta$:
  - $\mathbf{w}_{\mathrm{ML}}$: maximizing the likelihood with respect to $\mathbf{w}$ is equivalent to minimizing the sum-of-squares error function, so $\mathbf{w}_{\mathrm{ML}}$ is exactly the least-squares solution for linear regression.
  - $\beta_{\mathrm{ML}}$ (1.63): $\dfrac{1}{\beta_{\mathrm{ML}}} = \dfrac{1}{N} \sum_{n=1}^{N} \left\{ y(x_n, \mathbf{w}_{\mathrm{ML}}) - t_n \right\}^2$
- ML plugin prediction for new values of $x$: substituting the maximum likelihood parameters into (1.60) gives (1.64) $p(t \mid x, \mathbf{w}_{\mathrm{ML}}, \beta_{\mathrm{ML}}) = \mathcal{N}\left(t \mid y(x, \mathbf{w}_{\mathrm{ML}}), \beta_{\mathrm{ML}}^{-1}\right)$ (see the first sketch after this list).
- Prior distribution over $\mathbf{w}$: for simplicity, let us consider a Gaussian distribution of the form (1.65) $p(\mathbf{w} \mid \alpha) = \mathcal{N}\left(\mathbf{w} \mid \mathbf{0}, \alpha^{-1}\mathbf{I}\right) = \left(\dfrac{\alpha}{2\pi}\right)^{(M+1)/2} \exp\left\{-\dfrac{\alpha}{2}\mathbf{w}^{\mathrm{T}}\mathbf{w}\right\}$, where
  - the hyperparameter $\alpha$ is the precision of the distribution,
  - $M + 1$ is the total number of elements in the vector $\mathbf{w}$ for an order-$M$ polynomial.
- Posterior distribution for $\mathbf{w}$: using Bayes' theorem, (1.66) $p(\mathbf{w} \mid \mathbf{x}, \mathbf{t}, \alpha, \beta) \propto p(\mathbf{t} \mid \mathbf{x}, \mathbf{w}, \beta)\, p(\mathbf{w} \mid \alpha)$
- MAP: a step towards a more Bayesian approach, though note that MAP is still a point estimate. We find that the maximum of the posterior is given by the minimum of (1.67) $\dfrac{\beta}{2} \sum_{n=1}^{N} \left\{ y(x_n, \mathbf{w}) - t_n \right\}^2 + \dfrac{\alpha}{2} \mathbf{w}^{\mathrm{T}}\mathbf{w}$; that is, maximizing the posterior is equivalent to minimizing the regularized sum-of-squares error with regularization parameter $\lambda = \alpha / \beta$ (see the MAP sketch below).
Although we have included a prior distribution $p(\mathbf{w} \mid \alpha)$, we are so far still making a point estimate of $\mathbf{w}$ and so this does not yet amount to a Bayesian treatment. In a fully Bayesian approach, we should consistently apply the sum and product rules of probability, which requires, as we shall see shortly, that we integrate over all values of $\mathbf{w}$. Such marginalizations lie at the heart of Bayesian methods for pattern recognition.
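A minimal NumPy sketch of the maximum-likelihood pipeline above: fit $\mathbf{w}_{\mathrm{ML}}$ by least squares, estimate $\beta_{\mathrm{ML}}$ via (1.63), and form the plugin predictive (1.64). The sinusoidal toy data, the noise level, and the order $M = 3$ are illustrative assumptions, not values from the text.

```python
import numpy as np

# Maximum-likelihood fit of an order-M polynomial under the Gaussian
# noise model (1.60). Toy data: sin(2*pi*x) plus Gaussian noise, in the
# spirit of PRML's running example; N, M, noise level are assumptions.
rng = np.random.default_rng(0)
N, M = 10, 3
x = rng.uniform(0.0, 1.0, N)
t = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, N)

# Design matrix with phi_i(x) = x^i for i = 0, ..., M
Phi = np.vander(x, M + 1, increasing=True)

# w_ML: maximizing the likelihood (1.61) in w reduces to least squares
w_ml, *_ = np.linalg.lstsq(Phi, t, rcond=None)

# beta_ML from (1.63): inverse of the mean squared residual
beta_ml = 1.0 / np.mean((Phi @ w_ml - t) ** 2)

# Plugin predictive distribution (1.64) at a new input
x_new = 0.5
phi_new = x_new ** np.arange(M + 1)
print(f"mean = {phi_new @ w_ml:.3f}, variance = {1.0 / beta_ml:.3f}")
```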
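And a companion sketch of the MAP estimate: setting the gradient of (1.67) to zero gives the closed form $(\alpha\mathbf{I} + \beta\boldsymbol{\Phi}^{\mathrm{T}}\boldsymbol{\Phi})\,\mathbf{w}_{\mathrm{MAP}} = \beta\boldsymbol{\Phi}^{\mathrm{T}}\mathbf{t}$, i.e. ridge regression with $\lambda = \alpha/\beta$. The values $\alpha = 5\times10^{-3}$, $\beta = 11.1$, and $M = 9$ follow PRML's curve-fitting example; the toy data are again assumed.

```python
import numpy as np

# MAP estimate from (1.67): ridge regression with lambda = alpha/beta.
# alpha, beta, and M follow PRML's curve-fitting example; the toy data
# below are an assumption for illustration.
rng = np.random.default_rng(0)
N, M = 10, 9
alpha, beta = 5e-3, 11.1
x = rng.uniform(0.0, 1.0, N)
t = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, N)
Phi = np.vander(x, M + 1, increasing=True)

# Zero gradient of (1.67): (alpha*I + beta*Phi^T Phi) w_MAP = beta*Phi^T t
A = alpha * np.eye(M + 1) + beta * Phi.T @ Phi
w_map = np.linalg.solve(A, beta * Phi.T @ t)
print("w_MAP =", np.round(w_map, 3))
```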
- Fully Bayesian approach:
- Here we shall assume that the parameters $\alpha$ and $\beta$ are fixed and known in advance (in later chapters we shall discuss how such parameters can be inferred from data in a Bayesian setting).
- A Bayesian treatment simply corresponds to a consistent application of the sum and product rules of probability, which allow the predictive distribution to be written in the form (1.68) $p(t \mid x, \mathbf{x}, \mathbf{t}) = \int p(t \mid x, \mathbf{w})\, p(\mathbf{w} \mid \mathbf{x}, \mathbf{t})\, \mathrm{d}\mathbf{w}$, where the dependence on $\alpha$ and $\beta$ is omitted to simplify the notation.
- Result of the integration in (1.68):
  - (1.66): this posterior distribution is a Gaussian and can be evaluated analytically.
  - (1.68) can also be performed analytically, with the result that the predictive distribution is given by a Gaussian of the form (1.69) $p(t \mid x, \mathbf{x}, \mathbf{t}) = \mathcal{N}\left(t \mid m(x), s^2(x)\right)$, where the mean and variance are given by (1.70) $m(x) = \beta\, \boldsymbol{\phi}(x)^{\mathrm{T}} \mathbf{S} \sum_{n=1}^{N} \boldsymbol{\phi}(x_n)\, t_n$ and (1.71) $s^2(x) = \beta^{-1} + \boldsymbol{\phi}(x)^{\mathrm{T}} \mathbf{S}\, \boldsymbol{\phi}(x)$. Here the matrix $\mathbf{S}$ is given by (1.72) $\mathbf{S}^{-1} = \alpha \mathbf{I} + \beta \sum_{n=1}^{N} \boldsymbol{\phi}(x_n)\, \boldsymbol{\phi}(x_n)^{\mathrm{T}}$, where $\mathbf{I}$ is the unit matrix, and we have defined the vector $\boldsymbol{\phi}(x)$ with elements $\phi_i(x) = x^i$ for $i = 0, \dots, M$. (A numerical sketch follows this list.)
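A minimal sketch of the fully Bayesian predictive distribution (1.69)-(1.72), under the same assumed toy data and PRML example settings for $\alpha$, $\beta$, and $M$ as the sketches above:

```python
import numpy as np

# Fully Bayesian predictive distribution (1.69)-(1.72) with alpha and
# beta fixed and known. Toy data and settings as in the earlier
# sketches; they are assumptions for illustration.
rng = np.random.default_rng(0)
N, M = 10, 9
alpha, beta = 5e-3, 11.1
x = rng.uniform(0.0, 1.0, N)
t = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, N)
Phi = np.vander(x, M + 1, increasing=True)   # row n is phi(x_n)^T

# (1.72): S^{-1} = alpha*I + beta * sum_n phi(x_n) phi(x_n)^T
S = np.linalg.inv(alpha * np.eye(M + 1) + beta * Phi.T @ Phi)

def predictive(x_new):
    """Mean m(x) from (1.70) and variance s^2(x) from (1.71)."""
    phi = x_new ** np.arange(M + 1)
    m = beta * phi @ S @ (Phi.T @ t)   # (1.70)
    s2 = 1.0 / beta + phi @ S @ phi    # (1.71)
    return m, s2

m, s2 = predictive(0.5)
print(f"p(t | x=0.5, data) = N({m:.3f}, {s2:.3f})")
```

Note that the predictive mean $m(x)$ coincides with the MAP fit $y(x, \mathbf{w}_{\mathrm{MAP}})$, since $\mathbf{w}_{\mathrm{MAP}} = \beta\mathbf{S}\boldsymbol{\Phi}^{\mathrm{T}}\mathbf{t}$; what the Bayesian treatment adds is the $\boldsymbol{\phi}(x)^{\mathrm{T}}\mathbf{S}\,\boldsymbol{\phi}(x)$ term in the variance, expressing the remaining uncertainty in $\mathbf{w}$.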
2. Flowchart
The relations between the equations and notions above: training data $\{\mathbf{x}, \mathbf{t}\}$ → Gaussian noise assumption (1.60) → likelihood (1.61) → $\mathbf{w}_{\mathrm{ML}}, \beta_{\mathrm{ML}}$ → plugin prediction (1.64); adding the prior (1.65) → posterior (1.66) → MAP (1.67); integrating over $\mathbf{w}$ (1.68) → fully Bayesian predictive distribution (1.69)-(1.72).