http://blog.csdn.net/pipisorry/article/details/43115525
Machine Learning - Andrew Ng course study notes
Linear Regression with One Variable
Model Representation
Example:
This is a regression problem (one kind of supervised learning), and specifically univariate linear regression (linear regression with one variable).
Notation (terminology):
m = Number of training examples
x’s = “input” variable / features
y’s = “output” variable / “target” variable
e.g. (x, y) denotes one training example, while (x^(i), y^(i)) denotes the ith training example.
Model representation
h stands for the hypothesis; h maps x's to y's (in other words, h is a function from x to y that we are trying to find).
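For univariate linear regression the hypothesis takes the form h_θ(x) = θ₀ + θ₁x. A minimal sketch in Python (parameter values below are purely illustrative):

```python
# Hypothesis for univariate linear regression: h_theta(x) = theta0 + theta1 * x.

def hypothesis(theta0, theta1, x):
    """Predict the output y for input feature x using parameters theta0, theta1."""
    return theta0 + theta1 * x

# Example: with theta0 = 1.0 and theta1 = 2.0, h(3) = 1.0 + 2.0 * 3 = 7.0
print(hypothesis(1.0, 2.0, 3))
```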
Cost Function
In the previous example, h is set to h_θ(x) = θ₀ + θ₁x. What we need to do is figure out how to go about choosing the parameter values θ₀ and θ₁, trying to minimize the squared difference between the output of the hypothesis and the actual price of the house.
Define this function as follows (the J function is one kind of cost function):
J(θ₀, θ₁) = (1/(2m)) * Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i))^2
Why do we divide by 2m?
We are going to minimize 1/(2m) times the sum of squared errors. Putting the constant one half in front just makes some of the math a little easier: when we later differentiate the squared terms, the factor of 2 from the power rule cancels the 1/2.
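The cancellation can be sketched as follows (taking the partial derivative with respect to θ₁ as an example):

```latex
J(\theta_0,\theta_1) = \frac{1}{2m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)^2
\qquad\Longrightarrow\qquad
\frac{\partial J}{\partial \theta_1}
  = \frac{1}{2m}\sum_{i=1}^{m} 2\,\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\,x^{(i)}
  = \frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\,x^{(i)}
```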
why do we take the squares of the errors?
It turns out that the squared error cost function is a reasonable choice and will work well for most regression problems. There are other cost functions that would work pretty well, but the squared error cost function is probably the most commonly used one for regression problems.