TensorFlow: 《TensorFlow实战Google深度学习框架》 Chapter 4, Part 02 — Neural Network Optimization (learning rate, avoiding overfitting, moving-average model)

1. The learning rate must be set neither too small nor too large. Solution: use exponential decay (tf.train.exponential_decay).

For example:

Suppose we want to minimize the function y = x², starting from the initial point x₀ = 5.

1. With a learning rate of 1, x oscillates between 5 and -5: the gradient of y = x² is 2x, so each update computes x = x - 1*(2x) = -x, which simply flips the sign.

import tensorflow as tf

TRAINING_STEPS = 10
LEARNING_RATE = 1
x = tf.Variable(tf.constant(5, dtype=tf.float32), name="x")
y = tf.square(x)

# Plain gradient descent on y = x^2 with a fixed learning rate of 1.
train_op = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(y)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(TRAINING_STEPS):
        sess.run(train_op)
        x_value = sess.run(x)
        print("After %s iteration(s): x%s is %f." % (i+1, i+1, x_value))
After 1 iteration(s): x1 is -5.000000.
After 2 iteration(s): x2 is 5.000000.
After 3 iteration(s): x3 is -5.000000.
After 4 iteration(s): x4 is 5.000000.
After 5 iteration(s): x5 is -5.000000.
After 6 iteration(s): x6 is 5.000000.
After 7 iteration(s): x7 is -5.000000.
After 8 iteration(s): x8 is 5.000000.
After 9 iteration(s): x9 is -5.000000.
After 10 iteration(s): x10 is 5.000000.
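
The same arithmetic can be checked without TensorFlow at all. A minimal sketch of the hand-written update rule (the helper descend below is ours, not from the book): since dy/dx = 2x, gradient descent computes x = x - lr*2x each step.

# Minimal sketch: gradient descent on y = x^2 by hand.
def descend(x, lr, steps):
    for _ in range(steps):
        x = x - lr * 2 * x  # gradient of x^2 is 2x
    return x

print(descend(5.0, 1.0, 9))      # -5.0: with lr=1, x <- -x just flips the sign
print(descend(5.0, 0.001, 901))  # ~0.823: lr=0.001 crawls toward 0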

2. With a learning rate of 0.001, descent is too slow: each step multiplies x by 1 - 2*0.001 = 0.998, so even at round 901 x has only decayed to about 5 * 0.998^901 ≈ 0.823.

TRAINING_STEPS = 1000
LEARNING_RATE = 0.001
x = tf.Variable(tf.constant(5, dtype=tf.float32), name="x")
y = tf.square(x)

train_op = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(y)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(TRAINING_STEPS):
        sess.run(train_op)
        if i % 100 == 0:
            x_value = sess.run(x)
            print("After %s iteration(s): x%s is %f." % (i+1, i+1, x_value))
After 1 iteration(s): x1 is 4.990000.
After 101 iteration(s): x101 is 4.084646.
After 201 iteration(s): x201 is 3.343555.
After 301 iteration(s): x301 is 2.736923.
After 401 iteration(s): x401 is 2.240355.
After 501 iteration(s): x501 is 1.833880.
After 601 iteration(s): x601 is 1.501153.
After 701 iteration(s): x701 is 1.228794.
After 801 iteration(s): x801 is 1.005850.
After 901 iteration(s): x901 is 0.823355.

3. With an exponentially decaying learning rate, the early iterations take large steps, so good convergence is reached within far fewer training rounds.

TRAINING_STEPS = 100
global_step = tf.Variable(0)
# Start at 0.1 and multiply by 0.96 after every step (decay_steps=1, staircase=True).
LEARNING_RATE = tf.train.exponential_decay(0.1, global_step, 1, 0.96, staircase=True)

x = tf.Variable(tf.constant(5, dtype=tf.float32), name="x")
y = tf.square(x)
# Passing global_step makes minimize() increment it on every training step.
train_op = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(y, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(TRAINING_STEPS):
        sess.run(train_op)
        if i % 10 == 0:
            LEARNING_RATE_value = sess.run(LEARNING_RATE)
            x_value = sess.run(x)
            print("After %s iteration(s): x%s is %f, learning rate is %f." % (i+1, i+1, x_value, LEARNING_RATE_value))
After 1 iteration(s): x1 is 4.000000, learning rate is 0.096000.
After 11 iteration(s): x11 is 0.690561, learning rate is 0.063824.
After 21 iteration(s): x21 is 0.222583, learning rate is 0.042432.
After 31 iteration(s): x31 is 0.106405, learning rate is 0.028210.
After 41 iteration(s): x41 is 0.065548, learning rate is 0.018755.
After 51 iteration(s): x51 is 0.047625, learning rate is 0.012469.
After 61 iteration(s): x61 is 0.038558, learning rate is 0.008290.
After 71 iteration(s): x71 is 0.033523, learning rate is 0.005511.
After 81 iteration(s): x81 is 0.030553, learning rate is 0.003664.
After 91 iteration(s): x91 is 0.028727, learning rate is 0.002436.
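
tf.train.exponential_decay computes initial_rate * decay_rate ** (global_step / decay_steps), truncating the exponent to an integer when staircase=True. Here decay_steps is 1, so the rate after step n is simply 0.1 * 0.96**n, and the printed rates can be verified by hand:

# Check the printed learning rates against the decay formula.
for n in [1, 11, 21]:
    print(0.1 * 0.96 ** n)  # 0.096000, 0.063824, 0.042432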

2. Overfitting must be avoided. Solution: regularization.

1. Generate the simulated dataset.

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

data = []
label = []
np.random.seed(0)

# The unit circle centered at the origin splits the points into two classes
# (label 0 inside, label 1 outside); Gaussian noise is added to every point.
for i in range(150):
    x1 = np.random.uniform(-1,1)
    x2 = np.random.uniform(0,2)
    if x1**2 + x2**2 <= 1:
        data.append([np.random.normal(x1, 0.1), np.random.normal(x2, 0.1)])
        label.append(0)
    else:
        data.append([np.random.normal(x1, 0.1), np.random.normal(x2, 0.1)])
        label.append(1)

# data becomes shape (150, 2), label shape (150, 1).
data = np.hstack(data).reshape(-1, 2)
label = np.hstack(label).reshape(-1, 1)
plt.scatter(data[:,0], data[:,1], c=label.ravel(),
            cmap="RdBu", vmin=-.2, vmax=1.2, edgecolor="white")
plt.show()

2. Define a function that creates a weight variable and automatically adds its L2 regularization term to the loss collection.

def get_weight(shape, lambda1):
    # Create the weight variable and register its L2 penalty in the 'losses' collection.
    var = tf.Variable(tf.random_normal(shape), dtype=tf.float32)
    tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(lambda1)(var))
    return var
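
For reference, tf.contrib.layers.l2_regularizer(scale) in TensorFlow 1.x returns a function that computes scale * tf.nn.l2_loss(w), i.e. scale * sum(w**2) / 2. A quick sanity check (the constants are illustrative, not from the book):

# Sanity check: l2_regularizer(lambda)(w) == lambda * sum(w**2) / 2
w = tf.constant([[1.0, -2.0], [3.0, 4.0]])
with tf.Session() as sess:
    print(sess.run(tf.contrib.layers.l2_regularizer(0.5)(w)))  # 0.5 * 30 / 2 = 7.5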

3. Define the neural network.

x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))
sample_size = len(data)

# number of nodes in each layer
layer_dimension = [2,10,5,3,1]

n_layers = len(layer_dimension)

cur_layer = x
in_dimension = layer_dimension[0]

# Build the layers in a loop; every weight matrix goes through get_weight,
# so its L2 penalty lands in the 'losses' collection.
for i in range(1, n_layers):
    out_dimension = layer_dimension[i]
    weight = get_weight([in_dimension, out_dimension], 0.003)
    bias = tf.Variable(tf.constant(0.1, shape=[out_dimension]))
    cur_layer = tf.nn.elu(tf.matmul(cur_layer, weight) + bias)
    in_dimension = layer_dimension[i]

y = cur_layer

# Loss definition: MSE plus all L2 terms collected in 'losses'.
mse_loss = tf.reduce_sum(tf.pow(y_ - y, 2)) / sample_size
tf.add_to_collection('losses', mse_loss)
loss = tf.add_n(tf.get_collection('losses'))
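
The collection mechanism simply accumulates tensors under a string key and sums them at the end, so loss is mse_loss plus every per-layer L2 penalty. A toy sketch of the same pattern (the key 'demo_losses' is illustrative):

# Toy illustration of the collection pattern used above.
tf.add_to_collection('demo_losses', tf.constant(1.0))
tf.add_to_collection('demo_losses', tf.constant(2.5))
total = tf.add_n(tf.get_collection('demo_losses'))
with tf.Session() as sess:
    print(sess.run(total))  # 3.5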

4. Train with the unregularized loss function mse_loss.

# Optimize mse_loss (no regularization); define the number of steps and train.
train_op = tf.train.AdamOptimizer(0.001).minimize(mse_loss)
TRAINING_STEPS = 40000

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(TRAINING_STEPS):
        sess.run(train_op, feed_dict={x: data, y_: label})
        if i % 2000 == 0:
            print("After %d steps, mse_loss: %f" % (i,sess.run(mse_loss, feed_dict={x: data, y_: label})))

    # Plot the decision boundary learned by the network.
    xx, yy = np.mgrid[-1.2:1.2:.01, -0.2:2.2:.01]
    grid = np.c_[xx.ravel(), yy.ravel()]
    probs = sess.run(y, feed_dict={x:grid})
    probs = probs.reshape(xx.shape)

plt.scatter(data[:,0], data[:,1], c=label.ravel(),
            cmap="RdBu", vmin=-.2, vmax=1.2, edgecolor="white")
plt.contour(xx, yy, probs, levels=[.5], cmap="Greys", vmin=0, vmax=.1)
plt.show()
After 0 steps, mse_loss: 2.315934
After 2000 steps, mse_loss: 0.054761
After 4000 steps, mse_loss: 0.047252
After 6000 steps, mse_loss: 0.029857
After 8000 steps, mse_loss: 0.026388
After 10000 steps, mse_loss: 0.024671
After 12000 steps, mse_loss: 0.023310
After 14000 steps, mse_loss: 0.021284
After 16000 steps, mse_loss: 0.019408
After 18000 steps, mse_loss: 0.017947
After 20000 steps, mse_loss: 0.016683
After 22000 steps, mse_loss: 0.015700
After 24000 steps, mse_loss: 0.014854
After 26000 steps, mse_loss: 0.014021
After 28000 steps, mse_loss: 0.013597
After 30000 steps, mse_loss: 0.013161
After 32000 steps, mse_loss: 0.012915
After 34000 steps, mse_loss: 0.012671
After 36000 steps, mse_loss: 0.012465
After 38000 steps, mse_loss: 0.012251

5. Train with the regularized loss function loss.

# Optimize loss (with L2 regularization); define the number of steps and train.
train_op = tf.train.AdamOptimizer(0.001).minimize(loss)
TRAINING_STEPS = 40000

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(TRAINING_STEPS):
        sess.run(train_op, feed_dict={x: data, y_: label})
        if i % 2000 == 0:
            print("After %d steps, loss: %f" % (i, sess.run(loss, feed_dict={x: data, y_: label})))

    # Plot the decision boundary learned by the network.
    xx, yy = np.mgrid[-1:1:.01, 0:2:.01]
    grid = np.c_[xx.ravel(), yy.ravel()]
    probs = sess.run(y, feed_dict={x:grid})
    probs = probs.reshape(xx.shape)

plt.scatter(data[:,0], data[:,1], c=label.ravel(),
            cmap="RdBu", vmin=-.2, vmax=1.2, edgecolor="white")
plt.contour(xx, yy, probs, levels=[.5], cmap="Greys", vmin=0, vmax=.1)
plt.show()
After 0 steps, loss: 2.468601
After 2000 steps, loss: 0.111190
After 4000 steps, loss: 0.079666
After 6000 steps, loss: 0.066808
After 8000 steps, loss: 0.060114
After 10000 steps, loss: 0.058860
After 12000 steps, loss: 0.058358
After 14000 steps, loss: 0.058301
After 16000 steps, loss: 0.058279
After 18000 steps, loss: 0.058266
After 20000 steps, loss: 0.058260
After 22000 steps, loss: 0.058255
After 24000 steps, loss: 0.058243
After 26000 steps, loss: 0.058225
After 28000 steps, loss: 0.058208
After 30000 steps, loss: 0.058196
After 32000 steps, loss: 0.058187
After 34000 steps, loss: 0.058181
After 36000 steps, loss: 0.058177
After 38000 steps, loss: 0.058174

3. Moving-average model

It can give the model better performance.

1. Define a variable and the moving-average class.

import tensorflow as tf

v1 = tf.Variable(0, dtype=tf.float32)
step = tf.Variable(0, trainable=False)
# decay = 0.99; passing step (num_updates) lowers the effective decay early on.
ema = tf.train.ExponentialMovingAverage(0.99, step)
maintain_averages_op = ema.apply([v1])

2. Observe how the variable and its moving average change across updates.

with tf.Session() as sess:

    # initialize
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    print(sess.run([v1, ema.average(v1)]))

    # update v1
    sess.run(tf.assign(v1, 5))
    sess.run(maintain_averages_op)
    print(sess.run([v1, ema.average(v1)]))

    # update step and v1
    sess.run(tf.assign(step, 10000))
    sess.run(tf.assign(v1, 10))
    sess.run(maintain_averages_op)
    print(sess.run([v1, ema.average(v1)]))

    # apply the moving average once more
    sess.run(maintain_averages_op)
    print(sess.run([v1, ema.average(v1)]))
[0.0, 0.0]
[5.0, 4.5]
[10.0, 4.5549998]
[10.0, 4.6094499]
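
These numbers follow directly from the update rule. ExponentialMovingAverage uses an effective decay of min(decay, (1 + num_updates) / (10 + num_updates)) and maintains shadow = decay_eff * shadow + (1 - decay_eff) * value. A sketch reproducing the printed values by hand:

# Reproduce the printed shadow values.
def ema_step(shadow, value, decay, num_updates):
    d = min(decay, (1.0 + num_updates) / (10.0 + num_updates))
    return d * shadow + (1 - d) * value

s = ema_step(0.0, 5.0, 0.99, 0)     # decay capped at 1/10  -> 4.5
s = ema_step(s, 10.0, 0.99, 10000)  # decay is 0.99         -> 4.555
s = ema_step(s, 10.0, 0.99, 10000)  #                       -> 4.60945
print(s)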

 