TensorFlow eager execution and interface

Lecture note 4: Eager execution and interface

Eager execution

Eager execution is (1) a NumPy-like library for numerical computation with support for GPU acceleration and automatic differentiation, and (2) a flexible platform for machine learning research and experimentation. It's available as tf.contrib.eager, starting with version 1.5 of TensorFlow.

  • Motivation:

    • TensorFlow today: Construct a graph and execute it.

      • This is declarative programming. Its benefits include performance and easy translation to other platforms; its drawbacks are that it is non-Pythonic and difficult to debug.
    • What if you could execute operations directly?
      • Eager execution offers just that: it is an imperative front-end to TensorFlow.
  • Key advantages: Eager execution …
    • is compatible with Python debugging tools

      • pdb.set_trace() to your heart's content!
    • provides immediate error reporting
    • permits use of Python data structures
      • e.g., for structured input
    • enables you to use and differentiate through Python control flow
  • Enabling eager execution requires just a couple of lines of code

    import tensorflow as tf
    import tensorflow.contrib.eager as tfe

    tfe.enable_eager_execution()  # Call this at program start-up

and lets you write code that you can easily execute in a REPL, like this:

x = [[2.]]  # No need for placeholders!
m = tf.matmul(x, x)
print(m)  # No sessions!
# tf.Tensor([[4.]], shape=(1, 1), dtype=float32)

For more details, check out lecture slides 04.

Original source: https://www.cnblogs.com/kexinxin/p/10162829.html

Date: 2024-10-10 15:26:04
