TensorFlow: Generating .ckpt and .pb Files

Original post: https://www.cnblogs.com/nowornever-L/p/6991295.html

1. What are the .ckpt and .pb files that TensorFlow generates used for?

The .ckpt file is the checkpoint produced by TensorFlow; it holds all the
weights/parameters of the model. The .pb file stores the computational
graph. To run the model, TensorFlow needs both the graph and the
parameters. There are two ways to get the graph:
(1) use the Python program that builds it in the first place (tensorflowNetworkFunctions.py);
(2) use a .pb file (which would have to be generated by tensorflowNetworkFunctions.py).

The .ckpt file is where all the intelligence is.
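To make that concrete, here is a minimal sketch of producing both files. It assumes the TensorFlow 1.x API (under TensorFlow 2 the same calls live under tf.compat.v1); the tiny graph and the file names model.ckpt / graph.pb are illustrative, not taken from the original post.

import tensorflow as tf

# A tiny graph: y = W * x + b
x = tf.placeholder(tf.float32, shape=[None, 1], name="x")
W = tf.Variable(tf.ones([1, 1]), name="W")
b = tf.Variable(tf.zeros([1]), name="b")
y = tf.add(tf.matmul(x, W), b, name="y")

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # .ckpt: the variable values (saver.save also writes a .meta MetaGraphDef)
    saver.save(sess, "./model.ckpt")
    # .pb: the graph structure only -- no weights
    tf.train.write_graph(sess.graph_def, ".", "graph.pb", as_text=False)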

2. TensorFlow saving into/loading a graph from a file

I happened to see someone ask a related question on StackOverflow, and the thread is nicely organized:

--------------------------------------------------------------------

From what I've gathered so far, there are several different ways of dumping a TensorFlow graph
into a file and then loading it into another program, but I haven't been able to find clear examples/information on how they work. What I already know is this:

  1. Save the model's variables into a checkpoint file (.ckpt) using a tf.train.Saver() and restore them later (source)
  2. Save a model into a .pb file and load it back in using tf.import_graph_def() (source)
  3. Load in a model from a .pb file, retrain it, and dump it into a new .pb file using Bazel (source)
  4. Freeze the graph to save the graph and weights together (source)
  5. Use as_graph_def() to save the model, and for weights/variables, map them into constants (source)

However, I haven't been able to clear up several questions regarding these different methods:

  1. Regarding checkpoint files, do they only save the trained weights of a model? Could checkpoint files be loaded into a new program, and be used to run the model, or do they simply serve as ways to save the weights in a model at a certain time/stage?
  2. Regarding Bazel, can it only save into/load from .pb files for retraining? Is there a simple Bazel command just to dump a graph into a .pb?
  3. Regarding freezing, can a frozen graph be loaded in using tf.import_graph_def()? The Android demo for TensorFlow loads in Google's Inception model from a .pb file. If I wanted to substitute my own .pb file, how would I go about doing that? Would I need to change any native code/methods?
  4. In general, what exactly is the difference between all these methods? Or more broadly, what is the difference between a GraphDef and a checkpoint?

    In short, what I'm looking for is a method to save both a graph (as in, the various operations and such) and its weights/variables into a file, which can then be used to load the graph and weights into another program, for use (not necessarily continuing/retraining).

    Documentation about this topic isn't very straightforward, so any answers/information would be greatly appreciated.


1 Answer


There are many ways to approach the problem of saving a model in TensorFlow, which can make it a bit confusing. The documentation on this topic is taking shape, but doesn't cover all of the details in your question. Taking each of your sub-questions in turn:

  1. The checkpoint files (produced e.g. by calling saver.save() on a tf.train.Saver object) contain only the weights, and any other variables defined in the same program. To use them in another program, you must re-create the associated graph structure (e.g. by running code to build it again, or calling tf.import_graph_def()), which tells TensorFlow what to do with those weights. Note that calling saver.save() also produces a file containing a MetaGraphDef, which contains a graph and details of how to associate the weights from a checkpoint with that graph. See the tutorial for more details.


  2. tf.train.write_graph() only writes the graph structure, not the weights.

  3. Bazel is unrelated to reading or writing TensorFlow graphs. (Perhaps I misunderstand your question: feel free to clarify it in a comment.)
  4. A frozen graph can be loaded using tf.import_graph_def(). In this case, the weights are (typically) embedded in the graph, so you don't need to load a separate checkpoint (see the sketch after this list).

  5. The main change would be to update the names of the tensor(s) that are fed into the model, and the names of the tensor(s) that are fetched from the model. In the TensorFlow Android demo, this would correspond to the inputName and outputName strings that are passed to the classifier initialization code.


  6. The GraphDef is the program structure, which typically does not change through the training process. The checkpoint is a snapshot of the state of a training process, which typically changes at every step of the training process. As a result, TensorFlow uses different storage formats for these types of data, and the low-level API provides different ways to save and load them. Higher-level libraries, such as the MetaGraphDef format, Keras, and skflow, build on these mechanisms to provide more convenient ways to save and restore an entire model.
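As a rough illustration of freezing and of points 4 and 5 above, the sketch below freezes a checkpoint into a self-contained .pb (variables converted to constants) and then loads it with tf.import_graph_def() for inference. It again assumes the TensorFlow 1.x API; the file names (model.ckpt, frozen.pb) and node names (x, y) carried over from the sketch in section 1 are illustrative assumptions.

import tensorflow as tf
from tensorflow.python.framework import graph_util

# Freeze: merge the checkpoint weights into the graph as constants.
with tf.Session() as sess:
    saver = tf.train.import_meta_graph("./model.ckpt.meta")
    saver.restore(sess, "./model.ckpt")
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=["y"])
    with tf.gfile.GFile("./frozen.pb", "wb") as f:
        f.write(frozen.SerializeToString())

# Load the frozen graph and run inference; no checkpoint is needed now.
with tf.Graph().as_default() as g:
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("./frozen.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")
    with tf.Session(graph=g) as sess:
        x = g.get_tensor_by_name("x:0")
        y = g.get_tensor_by_name("y:0")
        print(sess.run(y, feed_dict={x: [[1.0]]}))

Note that after freezing, only the names of the fed and fetched tensors matter to the consuming program, which is exactly the point made in item 5.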



answered Aug 15 at 6:07 by mrry

Does this mean that the C++ API documentation lies, when it says that you can load the graph saved with tf.train.write_graph() and then execute it? – mnicky
   

The C++ API documentation does not lie, but it is missing a few details. The most important detail is that, in addition to the GraphDef saved by tf.train.write_graph(), you also need to restore the variable values from a checkpoint before you can execute the graph. – mrry
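A sketch of what that comment is getting at: the GraphDef written by tf.train.write_graph() contains no weights, so to actually execute the model in another program you also restore a checkpoint. The simplest complete route is the .meta file that saver.save() writes next to the checkpoint, as below (TensorFlow 1.x API; file and tensor names are the illustrative ones from section 1).

import tensorflow as tf

with tf.Session() as sess:
    # Re-creates the graph structure (and the Saver) from the MetaGraphDef ...
    saver = tf.train.import_meta_graph("./model.ckpt.meta")
    # ... and fills in the variable values from the checkpoint.
    saver.restore(sess, "./model.ckpt")
    g = tf.get_default_graph()
    x = g.get_tensor_by_name("x:0")
    y = g.get_tensor_by_name("y:0")
    print(sess.run(y, feed_dict={x: [[1.0]]}))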

Source: https://www.cnblogs.com/Ph-one/p/9516622.html

