TF Estimator: Can't Load *.pb from Saved Model

I create a simple model using TF Estimator and save it with the export_savedmodel function. I use the simple Iris dataset, which has 4 features.

import os

import tensorflow as tf

# xtrain, ytrain, xtest, ytest: Iris features/labels, loaded elsewhere as NumPy arrays
num_epoch = 50
num_train = 120
num_test = 30

# 1 Define input function
def input_function(x, y, is_train):
    dict_x = {
        "thisisinput" : x,
    }

    dataset = tf.data.Dataset.from_tensor_slices((
        dict_x, y
    ))

    if is_train:
        dataset = dataset.shuffle(num_train).repeat(num_epoch).batch(num_train)
    else:   
        dataset = dataset.batch(num_test)

    return dataset

def my_serving_input_fn():
    input_data = {
        "thisisinput" : tf.placeholder(tf.float32, [None, 4], name='inputtensors')
    }
    return tf.estimator.export.ServingInputReceiver(input_data, input_data)

def main(argv):
    tf.set_random_seed(1103) # avoiding different result of random

    # 2 Define feature columns
    feature_columns = [
        tf.feature_column.numeric_column(key="thisisinput",shape=4),
    ]

    # 3 Define an estimator
    classifier = tf.estimator.DNNClassifier(
        feature_columns=feature_columns,
        hidden_units=[10],
        n_classes=3,
        optimizer=tf.train.GradientDescentOptimizer(0.001),
        activation_fn=tf.nn.relu,
        model_dir = 'modeliris2/'
    )

    # Train the model
    classifier.train(
        input_fn=lambda:input_function(xtrain, ytrain, True)
    )

    # Evaluate the model
    eval_result = classifier.evaluate(
        input_fn=lambda:input_function(xtest, ytest, False)
    )

    print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
    print('\nSaving models...')
    classifier.export_savedmodel("modeliris2pb", my_serving_input_fn)

if __name__ == "__main__":
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' 
    tf.app.run(main)

After running the program, it produces a folder containing saved_model.pb. I have seen many tutorials suggest using contrib.predictor to load saved_model.pb, but I can't get it to work. I've used the contrib.predictor function to load the model:

import tensorflow as tf
from tensorflow.contrib import predictor

def main(a):
    with tf.Session() as sess:
        PB_PATH = "modeliris2pb/1536219836/"
        predict_fn = predictor.from_saved_model(PB_PATH)

if __name__ == "__main__":
    main()

But it yields an error:

ValueError: Got signature_def_key "serving_default". Available signatures are ['predict']. Original error: No SignatureDef with key 'serving_default' found in MetaGraphDef.

Is there another, better way to load *.pb files? Why does this error happen? I suspect it is because of the my_serving_input_fn() function, but I don't know why.
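Side note: the error message says the only available signature is 'predict', and contrib.predictor.from_saved_model accepts a signature_def_key argument, so one thing worth trying is to request that key explicitly. A minimal sketch, assuming the same export path as above (the feed key 'thisisinput' is taken from my_serving_input_fn):

import tensorflow as tf
from tensorflow.contrib import predictor

PB_PATH = "modeliris2pb/1536219836/"
# Ask for the signature that was actually exported instead of the
# default "serving_default" key.
predict_fn = predictor.from_saved_model(PB_PATH, signature_def_key='predict')
# The feed key matches the receiver tensor key defined in my_serving_input_fn.
result = predict_fn({"thisisinput": [[6.4, 3.2, 4.5, 1.5]]})
print(result)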

Comments

I was facing the same issue. I searched the web, but there is no explanation of this, so I tried a different approach:

SAVING:

First you need to define the feature spec (the feature shape and dtype) as a dict, like this:

feature_spec = {'x': tf.FixedLenFeature([4],tf.float32)}

Then you have to build a function that creates a string placeholder for serialized tf.Example protos, parses them with the feature spec, and returns a tf.estimator.export.ServingInputReceiver:

def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.string,
                                         shape=[None],
                                         name='input_tensors')
    receiver_tensors = {'inputs': serialized_tf_example}

    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

Then just save with export_savedmodel:

classifier.export_savedmodel(dir_path, serving_input_receiver_fn)
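For what it's worth, export_savedmodel returns the path of the timestamped export directory it creates, so you can capture it instead of hard-coding the timestamp when restoring later (a small sketch, same classifier and serving function as above):

# export_savedmodel creates a timestamped subdirectory under dir_path and
# returns its path, which is handy for the restore step below.
export_dir = classifier.export_savedmodel(dir_path, serving_input_receiver_fn)
print("SavedModel written to:", export_dir)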

Full example code:

import os
from six.moves.urllib.request import urlopen

import numpy as np
import tensorflow as tf


dir_path = os.path.dirname('.')

IRIS_TRAINING = os.path.join(dir_path, "iris_training.csv")
IRIS_TEST = os.path.join(dir_path, "iris_test.csv")

feature_spec = {'x': tf.FixedLenFeature([4], tf.float32)}

def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.string,
                                           shape=[None],
                                           name='input_tensors')
    receiver_tensors = {'inputs': serialized_tf_example}

    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)


def main():
    training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
        filename=IRIS_TRAINING,
        target_dtype=np.int,
        features_dtype=np.float32)
    test_set = tf.contrib.learn.datasets.base.load_csv_with_header(
        filename=IRIS_TEST,
        target_dtype=np.int,
        features_dtype=np.float32)

    feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]

    classifier = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[10, 20, 10],
                                            n_classes=3,
                                            model_dir=dir_path)

    # Define the training inputs
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": np.array(training_set.data)},
        y=np.array(training_set.target),
        num_epochs=None,
        shuffle=True)

    # Train the model.
    classifier.train(input_fn=train_input_fn, steps=200)

    # Export the trained model as a SavedModel.
    classifier.export_savedmodel(dir_path, serving_input_receiver_fn)


if __name__ == "__main__":
    main()

Restoring

Now let's restore the model:

import tensorflow as tf
import os

dir_path = os.path.dirname('.')  # current directory
exported_path = os.path.join(dir_path, "1536315752")  # timestamped export directory

def main():
    with tf.Session() as sess:
        # Load the SavedModel into the session.
        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], exported_path)

        # Build a tf.train.Example that matches the exported feature_spec.
        model_input = tf.train.Example(features=tf.train.Features(feature={
            'x': tf.train.Feature(float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5]))
        }))

        predictor = tf.contrib.predictor.from_saved_model(exported_path)

        input_tensor = tf.get_default_graph().get_tensor_by_name("input_tensors:0")

        model_input = model_input.SerializeToString()

        # 'inputs' matches the receiver_tensors key in serving_input_receiver_fn.
        output_dict = predictor({"inputs": [model_input]})

        print("prediction is", output_dict['scores'])


if __name__ == "__main__":
    main()
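As a side note, contrib.predictor.from_saved_model builds its own graph and session internally, so the explicit tf.Session, loader.load, and get_tensor_by_name calls above are not strictly required just to get predictions. A minimal sketch, assuming the same timestamped export directory:

import os
import tensorflow as tf

exported_path = os.path.join(os.path.dirname('.'), "1536315752")  # timestamped export dir

# Build and serialize one tf.train.Example that matches the exported feature_spec.
example = tf.train.Example(features=tf.train.Features(feature={
    'x': tf.train.Feature(float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5]))
}))

predict_fn = tf.contrib.predictor.from_saved_model(exported_path)
# 'inputs' matches the receiver_tensors key in serving_input_receiver_fn.
output = predict_fn({"inputs": [example.SerializeToString()]})
print(output['scores'])  # class scores from the classification signature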

Here is an IPython notebook demo example with data and an explanation:
