I have two ProtoBuf files that I am currently loading and forward-passing separately, by calling

out1 = session.run(graph1out, feed_dict={graph1inp: inp1})

followed by

final = session.run(graph2out, feed_dict={graph2inp: out1})

where graph1inp and graph1out are the input and output nodes of graph 1, and similarly for graph 2.
Now I would like to connect graph1out to graph2inp, so that I only have to run graph2out while feeding inp1 to graph1inp. In other words, I want to wire the output tensor of the first graph to the input tensor of the second, so that a single run is enough to perform inference with both trained ProtoBuf files.
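For context, here is a minimal sketch of the current two-step setup, assuming both GraphDefs have already been parsed from the ProtoBuf files; the import scopes ("g1", "g2") and tensor names ("input:0", "output:0") are placeholders, not the real names from your files:

import tensorflow as tf

graph1_def = ...  # tf.GraphDef parsed from the first ProtoBuf file
graph2_def = ...  # tf.GraphDef parsed from the second ProtoBuf file

with tf.Graph().as_default() as graph:
    # Import both graphs side by side, without connecting them.
    tf.import_graph_def(graph1_def, name="g1")
    tf.import_graph_def(graph2_def, name="g2")

    # Look up the endpoint tensors by name (names are assumptions).
    graph1inp = graph.get_tensor_by_name("g1/input:0")
    graph1out = graph.get_tensor_by_name("g1/output:0")
    graph2inp = graph.get_tensor_by_name("g2/input:0")
    graph2out = graph.get_tensor_by_name("g2/output:0")

with tf.Session(graph=graph) as session:
    # Two separate runs: graph1's output is pulled back into Python
    # and fed into graph2 by hand.
    out1 = session.run(graph1out, feed_dict={graph1inp: inp1})
    final = session.run(graph2out, feed_dict={graph2inp: out1})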
Assuming your ProtoBuf files contain serialized tf.GraphDef protos, you can connect the two graphs using the input_map argument of tf.import_graph_def():
# Import graph1.
graph1_def = ...  # tf.GraphDef object
out1_name = "..."  # name of the graph1out tensor in graph1_def.
graph1out, = tf.import_graph_def(graph1_def, return_elements=[out1_name])

# Import graph2 and connect it to graph1.
graph2_def = ...  # tf.GraphDef object
inp2_name = "..."  # name of the graph2inp tensor in graph2_def.
out2_name = "..."  # name of the graph2out tensor in graph2_def.
graph2out, = tf.import_graph_def(graph2_def,
                                 input_map={inp2_name: graph1out},
                                 return_elements=[out2_name])
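For completeness, a minimal end-to-end sketch under the same assumption (serialized tf.GraphDef protos); the file paths ("graph1.pb", "graph2.pb") and tensor names ("input:0", "output:0") are hypothetical and need to be replaced with the real ones from your models:

import tensorflow as tf

def load_graph_def(path):
    # Parse a serialized tf.GraphDef from a ProtoBuf file.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(path, "rb") as f:
        graph_def.ParseFromString(f.read())
    return graph_def

graph1_def = load_graph_def("graph1.pb")  # hypothetical file name
graph2_def = load_graph_def("graph2.pb")  # hypothetical file name

with tf.Graph().as_default() as combined_graph:
    # Import graph1 and fetch its input and output tensors by name.
    graph1inp, graph1out = tf.import_graph_def(
        graph1_def, return_elements=["input:0", "output:0"], name="g1")

    # Import graph2, splicing graph1's output in place of graph2's input.
    graph2out, = tf.import_graph_def(
        graph2_def,
        input_map={"input:0": graph1out},
        return_elements=["output:0"],
        name="g2")

with tf.Session(graph=combined_graph) as session:
    # A single run now drives both graphs: feed inp1 (your input data)
    # into graph1's input and fetch graph2's output directly.
    final = session.run(graph2out, feed_dict={graph1inp: inp1})

Because graph1out is spliced into graph2's input via input_map, fetching graph2out executes the whole combined computation in one session.run call.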