What is the C++ equivalent of the Python tf.Graph.get_tensor_by_name(name) in TensorFlow? Thanks!
Here is the code I am trying to run, but I get an empty output:
Status status = NewSession(SessionOptions(), &session); // create new session
ReadBinaryProto(tensorflow::Env::Default(), model, &graph_def); // read Graph
session->Create(graph_def); // add Graph to Tensorflow session
std::vector<tensorflow::Tensor> output; // create Tensor to store output
std::vector<string> vNames; // vector of names for required graph nodes
vNames.push_back("some_name"); // I checked the names and they are present in the loaded Graph
session->Run({}, vNames, {}, &output); // ??? As a result, the output is empty
From your comment, it sounds like you are using the C++ tensorflow::Session API, which represents graphs as GraphDef protocol buffers. There is no equivalent to tf.Graph.get_tensor_by_name() in this API.
Instead of passing typed tf.Tensor objects to Session::Run(), you pass the string names of tensors, which have the form <NODE NAME>:<N>, where <NODE NAME> matches one of the NodeDef.name values in the GraphDef, and <N> is an integer index of the output you want to fetch from that node.
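For example (taking a hypothetical node name), the first output of a node named "softmax" would be fetched as "softmax:0", and its second output as "softmax:1".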
The code in your question looks roughly correct, but there are two things I'd advise (a sketch combining both follows below):
1. The session->Run() call returns a tensorflow::Status value. If output is empty after the call returns, it is almost certain that the call returned an error status with a message that explains the problem.
2. You're passing "some_name" as the name of a tensor to fetch, but it is the name of a node, not a tensor. It is possible that this API requires you to specify the output index explicitly: try replacing it with "some_name:0".
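Putting both suggestions together, a minimal sketch might look like this (it is only a sketch, not your exact code: the model path "model.pb" is a placeholder, and "some_name:0" assumes that node has at least one output):
#include <iostream>
#include <vector>

#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

int main() {
  tensorflow::Session* session = nullptr;
  tensorflow::Status status = tensorflow::NewSession(tensorflow::SessionOptions(), &session);
  if (!status.ok()) { std::cerr << status.ToString() << std::endl; return 1; }

  tensorflow::GraphDef graph_def;
  status = tensorflow::ReadBinaryProto(tensorflow::Env::Default(), "model.pb", &graph_def);
  if (!status.ok()) { std::cerr << status.ToString() << std::endl; return 1; }

  status = session->Create(graph_def);
  if (!status.ok()) { std::cerr << status.ToString() << std::endl; return 1; }

  std::vector<tensorflow::Tensor> output;
  // Note the explicit ":0" output index on the fetch name.
  status = session->Run({}, {"some_name:0"}, {}, &output);
  if (!status.ok()) {
    // The Status message usually explains why nothing was fetched.
    std::cerr << status.ToString() << std::endl;
    return 1;
  }

  std::cout << "Fetched " << output.size() << " tensor(s)" << std::endl;
  return 0;
}
Checking every returned Status is what turns a silent empty output into an actionable error message.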
There is also a way to get the node from the graph_def directly. If you only want the shape/type of a node (named "inputx" in this example):
void readPB(const tensorflow::GraphDef& graph_def)
{
    for (int i = 0; i < graph_def.node_size(); i++)
    {
        if (graph_def.node(i).name() == "inputx")
        {
            graph_def.node(i).PrintDebugString();
        }
    }
}
Results:
name: "inputx"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: 5120
      }
    }
  }
}
Try the other member functions of the NodeDef to get the information you need.
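As a rough sketch (assuming the same placeholder node name "inputx"), you could read the dtype and shape attributes from the NodeDef instead of only printing the debug string:
#include <iostream>
#include <string>

#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/types.pb.h"

void printNodeInfo(const tensorflow::GraphDef& graph_def, const std::string& node_name)
{
    for (int i = 0; i < graph_def.node_size(); i++)
    {
        const tensorflow::NodeDef& node = graph_def.node(i);
        if (node.name() != node_name) continue;

        // The "dtype" attr holds a DataType enum value.
        auto dtype_it = node.attr().find("dtype");
        if (dtype_it != node.attr().end())
        {
            std::cout << "dtype: " << tensorflow::DataType_Name(dtype_it->second.type()) << std::endl;
        }

        // The "shape" attr holds a TensorShapeProto; size -1 marks an unknown dimension.
        auto shape_it = node.attr().find("shape");
        if (shape_it != node.attr().end())
        {
            const tensorflow::TensorShapeProto& shape = shape_it->second.shape();
            std::cout << "shape:";
            for (int d = 0; d < shape.dim_size(); d++)
            {
                std::cout << " " << shape.dim(d).size();
            }
            std::cout << std::endl;
        }
    }
}

// Usage: printNodeInfo(graph_def, "inputx");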