 

What is the difference between static computational graphs in TensorFlow and dynamic computational graphs in PyTorch?

When I was learning TensorFlow, one of its basic concepts was the computational graph, and that graph was said to be static. Then I found that in PyTorch the graph is said to be dynamic. What is the difference between static computational graphs in TensorFlow and dynamic computational graphs in PyTorch?

asked Sep 11 '17 by user166974


People also ask

What is the difference between static and dynamic graphs?

A static chart is typically created for documentation purposes. Examples include electronic schematics and org charts. A dynamic chart, on the other hand, stays connected to business data while it is displayed and is expected to change over time in response to business-related changes.

What is dynamic computation graph in PyTorch?

By contrast, PyTorch uses a dynamic graph. That means the computational graph is built up dynamically, as soon as we declare variables, and is rebuilt on each training iteration. Dynamic graphs are flexible and allow us to modify and inspect the internals of the graph at any time.

What is dynamic computational graph?

A Dynamic Computational Graph is a mutable system represented as a directed graph of data flow between operations. It can be visualized as shapes containing text connected by arrows, whereby the vertices (shapes) represent operations on the data flowing along the edges (arrows).

What is computational graph in TensorFlow?

What Are Computational Graphs? In TensorFlow, machine learning algorithms are represented as computational graphs. A computational graph is a type of directed graph where nodes describe operations, while edges represent the data (tensor) flowing between those operations.


1 Answer

Both frameworks operate on tensors and view any model as a directed acyclic graph (DAG), but they differ drastically in how you define it.

TensorFlow follows the 'data as code and code is data' idiom. In TensorFlow you define the graph statically before the model can run. All communication with the outside world goes through tf.Session objects and tf.placeholder tensors, which are substituted with external data at runtime.
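Here is a minimal sketch of that static style, assuming the TensorFlow 1.x API this answer describes (in 2.x the same symbols live under tf.compat.v1):

    import tensorflow as tf

    # Build the graph first; no computation happens at this point.
    x = tf.placeholder(tf.float32, shape=(None,), name="x")  # slot for external data
    y = x * 2 + 1                                            # a node in the static graph

    # Execute the graph later, feeding real data through the placeholder.
    with tf.Session() as sess:
        print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))   # [3. 5. 7.]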

In PyTorch things are far more imperative and dynamic: you can define, change, and execute nodes as you go, with no special session interfaces or placeholders. Overall, the framework is more tightly integrated with the Python language and feels more native most of the time. Writing TensorFlow can feel like your model sits behind a brick wall with a few tiny holes to communicate through. Still, this is largely a matter of taste.
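By contrast, a minimal PyTorch sketch: the graph is built as the code runs, so ordinary Python control flow changes its structure, with no session or placeholders:

    import torch

    x = torch.randn(3, requires_grad=True)

    # Which graph gets built depends on a runtime value.
    if x.sum() > 0:
        y = (x * 2).sum()
    else:
        y = (x ** 3).sum()

    y.backward()      # autograd traverses whatever graph was actually built
    print(x.grad)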

However, the two approaches differ in more than just a software-engineering sense: several neural network architectures genuinely benefit from the dynamic approach. Recall RNNs: with static graphs, the input sequence length has to stay constant. This means that if you develop a sentiment-analysis model for English sentences, you must fix the sentence length to some maximum value and pad all shorter sequences with zeros. Not too convenient. And you will hit more problems in the domain of recursive networks and tree-RNNs. Currently TensorFlow has limited support for dynamic inputs via TensorFlow Fold; PyTorch has it by default (see the sketch below).
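To illustrate the RNN point, a hedged PyTorch sketch: sequences of different lengths can be fed directly, one graph per example, with no fixed maximum length and no zero padding (the sizes below are arbitrary):

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

    # Each iteration rebuilds the graph for that sequence's actual length.
    for length in (5, 12, 3):
        seq = torch.randn(1, length, 8)   # (batch=1, time=length, features=8)
        output, hidden = rnn(seq)
        print(output.shape)               # torch.Size([1, length, 16])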

References:

https://medium.com/towards-data-science/pytorch-vs-tensorflow-spotting-the-difference-25c75777377b

https://www.reddit.com/r/MachineLearning/comments/5w3q74/d_so_pytorch_vs_tensorflow_whats_the_verdict_on/

answered Oct 16 '22 by Tushar Gupta