At the TensorFlow Dev Summit 2019, Google introduced the alpha version of TensorFlow 2.0. The new version was redesigned with a focus on developer productivity, simplicity, and ease of use. There are multiple changes in TensorFlow 2.0 to make its users more productive.

It includes many API changes, such as reordering arguments, removing redundant APIs, renaming symbols, and changing default values for parameters. In this article I will summarise some of the changes worth noticing.

Table of Contents:

  • Cleaner API
  • Eager execution
  • Easier Debugging
  • Less Verbose
  • Backwards compatible with TF 1.0
  • TensorFlow Serving
  • Conclusion
  • References

Install TensorFlow 2.0 Alpha

If you want to install it on your local computer, it is recommended to create a new, separate conda/python environment, activate it, and install TensorFlow 2.0 there by executing one of the following commands in the terminal:

# CPU version 
pip install tensorflow==2.0.0-alpha0

# GPU version
pip install tensorflow-gpu==2.0.0-alpha0

To verify that it installed properly, run the following:

import tensorflow as tf
print(tf.__version__)

Another option is to open a Jupyter Notebook and run the same commands there (prefixed with ! so they execute as shell commands).

My personal advice would be to use Google Colaboratory, as it makes it really easy to set up Python notebooks in the cloud. It offers free access to a GPU for up to 12 hours at a time. As a result, Colab has quickly become my go-to platform for performing machine learning experiments. If you install the GPU version of TF 2.0 in Colab, double check that the runtime has "GPU" as the hardware accelerator by clicking Edit > Notebook settings.

When installing TF 2.0 in Colab, you will probably be asked to restart the runtime; please proceed and restart it.

Cleaner API

It is common knowledge that when using TensorFlow, in many situations we didn't know exactly which API to use, since there were so many different TensorFlow-specific naming conventions. This was mainly due to the following reasons:

  • So many new packages being added
  • Lots of deprecated APIs
  • Lots of renaming of existing APIs

TF 2.0 addresses this issue, as many APIs are either gone or have moved. Some of the major changes include removing tf.app, tf.flags, and tf.logging in favor of the now open-source absl-py, rehoming projects that lived in tf.contrib, and cleaning up the main tf.* namespace by moving lesser-used functions into subpackages like tf.math. Some APIs have been replaced with their 2.0 equivalents: tf.summary, tf.keras.metrics, and tf.keras.optimizers.
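As a small illustration (not an exhaustive list) of where some of these symbols now live:

import tensorflow as tf  # TF 2.0

# Lesser-used math functions now live in the tf.math subpackage
x = tf.constant([1.0, 2.0, 3.0])
print(tf.math.reduce_std(x))  # standard deviation

# Optimizers and metrics are accessed through tf.keras
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
accuracy = tf.keras.metrics.Accuracy()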

Eager execution

In my opinion, this is the most important feature of TF 2.0: it allows for rapid prototyping by making what's called eager execution mode the default.

But before going forward and discussing what eager execution is let's first present the problems associated with the concept of the "static computation graph" in TF 1.0.
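As a sketch of the kind of TF 1.x example referred to below, consider squaring a tensor:

import tensorflow as tf  # TF 1.x

# Defining ops only adds nodes to a static graph; nothing is computed yet
x = tf.constant([[2.0, 3.0]])
y = tf.square(x)

print(y)  # Tensor("Square:0", shape=(1, 2), dtype=float32) -- no values

# Values are only produced when the graph is executed inside a session
with tf.Session() as sess:
    print(sess.run(y))  # [[4. 9.]]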

When running the above example in TF 1.0, a static computation graph is being built.

  • It's modeled after a common programming paradigm called dataflow.
  • In a dataflow graph, the nodes represent units of computation, and the edges represent the data consumed or produced by a computation.

For example, in a TensorFlow graph, the tf.square operation would correspond to a single node with one incoming edge (the tensor to be squared) and one outgoing edge (the result of the square).

But why did TF 1.0 choose to use dataflow?
  1. Parallelism - easier to execute operations in parallel
  2. Distributed execution - easier to partition the graph
  3. Compilation - the XLA compiler generates faster code using the graph structure
  4. Portability - the graph is language-independent
That said, the whole pipeline in TF 1.0 can be summarised in the following steps:
  1. Create the data input pipeline.
  2. Build a model; TF 1.0 creates a static computation graph.
  3. Feed the data through this computation graph, compute the loss from the loss function, and update the weights (variables) by backpropagating the error.
  4. Stop when you reach some stopping criterion.

Let's now proceed and discuss the advantage of TF 2.0 associated with the launch of eager execution mode.

What is eager execution mode?

Eager execution is:

  • An imperative programming paradigm that evaluates operations immediately;
  • Operations return concrete values instead of constructing a computational graph to run later;
  • There are no sessions or placeholders; instead, data is passed into functions as arguments.

In reality, TF 2.0 creates what's called a dynamic computation graph. This is more Pythonic, more in line with how Python was built, which makes the code much easier to debug and read, and less verbose. Just having this alone is such an important feature.

PyTorch already does this; in fact, Chainer did it about three years ago. But now this is a native concept in TensorFlow, which makes things much simpler to understand.

Running exactly the same example as above, we now get the following, which is very helpful:
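Here is a sketch of the same snippet under TF 2.0's eager mode:

import tensorflow as tf  # TF 2.0

# The same computation as before -- no graph construction, no session
x = tf.constant([[2.0, 3.0]])
y = tf.square(x)

print(y)  # tf.Tensor([[4. 9.]], shape=(1, 2), dtype=float32)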

There's our output, exactly as we wanted, without any errors. In fact, if we try running a session we get an error, as there are no sessions anymore. The concept of first constructing a graph and then executing pieces of the graph via tf.Session.run() has been deprecated in TF 2.0.

In simple terms, TF 2.0 has Functions, not Sessions. You can now write graph code using natural Python syntax, write eager-style code in a concise manner, and run it as a TensorFlow graph using tf.function.
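As a minimal sketch (the layer and shapes here are arbitrary):

import tensorflow as tf  # TF 2.0

# Decorating a Python function with tf.function traces it into a graph
@tf.function
def dense_layer(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([2, 3])
w = tf.random.normal([3, 4])
b = tf.zeros([4])
print(dense_layer(x, w, b))  # runs as a compiled TensorFlow graph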

Easier Debugging

Undoubtedly, debugging has been very hard in TF up to now. Long hours have been spent trying to debug situations like the one below:
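Here is a sketch of the kind of situation meant (the exact values are arbitrary; the square root of a negative number produces nan):

import tensorflow as tf  # TF 1.x

x = tf.constant([[1.0, 2.0]])
y = tf.sqrt(x - 3.0)   # negative inputs -> nan, but nothing runs yet
z = y * 2.0

print(y)  # Tensor("Sqrt:0", shape=(1, 2), dtype=float32) -- no values shown

with tf.Session() as sess:
    print(sess.run(z))  # [[nan nan]] -- the nan only appears now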

Of course, z will be evaluated as nan, and we couldn't see that y was the problem. Even if we print out y, it's not going to tell us that the problem was here, because y hasn't been computed yet; it's waiting until the graph is run. So this is one of the areas that TF 2.0 improves.
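Under TF 2.0's eager mode, the same sketch surfaces the problem immediately:

import tensorflow as tf  # TF 2.0

x = tf.constant([[1.0, 2.0]])
y = tf.sqrt(x - 3.0)

print(y)  # tf.Tensor([[nan nan]], shape=(1, 2), dtype=float32)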

Here we can quickly print out y and identify where the problem was, making debugging easier (we avoid having to run a tf.Session and only then evaluate y).

Less verbose

TF 1.0 has so many concepts, such as variables, placeholders, servables, TensorBoard, sessions, computation graphs, hyperparameter values, and formatting conventions, which you have to learn before you can even start talking about deep learning theory (a steep learning curve). Below is an example of this verbosity.
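The canonical illustration is something like a Deep Convolutional Generative Adversarial Network, which is far too long to reproduce here; but even a tiny TF 1.x model (a hypothetical softmax classifier, sketched below) shows the same kind of boilerplate:

import tensorflow as tf  # TF 1.x

# Placeholders declare inputs before any data exists
x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
y = tf.placeholder(tf.float32, shape=[None, 10], name="y")

# Variables must be created and initialized explicitly
w = tf.Variable(tf.truncated_normal([784, 10], stddev=0.1))
b = tf.Variable(tf.zeros([10]))

logits = tf.matmul(x, w) + b
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

# Nothing runs until a session feeds real data through the graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # sess.run(train_op, feed_dict={x: batch_x, y: batch_y})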

There is a lot happening in the above code, and it can be very hard to follow for beginners, as there are so many different TensorFlow-specific naming conventions that we have to know beforehand.

TF 2.0 aims to ease this process by implementing the following changes:

  • tf.keras is now the official high level API
  • Distributed training is simple (1 line of code to enable)
  • Deprecated APIs have been removed
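A minimal sketch of such a model (MNIST is assumed here as the dataset):

import tensorflow as tf  # TF 2.0

# Load and normalise MNIST (assumed dataset for this sketch)
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

# Build a small network with the Keras Sequential API
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)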

The above code is an example of training a model in Colab; it is short, and the only thing we need to import is TensorFlow. Keras is not imported separately since it is now built into TF 2.0. So, using TF 2.0, we built a neural network in a few lines using the Sequential API of Keras.

Backwards compatible with TF 1.0

TF 1.0 code can be converted to 2.0 by simply running the following command:

# simple script that converts TF 1.0 code to TF 2.0 code
!tf_upgrade_v2 --infile tf_1_code.py --outfile tf_2_code.py

By running the above command, TF will automatically convert your code into a format easily executable by TF 2.0.

TensorFlow Serving

Last but not least, TF 2.0 has made great improvements to TF Serving, which, personally speaking, is one of the most powerful tools in the entire machine learning pipeline. It probably needs a separate article to cover everything, but I will try to summarise the most important features.

What is TensorFlow Serving?
  • TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments.
  • TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs.
  • TensorFlow Serving provides out of the box integration with TensorFlow models, but can be easily extended to serve other types of models.
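For example, once a model is up and being served, a client can query it over TF Serving's REST API (the model name and input values here are hypothetical; 8501 is the default REST port):

import json
import requests  # assumes the requests package is installed

data = json.dumps({"instances": [[1.0, 2.0, 3.0, 4.0]]})
response = requests.post(
    "http://localhost:8501/v1/models/my_model:predict", data=data)
print(response.json())  # {"predictions": [...]}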

As a data scientist, you always want a model that is continuously learning from new observations, not a static model that was trained once and then served to users. TensorFlow Serving simplifies this process, as there are a lot of things that can go wrong.

TF 2.0 introduces the idea of a version control system for the model. Initially, you have a version of the model, let's call it model one, which is trained on some data and deployed to serve users in the form of a web app. Users make POST requests to this model and get predictions back.

While this is happening, another version of the model (let's call it model two) is training on new data in the background. Once model two has fully trained on the new data, it will gracefully phase out the original model (model one) and be phased in as its replacement. Once that process is finished, another model is trained, and so on. Most importantly, this happens in a production environment while users are being served as normal.
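A minimal sketch of how this handoff looks on disk (the paths, model name, and architecture here are hypothetical): TF Serving watches a base directory, each numbered subdirectory is a model version, and exporting a higher-numbered version causes the server to load it and gracefully unload the old one.

import tensorflow as tf  # TF 2.0

def make_model():
    # Stand-in for a real architecture (hypothetical)
    return tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,))])

# TF Serving watches /tmp/models/my_model; each numbered folder is a version
model_one = make_model()
tf.saved_model.save(model_one, "/tmp/models/my_model/1")  # served first

# ... later, after retraining on new data ...
model_two = make_model()
tf.saved_model.save(model_two, "/tmp/models/my_model/2")  # phases out version 1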

In reality, you can do this in several ways: you can have multiple models, or you can combine the outputs of multiple models into some ensemble technique. In simple terms, TensorFlow Serving allows you to serve models to users in a production environment while experimenting very fast.

If you think about what a production-grade data science pipeline looks like, the ML code is only a small part of it. There are other components such as:

  • Configuration
  • Monitoring
  • Data Collection
  • etc

All of these are considered DevOps, and TensorFlow Serving takes care of much of that for us.

Conclusion

In this article, we looked at TensorFlow 2.0's focus on usability, clarity, and flexibility. Eager execution and improved high-level APIs abstract away much of TensorFlow's usual complexity, making it much easier to quickly implement and run a machine learning model.

Thanks for reading and I am looking forward to hearing your questions :)
Stay tuned and Happy Machine Learning.

References