TensorFlow, Machine Learning and Weave Cloud

October 10, 2017

This article discusses how InstaDeep’s Reinforcement Learning platform can save you time and resources through its visual approach to neural net optimization.

At the September 14th TensorFlow meetup, held at the Barclays Techstars offices at Rise London, two speakers were featured. First up was Karim Beguir from InstaDeep, whose talk, “Boosting Deep Learning with Neural Architecture Design”, focused on neural nets and machine learning and included a live demo of InstaDeep’s hosted Reinforcement Learning platform.

Also speaking that evening was Ilya Dmitrichenko from Weaveworks, whose talk, “Time Traveling the Universe of Microservices & Orchestration”, described how to use Weave Cloud to manage and troubleshoot machine learning applications that use TensorFlow libraries in Kubernetes running on Google Container Engine.

Deep Learning vs Learning to Learn

InstaDeep’s Karim Beguir discussed the differences between deep learning and Reinforcement Learning (RL). He outlined how training a neural network is often a manual process that requires experience and can be very time-consuming.

Questions to ask yourself when training a neural net include: which hyperparameters should you specify, and which optimization algorithms should you use? There are also network architectures to consider, and these are in a state of constant change. In practice, says Karim, fine-tuning a deep learning algorithm is more art than science.
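To make this concrete, here is a minimal sketch, using the TensorFlow Keras API, of the kinds of choices that are typically made by hand; the specific layer sizes, dropout rate and learning rate below are arbitrary examples rather than recommendations:

```python
import tensorflow as tf

# Every one of these values is a manual choice the practitioner must make.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),    # layer width: trial and error
    tf.keras.layers.Dropout(0.5),                     # dropout rate: trial and error
    tf.keras.layers.Dense(10, activation='softmax'),  # number of output classes
])

model.compile(
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),  # which optimizer? which rate?
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'],
)
```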

Deep Neural Architecture

Although there are many different architectures available for deep learning patterns, strategies that use Reinforcement Learning (RL) are fairly new. One simple example that illustrates how RL works was shown: training a neural net to play a game of Pong and learn from its outcomes.

In Pong there are generally three different types of movement: UP, STILL and DOWN. At each step, the action is decided by the neural net: it is sampled from a softmax probability distribution, which can then be optimized with a policy gradient.

With Reinforcement Learning, an additional dimension is added to the movements, one that informs the neural net of certain outcomes in the game, i.e. whether the game was won or lost. A win or a loss also carries a weighting: a win increases the probability of all the actions chosen, and a loss decreases them.

deep_rl_pong.png
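As a rough, self-contained illustration of this idea (a toy NumPy sketch, not InstaDeep’s or Google’s actual code), the policy below samples one of the three Pong actions from a softmax over logits, and the REINFORCE-style update nudges the logits of the actions taken up after a win and down after a loss:

```python
import numpy as np

ACTIONS = ['UP', 'STILL', 'DOWN']

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def sample_action(logits):
    """Sample one of the three Pong actions from the softmax distribution."""
    return np.random.choice(len(ACTIONS), p=softmax(logits))

def reinforce_update(logits, action, reward, lr=0.01):
    """REINFORCE: move logits along reward * grad(log p(action))."""
    grad_log_p = -softmax(logits)
    grad_log_p[action] += 1.0   # gradient of log-softmax w.r.t. logits
    return logits + lr * reward * grad_log_p

# After a game ends, apply the same reward (+1 win, -1 loss) to every
# action that was taken during that game.
logits = np.zeros(len(ACTIONS))
action = sample_action(logits)
logits = reinforce_update(logits, action, reward=+1.0)
```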

Deep RL for Neural Nets

A recent breakthrough from Google (Zoph and Le, 2017) applies similar reward-weighted sampling to the design of neural networks themselves. A recurrent neural network sets the policy (sometimes referred to as a controller) and starts from randomly initialized weights and biases. Instead of just moving a joystick up and down to see what happens, it generates candidate ‘child’ networks and observes how well they perform.

ample_arch_a.png

As in the Pong example, actions are decided sequentially by sampling the softmax probability distribution for each feature. But instead of playing a game of Pong, the neural net was trained to recognize and classify the objects in an image. It was trained on 45,000 CIFAR-10 images, from which it built up a multi-layer convolutional neural network (CNN), as illustrated below:

multi-layer.png

Training the controller in this way, with the REINFORCE policy gradient algorithm, produced a network that classified objects correctly with only a 3.5% error rate.
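To make the training loop concrete, here is a toy, self-contained NumPy sketch of a REINFORCE-style controller; the candidate filter counts, the stand-in reward function (which replaces actually training a child network on CIFAR-10) and the hyperparameters are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
CHOICES = [24, 36, 48, 64]           # hypothetical filter counts per layer
theta = np.zeros((3, len(CHOICES)))  # controller logits for 3 layer decisions

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_architecture():
    """Sequentially sample one choice per layer from softmax distributions."""
    return [rng.choice(len(CHOICES), p=softmax(theta[i])) for i in range(len(theta))]

def reward_for(picks):
    """Stand-in for training the child network and measuring its accuracy."""
    total = sum(CHOICES[a] for a in picks)
    return 1.0 - abs(total - 120) / 120.0   # toy reward with a 'best' size

baseline, lr = 0.0, 0.1
for step in range(500):
    picks = sample_architecture()
    R = reward_for(picks)
    baseline = 0.9 * baseline + 0.1 * R     # moving baseline reduces variance
    for layer, a in enumerate(picks):
        grad = -softmax(theta[layer])
        grad[a] += 1.0                      # grad of log p(choice) w.r.t. logits
        theta[layer] += lr * (R - baseline) * grad
```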

Deep RL for Optimizers

The same sampling concepts can be applied to the optimization algorithms themselves, producing a candidate optimizer that can stand in for rule-based optimizers such as Adam, RMSprop and SGD, or even other optimization algorithms.

The possible actions available to the neural net during training form the grammar used by all ‘classical’ optimizers, as follows:

depp_rl_optimizer.png

And as in the game of Pong, actions are decided sequentially by sampling the softmax probabilities for each feature, from which the neural net builds its own ‘candidate optimizer’:

deep_rl_2.png

Once the candidate optimization graph is complete, a policy gradient algorithm can be applied to it.  

Training a small convolutional neural net enabled the controller to find and apply two new update rules. These were more accurate than all of the standard optimizers, and the resulting network identified objects in an image with an accuracy improvement of up to 2%.
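As a hedged illustration of what such a grammar might look like in code (a simplified invention, not the paper’s actual grammar), a candidate update rule can be assembled by sampling operands and a binary function and then applying the result as a weight update:

```python
import numpy as np

# A small subset of the kinds of operands and functions such a grammar uses.
OPERANDS = {
    'g':      lambda g, m: g,            # current gradient
    'sign_g': lambda g, m: np.sign(g),
    'm':      lambda g, m: m,            # running average of gradients
    'sign_m': lambda g, m: np.sign(m),
}
BINARY = {
    'add': lambda a, b: a + b,
    'mul': lambda a, b: a * b,
}

def candidate_update(op1, op2, binary, g, m, lr=0.01):
    """One step of a sampled candidate optimizer: w <- w - lr * b(u1, u2)."""
    return lr * BINARY[binary](OPERANDS[op1](g, m), OPERANDS[op2](g, m))

# For example, the controller might sample ('g', 'sign_m', 'mul'),
# i.e. delta = lr * g * sign(m), a momentum-sign style rule.
g = np.array([0.30, -0.20])
m = np.array([0.25, -0.10])
w = np.array([1.00,  1.00])
w = w - candidate_update('g', 'sign_m', 'mul', g, m)
```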

Limitations of this Strategy

  • Requires a lot of samples! For Neural Architecture Search, the authors used 800 GPUs for several weeks!
  • Neural Optimizer Search uses 100 CPUs over days to find good optimizers.
  • The REINFORCE policy gradient algorithm is quite sample-inefficient.
  • This makes the systematic approaches described above ‘heavy weaponry’ that isn’t practical for everyday needs.
  • Can we do something that would still provide results on a smaller time scale?

InstaDeep Visualizes Neural Networks as a Graph

InstaDeep gets around these limitations with a platform that gives users improvements on their favorite architectures more quickly.

Because InstaDeep’s platform sees networks as graphs, and since optimizers are already represented as graphs, the layers of a neural network can easily be reconciled with the optimizer’s parameters in a graphical overlay. Setting up the environment for visualization also makes it possible for the RL agents to perform actions across the board, regardless of the underlying architecture. This is in contrast to the RNN controller, which must be specifically configured for each network type.

instadeep_platform.png

seldon_instadeep.png

InstaDeep’s Reinforcement Learning Platform for Training Neural Networks 

Weave Cloud and TensorFlow

Next up was Ilya from Weaveworks, who showed us how to manage and visualize infrastructure, as well as debug a predictive model running in Kubernetes, with Weave Cloud. He deployed a TensorFlow app to Kubernetes and then used Weave Cloud to visualize it while it was running in a cluster on Google Container Engine (GKE).

What Weave Cloud Provides

Weave Cloud fills the gaps in a Kubernetes install and provides the tools necessary for a full development lifecycle:

  • Deploy – plug the output of your CI system into your cluster so that you can ship features faster
  • Explore – visualize and understand what’s happening so that you can fix problems faster
  • Monitor – understand the behavior of the running system using Prometheus so that you can identify problems faster

weave_cloud_loop.png

Weave Cloud Development Lifecycle

Deploy with Git: GitOps

The Deploy feature of Weave Cloud automatically detects configuration files and manages updates to them. At present, Deploy only accepts YAML files; if you are using JSON files, you will have to convert them to YAML to work with Weave Deploy. Ilya provides a JSON to YAML converter for you to do that.

With Weave Cloud, all that is needed is a path to your YAML files in a GitHub repo and read/write access to that repo; Weave takes care of the rest. You can use Weave Cloud with any CI system or Docker registry, including a private on-premises registry.

Changes are pushed to Git, where your CI system takes over, runs the integration tests, and builds a Docker image. Deploy then automatically updates the Kubernetes manifests and releases the newly built image to your running cluster, based on the policy you set.

gitops.png

Specify a Deploy Policy

Set the continuous delivery policy for your team: enable ‘Automatic’ to deploy a new image without your intervention, or set it to ‘Lock’ to manually update to the new image through the GUI or the CLI.

deploy_policy.png

Weave Cloud Deploy: Set the policy in the UI  

Explore & Manage Tensorflow Applications

Once the new image is deployed, you can use Explore to view the relationships between your app’s services and the infrastructure on which it runs. 

In addition to the visual map, you can drill down into Docker attributes, view the logs of microservices to troubleshoot connections, and inspect the messages being passed between services, so that you can debug any issues.

scope.png

container.png

Time Travel

A useful feature is the ability to travel back in time across your app’s lifetime. You can see its state at any point in time by enabling time travel and then dragging the timeline at the top of the main screen.

timetravel.png

Monitor

Weave Cloud provides a hosted version of the popular open source monitoring project, Prometheus: a distributed, multi-tenant, horizontally scalable deployment. You instrument your app using the Prometheus client libraries, and Weave hosts the scraped metrics for you, so that you don’t have to worry about storage or backups. A GUI for running PromQL queries and for configuring alerts when things go wrong is also provided.

promql.png
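To show what instrumenting an app with the Prometheus client libraries looks like, here is a minimal Python sketch using the official prometheus_client library; the metric names and the predict() stand-in are hypothetical:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metrics for a predictive microservice.
PREDICTIONS = Counter('predictions_total', 'Number of predictions served')
LATENCY = Histogram('prediction_latency_seconds', 'Prediction latency in seconds')

@LATENCY.time()              # records how long each call takes
def predict():
    PREDICTIONS.inc()
    time.sleep(random.uniform(0.01, 0.05))   # stand-in for model inference

if __name__ == '__main__':
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        predict()
```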

Wrapping Up

We discussed how InstaDeep’s Reinforcement Learning platform can save you time and resources through its visual approach to neural net optimization. In addition, Ilya from Weaveworks discussed how Weave Cloud helps troubleshoot and manage machine learning applications running in Kubernetes.

Try the step-by-step tutorial, “Troubleshooting a TensorFlow Predictive Model Microservice with Weave Cloud”, for how to configure and launch a predictive microservice in Kubernetes on Google Container Engine and manage it with Weave Cloud.

Check out the documentation for information on Weave Cloud. 

For more information about Seldon, see the Seldon Core documentation.

Join the Weave Online User Group and the TensorFlow London meetup for more talks like these.
