In late September 2017, Microsoft announced significant updates to Azure Machine Learning. These updates give data scientists the ability to build, deploy, monitor, and manage models at any scale. We’re going to look at the new services and what they are capable of.
The Azure Machine Learning Experimentation service was built to let data scientists and developers experiment at a faster pace. Any project backed by a Git repository can track the data, code, and configuration used for a particular run. Outputs are tracked as well, and, more importantly, you get a detailed run history you can use to evolve your models further.
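To make the idea concrete, here is a minimal sketch of per-run tracking: each run records its configuration, a fingerprint of that configuration, and the resulting metrics, so a result can always be traced back to the exact settings that produced it. The `RunTracker` class and its methods are hypothetical illustrations, not part of any Azure SDK.

```python
import hashlib
import json

class RunTracker:
    """Toy run tracker: ties each experiment run to its exact configuration."""

    def __init__(self):
        self.history = []

    def submit(self, config, metrics):
        # Fingerprint the configuration so an identical run can be
        # identified and reproduced later.
        fingerprint = hashlib.sha256(
            json.dumps(config, sort_keys=True).encode()
        ).hexdigest()[:12]
        record = {"config": config, "fingerprint": fingerprint, "metrics": metrics}
        self.history.append(record)
        return record

tracker = RunTracker()
run = tracker.submit({"learning_rate": 0.01, "epochs": 10}, {"accuracy": 0.91})
```

Two runs with the same configuration produce the same fingerprint, which is the property that lets a run history link results back to settings.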
This service is built around Python frameworks and tools, and can run locally, inside a Docker container, or scaled out on Spark. It supports deep learning on GPU-accelerated machines using frameworks such as PyTorch, Microsoft Cognitive Toolkit, and TensorFlow.
Experimentation provides libraries with in-depth APIs that connect to remote data and deployment targets. Being able to run these models anywhere Machine Learning Server runs is a significant improvement, and it lets you build and refine models nearly anywhere you like.
The second new feature is Model Management, which provides management, deployment, monitoring, hosting, and versioning for models on-premises, in Azure, and on IoT Edge devices. Containers provide a stable environment to host your models, and models are exposed as web services so that you can add custom logging, state management, advanced logic, or other code into your pipeline. This feature also includes a hosting infrastructure built for model serving that provides efficient routing.
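The versioning idea can be sketched in a few lines: a registry keeps every artifact ever registered under a model name, hands out monotonically increasing version numbers, and serves either the latest version or a specific one. This is an illustrative in-memory toy, not the Model Management API.

```python
class ModelRegistry:
    """Toy model registry: versioned registration and lookup."""

    def __init__(self):
        self._models = {}

    def register(self, name, artifact):
        # Each registration of the same name appends a new version.
        versions = self._models.setdefault(name, [])
        versions.append(artifact)
        return len(versions)  # version numbers start at 1

    def get(self, name, version=None):
        versions = self._models[name]
        if version is None:
            return versions[-1]  # latest version by default
        return versions[version - 1]

registry = ModelRegistry()
registry.register("churn-model", {"weights": [0.1, 0.2]})
registry.register("churn-model", {"weights": [0.3, 0.4]})
```

Keeping old versions addressable is what makes rollbacks and side-by-side comparisons possible in a real model management service.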
Deployed models can be monitored at any time through Application Insights, giving you details of model execution. You also get version information that links back to the particular code, data, and configuration used to create a model. There is also support for managed deployments designed to avoid the downtime that can waste business time.
When used together, the Experimentation and Model Management services provide lineage for models all the way back to the first job used to build them. Telemetry also lets you trace a model’s decisions back to what created it, giving you a full debugging story from beginning to end.
Azure Machine Learning Workbench
Azure Machine Learning Workbench is a client application that runs on Windows or Mac. It provides an easy installation of Python with many services preconfigured, along with connectivity to the backend services in Azure. Think of it as a control panel that lets you see how development is progressing and makes the new services easy to use.
A new set of AI-powered data wrangling tools makes preparing data more efficient. A collection of simple libraries handles data sources, letting data scientists spend their time on code rather than changing file paths and similar details as data moves between locations.
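The benefit of source-agnostic data access can be illustrated with a small sketch: one read function dispatches on the location’s scheme, so experiment code stays the same whether data lives on disk or in a remote store. The `read_text` function below is a hypothetical example, not part of any Azure library.

```python
from urllib.parse import urlparse

def read_text(location):
    """Read text from a location string, dispatching on its scheme."""
    scheme = urlparse(location).scheme
    if scheme in ("", "file"):
        # Local path: plain filesystem read.
        path = location.replace("file://", "", 1)
        with open(path) as f:
            return f.read()
    if scheme in ("http", "https"):
        # A real implementation would stream from the remote store here.
        raise NotImplementedError("remote fetch not shown in this sketch")
    raise ValueError(f"unsupported scheme: {scheme!r}")
```

Because callers pass a single location string, moving a dataset from a laptop to cloud storage changes only the string, not the experiment code around it.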
Visual Studio Code Tools for AI
In addition to the above, Microsoft released Visual Studio Code Tools for AI, an extension that offers capabilities for building deep learning models with frameworks such as Theano, Caffe2, Keras, TensorFlow, and Microsoft Cognitive Toolkit. The extension lets you use the development tools you prefer while executing jobs locally or in the cloud.
As you can see, the latest Microsoft Azure Machine Learning release brings a ton of new updates. If you haven’t had a chance to experience it for yourself, there’s no better time than the present.