Elvis and Las Vegas go hand in hand and in that area, as in many others, AWS re:Invent delivered. Now a bellwether for the tech industry, AWS re:Invent 2018 drew over 50,000 customers, partners and analysts to Las Vegas last week to showcase the scale and breadth of the company.
With 140 new capabilities announced spanning a number of areas in cloud services, the firm continues to push the boundaries of the cloud industry and its leadership within it. At its current growth trajectory, AWS could feasibly surpass $100 billion of annual recurring revenue within four years, a staggering possibility for a company just 12 years old.
By far the biggest focus at re:Invent, as it was a year earlier, was artificial intelligence (AI) and specifically machine learning, which continues to be one of Amazon’s biggest strategic bets. Major announcements concentrated on key enterprise challenges with the technology today, such as the high cost of training models on large data sets, the difficulty of preparing data and the complexity of developer tools.
Let's take a look at some of the moves within the context of AWS's overall AI direction and then go deeper into our assessment of what it means moving forward.
AWS Builds Momentum in 2018
After launching many AI products at re:Invent 2017, including Amazon SageMaker and AWS DeepLens among others, AWS has seen a big response to its strategy in 2018, reporting 250 percent growth, more than 10,000 SageMaker customers and over 200 new features launched over the past 12 months. For example, according to the firm, 80 percent of Google TensorFlow workloads run on the AWS cloud, allowing it to claim that more ML is happening on AWS than anywhere else at the moment.
AWS breaks down its capabilities across three domains:
- Frameworks and infrastructure. Designed for skilled researchers and academics, these include dedicated infrastructure services such as EC2, FPGAs and GPU hardware accelerators for machine learning provided through partner Nvidia, as well as support for a wide array of ML frameworks such as TensorFlow, MXNet and PyTorch.
- ML services. These are aimed at developers and data scientists working in machine learning and include Amazon SageMaker, its fully managed platform to build, train and deploy models at scale.
- AI services. This suite of off-the-shelf models, developer APIs and services is targeted at app developers with little, if any, experience with machine learning. They include Lex for chat bots; Polly for turning text into speech; Comprehend for natural language processing; Translate and Transcribe for translation and transcription; and Rekognition for computer vision services such as image and video analysis.
A good way to understand the full breadth of these capabilities is the image below, taken from CEO Andy Jassy’s keynote, which summarizes these areas nicely.
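To illustrate how little ML expertise the AI services layer assumes, here is a minimal Python sketch of calling Comprehend's sentiment analysis through boto3, the AWS SDK for Python. The sample text and helper function are illustrative assumptions, and the client call is commented out because it requires AWS credentials.

```python
# Minimal sketch: sentiment analysis with Amazon Comprehend via boto3.
# The actual API call (commented out) requires configured AWS credentials.

def build_sentiment_request(text: str, language_code: str = "en") -> dict:
    # Comprehend's DetectSentiment expects a Text string and a LanguageCode.
    return {"Text": text, "LanguageCode": language_code}

request = build_sentiment_request("The re:Invent keynote was excellent.")
print(request)

# import boto3
# comprehend = boto3.client("comprehend")
# response = comprehend.detect_sentiment(**request)
# print(response["Sentiment"])  # one of POSITIVE, NEGATIVE, NEUTRAL, MIXED
```

The point is less the specific service than the shape of the interaction: one API call against a pre-trained model, with no training pipeline involved.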
Elvis and AI: Who Knew?
Most of the key machine learning announcements Andy Jassy made in his marathon three-hour keynote fell into the three domains mentioned above. Supported by case studies such as one from Formula 1, the announcements were introduced under a theme borrowed from the Elvis Presley song “A Little Less Conversation,” performed by a house band led by rock musician Matthew Sweet. It was a clever nod to the hype currently surrounding AI and to the company's focus on lowering the barriers to entry in ML for enterprises. The most impactful announcements included:
One of re:Invent’s most interesting moves overall was Inferentia, a new high-performance ML inference chip that has been custom-designed by AWS as a cheaper and more effective alternative to GPUs. AWS stated that machine learning inference can make up 90 percent of the total cost of machine learning projects. The launch taps into the investment AWS made in 2015 in chip design firm Annapurna. It reflects the current shift toward vertical integration within many artificial intelligence portfolios of cloud suppliers, following moves by Google Cloud with its tensor processing units and Microsoft with its Project Brainwave initiative in 2018. It is going to be fascinating to see the response to this from customers as well as from other silicon players moving forward.
Without doubt, SageMaker has become the most important product in the AWS portfolio, receiving over 90 improvements in 2018. Jassy announced SageMaker Neo, designed to let developers train a machine learning model once and run it anywhere with double the performance; it can also shrink a model to as little as one-tenth of its original size so it can run on network-edge devices. The firm has pushed hard this year into enabling the deployment of machine learning on internet of things devices, with enhancements to Greengrass and the launch of DeepLens. The area remains a big part of its differentiation in machine learning.
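To give a sense of how Neo is driven through the SageMaker API, the snippet below assembles the parameters for boto3's `create_compilation_job` call. This is a sketch only: the S3 URIs, role ARN and target device are hypothetical placeholders, and the call itself is commented out since it requires an AWS account.

```python
# Sketch: compiling a trained model with SageMaker Neo via boto3's
# create_compilation_job. All ARNs and URIs below are hypothetical.

compilation_job = {
    "CompilationJobName": "demo-neo-job",
    "RoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    "InputConfig": {
        "S3Uri": "s3://my-bucket/model.tar.gz",           # trained model artifact
        "DataInputConfig": '{"data": [1, 3, 224, 224]}',  # expected input shape
        "Framework": "MXNET",
    },
    "OutputConfig": {
        "S3OutputLocation": "s3://my-bucket/compiled/",   # placeholder
        "TargetDevice": "jetson_tx2",                     # compile for an edge device
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 900},
}

# import boto3
# sagemaker = boto3.client("sagemaker")
# sagemaker.create_compilation_job(**compilation_job)
```

The train-once, run-anywhere pitch shows up in the `TargetDevice` field: the same artifact can be recompiled for cloud instances or edge hardware by changing that one value.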
Additionally, it launched Amazon SageMaker Ground Truth, which enables organizations to automate data labeling, and Amazon SageMaker RL, which enables the building, training and deployment of models using reinforcement learning. One of the highlights of the event was the preview of AWS DeepRacer, a fully autonomous miniature race car driven by reinforcement learning, and the unveiling of the AWS DeepRacer League, the world's first global league for autonomous car racing, an ongoing competition taking place at AWS events, with the championship to be held at re:Invent 2019. Both efforts are fun and clever ways to get developers working with the technology, as many find the world of deep learning daunting and complex.
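For readers new to reinforcement learning, the core idea behind SageMaker RL and DeepRacer can be sketched with a toy tabular Q-learning loop: an agent on a five-cell track learns, from a reward signal alone, that driving right reaches the finish line. This is a deliberately simplified illustration of the technique, not how DeepRacer itself is implemented.

```python
import random

# Toy illustration of reinforcement learning: tabular Q-learning on a
# five-cell track. The agent earns a reward only at the finish line
# (rightmost cell) and must discover a good policy from that signal alone.

N_STATES = 5                 # positions 0..4; state 4 is the finish line
ACTIONS = (-1, 1)            # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)   # walls at both ends
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def greedy(state):
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

random.seed(0)
for _ in range(500):                                  # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        nxt, r = step(s, a)
        best_next = max(q[(nxt, a2)] for a2 in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

policy = {s: greedy(s) for s in range(N_STATES - 1)}
print(policy)  # learned action for each non-terminal state
```

DeepRacer swaps this tiny table for a neural network and the five-cell track for a simulated circuit, but the reward-driven update loop is the same idea.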
AWS Marketplace for Machine Learning
Another important announcement was AWS Marketplace for Machine Learning. Akin to an app store for ML, it provides a platform for developers to share and monetize their algorithms and makes it easy for organizations to search a library of paid, free and open-source models that can then be deployed in SageMaker. With over 150 trained models already in the marketplace, AWS joins Algorithmia as one of the few environments facilitating the transaction of machine learning intellectual property.
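Deployment from the marketplace follows SageMaker's normal hosting flow: a subscribed model package is wrapped in a model, then served from an endpoint. The sketch below shows the boto3 shape of that first step; the ARNs are hypothetical placeholders, and the calls are commented out since they need an AWS account and an active subscription.

```python
# Sketch: turning a subscribed marketplace model package into a SageMaker
# model via boto3's create_model. The ARNs below are hypothetical.

model_spec = {
    "ModelName": "marketplace-demo-model",
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    "PrimaryContainer": {
        # ARN of the model package subscribed to in AWS Marketplace
        "ModelPackageName": "arn:aws:sagemaker:us-east-1:123456789012:model-package/example",
    },
    "EnableNetworkIsolation": True,  # marketplace models run network-isolated
}

# import boto3
# sagemaker = boto3.client("sagemaker")
# sagemaker.create_model(**model_spec)
# ...then create_endpoint_config and create_endpoint to serve predictions.
```

Network isolation is worth noting: the buyer's data never leaves their account, and the seller's model weights are never exposed, which is part of what makes the IP transaction workable.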
This is an important step in the maturation of machine learning. We have argued before that the market needs a trusted environment to manage the commercial distribution of algorithms at scale. Developers not only need a place where they can share, collaborate on and test their algorithms but also require a place to privately publish quotes, transact and consummate contracts around their intellectual property. As the AWS Marketplace for Machine Learning integrates more deeply with SageMaker in the future, this capability will become a key part of the machine learning process for AWS developers. This could give AWS a leg up over competitors such as Google Cloud, with its Kaggle community, and Microsoft, which now owns GitHub.
AWS made several important moves in AI services as well. Amazon Textract extracts text and data from any type of document with a new approach to optical character recognition. Amazon Comprehend Medical is a packaged solution for healthcare that has been pre-trained to extract health information from medical content. Drawing on the internal AI capabilities of Amazon.com, AWS also unveiled Amazon Personalize, a real-time recommendation and personalization service, and Amazon Forecast, a forecasting service that works with any historical time-series data.
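Like the other AI services, these are exposed as simple developer APIs. As a hedged sketch, this is roughly how a document stored in S3 would be passed to Textract's synchronous text-detection call; the bucket and object names are hypothetical, and the call is commented out because it requires AWS credentials.

```python
# Sketch: extracting text from a document image with Amazon Textract.
# Bucket and object names below are hypothetical placeholders.

request = {
    "Document": {
        # Synchronous Textract reads image files referenced in S3
        "S3Object": {"Bucket": "my-bucket", "Name": "invoice.png"}
    }
}

# import boto3
# textract = boto3.client("textract")
# response = textract.detect_document_text(**request)
# for block in response["Blocks"]:
#     if block["BlockType"] == "LINE":
#         print(block["Text"])
```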
These important steps in the firm’s AI strategy reveal AWS is starting to bring a “one Amazon” experience to machine learning for its customers, leveraging the AI capabilities embedded in its ecommerce platform, logistics, warehouses and retail businesses. These could become formidable weapons against its competitors. It is also expanding more aggressively into applied AI and business and vertical applications, building on its existing solutions for cybersecurity (Amazon Macie) and contact centers (Amazon Connect). As many customers continue to struggle to customize AI to their specific industries and business challenges, we expect to see deeper moves into business solutions in 2019, such as applications for the legal sector, predictive maintenance in manufacturing or fraud detection in finance.
Next Steps for AWS in AI: Trust and Governance
AI is an arms race at the moment, but AWS is doing a good job of cutting through the hype and getting businesses and developers started. At the heart of this effort is a relentless focus on customer needs and a rapidly expanding suite of relevant services that have taken off over the past year. Above all, AWS brings over 20 years of experience using machine learning in its retail, logistics and fulfillment operations. Marquee products such as Amazon Alexa, Echo devices or new disruptive services like Amazon Prime Air and Amazon Go already manifest this, but it is now emerging in its new AI services for enterprises as well.
But the firm can’t stand still and the next frontier must be in helping organizations improve the governance of AI. Specifically, SageMaker will need to evolve to incorporate tools that help developers detect bias and build systems that are explainable, compliant, auditable and secure by design. According to results from our forthcoming survey of IT decision-makers, trusting AI systems is seen to be the most significant barrier to the adoption of machine learning in enterprises, cited by 43 percent of decision-makers. Additionally, the ability of systems to ensure data integrity, security and compliance was the second biggest barrier to deployment, cited by 38 percent of respondents.
These eminently important topics received very little attention at the show, perhaps unsurprisingly given the low maturity of customers' AI efforts today. There are no easy solutions in these fields either, especially within the deep learning discipline, and much of the current work remains in research that will take time to develop into workable solutions.
Altogether, re:Invent 2018 revealed some of the most important advances we have seen in machine learning this year and signaled AWS’ push for growing leadership in this domain. But customers will require more from the firm in the fields of trust and governance in the future, which must now come into focus in 2019.