Amazon Web Services (AWS) was true to form last week as over 65,000 customers, partners and analysts descended on Las Vegas for re:Invent 2019.
As has been the case in previous years, the company announced an onslaught of over 100 new capabilities, spanning many areas of cloud services. CEO Andy Jassy’s signature three-hour marathon keynote was once more a whirlwind of customer case studies and digs at the competition backed by an '80s-fueled house band. Van Halen, Billy Joel and other blasts from the past provided the soundtrack to this year's core theme of business transformation and how companies can get value out of the AWS cloud.
Artificial intelligence (AI) and machine learning (ML) once again stole the spotlight, with over 218 sessions covering the topics this year. Major announcements concentrated on the most important enterprise challenges with the technology: the lack of ML-related skills, technology complexity and finding the right use cases.
Below, I take a look at some of the most impactful moves and assess what they mean for AWS’s AI strategy moving forward.
AWS Expanded Its Customer Base in 2019
AWS stated that “tens of thousands” of customers are now deploying its ML tools, which it claims is double the number of its nearest competitor, including flagship customers such as Nascar, Intuit and, most recently, the Seattle Seahawks. The NFL club announced it had selected AWS as its preferred cloud vendor in advance of the show, stating that AWS's AI enables better player tracking, performance analytics and video analysis.
This highlights an important trend we're currently seeing in cloud computing. As the market for data analytics and AI accelerates, the dominant trend towards multicloud strategies is changing. According to the annual survey of IT decision-makers conducted this fall by my firm, CCS Insight, 57% of US and European businesses deploying AI said they favor either a single-cloud or a preferred-cloud strategy for their data and machine learning requirements.
AWS breaks down its capabilities across three domains: frameworks and infrastructure; ML services, which include Amazon SageMaker; and its suite of off-the-shelf models, developer APIs and services. Over a dozen ML announcements spanned these key areas.
SageMaker Advances into ML Lifecycle Management and Explainability
Amazon SageMaker, the company's fully managed platform to build, train and deploy ML models and AI services, has become one of AWS’s most important products. At re:Invent, AWS introduced six new capabilities, including SageMaker Autopilot, its answer to automated machine learning, which gives developers greater visibility into and control over the automation process. Additionally, SageMaker Notebooks allows developers to automate the process of sharing notebooks, and SageMaker Experiments helps developers visualize and compare machine learning model iterations, training runs and outcomes in one place.
AWS also unveiled SageMaker Studio, an integrated development environment (IDE) for ML. SageMaker Studio includes the new SageMaker Debugger, a fully managed debugging service that monitors models in real time and provides warnings and remediation advice when issues are detected, and SageMaker Model Monitor, a service that detects concept drift and alerts developers when the performance of a model running in production begins to deviate from that of the original trained model.
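The core idea behind drift detection of this kind, comparing live traffic against the training baseline, can be illustrated with a small, self-contained sketch. This is my own illustrative code, not AWS's implementation; it uses the Population Stability Index (PSI), a common drift statistic, with the usual rule-of-thumb thresholds.

```python
import math

def psi(baseline, production, bins=10):
    """Population Stability Index between two samples of a numeric feature.
    A PSI above 0.2 is a common rule of thumb for significant drift."""
    lo = min(min(baseline), min(production))
    hi = max(max(baseline), max(production))
    width = (hi - lo) / bins or 1.0

    def frac(sample, i):
        left = lo + i * width
        right = lo + (i + 1) * width
        if i == bins - 1:
            count = sum(1 for x in sample if left <= x <= hi)  # closed last bin
        else:
            count = sum(1 for x in sample if left <= x < right)
        return max(count / len(sample), 1e-4)  # smooth empty bins

    return sum((frac(production, i) - frac(baseline, i)) *
               math.log(frac(production, i) / frac(baseline, i))
               for i in range(bins))

# Training-time feature distribution vs. two production snapshots
baseline = [x / 100 for x in range(100)]       # roughly uniform on [0, 1)
stable = [x / 100 for x in range(100)]         # same shape: no alarm
shifted = [0.5 + x / 200 for x in range(100)]  # mass moved to [0.5, 1): alarm

drifted = psi(baseline, shifted) > 0.2   # True: monitor would alert
healthy = psi(baseline, stable) < 0.1    # True: no action needed
```

A production monitor would run a check like this per feature on a schedule, which is essentially the workflow Model Monitor automates around a deployed endpoint.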
I spoke with one of AWS’s key ML customers, Vueling Airlines, part of the IAG Group, about its experience using the tools. As part of the airline’s transformation and shift towards a data-driven culture, Vueling has hired over 20 data scientists, many of whom work across teams and business areas. As an already extensive user of AWS services, Vueling sees the new SageMaker releases as crucial. Vueling’s head of data and analytics told me the advancements will enable its vision of a future in which more machine learning models are used by business units and governed by its data center of excellence.
As the market shifts from experimentation to the operationalization of ML into business processes, SageMaker is evolving quickly to meet this shift, especially in machine learning lifecycle management, explainability and governance, all areas of intense customer demand. According to our survey of IT decision-makers, transparency into how systems work and are trained is now one of the most important requirements when investing in AI and machine learning, cited by almost 50% of respondents. The survey also found 43% of respondents list tools that support AI operations and lifecycle management as the biggest current gap in the market for suppliers of AI platforms.
During the keynote, Matt Wood, vice president of AI at AWS, also revealed some solid improvements in ML explainability in SageMaker based on SHAP, as seen in the photo below. AWS has clearly been working hard to advance this critical area.
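For readers unfamiliar with SHAP: it is grounded in Shapley values from cooperative game theory, which attribute a prediction to each input feature as that feature's average marginal contribution across all orderings. The toy sketch below (my own illustration with a hypothetical three-feature linear model, not AWS code or the shap library) computes exact Shapley values and checks the "efficiency" property that attributions sum to the gap between the prediction and a baseline prediction.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model standing in for a trained model (illustrative only)
WEIGHTS = [2.0, -1.0, 0.5]
BASELINE = [1.0, 1.0, 1.0]  # "average" input the explanation is measured against

def predict(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

def shapley_values(x):
    """Exact Shapley attributions: each feature's average marginal
    contribution over all subsets, with absent features set to BASELINE."""
    n = len(x)

    def value(subset):
        mixed = [x[i] if i in subset else BASELINE[i] for i in range(n)]
        return predict(mixed)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley kernel weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (value(set(subset) | {i}) - value(set(subset)))
        phis.append(phi)
    return phis

x = [3.0, 0.0, 2.0]
phis = shapley_values(x)
# Efficiency check: attributions sum to prediction minus baseline prediction
assert abs(sum(phis) - (predict(x) - predict(BASELINE))) < 1e-9
```

Exact enumeration is exponential in the number of features; SHAP's practical value is in efficient approximations of these same quantities for real models.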
Related Article: From the Lab to Real Life: Operationalizing AI and Data Analytics
With Amazon Kendra, AWS Enters the Search Market
AWS also continued its expansion into business and vertical applications in what we call the fields of applied AI. These include: Amazon Rekognition Custom Labels, which allows organizations to build custom ML-based image recognition capabilities to identify objects or images specific to their business; Amazon Fraud Detector, which detects online identity and payment fraud in customers’ system activity based on technology from Amazon’s consumer business; Amazon Transcribe Medical, which allows developers to add speech-to-text capabilities to medical applications so that doctors can dictate clinical notes into patients’ electronic health records; and Amazon CodeGuru, which automates the code review process using models pre-trained on Amazon’s own code review projects.
One of the most intriguing announcements of all was Amazon Kendra, a new enterprise search offering which uses natural language processing to make information searching easier through connectors to data stored in SharePoint Online, JDBC and Amazon S3 repositories. Search is an area customers frequently describe as broken in their organizations. Since AWS doesn’t have a wide range of SaaS applications generating a corpus of information its AI can mine to improve search, this is an interesting move, and part of its strategy of helping firms customize AI to specific industry and business challenges. It will be fascinating to watch AWS take on this big opportunity in a market which currently lacks a clear winner.
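To see why customers describe classic enterprise search as broken, consider a bare-bones keyword ranker. The sketch below is a hypothetical TF-IDF toy of my own, not Kendra's actual technology; it matches literal terms well but has no answer for a natural-language paraphrase that shares no vocabulary with the document, which is the gap semantic, NLP-driven search aims to close.

```python
import math
from collections import Counter

# A tiny corpus standing in for an intranet (hypothetical documents)
DOCS = {
    "it-policy": "employees must reset their password every ninety days",
    "benefits": "health benefits enrollment opens in november",
    "expenses": "submit travel expenses within thirty days of the trip",
}

def tfidf_rank(query):
    """Rank documents by summed TF-IDF weight of the query's exact terms."""
    tokenized = {doc_id: text.split() for doc_id, text in DOCS.items()}
    n_docs = len(tokenized)
    df = Counter()  # document frequency per term
    for words in tokenized.values():
        df.update(set(words))

    def score(words):
        tf = Counter(words)
        return sum(tf[t] / len(words) * math.log(n_docs / df[t])
                   for t in query.split() if t in tf)

    return sorted(tokenized, key=lambda d: score(tokenized[d]), reverse=True)

best = tfidf_rank("reset password")[0]  # literal terms: finds "it-policy"
# A paraphrase like "how often do I change my login?" shares no terms with
# the policy document, so a pure keyword ranker scores it zero everywhere.
```

Connectors solve where the corpus lives; the harder problem Kendra is taking on is this vocabulary mismatch between how people ask and how documents are written.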
Related Article: The Enterprise Search Market Looks Up Following Recent Investment Rounds
The Takeaway: Amazon's Speed Sets It Apart
We argued last year that customers required more from AWS in the fields of ML explainability and governance and this year, the company ticked these important boxes. So how do the moves stack up in terms of advancing AWS’s differentiation in the increasingly crowded AI market?
We see several important areas where Amazon is now moving ahead.
- Cloud Platform – First, the AWS cloud is being rapidly engineered to support a range of ML workloads. AWS claims more ML happens on its cloud than anywhere else, supported by its claim of running 85% of cloud-based TensorFlow and MXNet workloads. Additionally, innovations in custom silicon such as Inferentia, announced last year, and Inf1 instances for EC2, announced this year, address infrastructure requirements for ML inference, which is where 90% of the cost resides.
- Edge – Another key area is hybrid and edge computing, where AWS has an early lead through products like Outposts, Greengrass, AWS IoT and dedicated edge ML solutions such as SageMaker Neo and AWS DeepLens. I see this lead accelerating with the introduction of Local Zones and AWS Wavelength, two of its major cloud announcements this year. As 5G and mobile edge computing take hold over the next five years, developers will increasingly look for the cloud best suited to low-latency applications and to hybrid and edge application development.
- Robotics – Another key area where the firm stands out is robotics. RoboMaker, its platform for building and deploying robots such as unmanned ground vehicles, robotic arms and drones, didn’t receive any major announcements this year, but the area is an important opportunity beyond the 200,000 robots Amazon uses in its fulfillment centers. NASA’s Jet Propulsion Laboratory shared in a session how it is using the platform, along with reinforcement learning in SageMaker, for an open-source, build-it-yourself rover project. Combining AWS’s edge, AI and robotics capabilities will make a potent mix, particularly for industrial scenarios.
- Speed – Above all, all this points to the fact that few (if any) are moving faster than AWS in ML in 2019. SageMaker alone has received a staggering 150 updates since the beginning of 2018. This pace can be bewildering for customers, but AWS's stated ambition of being “the broadest and deepest cloud platform” does not look unrealistic when it comes to this area.
A good way to understand the pace of change is by comparing the two images below from CEO Andy Jassy’s 2018 and 2019 re:Invent keynotes. The side by side comparison shows the rapid expansion of the portfolio in just 12 months.
Related Article: Re:Invent Shows Amazon Is Accelerating Its Efforts in AI
What Next for AWS?
AWS is doing a good job at helping developers build, scale and apply ML in their organizations. But the firm can’t rest on its laurels. We see three important areas of expansion that should now come into focus in 2020.
- ML Security – The Wall Street Journal reported in August that in 2018, a voice-based deepfake of a major UK company's CEO tricked a senior employee into wiring over $240,000 to a criminal bank account. Believed to be the first reported case of this kind of voice fraud, it is an astonishing reminder that requirements for dedicated ML security will escalate in the future. We expect AWS will advance its platform and algorithmic solutions in this area to help protect firms, particularly in regulated industries, from security threats posed to ML such as model inversions, trojans, spoofing and adversarial attacks.
- More on Explainability – AWS has made some good steps into explainability this year, but it's still early days. We expect AWS will become more proactive in publishing and incorporating research in this area, and more prescriptive with customers about the use cases where explainability requirements are high and about the trade-offs between greater performance and explainability.
- Business Audiences – Finally, AWS needs to grow its mindshare with business audiences. Much of its storytelling focuses on developers and the vast array of tactical and technical features that make them more productive. However, AI is also a business and C-suite topic that impacts strategy, company culture, operations and governance. AWS will need to uplevel its positioning and develop a constant cadence of communication to capture the attention of business decision-makers as well as technical developers. For this, the expansion of its AWS Solutions Lab and similar programs will also be crucial.
Re:Invent is a vital portal into Amazon’s future direction and investments. This year it revealed some of the most important moves we have seen in ML and signaled the company's growing differentiation. However, customers will require more from the provider in the fields of ML security, explainability and business strategy if it is to continue its leadership into 2020 and beyond.