Gartner's recently released Hype Cycle for Emerging Technologies identified a range of technologies that are either disrupting the workplace or will do so in the next few years. We've already looked at one of the trends the report highlighted — the rise of democratized AI. Today we dive into another trend: the emergence of digitized ecosystems and the problems they present for applications and data, particularly legacy applications.
Related Article: Is Your Enterprise Ready for 'Democratized AI'?
What Happens When the 'Things' Generate Data?
According to the report, the new ecosystems require advanced compute power, ubiquity-enabling ecosystems and major changes in the foundations that provide the volume of data needed to keep them running. The shift from compartmentalized technical infrastructure to ecosystem-enabling platforms lays the foundation for entirely new business models that are forming the bridge between humans and technology, the report reads.
The trend is being driven by blockchain (including blockchain for data security), IoT platforms, knowledge graphs and similar technologies, all of which require massive data sets to drive their respective applications.
The Hype Cycle highlights the issues that arise from how data is now being generated. In past computing eras — personal computing, cloud computing, mobile computing — research reports generally talked about data generated by humans. Over the last handful of decades, humans "went online" and began generating data directly. This drove enterprises to adapt to those traffic patterns and resulted in vast increases in human-generated video content, work content and business transactions. This in turn meant that infrastructures for data in transit and data at rest had to rapidly adapt. Massive cloud data centers and asymmetrical network infrastructure emerged, but the data still needed to be managed. That will no longer be the case for all data.
“What the Hype Cycle is showing is a new wave of innovation based upon the fact that machines (things, computers, whatever you want to call them) are now networked and asked to generate data and process it before giving results to the humans,” said Joel Vincent, CMO of Zededa, provider of a real-time enterprise platform. “So as 'things' come online and become promiscuous with their data, other non-human compute devices and apps take that data and process it. And there are billions upon billions of ‘things’ coming online.”
This data is being pushed from machine to machine and is not necessarily in the cloud as applications are being run near the source of data creation to deal with real-time interactions. The result is a lot of innovation that is designed to help enterprises manage augmented and virtual reality applications, IoT platforms, deep learning, machine learning and blockchain solutions.
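The pattern described above — running applications near the source of data creation and handing humans only the results — can be sketched in a few lines. The example below is a hypothetical illustration (the function name, readings and alert threshold are invented for this sketch, not drawn from any vendor's platform): an edge node reduces raw machine telemetry to a compact summary, and only that summary travels onward.

```python
from statistics import mean

def summarize_at_edge(readings, alert_threshold=90.0):
    """Reduce raw machine readings to a compact summary at the edge.

    The raw telemetry never leaves the site; only this small summary
    is forwarded to the cloud or to a human operator.
    """
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "alerts": [r for r in readings if r > alert_threshold],
    }

# Raw per-second telemetry generated by a machine, processed locally
raw = [72.1, 74.3, 95.6, 71.0, 88.2]
summary = summarize_at_edge(raw)  # only this dict leaves the edge node
print(summary)
```

The design point is simply that five readings in produce one small record out; at billions of 'things', that reduction is what keeps machine-to-machine traffic from all flowing through a central cloud.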
Related Article: Cloud Computing Takes a Back Seat to Edge Computing. Or Is it Fog?
Managing the Data Overload
Enterprises need to think about data traffic patterns in their organizations, Vincent said, and recognize when the traffic no longer flows through a central point (whether public cloud or private cloud) and ready their corporate networks for a whole new traffic flow as part of their digital transformation.
To relieve this data saturation, enterprises need to think about storage and about moving inactive data from active enterprise applications to data warehouses or the cloud. Generally, any workload that can process entities as a single object is a candidate for object storage. This includes archival and retrieval of database backups and storage of unstructured data such as images, video and text documents, said Ray Johnson, chief data scientist at Chicago-based consultancy SPR.
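One common way to decide which data is "inactive" enough to move is a last-accessed cutoff. The sketch below is illustrative only — the catalog, object keys and 180-day idle window are assumptions for the example, not a policy Johnson or SPR prescribes:

```python
from datetime import datetime, timedelta

def archive_candidates(objects, now, max_idle_days=180):
    """Return keys of objects not accessed within the idle window.

    `objects` maps object key -> last-access timestamp; anything older
    than the cutoff is a candidate to move to a warehouse or archive tier.
    """
    cutoff = now - timedelta(days=max_idle_days)
    return [key for key, last_access in objects.items() if last_access < cutoff]

now = datetime(2018, 9, 1)
catalog = {
    "q1_backup.db": datetime(2018, 1, 15),       # cold: archive it
    "active_report.pdf": datetime(2018, 8, 20),  # hot: keep it in place
}
print(archive_candidates(catalog, now))
```

In practice the same cutoff logic is usually expressed as a storage-class lifecycle rule rather than hand-rolled code, but the selection criterion is the same.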
Complex objects can be defined as composites of text, images, sound, video and so on. Web applications may also leverage object storage, since APIs allow retrieval and storage of objects with simple service requests. "The use of object storage is going to continue to expand, because information is being produced at an accelerated rate. New types of complex objects and metadata are being generated in the fields of artificial intelligence, astronomy, physics, medical imagery and bio-pharmaceuticals. There are also advances in storage technology that are increasing information density, allowing even more data to be stored with increased efficiency," Johnson said.
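The "simple service requests" Johnson mentions follow a small, uniform shape: store an opaque object plus its metadata under a key, and fetch it back with a single call. Below is a minimal in-memory stand-in for that access pattern, purely for illustration — real object stores such as Amazon S3 expose the same put/get shape over HTTP, and the class, keys and metadata here are invented for the example:

```python
class ObjectStore:
    """Toy in-memory illustration of the object-storage access pattern."""

    def __init__(self):
        self._objects = {}

    def put_object(self, key, body, metadata=None):
        # The object body is opaque bytes; metadata travels alongside it,
        # which is what makes complex composites (image + descriptors,
        # scan + patient context) easy to archive and retrieve as a unit.
        self._objects[key] = {"body": body, "metadata": metadata or {}}

    def get_object(self, key):
        return self._objects[key]

store = ObjectStore()
store.put_object("scans/image-001.dcm", b"\x00\x01\x02",
                 metadata={"modality": "MRI", "patient_id": "anon-42"})
obj = store.get_object("scans/image-001.dcm")
print(obj["metadata"]["modality"])  # prints "MRI"
```

Because the whole interface is two requests, any workload that treats an entity as a single object — a backup file, a medical image, a video — maps onto it directly, which is why Johnson expects its use to keep expanding.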
Joshua Eichorn, CTO of Pagely, a provider of flexible managed hosting services for WordPress, said its solution has been to build a data lake. “In general,” he said, “data lakes seem to be the solution for managing a lot of data, but that can be challenging for an enterprise where they might have data in both data centers and the cloud, or in multiple cloud regions. Our answer at Pagely is a data lake, all the data in the cloud, and all the CPU they would need. Since AWS gives us elastic storage and CPU it's not really an issue, we just spin up CPU by the data when we need it.”
Related Article: 7 Big Problems With the Internet of Things
Data Security Problems
While there are many advantages to managing data in this way, there are also problems. With the rise of next-gen technologies, companies have access to more data than ever before and more opportunities to act on that data. “In order to manage this massive influx of information, organizations are extending their partner network to new third-party vendors. And while this is helping to alleviate the data volume challenge, it’s also presenting new security obstacles,” said Kevin Alexandra, principal technical consultant at Avecto, a Bomgar company. Avecto provides endpoint privilege management software.
For organizations to stay ahead of third-party threats, companies must start by assessing their own security strategy. They should enact a multi-layered defense strategy that covers their entire enterprise — all endpoints, all mobile devices, all applications and all data. Following this assessment, companies should evaluate the technology, compliance procedures and security standards that their partner network has in place.