Ah, the '80s. Valley girls haunted like, totally awesome malls, Star Wars struck back, and distributed processing was the latest in cutting-edge technology.

Going distributed is a vivid memory for those of us who lived through that period of technology disruption. Once again in vogue, distributed computing now goes by the clever name "fog computing."

Here's why I think today's information technologists had best beware: you will either learn to embrace fog computing or risk being enveloped by it.

Fog Computing

Fog computing is closely associated with two of today's hottest trends — the cloud and the Internet of Things (IoT). Credit goes to Cisco's Ginny Nichols for coining the term, a play on the fact that fog is cloud that sits close to the ground. It is a distributed infrastructure in which intelligent devices at the network edge, as well as remote data centers in the cloud, handle application services and data.

Now that the number of things connected to the Internet surpasses the human population, fog computing is gaining popularity as a means to deal with the immense amount of data that the billions of distributed IoT sensors can generate. Fog is also a way to handle the burden of exponential growth in machine-to-machine communications, as the data that the things of the IoT exchange increases. By handling services at the edge (in the fog), data can in many cases be processed more efficiently and applied more effectively.
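To make the idea concrete, here is a minimal sketch (my own illustration, not from any vendor) of the kind of processing a fog node might do: summarize a window of raw sensor readings locally and forward only a compact summary, plus any anomalous values, to the cloud.

```python
from statistics import mean

def fog_aggregate(readings, threshold=100.0):
    """Summarize raw sensor readings at the edge, forwarding only
    a compact summary and any out-of-range values to the cloud."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "anomalies": anomalies,  # only anomalous values travel upstream
    }

# A window of temperature samples stays at the edge; the cloud
# receives one small summary instead of every raw reading.
samples = [98.5, 99.1, 97.8, 104.2, 98.9]
payload = fog_aggregate(samples, threshold=100.0)
print(payload["count"], payload["anomalies"])
```

The point is the shape of the design, not the arithmetic: bandwidth and latency are saved because most raw data never leaves the edge.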

Can You See Me Now?

The dictionary defines fog as a “thick cloud of tiny water droplets suspended in the atmosphere at or near the earth's surface that reduces visibility.” Ironically, proponents tout fog computing as a potential boon to improve transparency for rapid decision making.

Organizations are already applying analytics tools and using the cloud and distributed computing for business intelligence. They can now build edge systems so that real-time data analytics becomes a reality on a truly massive scale. Lecturer and author Ahmed Banafa said, “With the increase in data and cloud services utilization, Fog Computing will play a key role in helping reduce latency and improving the user experience. This, ultimately, will mean better data access and improved corporate analytics capabilities.”

The vertical industry examples are compelling. Integrating analytic tools with edge computing can help industries get in front of buying trends or infrastructure demands, from power-grid usage to road traffic.

“Fog computing enables new Internet of Everything-based applications and services, such as serving customers better in a retail store, enabling commuters to find parking quickly and efficiently in crowded urban environments, and improving remote diagnostics and maintenance of industrial equipment,” said Arun Saksena, director and lead data scientist, Cisco Consulting Services.

Mobile healthcare applications could benefit, as well as sensor-rich environments where small time lapses have major impacts. With fog computing, companies could have end-to-end visibility across sensors in an energy pipeline virtual network, monitor and act on data quickly for a jet engine in action, or take data from a large number of jet engines to assess risk.

Is there a downside to the fog? The fog in the 75th Hunger Games is ground vapor that turns poisonous. Once released by the evil centralized Capitol authority, it grew, became acrid and destroyed its victims’ muscles. OK, we are not in Panem, and the cloud is not evil, nor is our fog deadly. But make no mistake — while fog computing does promise exciting benefits, it also poses serious requirements that businesses need to address to reap those benefits. Fortunately, useful technologies already exist that can be applied, along with some new research advances that can help.

Up Close and Personal

Different countries have different rules about how data needs to be managed, including restrictions on whether the data can leave the country and how to handle personal data. Data sovereignty is the concept that information stored in digital form is subject to the laws of the country in which it is located. This is not a new concept: I dealt with this requirement when introducing distributed databases at GE during the '80s. In similar fashion, today’s cloud providers are already grappling with architectures that can address sovereignty and privacy issues. 

Fog computing introduces another layer of complexity.

Fog computing offers ways to keep local data local and close to the source, but the data still needs to be integrated and synchronized. Here we can look to standardization efforts like Cisco’s Intercloud approach extended to the fog. This is one way to interconnect disconnected nodes so workloads can be migrated across different public and private clouds. For multinational corporations concerned with data sovereignty, it helps to navigate between providers based in different regions around the globe. While network management tools and techniques will be critical to fog orchestration, content and records management domain expertise will make a big contribution here as well. For example, just as with on-premises and cloud data, we can use profiles to classify data for the fog so that policy can automate where it resides.
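As a hypothetical sketch of that profile idea (the profile names and policy table are my own illustration, not a standard), each record carries a residency profile, and a simple policy lookup, rather than ad hoc code, decides where it may live:

```python
# Hypothetical residency policy: map a data-classification profile
# to where that class of data is allowed to reside.
RESIDENCY_POLICY = {
    "personal":  "fog-local",      # personal data stays in-country, at the edge
    "telemetry": "regional-cloud", # machine telemetry may leave the edge
    "public":    "any-cloud",      # unrestricted data can go anywhere
}

def residence_for(record):
    """Return the allowed residence for a record, defaulting to the
    strictest class when the profile is missing or unrecognized."""
    profile = record.get("profile", "personal")
    return RESIDENCY_POLICY.get(profile, "fog-local")

record = {"id": 42, "profile": "telemetry", "payload": "vibration=0.3g"}
print(residence_for(record))
```

Defaulting unknown data to the strictest class is the conservative choice for sovereignty compliance.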

Learning Opportunities

Fog nodes generate and collect highly sensitive information. Privacy-preserving solutions will need to build on existing approaches as well as creative new ones. Experts have proposed techniques drawn from a number of scenarios, including the cloud, the smart grid, wireless networks and online social networks. Researchers are also developing searchable encryption schemes that allow a user to securely search over encrypted data by keyword without decrypting it, thus maintaining data privacy. To preserve location privacy, researchers are looking at identity obfuscation approaches — enabling the fog node to become aware that a fog client is nearby without being able to identify the fog client.

All information architectures need to address data sovereignty and privacy issues, and these will receive more attention as we increasingly pursue the advantages of distributing data and services with fog computing.

Come to the Edge

Fog computing can help reduce latency and improve the user experience. As Ahmed Banafa so aptly puts it, “We are now truly distributing the data plane and pushing advanced services to the edge.”

As this happens, security needs to be extended through to the edge. Fog computing aside, layered security is already the preferred approach these days. Cybersecurity best practice emphasizes the need for holistic security systems and processes that include mitigating risk at the network’s edge. It is not an easy task. In a recent survey by EiQ Networks, 90 percent of CIOs and other top IT professionals said security breaches were their top concern but only 15 percent said the company was "well prepared" for a breach.

Networking can provide guidance for securing the fog. For example, employing SDN (Software-Defined Networking) for fog computing can ease security implementation and management, while increasing network scalability and reducing costs. 

Dealing with cloud, hybrid cloud and multi-cloud architectures can also inform how we address fog computing issues. Securing the fog will require many of the capabilities used today to secure the cloud, and Kristin Knapp agrees that looking to cloud providers for guidance makes perfect sense:

“Despite business leaders' concerns, data security is typically more robust in the cloud than it is on-premises. This is because the majority of cloud providers build security — often using a multi-layered approach — into their infrastructures from the ground up.”

Back in Time

Perhaps we all need to board Mister Peabody’s “Wayback Machine” and travel back in time to better appreciate the fog. Much like the distributed technologies of the past, fog computing:

  • Promises increased speed for better decision making, but requires integration with analytics tools
  • Enables interesting data sovereignty approaches, yet needs interconnection and synchronization schemes
  • Offers effective data colocation with the user to optimize delivery if it can be properly secured to the edge

Fog computing inherits much from the distributed processing technologies that debuted in the 1980s, but today’s fog computing is also fresh and unique, building on a current technology stack that includes the cloud and the IoT. Fog computing also signals a new and growing interdependence of network and information architectures. And while some of the issues can be addressed using existing schemes, there are also new challenges due to the distinct characteristics of fog computing, such as heterogeneous sensors and massive-scale, geo-distributed nodes.

These are all issues that information technology leaders — from CIOs and CTOs to CDOs and CISOs — will need to be aware (if not beware) of as they incorporate fog into their enterprise ecosystems. If businesses can master fog computing, it promises a wide range of benefits. And that is “like, totally awesome.”

Title image by Eric Kilby (Creative Commons Attribution-Share Alike 2.0 Generic License)
