Scottish author Iain Banks once said, "Ever since the Industrial Revolution, science fiction has been the most important genre there is."
As we stand on the brink of the Fourth Industrial Revolution, that remark seems extraordinarily prescient. I was in graduate school in 1996 when scientists at IBM built the world's smallest abacus, using individual molecules as its beads. It was a huge milestone in nanoscale engineering. At the time, I could not have imagined that nanotechnology, IoT, robotics, machine learning and AI would one day combine into something we call the Fourth Industrial Revolution.
Yet here we are, 25 years later. Things that once seemed like science fiction are as common as mosquitoes (and sometimes as annoying!). Just think of the phone in your pocket, with millions of times the computing power NASA had at its disposal in 1969.
But you don’t need me to tell you about the possibilities presented by the Fourth Industrial Revolution; you already know them. Nor do you need me to tell you that organizations that survive long-term without undergoing digital transformation will be few and far between.
At the same time we’re trying to manage digital transformation within our organizations (and to make sure our competitors don’t get there first), it’s imperative that we also look outward and think about how the resulting changes will impact our culture and society — and, perhaps more importantly, who gets to decide. As stated in a report from the World Economic Forum, "The human story over the next half century will turn largely on how well societies succeed in collectively defining their priorities, engaging essential questions about values and ethics, and aligning technological development accordingly."
Where are we now?
The Fourth Industrial Revolution: The Philosophical Perspective
We’re currently in an amorphous limbo, where technology is miles ahead of the regulations that will eventually be created to control it. We haven’t even identified the values and principles that will form the foundation for future regulations. So the question all of us must answer is both simple and profound: Who do we want to be when we grow up ... when this revolution is over and we’re living in a digital, tech-driven world?
The World Economic Forum suggests two potential futures:
- One where new technologies are tools developed by humans to help us achieve more and lead to a brighter future for all of humanity.
- One where new technologies are assumed to be inevitable, and further innovation is just the “nature of the beast,” so to speak, with little in the way of human guidance or intention.
This dichotomy is playing out today, with a constant stream of technological advances reported against a backdrop of questions about how they will impact society in terms of things like privacy, security and even what it means to be human, as the lines between our bodies and the world around us blur.
Big questions, right? The sheer scale of the decisions we're making today is overwhelming. So let's take a break from philosophy and head back into the boardroom, where we can look at this huge, civilization-altering phenomenon from an actionable perspective.
The Fourth Industrial Revolution: The Practical Perspective
As business leaders, we face two primary questions:
- Do we want to rush forward as quickly as we can, seeing how far we can go before somebody puts the brakes on (and hoping that future lawmakers will be reluctant to impose regulations that force businesses to spend a lot of time and money to come into compliance)?
- Or do we want to think about regulatory and societal issues from the ground up, planting the seeds for a regulatory environment that will serve the needs of all?
I think the organizations that manage to do both will come out on top, but it will take a lot more than just out-guessing future regulators.
The key is ethics by design. Rather than rushing digital projects and services to market just to be first, let's all take some time to think about the bigger picture. If we consider the purpose of most regulations (ensuring security and privacy, anticipating opportunities for misuse, protecting the world's most vulnerable, and so on), we can seize early opportunities while also laying the foundation future regulators will build on, helping guarantee that we come through this Fourth Industrial Revolution with our ethics and humanity intact. And we can make a profit while doing it.
A Framework for Digital Transformation
The gap between what technology can already do and when regulation will catch up presents an unprecedented opportunity. We can reduce costs, increase efficiencies, increase access to healthcare, protect our food and water supply, reduce energy consumption and, if we're committed, make the world a better place, all without anyone telling us we can't.
That may sound strange coming from me, since my business is built on digital policies. The truth is that I think regulation is absolutely necessary; as Lord Acton observed, "Power tends to corrupt, and absolute power corrupts absolutely." But I think it's best if those of us working in the digital world every day are the ones who make the rules, because we're the ones who understand both the potential and the dangers of this brave new world.
If we base all of the innovation coming our way on a foundation of sound digital policies — policies designed to be both profitable and ethical — then there’s a good chance that regulators will follow our lead, basing future laws and regulations on policies that are proven to work.
The Challenge for Business Leaders
It really comes down to an admonition most of us have heard since we were kids: "Just because you can do it doesn't mean you should." That's a test we should all be applying right now, followed closely by the question, "If we do it, how do we do it right?"
I wish I could whisk the world’s business leaders away on a retreat where we could hammer all of this out, but I can’t. Instead, I’ll give you some of the important topics you and your leadership teams should discuss.
Security
It shouldn’t surprise anyone that security is the most important issue we face. Nearly every day, we hear about everything from data breaches at major companies to hijacked pacemakers.
Part of the problem is that developers are used to concentrating on functionality first and adding security later. That’s not going to work in a world where everything is connected, and it’s expected that there will be 24 billion connected devices by 2030. Since every single connection to the internet is a point of vulnerability, that means there will be 24 billion or more opportunities for hackers and other criminals to wreak havoc far beyond stealing personal data. What about the navigation systems that operate airliners? Or missile guidance systems?
Without a doubt, security is the number one challenge we have to overcome to realize the future we all envision, and security-by-design is the only way to do it. Security can’t be an afterthought.
Privacy
Most of us have already worked through the problem of how we’ll handle customers’ private data from a marketing perspective. But there are other concerns, too. Think about the following scenarios:
- Ingestible devices: Ingestible devices help monitor things like a patient’s compliance with prescription medication. But what else could they monitor? Could an ingestible device designed to detect adherence to a medication schedule also detect the consumption of alcohol or illegal drugs? If so, what are your company’s responsibilities regarding that information? Are there limits to personal privacy? Should you alert law enforcement to illegal activities, for example? And should users be notified that this could happen?
- Wearables: Wearables are so commonplace now that I doubt many of us have thought through the possible implications, but we definitely should. Take accuracy, for example. What is the minimum acceptable level of accuracy when monitoring things like a runner’s heart rate? What level of inaccuracy puts the wearer at risk?
And, accuracy aside, what about regular use? Does the company have an obligation to alert a user whose heart rate gets too high? Should the user be able to override such notifications? And if a user turns off a warning and then suffers a heart attack or stroke, can the user's insurance company refuse to cover it?
- Connected vehicles: Today's connected vehicles already monitor things like speed and location. Newer vehicles also monitor things like driver alertness, sounding an alarm when the car drifts across a lane line, for example. Should the vehicle contact the police if its driver repeatedly drifts into other lanes?
And then there’s the question of location data. Should that information be stored, or should it be real-time only? If it is stored, how long should it be stored? And who should have access to it? (Imagine someone being able to catch a cheating spouse in the act, so to speak, or law enforcement being able to place a vehicle at the scene of a crime.) What notification should your company provide vehicle owners? And can vehicle owners choose to turn off some of these features?
Those are just a few examples of the privacy dilemmas businesses will have to wrestle with. There’s a new connected mattress, for example, that not only changes positions when you change positions, it also monitors and records your breathing and heart rates. Am I the only one who thinks that could be just a bit too invasive?
Interoperability
As the number of IoT devices grows, so will the number of providers. Should companies focus on exclusivity, trying to corner a particular market (like smart homes) by making devices that only work with their own products? Or should they focus on customer experience, making it easy for users to mix-and-match products from various manufacturers?
If your company decides to go for the mix-and-match approach, what are your obligations as far as vetting the security protocols of your competitors' devices? What should you advise consumers to do to increase security when connecting third-party devices? And what mitigation protocols will you put in place if a competitor whose products work with yours suffers a breach?
Bias
Artificial intelligence already makes a lot of big decisions: who gets hired, who gets approved for a loan, and so on. And our own biases, including ones we aren't aware of, have a way of sneaking into our code. As AI takes on more of these high-stakes decisions, eliminating bias from the algorithms behind them will become an increasingly important challenge.
One suggested solution is to hire a diverse team of coders who can spot biases in each other’s work. Other companies hire outside auditors to look for unintended biases. What will your organization do to address this issue?
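Whatever staffing model you choose, you can also measure outcomes directly. Here's a minimal sketch of one common check, comparing a model's approval rates across demographic groups. The numbers below are made up purely for illustration, and a large gap doesn't prove bias on its own, but it does tell you where to dig deeper.

```python
# A minimal bias audit: compare a model's approval rates across groups.
# The decisions below are fabricated for illustration; in practice you'd
# use your model's real outputs and applicants' actual group labels.

approvals = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 1 = approved, 0 = denied
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}

def selection_rate(decisions):
    """Fraction of applicants the model approved."""
    return sum(decisions) / len(decisions)

rates = {group: selection_rate(d) for group, d in approvals.items()}
gap = max(rates.values()) - min(rates.values())

for group, rate in rates.items():
    print(f"{group}: approval rate {rate:.0%}")
print(f"Demographic-parity gap: {gap:.0%}")  # a large gap warrants review
```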
Misuse
You know what they say about good intentions, right? There's always the possibility that someone will use your product for purposes you didn't foresee. In fact, if your motivation for developing a particular technology is to make the world a better place, your good intentions could blind you to the evil intentions of others. What will your company do to identify possible misuses of your technology?
Resources and Processes
One of the reasons security is an issue is that many of the companies rushing connected devices to market don't have digital security as a core competency. As an example, many cooking devices, such as grill thermometers, are available in both analog and connected versions. You have to wonder whether those companies went out and recruited IT workers with expertise in digital security or just treated connectivity as an add-on. (Hint: If connectivity was an add-on, security is probably weak.) How will your company approach product development for connected devices? Will you depend on your existing IT staff to secure them? Or will you hire security experts first, so that your connected devices are secure by design?
And what about your work processes? If you make self-driving cars, for instance, how important is it for your coders to understand mechanics? Do they need to understand how brakes work to design AI that knows when to apply brakes? Will your coders live in IT, or will they be embedded in product-centered teams?
Make Ethical by Design a Reality
The decisions we’re making now will change the world and the way we live. Creating the framework that will shape that world is too big a responsibility for any single party. It involves all of us: businesses, governments, consumers, ethicists, etc.
But we have to start somewhere. In my opinion, that means adopting an ethical-by-design approach, formalizing and documenting policies that govern how we’ll operate during and after this Fourth Industrial Revolution we’re entering. Those policies could include things like:
- How your various functional areas will work together to make sure ethical-by-design becomes a reality and not just a motto.
- The checkpoints and hurdles each new digital product must pass before it goes to market.
- How you’ll educate customers on basic security and privacy protocols, such as changing the default login information for any connected device (or whether you’ll design your product so that consumers must create new login credentials before they can proceed with setup; a sketch of that approach follows this list).
- What you’ll do when you discover new vulnerabilities (such as whether you’ll alert competitors, notify regulatory authorities, etc.).
- How you’ll handle crisis management, such as when you discover that a medical device consumers depend on to stay alive can be hacked.
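On that credentials point, here is a minimal sketch of what "no default passwords" could look like in device-setup logic. It's illustrative Python, not any vendor's actual firmware API; the names (FACTORY_PASSWORD, complete_setup and so on) are hypothetical. The idea is simply that the device refuses to finish setup until the factory default has been replaced with something stronger.

```python
# Hypothetical device-setup flow: setup cannot complete until the
# shipped factory credential is replaced with an acceptable one.

FACTORY_PASSWORD = "admin"   # the default the device ships with
MIN_LENGTH = 12              # assumed policy minimum

def is_acceptable(new_password: str) -> bool:
    """Reject the factory default, trivial variants and short passwords."""
    return (
        new_password != FACTORY_PASSWORD
        and len(new_password) >= MIN_LENGTH
        and "admin" not in new_password.lower()
    )

def complete_setup(new_password: str) -> bool:
    """The device stays in setup mode until credentials are rotated."""
    if not is_acceptable(new_password):
        print("Setup blocked: replace the default password with a stronger one.")
        return False
    # In real firmware you'd persist a salted hash here, never plaintext.
    print("Credentials rotated; device is now active.")
    return True

complete_setup("admin")                  # blocked: still the factory default
complete_setup("correct-horse-battery")  # accepted under this sketch's policy
```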
The magnitude of the changes we're going through will inevitably bring regulation. But the better we regulate ourselves, the better our products will be, the more consumers will trust us, and the less work we'll have to do later to come into compliance. We have an opportunity that won't come around again in our lifetimes. I'm ready to grab it. Who's going to join me?