
A senior user experience (UX) designer was working on a project to create an in-app checkout for a retail store. Customers would be able to scan items with the app, pay through the app and leave with their merchandise. The engineering lead was sure he had a better checkout concept, so he overruled the UX designer. His idea was confusing and seriously inconsistent with how the rest of the app worked: he wanted to remove the shopping cart icon from this experience’s screens alone and give people a completely different way to check out.

Why? He liked how a non-competing retailer did their checkout.

The UX designer warned him that if people didn't understand how to do this new and different checkout, they might accidentally shoplift. Customers would think they paid when they hadn’t, and they would try to leave the store with merchandise. Security people might jump them, which would be a bad experience for everybody involved.

The engineering lead and the product manager told the UX designer she was being dramatic. The engineering lead’s version was built and launched in a store to pilot and monitor the idea. Word came back within days that, as predicted, many people had trouble navigating the checkout. Customers thought they had paid when they had not actually completed the checkout flow, and they started leaving the store with merchandise. As this was a pilot, security knew not to jump them, but it was clear the process was broken.

The engineering lead and product manager agreed their idea had failed. But instead of allowing the UX designer to go with her original design, they told her to “make their idea work.” Despite the unfortunate outcome of this project, it’s an example of how an experienced UX professional was able to correctly predict an undesirable outcome so that it could be planned and designed for.

UX Professionals Predict All Possible Outcomes

Great customer experience (CX) and UX professionals should have an uncanny ability to imagine more than the “happy path,” where everything goes right for the customer and the system. They must be able to envision error states, mistakes customers could make, dead ends they might hit, and anything that might summon what I call The Four Horsemen of Bad UX: frustration, confusion, disappointment and distraction.

Beyond that, they must also be able to imagine and predict unethical or dangerous uses of any feature or aspect of the product or service. InVisionApp VP Mike Robinson wrote a great article explaining a privacy flaw in an email client called Superhuman. This “feature” tracked every time you opened an email and your approximate location at the time, then transmitted that information back to the sender. Among other problems, Robinson points out how this could be used for harm if, for example, a stalking ex-boyfriend sends an email to his victim. He then knows how often she opened the email, which lets him tell himself she’s thinking about him and perhaps still cares for him, and he knows roughly where she was when she opened it.

Superhuman’s response was that it hadn’t imagined how a bad actor would use this information. That is no longer acceptable. Between qualified and experienced CX and UX pros, who are well-versed in predicting human behavior and possible outcomes, and a company’s risk and security teams, there is no longer any excuse for not seeing these things coming. We must partner and go deeper to imagine all of the ways our products or services could be used maliciously, to farm data, or for any purpose other than the “normal” expected ones.

Privacy, security, safety and ethics are no longer optional. Not every visitor or customer is here for saintly reasons. We must be able to predict such malicious intents and proactively architect systems that stop these threats before they become realities.

Related Article: User Experience Design Shouldn't Happen in Isolation

Hold a 'Bad Actor' Brainstorming Session

Get the cross-functional team together. Invite collaborators from product, engineering, security, risk management and all relevant areas. Run a brainstorming session where everybody plays out roles including:

  • Creepy ex- or current romantic partners, friends or family.
  • Foreign hackers.
  • Black market data sellers.
  • Activists who are against your company.
  • Disgruntled employees who seek to cause damage.
  • People who want to take your site or system offline.
  • People hoping to redirect people from your site to theirs.
  • And other trolls.

What are these people looking to do in your system? What data do they want, and how might they get it? Use an ideation workshop approach to come up with as many easy, hard or wild ways someone could “misuse” what your company creates for customers. Then, of course, create the time and budget for every single one of these to be addressed before the product is released to the public.
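To keep the session’s output actionable, it helps to capture each scenario in a consistent shape that product, engineering and security can all work from. Here is a minimal sketch in TypeScript; the record shape and every field name are illustrative assumptions, not a standard your tooling requires.

```typescript
// A minimal sketch of one record per brainstormed scenario.
// The shape and all field names are illustrative, not a standard.
interface ThreatScenario {
  actor: string;                     // who is misusing the product
  goal: string;                      // what they are trying to accomplish
  dataTargeted: string[];            // what data or capability they are after
  attackPath: string;                // how they might get it
  effort: "easy" | "hard" | "wild";  // how far-fetched the idea is
  mitigation: string;                // the design or engineering change that blocks it
  owner: string;                     // who must address it before release
}

const example: ThreatScenario = {
  actor: "Ex-partner with an old, still-logged-in device",
  goal: "Monitor the victim's activity and whereabouts",
  dataTargeted: ["order history", "saved addresses"],
  attackPath: "A stale session on an old phone survives a password change",
  effort: "easy",
  mitigation: "Revoke all sessions on password change and list active devices",
  owner: "Identity team",
};

console.log(`${example.actor} -> ${example.mitigation}`);
```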

Related Article: Equifax Breach Drags Open Source Security Into Spotlight Once More

Go Beyond Existing and Typical Solutions

“I don’t understand how my ex keeps posting to my Facebook account and changing my password,” the 40-something woman said. She kept trying to change her password, but her ex seemed to have access to everything, and she had no idea how. She eventually realized he had her previous cellphone, which years later was still logged into her Facebook account, Gmail and other apps. The password changes hadn’t logged him out. Even adding two-factor authentication (2FA) tied to her new phone number hadn’t kept him out. He was now locking her out of her own accounts, exposing a weakness in how multiple apps and systems handle 2FA and existing sessions.

Don’t assume you’ve secured everything by slapping a more complex password requirement or two-factor authentication onto something. Think through more of the possible outcomes. If an ex is in possession of their former partner's old phone and 2FA has since been added, what needs to happen to the sessions still live on that old phone? How do you ensure the person logging in or changing the password right now is not the bad actor?
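One concrete mitigation for this exact story is to revoke every existing session whenever a password changes or 2FA is enrolled, so a forgotten old phone is forced to re-authenticate. Here is a minimal sketch in TypeScript, using an in-memory stand-in for whatever session or token store a real system would use; all names are illustrative.

```typescript
// A minimal sketch: kill every live session for a user when their password changes.
type SessionId = string;

interface Session {
  userId: string;
  device: string;
  createdAt: Date;
}

// Stand-in for the real session or token store.
const sessions = new Map<SessionId, Session>();

// Revoke every live session for a user, old phones included.
function revokeAllSessions(userId: string): number {
  let revoked = 0;
  for (const [id, session] of sessions) {
    if (session.userId === userId) {
      sessions.delete(id);
      revoked += 1;
    }
  }
  return revoked;
}

// On a password change (or 2FA enrollment), persist the new credential and
// then force every existing device to sign in again.
function onPasswordChanged(userId: string): void {
  const revoked = revokeAllSessions(userId);
  console.log(`Password changed for ${userId}; ${revoked} session(s) revoked.`);
  // A real system would also rotate refresh tokens and notify the user's
  // verified contact points that the change happened.
}

// Example: the forgotten old phone and the new phone both have to log in
// again, and pass 2FA, after the change.
sessions.set("session-old-phone", { userId: "u1", device: "old phone", createdAt: new Date("2015-06-01") });
sessions.set("session-new-phone", { userId: "u1", device: "new phone", createdAt: new Date() });
onPasswordChanged("u1");
```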

Related Article: Design Thinking Isn't User Experience

Where We Can’t Predict, We Must Still React

Many years ago, eBay found that some sellers were creating listings that overlaid invisible clickable areas on top of the buying and bidding buttons. These invisible buttons sent would-be buyers off eBay to purchase the item elsewhere.

eBay responded by putting all user-generated content (UGC), such as the item description, inside an iframe. Any script in the seller’s content was then confined to the iframe and could no longer reach or overlay the call-to-action buttons on the surrounding page. This was brutal for many sellers and third-party eBay software services, but it did solve the problem.
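For teams facing the same class of problem today, the general technique can be sketched with a sandboxed iframe. The snippet below is a TypeScript illustration of that idea, not eBay’s actual implementation; the function and element names are hypothetical.

```typescript
// A minimal sketch: render untrusted, seller-supplied HTML inside a sandboxed
// iframe so nothing in it can run scripts or overlay the parent page's buttons.
function renderUntrustedDescription(container: HTMLElement, sellerHtml: string): void {
  const frame = document.createElement("iframe");

  // An empty sandbox attribute applies every restriction: no scripts, no forms,
  // no top-level navigation, no plugins. Add back only what is truly needed.
  frame.setAttribute("sandbox", "");
  frame.setAttribute("srcdoc", sellerHtml);

  // A fixed box keeps the seller's content from spilling over the surrounding
  // call-to-action buttons.
  frame.style.width = "100%";
  frame.style.height = "600px";
  frame.style.border = "0";

  container.appendChild(frame);
}

// Usage: the description still renders, but the malicious script never runs.
const listing = document.getElementById("item-description");
if (listing) {
  renderUntrustedDescription(
    listing,
    "<p>Gently used widget, ships fast.</p><script>/* would try to overlay the buy button */</script>"
  );
}
```

Starting from an empty sandbox and adding permissions back one at a time keeps the default posture safe even as the feature evolves.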

Create an internal cross-functional team from CX/UX, product, engineering, security, risk and other relevant areas that can predict, architect for and react to concerns and threats. Don’t wait for a threat or problem to become real. Be proactive and design so these issues never become a problem for your customers.