Skeleton Key

What was your first car? What was your first pet’s name? What street did you grow up on? What’s your favorite food? Your mother’s maiden name? Your father’s middle name?

If you tag your parents on Facebook, it’s probably fairly easy to learn your mother’s maiden name or your parents’ middle names. You might talk about your favorite food on Instagram. Perhaps you’ve tweeted memories of your first car.

These are common security questions a system might ask you to determine if you are really you. However, “you” could also be someone who knows you well, such as a family member, romantic partner or unpleasant former romantic partner. If other people can answer these questions about you, are they still good choices as security questions? If people might be able to Google these facts, do they still make strong security questions?

When Security Questions Get Up-Close and Personal 

Here is a screenshot from 2014. A London concert hall wanted you to fill out this form before you could access its “free” Wi-Fi. Notice that all questions are mandatory.

Anybody looking at this can see it goes beyond typical marketing questions. The form asks for enough information to plausibly log into your online banking. This is concert venue Wi-Fi: nobody needs your mother’s maiden name before letting you reset a forgotten password.

The marketing team may have wanted to collect information like your exact address, birthday and gender. The security team thought it might be a good idea to have a “memorable question.” The marketing team also gave you no way to opt out of your information being shared with third parties so that they can market to you too. This was pre-GDPR, and probably wouldn’t be acceptable now (though it shouldn’t have been acceptable then).

The answer for most people here is either to skip the venue Wi-Fi or to fill the form with fake information in the hope the system doesn’t email them to verify the account. Either way, marketing adds fewer real people to the database it can sell to third parties. Is anybody on that team measuring how their goals aren’t being achieved? And what are those goals?

Related Article: How to Handle the Crisis of Consumer Trust

Security Questions Used Inconsistently

PayPal doesn’t ask me security questions to verify that I’m me. It texts me. Another bank account I have requires me to tell it my favorite vegetable every time I log in. My Italian bank account won’t let me in without an authorization code I have to get from an RSA device or its app. The system that runs my company payroll won’t let me in without emailing me a code.

Many sites now use authorization codes they can text or email. A code proves only that someone with access to your phone or email is trying to log in. If a company’s database is hacked, attackers might get our passwords and possibly even the answers to our security questions. But for systems using authorization codes, attackers couldn’t get in without also breaking into our phone or email.
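For readers curious what such a scheme looks like under the hood, here is a minimal sketch of issuing and verifying a texted or emailed one-time code. This is an illustrative assumption, not any particular site’s implementation: the function names, the in-memory store and the five-minute expiry are all hypothetical choices.

```python
import secrets
import time
import hmac

CODE_TTL_SECONDS = 300  # hypothetical policy: codes expire after five minutes

# Hypothetical in-memory store: user id -> (code, expiry timestamp).
# A real system would use a database or cache, and would rate-limit attempts.
_pending_codes = {}

def issue_code(user_id: str) -> str:
    """Generate a random six-digit code to text or email to the user."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending_codes[user_id] = (code, time.time() + CODE_TTL_SECONDS)
    return code

def verify_code(user_id: str, submitted: str) -> bool:
    """Check the submitted code; each code is single-use and time-limited."""
    entry = _pending_codes.pop(user_id, None)  # pop makes the code single-use
    if entry is None:
        return False
    code, expires_at = entry
    if time.time() > expires_at:
        return False
    # Constant-time comparison avoids leaking digits via response timing.
    return hmac.compare_digest(code, submitted)
```

Note the two properties that make codes stronger than security questions: the secret is random rather than a guessable fact about you, and it stops working after one use or a few minutes.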

And with so many databases being hacked, many people give false answers to security questions like their birthday, the street they grew up on, a pet’s name or a parent’s maiden name. Then they have trouble answering later because they don’t remember which false answer they gave. If they can’t log in and can’t reset the password, they have to tax the support team, who must verify them and reset the password. It’s a waste of time for both the customer and the business.

As we focus more on privacy and security, let’s please shift away from the “security question.” Security questions are outdated, cause problems and often involve information we wouldn’t want exposed on the black market. If your system is so high-security that my password isn’t enough to verify me, consider texting or emailing codes rather than asking people for deeper personal information or childhood memories.

Related Article: What 2020 Holds for UX and Customer Experience