Ever stepped on a scale and hated the number you saw?
A side-by-side comparison was floating around Facebook recently. It showed the same woman taking a selfie. On the left she looked thin but flabby. On the right she was built and beautiful.
The caption? “I weigh 165 lbs. in both of these pictures.”
Readability statistics are like your scale. When you step on it, there’s one measurement: how much your body weighs. But your scale can’t tell you how you look in the mirror or how your clothes fit.
Readability scores are limited in the same way. Let’s look at why:
5 Problems with Readability Formulas
There are five issues with readability formulas:
1. Algorithms don’t measure who is reading
Readability formulas measure the length of your sentences and the length of your words. Each formula is a mathematical equation that weights those counts to produce a single readability score for a document.
But they don’t measure the reader’s comprehension. For example, a lawyer can easily (we hope) read and understand a contract, even though its readability scores are through the roof, because he or she is trained to do so. One-size-fits-all doesn’t work when scoring a document.
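To see how little these formulas actually look at, here is a minimal sketch of the Flesch Reading Ease and Flesch-Kincaid grade-level equations in Python. The syllable counter is a crude vowel-group heuristic of my own (real tools use pronunciation dictionaries), so treat the scores as approximate:

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels, then drop a trailing silent 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_scores(text):
    # Naive sentence and word splitting; enough to show what the formulas see.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

Notice that the inputs are nothing but counts: any two sentences with the same word and syllable tallies get identical scores, no matter how different their meaning or difficulty.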
2. The formulas are based on old, flawed research
There are more than 200 versions of these algorithms [PDF], and some are based on research that’s more than 60 years old. More recent studies have shown that grade school children today aren’t reading at the same level as they were then. So we’re basing this “grade level” mandate on out-of-touch formulas.
3. What is plain language?
Readability formulas are frustrating when trying to write plain language documents. While shorter sentences do increase comprehension, there is only one way to write “supraventricular tachycardia” or “federal regulatory statutes.”
Either the readers already understand these words, or they will understand them if the writer does a good job. So running a computer algorithm through content like this becomes frustrating. The algorithm doesn’t consider the quality of the content or if it answers the readers’ questions.
4. Is it entertaining?
High-quality content can be entertaining. But it may have long words or long sentences. Speechwriters are trained to write one long sentence, followed by one short sentence and so on. If they are trying to measure their audience’s understanding by using one of these formulas, they will get skewed results.
5. Language is constantly adapting
“LMK if ur k.ily.” If you read that 10 years ago, you would assume I was illiterate. But now, we all know (especially if we have teenagers) that sentence means, “Let me know if you’re okay. I love you.”
Readability formulas were not designed to capture the changing norms of language. The sentence, “May the force be with you, young Padawan,” scores 100 on the Flesch Reading Ease Scale and 0.8 on the Flesch-Kincaid grade level. But someone in 7th grade might understand that sentence, and someone in their 70s might be clueless.
Reading levels, comprehension and understanding aren’t always correlated.
Should Your Customers Read or Understand Your Content?
The difference between comprehension and readability is at the heart of my argument to do away with these formulas. As web writing specialist Ginny Redish writes, “A good score does not mean you have a usable or useful document. … To the Flesch Reading Ease Scale, these two sentences have identical scores: ‘I wave my hand,’ and ‘I waive my rights.’ They are not at the same difficulty level for most readers.”
Redish lists what makes a document usable — and these factors are not included in readability formulas:
- Is the content right for the audience?
- Is the document organized so people can find what they want? (Information architecture, information design, mobile layout?)
- Are there visuals that help guide users?
- Is the text divided into chunks and bullets? Does it have headings?
- Are the sentences grammatical?
- Are the words ones that users know?
- Are the voice and tone appropriate?
Ensuring that our customers understand our content is far more important than aiming for a certain readability score.
Knowledge Is Power
I know a lot of you are cringing right now as you read this. You are asking, “What are we supposed to use? We can’t run usability or comprehension studies for every piece of content we create. We have a mandate to write at an 8th-grade reading level. What should we do now?”
And I hear you. Go ahead and grieve. I did after I learned this. But it hammered home a question we are still trying so hard to get people to ask: Are we writing what our audiences really want?
Eradicating Readability Formulas: What to Do Now
Educate your executives and your team. Ask, “Do we really understand the difference between readability and comprehension? Do we really need to hit a certain grade level if we make our content appropriate for our audiences? What are things we can measure about our current and future audiences?”
You can also check out the National Assessment of Adult Literacy for more information about literacy, and these guidelines from the Centers for Medicare and Medicaid Services.
Now you may ask, “Ahava, do you use readability scores on your writing?”
Well, this document scored 61.7 on Flesch Reading Ease and 7.8 on the Flesch-Kincaid grade level. But I’ll only be able to measure comprehension if, as a discipline, we decide to stop using fallible formulas to measure our audience’s understanding.
Let me know in the comments or on Twitter — I’m @ahaval.