Why traditional web surveys are broken
Have you ever been asked by a website to rate, on a scale of 1 to 5, its "interactivity"? And did that sort of question puzzle you? You wouldn't be alone. This sort of question has two fundamental flaws that make any results coming from it not just useless, but misleading.
Let's look at the less serious flaw first. To the vast majority of people, a website's "interactivity" is irrelevant. People don't want to interact with websites; they want to complete tasks on them. "Interactivity" is classic organization-centric language, and it is a meaningless thing to measure or survey for. Only slightly more useful is asking people about the visual design of the website. Most people simply don't care that much, and they almost always care far less than the organization does.
In study after study we have found a focus on the visual design by web teams, marketers and communicators that borders on obsession, while customers essentially couldn't care less. Much more important to customers are the quality of the search results and the simplicity and clarity of the menus and links.
But the original question about "interactivity" has an even deeper flaw. Asking people to choose from a scale of 1 to 5 or 1 to 10 leads to faulty data because, as Stuart Sutherland puts it in his book Irrationality, "almost everyone is influenced by the two end points of a scale, tending to pick a number that is near the middle". He wrote this in 1992, so this problem has long been known. "Presented with two numbers at either end of a scale, people tend to opt for a number that is near the middle, regardless of whether it is correct."
If you ask people a question they don't understand or don't really care about, and tell them to give a score on a scale, they are even more likely to choose a number near the middle. They want to answer the question as quickly as possible, and choosing a score near the middle is essentially giving no opinion.
"These phenomena are known as 'anchoring effects'," Sutherland wrote. "In picking a number, people tend to pick one close to, or anchored on, any number with which they are initially presented or in the case of a scale one close to the midpoint. The cause of the anchoring effect is probably people's reluctance to depart from a hypothesis. If they start with a number, even one determined by the random spin of a wheel, they adopt that number as a working hypothesis and although they do move away from it, usually in the right direction, they are reluctant to move too far. Similarly, when picking a point on a scale or selecting a number from a series of consecutive numbers, they are reluctant to depart too far from either point and hence plump for a point near the middle. They unconsciously assume that the end points are likely to be approximately equidistant from the true value. Allowing one's judgment to be influenced by the initial anchoring point causes inconsistency: different judgments are given with different anchoring points although the anchoring point has no bearing on the correct judgment."