For the past decade or so, I’ve been researching the psychology of lists. By lists, I’m not referring to “wish lists” or “to-do” lists, but third-party lists created and curated by experts and media outlets, like Poets&Quants’ rankings of the Top 100 MBA Programs and Top 50 Undergraduate Professors. Quite literally, the lists go on and on!
Third-party lists — which can be defined as “a number of connected items or names written or printed consecutively” — are wildly popular. It’s not an exaggeration to say that you can find a list, usually online, on pretty much any topic. For instance, the website toptenz.net offers lists ranging from “Bizarre Drinks People Actually Consume on Purpose” to “10 Apocalypse Survival Plans of the Ultra-Wealthy.”
The popularity of lists relates to their value in condensing information into manageable, bite-sized chunks. People have a tough time tracking and recalling information beyond a few bullet points — psychologist George Miller famously wrote a paper in the 1950s on the limits of human capacity for processing information titled “The magical number seven, plus or minus two.” Given all the stimuli that simultaneously compete for our attention in the modern world, it’s possible that our information processing ability has further deteriorated since Miller’s time.
WHY ALL RANK CHANGES ARE NOT THE SAME
Lists offer an elegant solution to the problem of information overload — it’s no wonder that consumers are grateful when third parties take a large amount of information (e.g., all the universities in the world) and whittle it down to a manageable, vetted, relevant, and often-ranked subset (e.g., the Top 10). In addition to attracting attention, inclusion on a respectable third-party list serves as a reputational badge of honor that has been shown to affect consumer decisions, including applications to a law school, visits to a hospital, and even product sales.
My own research has delved deeper into exactly how consumers interact with and respond to ranked lists in which each item on a list has been ordered, typically from best to worst. At a high level, here are three key insights that have emerged from this work.
1. All rank changes are NOT the same. Sure, every business school on Poets&Quants’ ranking of the Top 100 MBA Programs wants to see its rank increase from year to year. But consumers view a jump of a single spot very differently depending on whether the change is from 12 to 11, from 11 to 10, or from 10 to 9. My work with Dr. Robert Schindler at Rutgers shows that when consumers process large lists of information (e.g., Top 100 MBA Programs), they naturally create mental thresholds or breakpoints at common “round” numbers, like 10, 25, and 50. This means that a change in rank that entails crossing into the Top 10, the Top 25, or the Top 50 has a disproportionate and surprisingly powerful effect on evaluations and choices. Using a dataset from the Graduate Management Admission Council, we found that potential applicants’ interest in a particular B-school increased much more when the school’s U.S. News & World Report rank had crossed into a new “round number” tier (e.g., moving from 26 to 24) versus not, even if the latter change was actually greater (e.g., moving from 24 to 21).
2. Unusual ranked list claims are usually detrimental. If your company earns the number 9 position on a coveted list of the 50 best places to work, it might be tempting to highlight the specific number “9” on your website and marketing collateral. But my research with Dr. Aaron Brough at Utah State and Dr. Kent Grayson at Northwestern Kellogg finds that companies are better off using more common numbers, which we call “comfort tiers.” In other words, it’s wiser to say that you are in the “top 10” rather than in the “top 9” or even “number 9.” Why? We find that consumers who encounter common ranked list claims view the accolade positively and don’t think much more about it — being in the top 10 is a good thing, and so evaluations increase accordingly. But learning that a company is in the top 9 leads consumers to think more deeply and make assumptions (“well, it probably wasn’t in the top 5”), thereby introducing counterfactuals that may sometimes be negative (“9th is good, but it’s not as good as being 1st or 2nd”). Altogether, this extra layer of information processing prompted by atypical claims tends to yield less positive judgments. So even though being in the top 9 is numerically superior to being in the top 10, the latter claim seems to have a more positive effect on consumers.
3. Numerical ranks are preferable for small lists, but percentage ranks are preferable for large lists. The way in which the same information is framed can make a big difference to consumers. For example, in a set of 50 products, a certain product might be one of the top 10 highest-rated OR one of the top 20%. Even though this information is identical, my research with Dr. Julio Sevilla at the University of Georgia and Dr. Rajesh Bagchi at Virginia Tech finds that consumers do not treat these equivalent claims the same way. Our work finds that when the size of the list is relatively small (i.e., under 100), a numerical claim (i.e., top 10) tends to boost product evaluations more than a percentage claim (i.e., top 20%). On the other hand, when the size of the list is relatively large (i.e., over 100), a percentage claim tends to be better received. For example, in a set of 200 products, being one of the top 20 highest-rated products is less attractive than being one of the top 10% highest-rated, even though these claims are also equivalent. This effect occurs because consumers are prone to neglect the format in which information is presented, relying too much on the rank’s numerical value. After all, in the domain of lists, 10 is usually better than 20. In a field experiment that we conducted at a famous cheese shop in Seattle, my co-authors and I found that an award-winning cheese generated more sales during weeks when an adjacent sign noted that it was among the ~20% of cheeses to win an award from the American Cheese Society than during weeks when the sign said it was among the ~400 cheeses to win (out of over 2,000 entrants), even though the information was equivalent. Format matters!
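For readers who think in code, the three insights above can be sketched as simple decision rules. The snippet below is purely illustrative — the tier values, thresholds, and function names are my own assumptions, not anything taken from the published studies:

```python
# Toy encoding of the three ranked-list insights. The specific tier
# values and the 100-item cutoff are illustrative assumptions.

COMFORT_TIERS = [3, 5, 10, 25, 50, 100]  # common "round number" breakpoints


def crosses_tier(old_rank: int, new_rank: int) -> bool:
    """Insight 1: True if an improvement from old_rank to new_rank
    enters a new round-number tier (e.g., 26 -> 24 crosses Top 25)."""
    return any(new_rank <= t < old_rank for t in COMFORT_TIERS)


def comfort_claim(rank: int) -> str:
    """Insight 2: round an atypical rank up to the nearest comfort
    tier, since 'top 10' is received better than 'top 9'."""
    tier = next((t for t in COMFORT_TIERS if rank <= t), rank)
    return f"top {tier}"


def framed_claim(rank: int, list_size: int) -> str:
    """Insight 3: prefer a numerical claim for small lists (under 100
    items) and a percentage claim for large ones."""
    if list_size < 100:
        return f"top {rank} of {list_size}"
    pct = round(100 * rank / list_size)
    return f"top {pct}%"
```

Under these rules, moving from 26 to 24 counts as a tier crossing while moving from 24 to 21 does not, a rank of 9 is framed as “top 10,” and a top-20 product in a 200-item list is framed as “top 10%” — matching the three findings described above.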
Taken together, my list research suggests that list makers and marketers alike need to be judicious when they create and disseminate lists and/or ranked list claims, given the potential for information distortion. Even more importantly, the millions of individuals who peruse and use third-party lists each day need to be exceedingly cautious and recognize the potential for cognitive biases to insidiously influence their evaluations and choices.
Mathew S. Isaac, Ph.D., is the Thomas F. Gleed Chair of Business Administration and chairperson of the Marketing Department at the Albers School of Business and Economics at Seattle University.