Methodology: The making of our lists
With a name like Livability, we take our Best Places to Live list very seriously and we want you to understand the decisions that went into it and the data that drove it.
We rank nearly 2,300 cities with populations between 20,000 and 350,000 according to the latest projections from our data partner, Esri. For each city, we use our proprietary algorithm to calculate a LivScore. The LivScore is based on more than 40 data points grouped into eight categories:
Social and Civic Capital
Transportation and Infrastructure
You can get the full scoop on what each of those categories entails right here.
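The LivScore algorithm itself is proprietary, but the basic idea of combining normalized category scores with weights can be sketched in a few lines. Everything below — the category names shown, the scores and the weights — is purely illustrative, not Livability's actual data or formula:

```python
# Hypothetical sketch of a category-weighted composite score.
# The real LivScore uses 40+ data points across eight categories;
# these two categories and all numbers here are made up for illustration.

def liv_score(category_scores, weights):
    """Combine per-category scores (0-100) into one weighted score.

    category_scores: dict mapping category name -> normalized score
    weights: dict mapping category name -> weight (weights sum to 1.0)
    """
    return sum(weights[c] * category_scores[c] for c in category_scores)

example_scores = {
    "Social and Civic Capital": 82.0,
    "Transportation and Infrastructure": 74.0,
}
example_weights = {
    "Social and Civic Capital": 0.6,
    "Transportation and Infrastructure": 0.4,
}
print(liv_score(example_scores, example_weights))  # 78.8
```

The key design point is that the data (scores) and the values (weights) stay separate, which is what lets the weights be re-derived each year from survey results without touching the underlying measurements.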
So how exactly do we craft the ranking?
Our fifth annual ranking builds on a process initially developed with one of the world’s leading urban theorists, Richard Florida. We worked with his team at the University of Toronto’s Martin Prosperity Institute and later with The Initiative for Creativity and Innovation in Cities at New York University’s School of Professional Studies, a program he directed along with Steven Pedigo.
But let’s face it: this is a team of wonky people who are surrounded by this research day in and day out. We also wanted input from a cross-section of Americans about what they value in the communities they live in, and in the communities they would consider moving to someday.
So we asked.
Livability again partnered with Ipsos, a leading global market research firm, to survey more than 2,000 American adults about which factors are most important in creating a best place. We use this survey to determine how much weight to give each data point. If the survey shows that health care and affordable housing are more important than transportation and amenities, we weight each section accordingly.
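One simple way to turn survey responses into weights is to normalize the average importance ratings so they sum to 1. This is a hypothetical sketch, not Ipsos's or Livability's actual procedure, and the category names and ratings are invented:

```python
# Illustrative: convert average survey importance ratings (say, on a
# 1-10 scale) into normalized weights that sum to 1.0. All numbers
# and category names here are hypothetical.

survey_importance = {
    "health_care": 8.5,
    "housing": 9.0,
    "transportation": 6.0,
    "amenities": 6.5,
}

total = sum(survey_importance.values())
weights = {category: rating / total
           for category, rating in survey_importance.items()}

print(weights)  # housing gets 9.0 / 30.0 = 0.30, and so on
```

Under this scheme, a year in which respondents rate housing higher automatically shifts weight toward housing and away from the other categories, which matches the behavior the article describes.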
As we’re fielding the survey, we are also working with our data partners to update and augment our data. Each year the methodology shifts slightly as new data sources become available or research shows that we should emphasize or balance out one of our eight categories.
Once we have our theoretical framework in place, we layer in hundreds of thousands of data points. We run a number of simulations and tests to make sure that no one variable or combination of variables creates undue influence on the final results, and we use a series of statistical measures, checks and benchmarks to keep the list fair and balanced.
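One common flavor of such a check is a leave-one-out sensitivity test: drop each variable in turn, re-rank, and see how far any city moves. The sketch below is a generic illustration of that idea, not Livability's actual statistical battery; the cities and scores are invented:

```python
# Illustrative leave-one-out sensitivity check (not the actual
# proprietary tests): drop each variable and measure the largest
# rank shift any city experiences. City names and scores are made up.

def rank_cities(scores):
    """Return city names ordered best-first by total score."""
    return sorted(scores, key=scores.get, reverse=True)

def max_rank_shift(city_data, variables):
    baseline = rank_cities({c: sum(v.values()) for c, v in city_data.items()})
    worst = 0
    for var in variables:
        trimmed = {c: sum(s for k, s in v.items() if k != var)
                   for c, v in city_data.items()}
        reranked = rank_cities(trimmed)
        for city in baseline:
            shift = abs(baseline.index(city) - reranked.index(city))
            worst = max(worst, shift)
    return worst

cities = {
    "Cityville": {"parks": 9, "transit": 7, "housing": 8},
    "Townsburg": {"parks": 6, "transit": 9, "housing": 7},
    "Hamletton": {"parks": 8, "transit": 6, "housing": 9},
}
print(max_rank_shift(cities, ["parks", "transit", "housing"]))
```

If removing a single variable reshuffles the list dramatically, that variable is exerting undue influence and its contribution needs to be rebalanced.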
Where do we get our data?
We pull in data from the best public and private sources available. Our trusted public-sector providers include the U.S. Census Bureau, the U.S. Department of Housing and Urban Development, the Environmental Protection Agency, the Centers for Medicare and Medicaid Services, the U.S. Department of Agriculture, the Federal Aviation Administration, the United States Golf Association, the Federal Communications Commission, the National Oceanic and Atmospheric Administration and the U.S. Department of Education. We also source data from leading private-sector providers including Esri, GreatSchools and ATTOM, and we find great data created by nonprofits such as the Institute of Museum and Library Services and the County Health Rankings and Roadmaps produced by the Robert Wood Johnson Foundation.
What aspects of the methodology changed this year?
We swapped out our broadband measure and refined our college and university sources. We dropped a natural amenities measure and our arts data, and added a new measure of natural disaster risk, which seemed especially appropriate as we developed the list while watching coverage of Hurricanes Harvey, Irma and Maria, the West Coast fires and the November tornadoes rolling through the Midwest. We changed our airport data to include airports within each city’s region, and we added new climate measures. The result was a more balanced list than we’ve had in the past – nearly four in five states have cities represented – so we removed a restriction we put in place last year of allowing no more than two cities per county.
Why do cities’ rankings change year to year?
Every year we tweak our methodology and update our sources, which can affect a city’s ranking. The LivScores of the Top 100 cities are so close that the addition of a single new data point can make a big difference (for example, this year we included a data point measuring natural disaster risk).
A change in ranking may also reflect a shift in values. If one year’s survey found that people were more focused on affordability than cultural amenities, that could cause a great city with a higher cost of living to drop in the rankings, through no fault of its own.
Remember: we analyze nearly 2,300 cities to come up with the Top 100, so all 100 are really A-level cities. The difference between #1 and #100 is a bit like asking: did you get the 98% A or the 96% A? They’re all great places. Best places, in fact!