The Catch-22 of Placemaking: Benchmarking
How do you measure success of these programs?
I like to think of placemaking initiatives as a new toolbox for community leaders and planners. Previously, economic development and the creation of amenities focused on massive projects: highways, casinos, sports complexes, and parks like Millennium Park in Chicago or Klyde Warren in Dallas. These projects can truly transform a place (the park examples) or become massive taxpayer-funded boondoggles. One way or another, these were the tools in the box, and as the proverb says, when you have a hammer every problem becomes a nail. Bang bang bang.
These smaller, more tactical projects offer a new set of tools. That in itself is a great benefit. These tools are available to a much wider range of citizens and organizations. Your neighborhood arts council can’t very well install a casino, but it can host a game-night benefit or start a little library program. These projects have the potential to make a small impact for a small group and occasionally (as in the “pop-up” examples) for a short time.
But how do you determine which of these projects work and which don’t?
Two drawbacks stand in the way of determining best practices. The first is a lack of metrics. How do you define success? Is it the number of people who listen to Bronx music in the Boogie Down Booth? Which matters more for the Little Libraries: getting high-end designers engaged in building them, or actually having people “check out” the books? What is the preferred ratio of clothes washers to artists in the Laundromat project? How do you benchmark these programs and their gains against anything else?
With a massive park project you can look at rents surrounding the park before and after to create a benchmark for economic returns. Or look at attendance (people can be counted) or crime rates to see if safety has improved. You can see how many people are engaged by concerts or other public programs. With smaller-scale placemaking these metrics are generally lacking.
The second drawback is a bit of a catch-22. It’s great that these projects exist mostly outside of traditional government. But that strength is offset, in my opinion, by a lack of oversight that would help create and distribute best practices. What if all of the pop-ups happen in March? What becomes of the “place” the rest of the year? What if one group fails and then another group tries the same thing, because there is no mechanism to pass lessons from one group to another, nor a central place to document findings?
All of this makes it difficult to answer the question of which projects are making the most of their opportunities. The C4 mapping tool is a great example of this problem. The tool itself is fine, but it’s only worthwhile if it actually gets used in a meaningful way. Too often these sites get built and then ignored. I had a conversation about this with the former chief technology officer for the City of Chicago. He bemoaned that the open data portal was full of great information, but people (and journalists) kept asking him to tell them what it meant. He clearly felt it was his job to provide the data, not interpret it. The C4 tool could easily suffer the same fate. It presents the data with no analysis and little context, such as demographic data. That’s where the humans need to come in; otherwise this is a clear lost opportunity. If the success metric is that the site got built correctly and the data is interesting, then great. If the metric is that great public policy gets crafted based on explorations of that data, that may be harder to track but is a more meaningful result in the end.
What do you think? How can these projects demonstrate their value to keep the placemaking movement moving forward?