Tuesday, March 2, 2010

Are Metrics Blinding our Perception?

These days, I eat, sleep, and breathe metrics.  As part of the "metrics movement" in philanthropy and social change, I'm fascinated by trying to measure this stuff, demonstrate effect, and provide evidence for program investors and stakeholders.  (Speaking of movements, there was even talk last week about a metrics index of social movements.  Very cool, but good lord, I can't take another index!  Have fun, guys.)

So while I am a professed metrics maniac, I can't help but remark that the tail is wagging the dog.  The expectation that metrics will save the world, as some seem to believe, is just unrealistic.  Behind every metric are assumptions, trade-offs, dirty data, and variables that compromise it.  Frankly, this makes quantitative data as guilty of malleability and elasticity as the much-maligned qualitative data.  No numbers without stories, no stories without numbers.

Yet far too many people "ooh and aah" just to have a number attached - whether or not that number has any meaning whatsoever.  Read the footnotes, people! - the ones that say what this number does, and more importantly does not, tell us.  Or learn a bit about the methodology used to obtain that number, so you really understand it.  (Oops, I'm sorry your eyes are glazing over.  That is one of the problems.)

Sadly, some evaluation methodologies and quantitative research have corrupted expectations.  Certainly, business has been spoiled by the simplicity of its metrics - profit and loss, return on investment.  Yet let the fates of Enron and Lehman Brothers be a reminder that those numbers can be stranger than fiction, too!  And dollars are just plain more countable than the messy, complicated lives of actual human beings.

Other areas like clinical trials and operations research have helped set an unrealistic and unattainable bar for social program metrics.  Even they struggle with messy trade-offs.  Recent clinical trials for HIV prevention methods - such as an HIV vaccine or a microbicide - grapple with the question of whether 60% effectiveness is good enough for market.  These trials carry exorbitant costs and demand inordinate investments of time.  The same bar is not appropriate for small grants, general philanthropy, and your typical social program.

Metrics are as much an art as a mathematical formula.  So are social programs, for sure.  At some level, it comes down to trusting your gut.  I might be a metrics maniac, but I have much greater faith in my intuition.

2 comments:

  1. I've done a *lot* of work involving the quantification of qualitative data, and I've seen a lot of people who do nothing but turn subjective judgments into "objective" data by taking a Likert scale and pinning it onto a piece of textual data. The experience has made me very cynical about quantitative data.

    But then, my undergraduate stats class did that too, showing that a lot of social science is simply finding a variable that *might* be an indicator of trend X or feature Y and then focusing on the numbers for that variable. So often, people forget to keep in mind just how tenuous or precarious that "might" was. The more abstract the thing one is trying to study, the more dubious it becomes to assign a particular variable to represent it. Is "the number of people looking for work" really a good index of unemployment? It used to be, but with more and more people not even looking, it's become dubious. And is "the average price of a select group of publicly traded stocks" *really* the best measure of something as amorphous as "the health of the national economy"? Not really, but a lot of people don't seem to question using the Dow Jones or Standard & Poor's index as a barometer of whether we're booming or busting.

