The Elasticity of the Vote for Dummies

Hausmann and Rigobon’s argument about the non-randomness of the CNE cold-audit sample is based on something they’ve called the “elasticity of the vote.” The concept is borrowed from economics, where price elasticity is a bedrock conceptual tool. This is my academic turf, so I’ll have a stab at explaining it to non-economists.

According to a standard economics textbook, price elasticity of demand is “a measure of the responsiveness of the quantity demanded of a good to changes in price” (Stiglitz and Boadway, 1994).

Very good, but what the hell does it mean?

A caffeinated illustration

Say a cup of coffee in your town costs $1. At that price, you buy 100 cups of coffee a year. Now say coffee prices go down to 75 cents a cup. At that price, how many cups of coffee will you buy per year? Logically, more than 100. But how many more?

Well, maybe you’re very sensitive to price changes, and you’ll buy 200 cups of coffee a year: your demand is relatively elastic – since a 25% drop in price led to a 100% rise in consumption. Or maybe you’re not so concerned about price changes and you’ll buy just 105 cups a year – then your demand is relatively inelastic.
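
To put a number on it: elasticity is just the percentage change in quantity divided by the percentage change in price. Here’s a minimal Python sketch of that calculation using the made-up coffee numbers above (the function name is mine, purely for illustration):

```python
def price_elasticity(p0, p1, q0, q1):
    """Percentage change in quantity demanded divided by
    the percentage change in price."""
    pct_price_change = (p1 - p0) / p0
    pct_quantity_change = (q1 - q0) / q0
    return pct_quantity_change / pct_price_change

# Coffee drops from $1.00 to $0.75 a cup -- a 25% price cut.
print(price_elasticity(1.00, 0.75, 100, 200))  # -4.0: relatively elastic
print(price_elasticity(1.00, 0.75, 100, 105))  # -0.2: relatively inelastic
```

The sign is negative because price and quantity move in opposite directions; it’s the magnitude that separates elastic (bigger than 1) from inelastic (smaller than 1).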

Of course, economists don’t care how much coffee you personally drink; they care how much coffee a whole bunch of people drink. So the question is not how your demand for coffee will react to a change in price, but how overall demand for coffee will react to a change in price.

Say you do a study and you find that when the cost of coffee goes down 25 cents, people will buy, on average, 20 more cups of coffee per year than before. The ratio of coffees-people-used-to-drink to coffees-people-drink-now is 100:120.

Now, say you take a random sample from that population. How many more cups of coffee per year would you expect the sample to buy?

Well, if the sample is truly random, you would expect it to react in the same way as the overall population – the sample ratio should be 100:120 as well.

But say your sample doesn’t behave that way. Say the people in your sample are now buying 130, or 140 cups of coffee a year. What can you conclude then?

Well, at the very least you can say the coffee drinkers in your sample behave differently from the overall population of coffee drinkers. The people in your sample act as though they are more sensitive to coffee price changes than the overall population – in economist-talk, their demand is relatively more elastic than the demand of the rest of the population.

So something screwy is going on…at the very least, you have reason to suspect that your sample was not really randomly selected from the total population of coffee drinkers.

The key thing to remember is that statistical methods allow you to estimate quite precisely the probability that a truly random sample would, purely by chance, hand you a result as screwy-seeming as the one you got.
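
To see how, here’s a minimal simulation sketch. Every number in it – the population size, the spread of responses, the sample size – is invented for illustration: we build a population whose average response is 20 extra cups (the 100:120 ratio), then count how often a truly random sample comes out looking as screwy as 100:140.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up population: 100,000 coffee drinkers whose "extra cups per year"
# after the price cut averages 20 (i.e. the 100:120 population ratio).
population = rng.normal(loc=20, scale=30, size=100_000)

def chance_of_screwy_sample(sample_size, n_trials=50_000):
    """Fraction of truly random samples whose average response is at
    least as extreme as the 'screwy' 100:140 result (+40 cups)."""
    samples = rng.choice(population, size=(n_trials, sample_size))
    return np.mean(samples.mean(axis=1) >= 40)

# With 50 people per sample, how often does pure chance hand you 100:140?
print(chance_of_screwy_sample(50))  # vanishingly small
```

With these invented numbers the answer comes out vanishingly small – exactly the situation in which you’d start doubting that the sample was random in the first place.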

Applying this idea to the cold-audit data

Now, Hausmann and Rigobon’s study of the cold audit relies on an adaptation of the same train of thought. They have three data sets: one on the November 2003 signature-gathering drive, one with the CNE’s total referendum results, and one with the referendum results in the sample selected for the cold audit.

All they’re doing is comparing two ratios: the ratio of 2003 signatures to Si-votes in the overall CNE results, and the ratio of 2003 signatures to Si-votes in the audited sample. If the audit sample truly was random, the two ratios should be reasonably close to one another.

They’re not.

Say you determine that, for the overall population of voters, every 100 signatures obtained in November 2003 yielded 120 Si-votes. The ratio for the entire population is 100:120. Now cold-audit day comes around, and CNE chooses a supposedly random sample of voting centers. But when you compare the signatures-to-Si-votes ratio in the audited voting centers, you realize that it’s different from the signatures-to-Si-votes ratio in the overall population: say 100:140 instead of 100:120.

[Note: these are not the actual numbers – H&R’s analysis was rather more sophisticated, including a correction term to account for the growth of the voting population and a log scale transformation – so the numbers I’m using are just meant to illustrate the point.]
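
To make the comparison concrete, here’s a sketch of the core calculation in Python. Everything in it – the number of centers, the per-center counts, the 200-center audit size – is invented; it illustrates the idea, not H&R’s actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented center-level data: 2003 signatures and 2004 Si-votes for
# 4,000 voting centers, built so the overall ratio is roughly 100:120.
signatures = rng.integers(200, 2_000, size=4_000)
si_votes = signatures * rng.normal(1.2, 0.15, size=4_000)

def si_per_100_signatures(centers):
    return 100 * si_votes[centers].sum() / signatures[centers].sum()

overall = si_per_100_signatures(np.arange(len(signatures)))

# Suppose the audited sample came back at 100:140. How often does a
# genuinely random draw of 200 centers land that far from the overall ratio?
gaps = np.array([
    si_per_100_signatures(rng.choice(len(signatures), 200, replace=False))
    for _ in range(10_000)
])
p_value = np.mean(np.abs(gaps - overall) >= 140 - overall)
print(f"overall ratio 100:{overall:.0f}, chance of drawing 100:140: {p_value:.4f}")
```

With data like this, random 200-center samples simply don’t wander twenty points away from the overall ratio, so the estimated probability comes out at essentially zero.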

What can you conclude from that difference in the ratios?

The exact same thing you could conclude in the case of the coffee drinkers…and for the exact same reason!

The relationship between willingness-to-sign-in-2003 and willingness-to-vote-Si-in-2004 should be the same for a random sample as for the overall population. Yet, for some not-yet-explained reason, each Nov. 2003 signature yields more Si-votes in the audited centers than in the non-audited ones.
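
As the note above says, H&R’s actual test works on a log scale with a correction for electorate growth. Without pretending to reproduce their model, here’s a loose, self-contained sketch of that flavor of test – fit log(Si-votes) against log(signatures) across centers, then ask whether the audited centers sit systematically above the fitted line. Again, the data and the audit flag are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented center-level data again, plus a made-up flag marking which
# 200 centers fell into the cold-audit sample.
signatures = rng.integers(200, 2_000, size=4_000)
si_votes = signatures * rng.normal(1.2, 0.15, size=4_000)
audited = np.zeros(len(signatures), dtype=bool)
audited[rng.choice(len(signatures), 200, replace=False)] = True

# Fit log(Si-votes) ~ log(signatures) on the non-audited centers...
slope, intercept = np.polyfit(np.log(signatures[~audited]),
                              np.log(si_votes[~audited]), deg=1)

# ...then check whether the audited centers sit above the fitted line,
# i.e. yield more Si-votes per signature than the rest.
resid = np.log(si_votes[audited]) - (intercept +
                                     slope * np.log(signatures[audited]))
print(f"mean audited residual: {resid.mean():+.3f}")
# Near zero here, because our made-up audit flag really is random; a clearly
# positive value on real data is the kind of gap H&R's test flags.
```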

The question, then, is what are the chances that a gap that big between the two ratios is the product of pure chance? Luckily, as we’ve seen, statistical methods allow you to estimate this probability quite precisely. In this case, Hausmann and Rigobon peg the chances of these results happening by coincidence at less than 1%.

That means that if you pick a sample at random, more than 99 times out of 100 you’ll end up with a ratio less odd than the one CNE happened to get.

Ergo, it’s more than 99% likely that the audit sample wasn’t really random.

Does that make sense?
