Victory Lap

So, how did the Legislative Election Forecasting Tool perform? In a word, well. In two words, extremely well.

Long story short, the Tool estimated we would get 66 seats. When all is said and done, we’re going to get 64 or 65 – possibly even 66.

First, the input. PSUV got 5,399,390 list votes yesterday, while MUD got 5,312,293 list votes. So in the head-to-head contest between PSUV and MUD (that is, ignoring PPT and the microparties), PSUV got 50.41% of the votes, while MUD got 49.59%. That 49.59% is the number we need to feed into the Forecasting Tool.
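For the record, the head-to-head share works out like this (a minimal sketch using the list-vote totals quoted above; the variable names are just for illustration):

```python
# Head-to-head share calculation: PSUV vs. MUD list votes only,
# ignoring PPT and the microparties, as described above.
psuv_votes = 5_399_390
mud_votes = 5_312_293

# Two-party total is the denominator for the head-to-head shares
total = psuv_votes + mud_votes

psuv_share = 100 * psuv_votes / total
mud_share = 100 * mud_votes / total

print(f"PSUV: {psuv_share:.2f}%")  # 50.41%
print(f"MUD:  {mud_share:.2f}%")   # 49.59% -- the input to the Tool
```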

It’s in the circuit-by-circuit predictions that the thing was most impressive. Going by the data CNE has released to this point, the Forecasting Tool got 125 predictions right and 6 predictions wrong.

One of those predictions didn’t make a difference to the seat distribution: the Forecasting Tool thought we’d get more list votes in Sucre than PSUV; actually, we came in just behind PSUV. Either way, each side gets one list deputy – no harm, no foul.

The tool got five circuits wrong. Here’s the key thing, though: out of those five circuits, the tool got two wrong in one direction, and the other three wrong in the opposite direction.

It estimated that, with last night’s national vote breakdown, MUD would lose in Anaco (Anzoátegui 2) and El Callao-Upata (Bolívar 3) – we actually won those two seats. By the same token, it thought we would win in Ejido (Merida 4), Cocorote (Yaracuy 2) and Cañada de Urdaneta (Zulia 2) – we lost all three of those. So the errors tended to cancel each other out.

In terms of margins, the Forecasting Tool was off by more than 5 percentage points in 28 predictions, and yielded estimates within 5 percentage points of the eventual result in the other 103 cases. It overestimated MUD’s share by more than 5 percentage points in 20 circuits, and overestimated PSUV’s share by more than 5 percentage points in 8 circuits.
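The tallies above all hang together, which is worth checking explicitly (a quick sanity-check sketch using only the counts quoted in this post):

```python
# Sanity check: the error counts quoted above should all be internally consistent.
right, wrong = 125, 6            # circuit-level winner calls
off_big, within = 28, 103        # margin errors: >5pp vs. within 5pp
over_mud, over_psuv = 20, 8      # direction of the >5pp misses

# Same 131 predictions, counted two different ways
assert right + wrong == off_big + within == 131

# The 28 big misses split 20 pro-MUD / 8 pro-PSUV
assert over_mud + over_psuv == off_big

print(right + wrong)  # 131
```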

There was only one circuit where the model both called the wrong winner and missed the margin by more than 5 percentage points: Cocorote (Yaracuy 2).

As expected, the Forecasting Tool was wrong but not biased.

That is, the places where it got it wrong in our favor were mostly balanced out by the places where it got it wrong the other way. This idea is the crux of the matter – and one that certain well-respected members of the commentariat never quite grasped. Overall, the Forecasting Tool estimated MUD would get 66 seats in this scenario, and we’re likely to get 65.

If you’ll excuse me, I have some gloating to do…