Before I begin, let me openly acknowledge my deep debt for this piece to the rest of the TGP blog lord pantheon, particularly to taco pal, Prof. Cohen, Peter Lyons, and WetLuzinski; obviously, any deficiency here is owing entirely to me, and not to them. Now, to steal a phrase from the current President, let me be clear: this is not a political post.
Well, it is, but it also isn't. I want to clearly affirm that this article is not about stumping for a particular candidate in this Tuesday's election for President. Frankly, even the fact that I strongly support Barack Obama (full disclosure, etc.) is somewhat provisional: I tend to locate myself somewhere rather to the left of the President in my general thinking, both economically and vis-à-vis international relations. That is, for all of the problems I have with Mitt Romney, I also have some problems with Barack Obama; I'll be voting one particular way, but I certainly don't have any interest here in convincing TGP readers to vote the same way. We all have a million reasons for voting the way we do - frankly, trying to convince you all to change your political affiliation from my blogging pulpit would be about as fruitful as trying to convince you all to become Rangers fans. It's just not something I'm interested in doing.
That said, all of you should be interested in one particular facet of this election, and that's because it speaks to many of the same politics that we've seen ascendant in the advanced stats-traditional stats wars (or WARs) of the last decade and change. This facet is predictive analysis of the Presidential election, particularly analysis based on predictive Electoral College models. The most (in)famous of these is the fivethirtyeight blog created, perhaps unsurprisingly, by Nate Silver, creator of the important and pathbreaking baseball projection system PECOTA for Baseball Prospectus.
If you're not familiar with PECOTA, it's essentially a weighted predictor of both player and team performance, generated from a number of data points like age-related decline, injury history, past performance, and other environmental factors. What PECOTA tells us, essentially, is what is most likely to occur in an upcoming season. So, for example, we can quibble about how many home runs Chase Utley is likely to hit this year, arguing optimism (25-30, easy!) or pessimism (five, maybe, if he can even play 30 games on that knee!), but PECOTA is only interested in the mathematical probability of these outcomes. So, is it possible that Utley hits 30-35 home runs in 2013? Yes, but just about as possible as his hitting only zero to five, which is to say "not very." PECOTA might quantify these in terms of percentiles, saying that, if we ran the season 10,000 times, Utley would hit between zero and five and between 30 and 35 home runs maybe 300 times apiece, making each of these results probable at a 3% clip. More probably - let's, arbitrarily, say 50% of the time - Utley would hit between 12 and 17 home runs. So, PECOTA would see Utley's upcoming season as something of a bell curve - the tail ends of the curve representing the worst and best possible seasons, and the giant bulge in the middle representing the most probable results. Just like the voting results in Ohio - low on both sides and high in the middle.
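To make the "run the season 10,000 times" idea concrete, here's a minimal Monte Carlo sketch in Python. This is a toy, not PECOTA's actual model: the mean, spread, and normal-curve shape are all invented for illustration.

```python
import random

# Toy Monte Carlo sketch (NOT PECOTA's actual model): simulate 10,000
# seasons of home-run totals for a hypothetical player whose outcomes we
# model, purely for illustration, as a normal curve centered near 15 HR.
random.seed(42)

def simulate_seasons(mean=14.5, sd=6.0, n=10_000):
    """Draw n season home-run totals, rounded to integers and clipped at zero."""
    return [max(0, round(random.gauss(mean, sd))) for _ in range(n)]

def pct_in_range(outcomes, lo, hi):
    """Percentage of simulated seasons with lo <= HR <= hi."""
    return 100 * sum(lo <= hr <= hi for hr in outcomes) / len(outcomes)

seasons = simulate_seasons()
print(f"0-5 HR:   {pct_in_range(seasons, 0, 5):.1f}%")    # left tail
print(f"12-17 HR: {pct_in_range(seasons, 12, 17):.1f}%")  # the bulge
print(f"30-35 HR: {pct_in_range(seasons, 30, 35):.1f}%")  # right tail
```

Run it and the bell curve falls out of the counts: the middle range swallows most of the simulated seasons, while the extreme ranges each show up only a few percent of the time.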
Which, of course, brings us to politics. Silver's model at fivethirtyeight operates on the same basic premise as PECOTA, which is to say that, if the race were run 10,000 times, we'd get a statistically probable winner, as well as, more importantly, a number of probabilities for possible results. Thus, we see on Silver's website that Obama has an 86.3% chance of winning the Electoral College at this moment (netting 307.2 electoral votes), and that Romney has a 13.7% chance of winning the Electoral College (netting 230.8 electoral votes). Even a casual perusal will reveal that this is not a clear "lock of the week" style prediction of the election's outcome: first, one cannot get .1 electoral votes; second, and more importantly, one does not win a presidential election by percentages. One wins it by the actual number of votes one gets from swing states, like Ohio, and the material fact of the outcome will be available to us on Wednesday, the 7th (barring shenanigans!). Silver, then, isn't providing us with an image of what will happen on the 6th, but rather, what is most likely to happen. We can look down the sidebar of fivethirtyeight and recognize that Silver has allowed for more outcomes than the highlighted graphs present: there are even analogues to the outliers above in the Utley example, as Obama wins in a landslide in the model 0.4% of the time and Romney wins in a landslide less than 0.1% of the time. As the 2007 Phillies' improbable, seemingly impossible, yet ultimately successful push to the postseason ought to show us, less than 0.1% is still a chance.
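The same machinery explains the oddity of "307.2 electoral votes": it's an average over thousands of simulated elections, not a prediction of a literal outcome. Here's a minimal sketch of that logic, with entirely invented states, win probabilities, and safe-vote totals:

```python
import random

# Hypothetical sketch of how a model like Silver's can report fractional
# electoral votes: simulate many elections from per-state win probabilities
# and average the results. All numbers below are invented for illustration.
random.seed(538)

# (electoral votes, probability candidate A carries the state) -- toy values
swing_states = [(18, 0.75), (29, 0.50), (13, 0.60), (10, 0.55),
                (20, 0.90), (9, 0.35), (55, 0.99), (38, 0.02)]
SAFE_EV_A = 180  # electoral votes assumed locked up for A (an assumption)

def simulate_election():
    """One simulated election: candidate A's electoral-vote total."""
    ev = SAFE_EV_A
    for votes, p_state in swing_states:
        if random.random() < p_state:
            ev += votes
    return ev

n = 10_000
results = [simulate_election() for _ in range(n)]
mean_ev = sum(results) / n
p_win = sum(ev >= 270 for ev in results) / n
print(f"Average electoral votes for A: {mean_ev:.1f}")
print(f"A wins the Electoral College in {100 * p_win:.1f}% of simulations")
```

The fractional electoral votes nobody can actually win are just the mean of this simulated distribution, and the headline percentage is simply the share of simulated races the candidate wins.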
So, what we have in fivethirtyeight is essentially the same as what we have in PECOTA - a series of predictions of likelihood taken before the fact. The effort, thus, is not to imagine a singular outcome, but to create a bell-curve model of standard deviations in order to determine, to put it crudely, what would be the right result to bet on. And when we put it that way, it's almost trivial: neither model presages the future, nor does it even predict the future with any promise of accuracy. As David Roher writes in a polemically-titled but ultimately insightful analysis at Deadspin, "Silver isn't more sure of himself than his detractors, but he's more rigorous about demonstrating his uncertainty." In other words, the point of the model isn't so much to pose a correct result, but rather to propose a correct methodology. Silver, to put it simply, isn't as much interested in who will win the presidency, so much as he is interested in how we can read the tea leaves about the presidency - namely, polls, but also turnout, economics, and other factors (Sandy, for instance) - more accurately. Silver, much like he did in PECOTA, wants to figure out how best to read the noise surrounding the race itself: he wants to form a more perfect methodology.
But you would not know this if you were to read Silver's detractors. You will forgive me if you think me partisan, but I feel we ought to operate under the Crossing Broad injunction for many of the websites attacking Silver, and as a result, I will not be linking them. This article from the National Review by Josh Jordan gives a good enough idea of what the general beef against Silver is, and is generally bloodless if wrong. Jordan essentially accuses liberals of wishcasting an Obama victory and accuses Silver of base partisanship in his predictions, alleging that Silver's open "rooting" for Obama colors his predictions and the nature of his model. Leaving aside the deeply dubious claim of Silver's "open rooting" -- you can judge how brash Silver is yourself; this is Jordan's smoking gun -- the claim seems pretty cut and dried: one can only believe this prediction if one is a hope-colored-glasses-wearing Obama supporter. Lest you think me overreaching in terms of rhetoric here, take a look at this Gawker article by (likely a different one than you're thinking) Mobutu Sese Seko. The money quote is here:
"Sounding like a woman or a gay man is effectively the same as being a dead man to the true-believer hard-right audience Chambers and Rush play to, so this is what works. This is how you shoot the messenger. You don't have to worry about Nate Silver, because he isn't even alive."
In other words, according to Sese Seko, the Silver critique boils down to character assassination, an attack on the person, not the methodology. And if you return to that National Review piece and check out the picture they chose to use, you'll realize this isn't too far off the mark - Nate Silver as egghead, as nerd, as effete, and, yes, as gay are attacks on his appeal to truth. But even giving these attacks credence outside of their ad hominem approach, I'd like to suggest that they're arguing about the wrong things. As Roher suggests, and as I'd like to expand upon, "Forecasts should be judged on their processes, not their results."
So what does this mean? Well, as it happens, we might be able to understand this argument better through baseball than through politics. Because, as many have pointed out, the attacks on Nate Silver, political predictor, look remarkably similar to attacks on Nate Silver, baseball predictor. Take, for instance, this Fire Joe Morgan classic, wherein Junior works to defend PECOTA against the Chicago White Sox's weirdly personal anger against its prediction. The money quote is likely this, in response to a claim about the "surreal world of computers":
"Got it? Anyone who has ever touched a computer is not a real baseball person. They are imaginary, and they hate baseball. And they (cue reality show confessional cam) don't give us enough respect!
(Warning: people who use computers may in fact be computers themselves.)"
The attacks on PECOTA - and, tangentially, PECOTA's followers, believers, acolytes, or, more accurately, supporters - essentially boil down to a question of epistemology, or the philosophical study of human knowledge. Computers cannot give us knowledge, people can - this is, at core, an epistemological claim, one that privileges Paul Konerko and Ozzie Guillen's claims to hard work and continued success over statistical models and predictions. If that last sentence sounds weird, it's because it absolutely is - as we've spent most of the article trying to comprehend the ways in which methodological prediction and material result are different, it seems immediately wrong to assume that we can privilege one over the other. The two are epistemologically distinct, not competitive.
And yet, this does not keep people from assuming that Silver is in competition with someone like David Brooks, who recently opined that "If there's one thing we know, it's that even experts with fancy computer models are terrible at predicting human behavior." Brooks, we might say, can be totally right without Silver being wrong. As Mark Coddington points out in his own work on journalistic epistemology, the specificity of Silver's claims - the rounding to tenths of an electoral vote - is not something that journalistic epistemology can produce:
"Journalists get access to privileged information from official sources, then evaluate, filter, and order it through the rather ineffable quality alternatively known as 'news judgment,' 'news sense,' or 'savvy.'"
The scare quotes here notwithstanding, it isn't as if Coddington's vision of how journalists can know is entirely superfluous to the predictive or the statistical: it is simply different. As Coddington points out, the distinction is between "specificity and certainty" - specificity being a kind of particular vision of what might or is likely to happen, while certainty is a vision of what will happen. Even a cursory examination of the kinds of discourse I've detailed above suggests who occupies which position, but let us make the distinction clearer: we can be specific about what Mike Trout is going to do next year by using sabermetrics; we can be certain that he will (or will not) be MVP through punditry.
Now, we should be careful: this distinction does not mean that we ought to throw out journalism and simply rely on predictive models; there is still a place for the kind of insider-based journalism that Coddington critiques. Dan Hodges notes that predictive modeling does not replace political reporting: Silver, he says, could not have cracked Watergate or released the "47 percent" video, and that's likely true. As Hodges says, "The pieces of the jigsaw that form any political campaign still need to be collected" before the predictive statistics will even make sense. Once again, we cannot decide between the specific and the certain, just as much as we cannot decide between the human aspect of the game and the statistical. The reason is simple: there's not a choice to be made, as they both define the field of inquiry.
Of course, the backlash against Silver suggests that there are those who want us to make a choice, just as the Heymans and Bissingers of the world want us to make a choice. As Hodges notes, the information of the campaign has recently become available to a wide public audience - "It's no longer gifted to me by some sleep-deprived, over-worked hack stuck in the cheap seats of a campaign plane, frantically filling copy at three in the morning." With the advent of PITCHf/x, deeper box scores, and greater mobility and access for amateur scouts, we can say the same thing about baseball. As Joe Posnanski has demonstrated, there will always be a place for the reporter who is both good at writing and good at thinking about the subjective side of the game (or the politician, as the case may be). But we can immediately understand why political writers might be threatened or might feel that they have less of a hold on how we understand the election: because they do.
Thus, when we critique a method or a model, be it in baseball or politics, we need to do so with an understanding of what we are critiquing. An ad hominem attack on Silver is regressive, obviously so, but the critique that he's "wrong" because he doesn't understand the race is just as regressive, if more embedded in a believable narrative. Nate Silver isn't "wrong," just as much as the University of Colorado model that suggests a Romney win or this conservative vision of the race isn't "wrong." I find the UColorado model suspect, but I do so based on its emphasis on economic data over polling results, not on its choice of presidential victor. Necessarily one or more of these models will get the result wrong, and when they do, it will not, as some pundits have suggested, mean that they are discredited or shamed or embarrassed (we might parrot Junior from FJM here: computers don't have feelings). It will simply mean one of two things: a) that the models were wrong in their weighting of data, or b) that something unexpected happened.
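To see why the weighting, rather than the chosen winner, is the right thing to argue about, consider a deliberately crude two-input forecast (all numbers invented): two models can see the exact same data and still call the race differently, purely because of how heavily each weights polls against economic fundamentals.

```python
# Toy two-input forecast (all numbers invented): a candidate's projected
# two-party vote share as a weighted blend of a poll-based estimate and an
# economy-based estimate. Only the weights differ between the two "models."
POLL_ESTIMATE = 0.51  # hypothetical: polling average implies 51%
ECON_ESTIMATE = 0.47  # hypothetical: economic fundamentals imply 47%

def blended_forecast(w_polls):
    """Weighted average of the two inputs; w_polls must be in [0, 1]."""
    return w_polls * POLL_ESTIMATE + (1 - w_polls) * ECON_ESTIMATE

poll_heavy = blended_forecast(0.9)  # trusts polling: above 0.5, a narrow win
econ_heavy = blended_forecast(0.2)  # trusts the economy: below 0.5, a loss
print(f"Poll-heavy model:    {poll_heavy:.3f}")
print(f"Economy-heavy model: {econ_heavy:.3f}")
```

Neither model is dishonest; they disagree about which inputs deserve trust. That disagreement over weighting, not the resulting name of the winner, is the substantive thing to critique.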
That the latter possibility exists in these models is something that seems forgotten in the deeply problematic assumption that any model can have political volition. A commenter on an article affirming Silver's predictions writes that it's "cute that you only listed left-leaning models and purposefully avoided the University of Colorado model," as if these models could care at all who won the race. Sure, their creators could, but even a remotely ethical creator would prioritize predictive accuracy even when it cuts against their preferred results, and Silver is undoubtedly ethical in this way (see: his forecasts for the 2010 midterms, which favored Republicans). What journalists attacking Silver, both for fivethirtyeight and for PECOTA, miss is that the working out of predictable outcomes doesn't remove the joy of unpredictability. Take, for example, this summary of PECOTA's 2012 predictions by team. Was PECOTA wrong? Well, no, but it certainly didn't get the results right. But by telling us what is likely, PECOTA doesn't take away what is unlikely - it simply allows us to understand its likelihood as a matter of contrast. The revolt of the nerds does not preclude the excitement of the subjective - in reality, the statistical revolution should enhance our appreciation of the unlikely and the unanticipated joy and tragedy of being a volitional human being.