Not too long ago, in the days when I had to work for a living, I was sometimes asked to teach the “wizardry” of applied statistics. An assignment that was fine with me simply because it was fun. It was fun for a couple of reasons: First, it got me out of having to work in dirty, smelly manufacturing facilities where I had to deal with engineers who, as a rule, were not terribly bright. I don’t mean that they were stupid. They were just clueless about why the operations they were responsible for were going to hell in a handbasket. Not to mention the fact that they viewed me as a pointy-headed statistician who’d been brought in by management to “fix the problem”. An interloper. Engineers generally tried to make my life miserable for that reason alone. Their turf had been invaded. So I usually wasn’t welcomed with open arms. It could be a very hostile environment. I never really understood that, either. To me, it was roughly analogous to a sick horse kicking the vet.
But it was also “fun” because I really enjoyed the teaching part. In that environment I had a little bit of control. And that was a good thing, especially if some of those same dimwitted engineers were in the class. I could tweak their indignant little noses at will. If they learned something, great. If not, I at least was able to get a tiny bit of “payback”. Which is why my first lesson was usually “The Challenger Disaster”.
Without fail, that one always drove them a little crazy. First I’d review the specifics of what happened on that dreadful morning in January 1986. Using pictures, I’d take them from the pre-launch checks of STS-51-L right through to the explosion 73 seconds into the flight. That was simply a way to get their undivided attention. Even today, so many years later, they’re difficult images to view. Then I’d ask them – as engineers – what they thought. Could the accident have been prevented? Did the engineers and flight managers miss some critical information? Specifically, was there any data that said that it was too cold to launch?
Long story short, the answer, of course, was “Yes”. The data was there. They simply needed to look at it in the correct way. They didn’t, and seven astronauts died.
Now, I can tell you from vast experience that this conclusion does not sit well with a class full of engineers. Even when they’re shown the data that points to the potential of catastrophic failure with the O-rings on the SRBs (solid rocket boosters) – both in actual flights prior to STS-51-L and in controlled ground tests – many engineers will say that the existing information was inconclusive, that the decision to launch was a rational one, and that the problem with the O-rings was only clear in hindsight.
At that point, I’d put up a simple graph that showed the relationship between O-ring flexibility (and the likelihood of failure) and temperature. It was not a straight line. It curved exponentially as temperature decreased – toward failure. At the actual launch temperature of STS-51-L, the likelihood of complete failure was dangerously high. Then, as expected, someone in the group would say, “But that temperature was never tested or reached until that specific launch. How could anyone have known what was going to happen?”
“Well”, I’d say, “would you have taken the chance? Whether actually tested or not, you could still plot the point. You knew what the temperature was at the launchpad that morning. You would have had a hypothetical model to look at. You’d have had a literal picture of what might happen. Heck, a reasonably smart 8th grader could make this graph. It’s the simplest kind of analysis in the world. Why didn’t those NASA engineers do it? If you’d seen this, would you have taken the risk?”
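The kind of picture I’d put in front of them can be sketched in a few lines of code. This is a toy model, not NASA’s data or any published fit – the coefficients below are invented purely to show the shape of the curve: failure risk that climbs sharply as temperature drops, with the untested low-temperature region still plottable from the same model.

```python
import math

# Illustrative logistic model of O-ring distress probability vs. ambient
# temperature (deg F). The coefficients are made up for this sketch --
# chosen only so that risk rises steeply as temperature falls.
INTERCEPT = 5.0
SLOPE = -0.11  # each extra degree F lowers the log-odds of distress

def p_failure(temp_f: float) -> float:
    """Estimated probability of O-ring distress at a given temperature."""
    log_odds = INTERCEPT + SLOPE * temp_f
    return 1.0 / (1.0 + math.exp(-log_odds))

# The "picture an 8th grader could make": risk vs. temperature as a
# crude text plot, from well below the coldest tested launch upward.
for t in range(30, 85, 5):
    bar = "#" * int(50 * p_failure(t))
    print(f"{t:3d}F  p={p_failure(t):.2f}  {bar}")
```

The point of the exercise isn’t the particular numbers; it’s that nothing stops you from evaluating the fitted curve at a temperature you never tested. The model gives you the hypothetical picture, and the picture is what should have given someone pause.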
Like I said, it was fun. The lesson would wind up with the engineers usually split into two groups. Some had become “believers”. The rest still had little or no regard for statistical analysis (model building) or for statisticians. The latter group I always referred to as “Trolls”. You know, cave dwellers dangerous to humanity. Numerically illiterate.
So what does this have to do with anything? Well, the just finished presidential election for one. Sadly, the trolls are still out there. And there’s a lot of them.
If you follow Nate Silver – and his 538 blog – you know what I’m talking about. For weeks, Nate Silver had been predicting the outcome of the presidential election and a number of other nationally significant contests. Silver, of course, is not a troll. He’s an extremely bright statistician (I won’t call him a “Wizard” – he’d probably object – but it’s a reasonably good descriptor). In case you didn’t know, he correctly predicted the outcomes of the 2008 election and much of the 2010 midterm elections. How? By using relatively straightforward statistical models. By combining multiple sampling polls and essentially averaging them across time. Other variables are used as well, but it’s the cumulative polling feature that does most of the work. It ain’t hard. Again, a reasonably intelligent 8th grader could probably do it once he (or she) understood the methodology.
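The core idea – pool many polls and let recent ones count for more – can be sketched in a few lines. To be clear, this is not Silver’s actual model; the poll numbers and the half-life weighting scheme below are invented for illustration.

```python
# Toy "poll of polls": each entry is one hypothetical survey's result
# for a candidate (percent share) and how many days old it is.
# These numbers are made up -- the point is the averaging, not the data.
polls = [
    {"share": 50.2, "days_old": 1},
    {"share": 48.9, "days_old": 3},
    {"share": 51.5, "days_old": 7},
    {"share": 47.8, "days_old": 14},
]

HALF_LIFE = 7.0  # assumed: a week-old poll counts half as much as a new one

def weighted_average(polls):
    """Average poll shares, exponentially down-weighting stale polls."""
    weights = [0.5 ** (p["days_old"] / HALF_LIFE) for p in polls]
    total = sum(w * p["share"] for w, p in zip(weights, polls))
    return total / sum(weights)

print(f"poll-of-polls estimate: {weighted_average(polls):.1f}%")
```

A single poll bounces around with sampling error; an average of many polls, weighted toward the freshest, is far steadier. That’s most of the trick, and it’s why the aggregate kept pointing one way while individual polls made headlines in both directions.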
Bottom line, Nate once again nailed the whole thing. Well, he did miss a senatorial race (in North Dakota, I think) but was otherwise right on the money. Based on his model, the presidential race only got close right after the first debate (the one President Obama decided to “skip” for some reason). But after a couple of weeks Obama started to pull away again. By the weekend before election day, the President was given a 90% chance of winning the Electoral College. He would win, said Silver, nearly all of the so-called battleground states. In a word, it wouldn’t be close.
Meanwhile, all of the trolls – including virtually all of the media – kept saying that it “was too close to call”. It would be a coin flip. It was razor thin. Unpredictable. We probably wouldn’t know the outcome until, at best, the next day. Or it could be days or even weeks before the whole thing got sorted out. Blah, blah, blah…
The election – for the President – was called at 11:13 p.m. on election night, when he was projected to be the winner in Ohio. It was all over.
The trolls, including a baffled Karl Rove, who wound up making quite a scene on Fox News, were mystified. The Romney campaign had even posted a new website devoted to “transition” efforts following their anticipated victory. It was quickly taken down. These folks simply had not believed what people like Nate Silver had been telling them. That President Obama was extremely likely to win. Romney’s chances were less than 1 in 10. The data was there. All they had to do was look at it.
Still, they refuse to believe. Obama won, they say, because he played dirty. Obama won because of Hurricane Sandy. Obama won because Chris Christie was “nice” to him. Obama won because Romney wasn’t really a true conservative.
No, Obama won because of changing demographics. Demographics that can be measured, quantified, and analyzed. And put into a working model.
“Wizards” know how to do that.