Most people are probably aware of the current situation in the United States with respect to peanut butter (or, more correctly, peanut paste) and salmonella outbreaks. The outbreaks, linked to a plant in Georgia, are responsible for 8 deaths and at least 500 sickened individuals. One of the questions being asked by the media, of course, is: How can this happen in a country that supposedly has so many controls in place? Shouldn’t this have been caught very early on, if not by the company itself, then reasonably quickly by the FDA? Consider the following excerpt from a story in yesterday’s Washington Post:
Officials at the Food and Drug Administration and the Centers for Disease Control and Prevention, which have been investigating the outbreak of salmonella illness, said yesterday that Peanut Corporation of America found salmonella in internal tests a dozen times in 2007 and 2008 but sold the product anyway, sometimes after getting a negative finding from a different laboratory (italics are mine).
Now, I’ve been a statistician for over 30 years. I’ve spent most of my working life analyzing and interpreting data from manufacturing operations. Some of that, of course, has been in the context of “quality control” (a misnomer, to be sure, but that’s another story). What got my attention here is the last part of that last sentence (“sometimes after getting a negative finding from a different laboratory”).
What does that mean? If true, it simply means that they engaged in one of the most common practices in manufacturing – they “retested” batches of product when the initial testing indicated a problem. It’s done every day in almost any business you can think of. The good news is that it usually doesn’t result in people dying (they might get a product that doesn’t work as advertised, but it generally doesn’t cost them life or limb).
Simply put, they made batches of peanut paste and then tested them for whatever characteristics were required (the presence of salmonella being just one of them). If the product met all the tests, it was released for sale. Nobody, by the way, retests product that “passes”. If the product failed to meet all of the testing requirements, it was almost certainly retested. If it passed the second set of tests, it was released. If it didn’t, it was probably retested again. Finally, if they couldn’t make it “pass” using their own laboratory, they might ship a sample off to some other lab, again hoping for a favorable result (in this case, a negative finding for salmonella). As you might imagine, sooner or later you’re bound to get a result that makes you happy. We used to call this practice “torturing the data until it gave you the right answer”.
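You can put rough numbers on why this works so reliably in the retester's favor. Here's a minimal simulation of the retest-until-pass scheme, using made-up illustrative figures (the false-negative rate and retest limit are my assumptions, not anything from the actual case): even a test that catches contamination 70% of the time will wave a contaminated batch through most of the time once you allow a few retests, because the chance of at least one miss in k tries is 1 − (1 − fn)^k.

```python
import random

# Illustrative assumptions only -- not data from the PCA case:
# a single test on a truly contaminated batch misses the
# contamination (a false negative) 30% of the time, and the
# plant retests up to 4 times, releasing on the first "pass".
FALSE_NEGATIVE_RATE = 0.30
MAX_RETESTS = 4

def batch_released(rng: random.Random) -> bool:
    """Return True if a contaminated batch eventually 'passes'
    one of its (re)tests and gets shipped."""
    for _ in range(MAX_RETESTS):
        if rng.random() < FALSE_NEGATIVE_RATE:
            return True  # test missed the contamination: ship it
    return False  # every test caught it: batch held back

rng = random.Random(42)
trials = 100_000
released = sum(batch_released(rng) for _ in range(trials))
print(f"Simulated release rate: {released / trials:.3f}")
print(f"Closed form 1-(1-fn)^k: {1 - (1 - FALSE_NEGATIVE_RATE) ** MAX_RETESTS:.3f}")
```

With these numbers the closed form gives 1 − 0.7⁴ ≈ 0.76: roughly three contaminated batches in four ship anyway. The only honest direction for retesting is the opposite one, treating any failure as a failure.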
The problem, obviously, is that the “bad data” is ignored as soon as you get some numbers that you like. If the data you throw out is, in fact, correct, then you have a problem. In this case, a very big problem.
Everybody talks about quality. Everybody says that quality is their most important priority.
Everybody lies. Mostly, it’s all about money.