In the midst of the Reinhart-Rogoff meltdown, a commenter was aghast to learn that their paper was not peer reviewed.* She asked, reasonably, how newspapers could report findings that had not gone through that process.
It’s a fair question, and I should expand on the too-glib remarks from my post:
So the answer is to only accept peer-reviewed work as economic knowledge, right? Nope. That would be a) too limiting, and b) wouldn’t advance the epistemological cause as much as you think. Peers have their own sets of biases, particularly as gate keepers.
First, had R&R gone through the peer-review process, I’m fairly confident that a) the spreadsheet error would NOT have been found, but b) the paper would have been sent back to them for failing to provide even a cursory analysis of the possibility of reverse causality (slower growth leading to higher debt/GDP ratios vs. the R&R claim of the opposite). Re “a,” peer reviewers do not routinely replicate findings, though they should when possible (more and more work these days uses proprietary data sets that cannot legally be shared).
But given “b”—this influential work would perhaps not have seen the light of day without significant revision—how is it that I’m not calling for more peer reviewing?
That’s where the “too limiting” part comes in. First, a lot of what’s important in economics is fairly simple analysis of trends—descriptive data—without the behind-the-scenes number crunching of the type R&R did. You learn a lot about austerity, for example, by simply plotting the GDP growth rates of countries engaged in it, or about wage inequality by observing the movements of different wage percentiles over time. Looking back on my own posts, you’ll see employment trends, real wage trends, growth rates…none of which require much data manipulation and none of which depend on choices that would concern a reviewer.
So to insist that everything get peer reviewed, including presentations of descriptive data published by reliable sources (e.g., BLS or BEA), would be to raise the bar unnecessarily high. Of course, that’s not to say that those of us presenting descriptive data can’t make mistakes, and we do. So in the best of all possible worlds, such work would be checked by peers before it was publicized, even on blogs. Which brings me to the next problem.
Peer review takes a long time (in my experience, six months to a year), and policy makers and reporters will seek results more quickly. If newspapers had rules that they would only print peer-reviewed findings, there would be a long lag between, say, the release of an employment report and analysis of the results.
Then there are the gatekeepers’ biases, as mentioned above. I’ve published a precious few peer-reviewed papers,** but those more active in that world tell me that papers challenging conventional wisdom have trouble getting very far, regardless of quality. And since many aspiring economists are intimately familiar with the biases of the gatekeepers, they are hesitant to test those boundaries for fear of not making it into the important journals.
So, what’s the answer? The media gatekeepers themselves have to be highly vigilant, especially when the results they’re publishing are not transparent. R&R’s work, for example, which covers many different countries over many years and puts results into seemingly arbitrary bins (debt/GDP<30%, 30%-60%, etc.), has just the kind of features that should lead a non-technical reporter to ask a number of sources what they think before running with it.
In a sense, I’m suggesting more on-the-spot peer reviewing when results are complex or counter-intuitive, or when, as in R&R’s case, directional causality is being invoked. This risks the dreaded “he-said, she-said” framing, which can be off-putting to readers, and it can easily be overused. For example, there’s nothing complex or counter-intuitive about work showing, e.g., high unemployment associated with weak wage growth or higher poverty, so a non-peer-reviewed paper making such connections might invite less scrutiny, and a newspaper write-up of it would not obviously require alternative explanations.
But really, and unfortunately, at the end of the day, when it comes to the reporting of economic results, you have to know whom to trust. The WSJ editorial page, for example, is often not trustworthy in its use and choice of findings. The NYT’s, much more so. (I should and will find some examples.) It’s better with reporters (vs. editorial writers), but there too you’ll find folks with thumbs on the scales. In fact, one of the things I do here (as do many other econ-bloggers) is try to catch problems and elevate articles that get it right.
And then there’s the fact that facts don’t matter anywhere near as much as they should anyway these days, a much bigger problem that cannot be solved by any amount of peer reviewing.
*This confused some aficionados, because it appeared in the peer-reviewed journal American Economic Review, but in the “Papers and Proceedings” edition, which is not PRed.
**As is typical in my field of think tank work, anything I write beyond a blog post, especially anything with serious number crunching, is reviewed by as many outside readers as I can get; the difference is that they don’t have the final say on publication.