Update: Hello and thanks to everyone coming over from Balloon Juice (and elsewhere). It took me a while to acknowledge y’all, as I’ve been enjoying the strangely liberating experience of being on Amtrak and without intertubes for the last several hours.
Also, picking up on the comment below by Joel, let me emphasize that I don’t want to suggest that Acemoglu et al.’s later work contradicts the earlier paper discussed below; rather, as good researchers do, pursuing a question in detail leads to a more complex understanding of the problem. The point is that if you are trying to argue from someone else’s work, you can’t just pick and choose the bits you like.
OK, by now it’s clear that this is overkill. One post by Megan McArdle does not need this kind of rant; it’s like using a howitzer to plink a tin can off a fence. [For a grotesque demonstration of my logorrhea problem, check out parts one, two, and three of this series.]
But in some sense, all I’m doing is channelling my inner John Foster Dulles: McArdle and her ilk are not going away. Sadly, no amount of day-by-day debunking seems able to evoke the kind of respect for their claimed craft that would produce even a smidgeon more care and honor in their ongoing attempt to write into reality their unexamined assumptions. So, after Dulles, consider this a kind of blogospheric massive retaliation, an attempt to shock and awe the recalcitrant into the virtues of intellectual honesty.
Which brings us to one more thing that McArdle did not do in her attempt to recruit what she claims as the gold-standard of authority, the academic literature, to bolster her assertion that any attempt to control drug expenditures in the US medical system is tantamount to a pact to kill nice old people.
I’ve used two posts so far to ridicule McArdle’s attempt to demonstrate her intellectual chops by basing this argument on a paper by the Rand Corporation, paid for by Pfizer (the world’s largest drug company) that relies on a secret-sauce “model” to produce the conclusion that free market negotiation by large customers (the US government, e.g.) and/or price controls would reduce the pace of innovation in the drug business, resulting in a loss of months of life expectancy.
In other words: I don’t think much of a study paid for by the man that comes to the shocking conclusion that we must pay that man or he’ll shoot grandma.
But having disposed of the follies inherent in taking advocacy research too seriously, I want to point out one last and deeper flaw in McArdle’s dishonest brandishing of the sword and buckler of academic authority.
Recall that her core argument is that she is a truth teller, while her critics are ideologically driven bullies. She writes:
Or we could go to the academic literature. Not the literature from advocacy groups which too often fills the pages of political magazines on the left and right, but something from someplace like Rand….
She says, in other words, that we should believe her because she grounds her research in the academic literature, not mere advocacy. (She actually contradicts herself below, saying that we should believe her because she talks to Big Pharma, and thus is willing to dirty her hands in pursuit of truths that elude those who insist on relying on (presumptively) disinterested research by people “who have never run or even studied businesses”…but never mind.)
But in fact, leaving aside that Rand is itself a producer of advocacy literature, the Rand paper and McArdle do cite a genuine academic source for a crucial part of the argument: a study that they claim demonstrates that changes in pharma revenue produce outsize shifts in the rate of pharmaceutical innovation.
And yet: McArdle did not in fact “go to the academic literature,” for all her properly provided hyperlink to the paper in question.
How do I know?
Because I checked.
Here’s the deal: in science journalism — in any attempt to write about technical material for the public — it’s not enough simply to read an abstract or even the whole piece and call it done.
You can’t just read the paper and assume — unless you are genuinely expert in that subdiscipline of the field you wish to cover, and often not even then — that you know what its authors actually have done and what it means.
That’s why scientists go to conferences, for one thing — because there is more to grasping the meaning of important work than just reading the stylized and usually telegraphically compressed report of a piece of research in the professional press.
And if you are a reporter, then, by gum, you have to report on the piece, which is a much more involved and difficult task than many give it credit for being, at least if you do it right.
I’m not claiming that I did enough of that complicated work to write an independent piece on the very interesting research McArdle pointed to. But I did do enough to confirm a suspicion formed on reading both McArdle and Rand: Acemoglu and Linn’s paper does not say what they thought, or perhaps simply asserted, it did.
This is the ultimate point I’ve been laboring towards all this long while. Science writing is hard because of two related issues. The first is that science — and the aspiring sciences, like economics — is hard. Such work involves complicated ideas; intricate, often mathematically complex methods; jargons that can take quite a while to penetrate; and so on.
And the second hurdle for writing well about hard stuff for the public is that the goal of science writing requires you to learn not just how to understand what’s being said in the terms of the discipline itself, but also how to identify, and then convey, the core ideas in any given bit of science to an audience that hasn’t had the time you’ve taken to figure it all out.
So what you do, if you are a properly trained and ethical science journalist/popular writer, is read first, of course, with care and attention to all the places where you don’t understand something or sense an important subtlety…and then you call.
You talk to someone, lots of someones if necessary.
You get people in the field to explain what they are doing; you allow yourself to appear dumb to yourself; (you won’t seem stupid to just about any good faith expert source — only the assholes expect you to have mastered every paper in every journal tangentially bearing on their crucial work before calling, and there really aren’t as many of those as legends suggest); you ask simple questions, and then more complicated ones, until you and your interlocutor agree you’ve got what you need.
You have to persist — and if someone says check out this or that, you do, looking up the papers if necessary and then calling back…and so on. You do what a good reporter does: you cover the story.
This McArdle did not do. If she read the Acemoglu and Linn paper with care, and especially if she had talked with someone who was familiar with the work, she would have realized the subtle distinction those authors made. They looked at the role of market size on innovation for each particular market segment — a disease or group of diseases addressed by a set of competing drugs. The Rand authors, with McArdle trailing happily along, conflated that to an argument about the effect of total market size on innovation across all drugs.
Again — this is subtle, and I had to talk at some length with an economist colleague to get why it mattered.* But the essence of the idea is that the shifts in pharma innovation Acemoglu and Linn identified tracked the relative value of the market for individual areas of interest. It does not follow that gross revenue changes produce the differences in innovation overall that both McArdle and Rand cite. Rather, the two MIT economists simply demonstrated that a larger market for a given drug category produced more new drugs within that category.
Or, more simply: what the Rand/Pfizer authors claimed — and what McArdle (deliberately?) uncritically parroted — that a respected body of academic research confirmed that cuts in gross pharma revenue = cuts in innovation overall — was, to phrase it most kindly, in error.
It actually gets worse, of course.
There is this thing called the internet. It contains things like the homepages of scholars, which often include lists of their publications…which will often reveal ongoing lines of research or areas of interest.
As it happened, Acemoglu and Linn followed up their 2003/4 paper with a subsequent study, published in 2006 with David Cutler and Amy Finkelstein joining the original pair as co-authors. This second paper looked at the impact of Medicare funding on innovation.
McArdle and the Rand folks do not mention this study, and it’s pretty clear why they might have wanted to ignore it. For what did its authors find?
In their own words: “Our reading of the evidence is that there is no compelling case that Medicare induced significant pharmaceutical innovation.”
That’s not conclusive either, as one of the economists with whom I spoke explained to me. What is clear — and those I asked agreed — is that the connection between drug producer prices, market size, and innovation is at best a mess (my word). There is no basis on which to assert, as McArdle does, that
The upshot is that the overwhelming weight of the available evidence indicates that the effect of price controls in the US would be real, significant, and bad….The idea that any significant change in the profit margins on drugs sold here [in the US] will have enormous impact on the future of pharmaceuticals, is as close to a fact as we can get in this vale of uncertainty.
That is unproven in the sources she cites, and it is unproven in the real world. On the basis of the academic literature she so proudly proclaims as her guide, she cannot know what she thinks (or wishes) to be true.
To cover up this and prior errors, she is reduced to insulting her critics who have pointed out her ignorance, sloppiness and general lack of understanding of what real work looks like in the field in which her competence is supposed to lie. (Economics Editor of America’s Oldest Serious Magazine™!)
It’s time I finished this off, and by now the message, I think, must be obvious. This is one tired horse I’m beating.
But here is a last thought, to try to generalize from one rather minor example of shoddy work on the internet. It is a sign of both ignorance and bad faith to treat the real world, and attempts to understand it, as cavalierly as McArdle does here, and as the right-punditocracy has done so often of late.
But this is where the right is just now. You can see bad faith and sloth too in George Will’s embarrassing attempts to weigh in on climate change.** You can see an almost comical (were it not so willed) misreading of the research in almost any attempt to produce a scientific justification for failing to credit the fact of evolution. You can sure find the attempt to claim unearned authority running through McArdle’s work.
In each case, whatever the variations of motive, method and intent, all of this rests on the writer’s determination to ignore how science actually works — and hence how human beings actually find out useful knowledge about the world. In each case, the root intellectual activity is to cherry-pick whatever serves to bolster conclusions reached long before the notorious liberal bias of reality has had any chance to sully their perfect thoughts.
And as for McArdle herself? Her sins are typical, but for that very reason, I guess, hardly worth the bludgeon I’ve tried to wield over the last several thousand words. Except for this: a failure to think clearly about how to repair a deeply flawed health care system kills people. There are significant studies that explore those excess deaths. Here’s one.*** And if you take that work seriously, then you have to see the Panglossian mission of McArdle and her herd of thundering ilk, presenting chunks of the status quo as the best of all possible outcomes, as implicated in those deaths.
More broadly: writing about the things that matter in real people’s lives — that may end some of those lives — is not a game.
That McArdle writes as if it were is the true measure of her work.
*I’m not naming my source, because that person dislikes the hurly-burly of the blogosphere…and while I know that unnamed sources are more or less worth what you know about them, you have to decide here whether I’m a reliable enough interlocutor to believe what I report.
** Click that link to see why Chris Mooney gets around in public more than I do: he gets done in 800 words what I’ve just spent in excess of 4,000 spouting about. Still, someone at MIT has to take on the Henry Jenkins mantle of ridiculously overextended blogorrhea.
*** For a quick guide to skepticism in the face of research, here are a couple of guide points on this study: It’s funded under an NRSA (NIH) grant — not by an advocacy group. It draws on a history of similar studies engaged with the same question: whether or not uninsured status correlates with excess deaths. The paper contains some detail on its methodology, and crucially, includes a section on limitations and potential sources of error in the work. To gain confidence in its quite commanding conclusion — that lack of insurance is associated with more than 44,000 deaths per year — you (I) would need to do quite a bit more reporting than a simple read of the paper. But my point here is that this piece of work passes several of the smell tests that the Rand study, and McArdle’s writing, did not. You have something to go on here. And with this, the sermon endeth.
Images: Adolf Friedrich Erdmann von Menzel, “Eisenwalzwerk (Moderne Cyklopen)” [Iron Mill Work (Modern Cyclops)], 1872–1875.
Deutsche Bundespost, designed by Steiner, stamp in honor of the history of post and telecom, 1990.