
It’s not that McArdle can’t read…it’s that she can’t (won’t) think: part four (and last, thank FSM).

October 7, 2009

Update: Hello and thanks to everyone coming over from Balloon Juice (and elsewhere). It took me a while to acknowledge y’all, as I’ve been enjoying the strangely liberating experience of being on Amtrak and without intertubes for the last several hours.


Also, picking up on the comment below by Joel, let me emphasize that I don’t want to suggest that Acemoglu et al.’s later work contradicts the earlier paper discussed below; rather, as good researchers do, pursuing a question in detail leads to a more complex understanding of the problem.  The point is that if you are trying to argue from someone else’s work, you can’t just pick and choose the bits you like.

______


OK, by now it’s clear that this is overkill.  One post by Megan McArdle does not need this kind of rant; it’s like using a howitzer to plink a tin can off a fence.  [For a grotesque demonstration of my logorrhea problem, check out parts one, two, and three of this series.]


But in some sense, all I’m doing is channelling my inner John Foster Dulles: McArdle and her ilk are not going away.  Sadly, no amount of day-by-day debunking seems able to evoke the kind of respect for their claimed craft that would produce even a smidgen more care and honor in their ongoing attempt to write into reality their unexamined assumptions.  So, after Dulles, consider this a kind of blogospheric massive retaliation, an attempt to shock and awe the recalcitrant into the virtues of intellectual honesty.


Which brings us to one more thing that McArdle did not do in her attempt to recruit what she claims as the gold-standard of authority, the academic literature, to bolster her assertion that any attempt to control drug expenditures in the US medical system is tantamount to a pact to kill nice old people.


I’ve used two posts so far to ridicule McArdle’s attempt to demonstrate her intellectual chops by basing this argument on a paper by the Rand Corporation, paid for by Pfizer (the world’s largest drug company) that relies on a secret-sauce “model” to produce the conclusion that free market negotiation by large customers (the US government, e.g.) and/or price controls would reduce the pace of innovation in the drug business, resulting in a loss of months of life expectancy.


In other words:  I don’t think much of a study paid for by the man that comes to the shocking conclusion that we must pay that man or he’ll shoot grandma.


But having disposed of the follies inherent in taking advocacy research too seriously, I want to point out one last and deeper flaw in McArdle’s dishonest brandishing of the sword and buckler of academic authority.


Recall that her core argument is that she is a truth teller, while her critics are ideologically driven bullies.  She writes

Or we could go to the academic literature.  Not the literature from advocacy groups which too often fills the pages of political magazines on the left and right, but something from someplace like Rand….

She says, in other words, that we should believe her because she performs research through the academic literature, and not mere advocacy.  (She actually contradicts herself below, saying that we should believe her because she talks to Big Pharma, and thus is willing to dirty her hands in pursuit of truth in a way that those who insist on relying on (presumptively) disinterested research by people “who have never run or even studied businesses” are not…but never mind.)




But in fact — leaving aside that Rand is itself a producer of advocacy literature — the Rand paper and McArdle cite a genuine academic source for a crucial part of the argument, a study that they claim demonstrates that changes in pharma revenue produce outsize shifts in the rate of pharmaceutical innovation.


And yet:  McArdle did not in fact “go to the academic literature,” for all her properly provided hyperlink to the paper in question.


How do I know?


Because I checked.


Here’s the deal:  in science journalism — in any attempt to write about technical material for the public — it’s not enough simply to read an abstract or even the whole piece and call it done.


You can’t just read the paper and assume — unless you are genuinely expert in that subdiscipline of the field you wish to cover, and often not even then — that you know what its authors actually have done and what it means.


That’s why scientists go to conferences, for one thing — because there is more to grasping the meaning of important work than just reading the stylized and usually telegraphically compressed report of a piece of research in the professional press.



And if you are a reporter, then, by gum, you have to report on the piece, which is a much more involved and difficult task than many give it credit for being, at least if you do it right.


I’m not claiming that I did enough of that complicated work to write an independent piece on the very interesting research McArdle pointed to.  But I did do enough to confirm a suspicion formed on reading both McArdle and Rand:  Acemoglu and Linn’s paper does not say what they thought, or perhaps simply asserted, it did.


This is the ultimate point I’ve been laboring towards all this long while.  Science writing is hard because of two related issues.  The first is that science — and aspiring sciences like economics — is/are hard.  Such work involves complicated ideas, intricate, often mathematically complex methods, jargons that can take quite a while to penetrate and so on.


And the second hurdle for good writing about hard stuff for the public is that the goal of science writing requires that you learn not just how to understand what’s being said in the terms of a discipline itself, but also how to identify, and then convey the core ideas in any given bit of science to an audience that doesn’t have the time you’ve taken to figure it all out.


So what you do, if you are a properly trained and ethical science journalist/popular writer, is read first, of course, with care and attention to all the places you don’t understand, or where you sense an important subtlety…and then you call.


You talk to someone, lots of someones if necessary.




You get people in the field to explain what they are doing; you allow yourself to appear dumb to yourself; (you won’t seem stupid to just about any good faith expert source — only the assholes expect you to have mastered every paper in every journal tangentially bearing on their crucial work before calling, and there really aren’t as many of those as legends suggest); you ask simple questions, and then more complicated ones, until you and your interlocutor agree you’ve got what you need.


You have to persist — and if someone says check out this or that, you do, looking up the papers if necessary and then calling back…and so on.  You do what a good reporter does:  you cover the story.


This McArdle did not do.  If she read the Acemoglu and Linn paper with care, and especially if she had talked with someone who was familiar with the work, she would have realized the subtle distinction those authors made.  They looked at the role of market size on innovation for each particular market segment — a disease or group of diseases addressed by a set of competing drugs.  The Rand authors, with McArdle trailing happily along, conflated that to an argument about the effect of total market size on innovation across all drugs.


Again — this is subtle, and I had to talk at some length with an economist colleague to get why it mattered.*  But the essence of the idea is that the shifts in pharma innovation Acemoglu and Linn identified tracked the relative value of the market for individual areas of interest.  It does not follow that gross revenue changes produce the differences in innovation overall that both McArdle and Rand cite.  Rather, the two MIT economists simply demonstrated that more market share by drug category produced more new drugs within that category.
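The distinction can be made concrete with a toy calculation (category names and dollar figures below are entirely made up for illustration; this is my sketch, not anything from either paper).  If every drug category’s revenue is cut by the same 20 percent, the relative sizes of the category markets — the quantity the within-category result speaks to, as characterized above — do not move at all, so that result, by itself, predicts nothing about aggregate innovation:

```python
# Hypothetical category revenues (in $B) -- illustrative numbers only.
revenues = {"cardiovascular": 50.0, "oncology": 30.0, "cns": 20.0}

def shares(rev):
    """Each category's fraction of total revenue (its relative market size)."""
    total = sum(rev.values())
    return {k: v / total for k, v in rev.items()}

before = shares(revenues)

# Apply an across-the-board 20% price cut, roughly the Rand scenario:
cut = {k: 0.8 * v for k, v in revenues.items()}
after = shares(cut)

# Relative category sizes are untouched by the uniform cut, so a
# within-category relationship between market size and innovation says
# nothing, on its own, about the effect of a gross revenue cut:
for k in revenues:
    assert abs(before[k] - after[k]) < 1e-12
```

The gap the Rand authors leap over is exactly here: moving from “bigger category markets get more new drugs in that category” to “smaller gross revenue means less innovation overall” requires an additional, separate argument that neither they nor McArdle supplies.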


Or, more simply:  when the Rand/Pfizer authors claimed — and McArdle (deliberately?) uncritically parroted — that a respected academic body of research confirmed that cuts in gross pharma revenue = cuts in innovation overall, they were, to phrase it most kindly, in error.


It actually gets worse, of course.


There is this thing called the internet.  It contains things like the homepages of scholars, which often include lists of their publications…which will often reveal ongoing lines of research or areas of interest.


As it happened, Acemoglu and Linn followed up their 2003/4 paper with a subsequent study, published in 2006 with David Cutler and Amy Finkelstein joining the original pair as co-authors.  This second paper looked at the impact of Medicare funding on innovation.


McArdle and the Rand folks do not mention this study, and it’s pretty clear why they might have wanted to ignore it.  For what did its authors find?


Bupkis.


More formally, they wrote, “Our reading of the evidence is that there is no compelling case that Medicare induced significant pharmaceutical innovation.”


That’s not conclusive either, as one of the economists with whom I spoke explained to me.  What is clear — and those I asked agreed — is that the connection between drug producer prices, market size and innovation is at best a mess (my word).  There is no basis on which to assert, as McArdle does, that

The upshot is that the overwhelming weight of the available evidence indicates that the effect of price controls in the US would be real, significant, and bad….The idea that any significant change in the profit margins on drugs sold here [in the US] will have enormous impact on the future of pharmaceuticals, is as close to a fact as we can get in this vale of uncertainty.

That is unproven in the sources she cites, and it is unproven in the real world.  On the basis of the academic literature she so proudly proclaims as her guide, she cannot know what she thinks (or wishes) were true.


To cover up this and prior errors, she is reduced to insulting her critics who have pointed out her ignorance, sloppiness and general lack of understanding of what real work looks like in the field in which her competence is supposed to lie. (Economics Editor of America’s Oldest Serious Magazine™!)


It’s time I finished this off, and by now the message, I think, must be obvious.  This is one tired horse I’m beating.


But here is a last thought, to try and generalize from one rather minor example of shoddy work on the internet.  It is a sign of both ignorance and bad faith to treat the real world and attempts to understand it as cavalierly as McArdle does here, and the right-punditocracy has done so often of late.


But this is where the right is just now.  You can see bad faith and sloth too in George Will’s embarrassing attempts to weigh in on climate change.**  You can see an almost comical (were it not so willed) misreading of the research in almost any attempt to produce a scientific justification for failing to credit the fact of evolution.  You can sure find the attempt to claim unearned authority running through McArdle’s work.


In each case, whatever the variations of motive, method and intent, all of this rests on the writer’s determination to ignore how science actually works — and hence how human beings actually find out useful knowledge about the world.  In each case, the root intellectual activity is to cherry-pick whatever serves to bolster conclusions reached long before the notorious liberal bias of reality has had any chance to sully their perfect thoughts.


And as for McArdle herself?  Her sins are typical, but for that very reason, I guess, hardly worth the bludgeon I’ve tried to wield over the last several thousand words.  Except for this:  a failure to think clearly about how to repair a deeply flawed health care system kills people.  There are significant studies that explore those excess deaths.  Here’s one.***  And if you take that work seriously, then you have to see the Panglossian mission of McArdle and her herd of thundering ilk to present chunks of the status quo as best of all possible outcomes as implicated in those deaths.


More broadly: writing about the things that matter in real people’s lives — that may end some of those lives — is not a game.


That McArdle writes as if it were is the true measure of her work.


*I’m not naming my source, because that person dislikes the hurly-burly of the blogosphere…and while I know that unnamed sources are more or less worth what you know about them, you have to decide here whether I’m a reliable enough interlocutor to believe what follows.


** Click that link to see why Chris Mooney gets around in public more than I do:  he gets done in 800 words what I’ve just spent in excess of 4,000 spouting about.  Still, someone at MIT has to take on the Henry Jenkins mantle of ridiculously overextended blogorrhea.


*** For a quick guide to skepticism in the face of research, here are a couple of guide points on this study:  it’s funded under an NRSA (NIH) grant — not by an advocacy group.  It draws on a history of similar studies engaged with the same question:  whether or not uninsured status correlates with excess deaths.  The paper contains some detail on its methodology, and, crucially, includes a section on limitations and potential sources of error in the work.  To gain confidence in its quite commanding conclusion — that lack of insurance is associated with more than 44,000 deaths per year — you (I) would need to do quite a bit more reporting than a simple read of the paper.  But my point here is that this piece of work passes several of the smell tests that the Rand study, and McArdle’s writing, did not.  You have something to go on here.  And with this, the sermon endeth.


Images:  Adolf Friedrich Erdmann von Menzel, “Eisenwalzwerk (Moderne Cyklopen)” [Iron Mill Work (Modern Cyclops)], 1872–1875.

Deutsche Bundespost, designed by Steiner, stamp in honor of the history of post and telecom, 1990.


It’s not that McArdle can’t read…it’s that she can’t (won’t) think: part three

October 7, 2009

This is the third part of a ridiculously oversized tome on one example of what I see as a systematic failure on the right to engage science in any meaningful way. [Part one is here; part two, here]


In part two, I noted that serial offender Megan McArdle was trying to defend a claim about how health care reform will kill grandpa by asserting that the scientific literature supported that view.


The literature she cited began and mostly ended with a long paragraph quoted from a study by the Rand Corporation…and in the previous post I noted that one of the problems with the claim that McArdle’s argument was based on a rigorous review of the literature was that this paper was essentially research for hire, where the client was the world’s largest drug company.


While it is not true that a study is wrong just because Pfizer paid for it — a study showing that cutting Big Pharma revenues would result in a decline in pharma innovation that would lead to a loss in life expectancy* — the funding does mean that you can’t just do what McArdle did here: say “look — some folks with initials after their names confirm my unexamined conclusions.  Therefore I win!  Yippee.”


Rather, what you have to do with any piece of research, and especially one that is both making a major claim and is doing so from a clear position of interest in the outcome of the research, is to check.  You gotta interrogate the paper, its methods, its claims, its interpretations, its conclusions, the lot.


You know — basic reporting, the basic lesson we make sure each science journalism student we encounter at my shop (and every other good science writing/journalism program too) learns in the first weeks of study.


This McArdle clearly did not do.  How do I know?  I’m not (I promise) going to fisk the Rand paper top to bottom, but there are several issues with it that don’t pass the smell test right off.


The first is that the authors present their results as the output of a complicated model, itself derived from several other models for the behavior of the large variety of inputs needed to understand whether or not a cut in drug company revenue will have an impact on innovation.



A first plausible question is how the model actually works, and to what extent it has been tested.  Not to get too wonky — and not to claim expertise I certainly don’t have — but if this were a serious paper for the professional literature, you’d expect at least some discussion about the underlying logical and mathematical structure and strategy of the model.  It’s not there, at least in the publicly released form of the paper.


Next: check out the authors’ rhetoric.  It doesn’t read like scientific writing…and there’s a good reason for that.


To see what I mean, look to the paper both the Rand folks and McArdle cite as supportive of their arguments, Acemoglu and Linn’s, written by two MIT economists.  Scroll down to its final section and you’ll see a set of graphics supplied to support the discussion above.


Some are labelled “tables” and they contain accounts of the data collected to support the model, complete with explanatory captions to allow a reader to follow the reasoning that led the authors to gather that particular slice of reality and not some other.


Some are called “figures,” and they come in the form of graphs which show what happens to that data when run through a model calculation.


Now go look at the Rand document.  It presents six graphics.  Each presents some feature of the argument the authors seek to make — how a given approach to pharmaceutical cost control affects innovation and/or longevity.  They are easy to read, striking, even, with graphs or bar charts to show the devastating consequence of reducing producer payments to big drug companies.  They should scare anyone who wants to live out their fully allotted span — as they appear to have terrified the young and impressionable McArdle.


But if you want to figure out if the graphs represent much of anything beyond conclusions expressing the assumptions with which their creators began, you can’t.  Each has the identical caption:

Source:  Authors’ calculations based on the Global Pharmaceutical Policy Model [the authors’ rather modest signifier for their black box of an analytical engine].

Just in case you were wondering — that’s the language of advocacy, not research.

The authors are saying “Trust me,” and anyone with even a passing knowledge of the movie business knows that this is the punch line to the old joke:

How does a Hollywood executive say “f*ck you”?

And if you needed a yet more obvious clue, check out the label put on each graphic.  It’s not “Figure,” or “Table,” or even “Results.”  Oh no.  This is no mere milquetoast publication of data and the logic that lies behind the authors’ inferences.  That kind of thing is for the intellectually conservative, or those committed to an attempt at disinterested investigation.


The Rand team, hired by Pfizer, knows what it is doing.  It is making a case for a particular policy outcome, and hence its graphics are labeled — and I’m not kidding — “Exhibits.”


Not to belabor this — I’m after McArdle and the approach to argument she embodies, not the well-known habits of the bespoke policy research game — but one of the first and most basic lessons we try to teach our students in the Graduate Program in Science Writing at MIT is that just because some document looks like a real scientific paper, or some result gets published somewhere that looks impressive, you cannot then safely conclude that what it says is true.


Rather, we tell our students, you have to read it not just for the results, but for the degree to which the paper itself does what a serious piece of research should.  Does it, at a minimum, provide you with enough information to ask intelligent questions about what it purports to show?  If, as here, you see such a broad tell as the word “exhibit,” then you have to know that this demands a lot more digging before you can accept its claims.  The say-so of the paper and its authors isn’t enough; they’ve told you so themselves.


It is tempting simply to ignore any paper like this one — anytime someone tells you that they’ve come up with some complicated model that gives a magic answer, a long life in science writing tells you that they are blowing smoke.  Remember: big claims require big justification.

Over time, with experience in the business (either that of science or science writing) you learn when to get revved up about something, and when to sit back and let shoddy work slide by without close examination.  Life is too short to spend one’s time doing what xkcd so famously documented.


But let’s give the Rand paper, and McArdle, yet more benefit of the doubt.  All that I’ve said above suggests that the Rand paper itself is telling you that you need to dig deeper before you rely on it.  Who knows?  Maybe its conclusions are true, even if it is impossible to determine that from the evidence presented.


Well, I haven’t done anything like a proper job of reporting to that depth.  But what I got in a morning’s reading and calling is a strong hint that the Rand paper is, as expected, propaganda, nicely garbed in Rand blue.


For the details….look to part four.

Images:  Rube Goldberg cartoon.

xkcd, “Duty Calls.”




It’s not that McArdle can’t read…it’s that she can’t (won’t) think: part two.

October 7, 2009

So:  on to the bill of particulars on McArdle’s recent attempt to claim the intellectual high ground in her ongoing attempt to convince us that we live in the best of all possible drug markets. [Part one is here]

I’m not going to fisk the entire piece in question.  Instead, I’m going to focus on one passage in which she invokes the research community to defend her assertion that artificially high US drug prices for big pharma are essential to the future of drug innovation.  You can see, in the way she treats this literature, that she either doesn’t or willfully won’t engage her subject at the level that would allow her to make believable arguments.

She introduces her bravura display of rigor this way:

…we could go to the academic literature.  Not the literature from advocacy groups which too often fills the pages of political magazines on the left and right, but something from someplace like Rand.  And fortuitously, Rand happens to have published a paper on this very topic!

McArdle goes on to quote at length a passage about what would happen to longevity if the US imposed price controls on pharmaceuticals to bring US costs down to those paid by Europeans (about 20% less than current prices, according to the paper).

McArdle then seeks to emphasize the urgency, even the moral quality of her concern for maintaining the status quo in pricing by citing this conclusion from the Rand group:

…. the introduction of price controls would reduce life expectancy by two-tenths of a year for Americans ages 55-59 alive in 2010 and by one-tenth for Europeans ages 55-59 alive in the same year. In percentage terms, these correspond to 0.8 percent and 0.7 percent declines from the status quo.


And, just to finish laying the groundwork, she adds one more cite from the professional literature to affirm the authority of the quite striking claim above:

If you’re wondering how much levels of spending matter, you could go to Acemoglu and Linn, who estimate that a 1% increase in market size (aka revenue) for pharmaceuticals results in a 3-4% increase in the number of drugs being approved.

Sounds pretty devastating, right?
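To see why it sounds devastating, spell out the naive arithmetic (the market-share figure below is my made-up assumption, and applying the elasticity to gross revenue is precisely the move the rest of this series disputes — neither paper lays the calculation out this way):

```python
# Back-of-the-envelope version of the scary inference -- illustrative only.
elasticity = 3.5          # midpoint of the quoted 3-4% per 1% of market size
revenue_cut_pct = 20.0    # the ~20% US price cut discussed in the Rand paper
us_share_of_market = 0.5  # hypothetical: US fraction of relevant revenue

# Naively treat the within-category elasticity as if it applied to
# gross worldwide revenue:
global_revenue_change = -revenue_cut_pct * us_share_of_market   # -10.0
implied_approvals_change = elasticity * global_revenue_change   # -35.0

print(f"Implied change in drug approvals: {implied_approvals_change:.0f}%")
```

A one-third collapse in drug approvals would indeed be alarming — which is why the provenance of each step in that chain deserves the scrutiny it gets below.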


Well, yes…and that ought to be the clue.  In science, and in common experience too, of course, the rule of thumb is that the more striking the claim the greater the appropriate level of skepticism.   So before you endorse or adopt such positions, you need to test the inference.


There are a number of ways to do so, of course.  Step one is to consider the source.


Did McArdle?  Not really.  A first reality check comes from an inquiry into the background of the Rand study.


Go to what the Rand paper actually says.  It analyzes two cases:  either reduce payments to drug companies, or increase subsidies to consumers, to produce the same effect on consumer pocketbooks (absent the tax consequences of the latter policy).  Reducing drug expenditures saves consumers money but, according to this analysis, costs them life expectancy.  Subsidies leave consumer finances unchanged, but do not impose the cost in months of life lost.  As the value of life in the model exceeds that of the saving on drug costs, the conclusion is obvious:  no attempt to reduce drug company receipts should be made; policy makers concerned about the effects of the cost of health care should instead focus on further subsidizing the purchase of drugs.


That is:  pay the man, or we will kill grandpa before his time.*


But then, if you go on to read to the end of the study, you find something interesting.  The study was not a piece of social science research undertaken by a body of disinterested researchers. Rather, you are reminded that Rand is a private, nonprofit research shop, available to perform academic-level, but not academic-housed studies for those willing to pay.  The lead funder for this study?


Pfizer.


Which, if you’re interested, is, by a wide margin, the largest pharmaceutical company in the world.


McArdle does not point this out.  I’m not sure if she noticed it in her first reading of the piece.  She does respond in the comment thread to a reader who pointed this out, writing, “If you can find articles on the subject that are not funded by an institution with a clear dog in this fight, please send them. Rand is a widely respected institution.”


This is…how to put it…seriously weak sauce.


Juxtapose it with her snark about “the literature from advocacy groups which too often fills the pages of political magazines on the left and right.”


In other words, she’s relying on the argument from authority, again:  Rand is respectable…a member of the village.  The fact that it is an intellectual gun-for-hire does not seem to matter to her, and of course her defense — that everybody does it —  is wrong, a false statement.


You don’t have to go far to find the confounding counter-example.  The other paper she cites (on which more later) was written by two economists, both then at MIT.  The work, published in the Quarterly Journal of Economics, lists its outside funders:  first the National Science Foundation, and then the Russell Sage Foundation, a one-hundred-year-old philanthropic institution with a focus on “the improvement of social and living conditions in the United States.”


Oh well…


Now of course, the fact that Rand was hired by the world’s largest drug company, and then produced a paper which argued that the pharmaceutical industry’s revenue should under no circumstances be cut unless you are willing to accept death and lamentations, is not in itself prima facie evidence that this paper is a put-up job, astroturf research with Rand serving as the cut-out for big Pharma.


But it does, or it should, compel you to interrogate the paper with great care.


And for that:  look to part three of this series.


*Or perhaps, if you follow the learned doctor M. Python, pay the man and we’ll kill grandpa before his time…;)


A brush with the US Health Care System: one man’s confession.

April 11, 2008

I talk the talk about health care. Do I walk the walk?

Here’s the talk: the US approach to health care is many things, not all of them bad, but taken all-in-all, scientific is not the first epithet that comes to mind.

One of the best ways into the understanding that American medicine still has the norms of a guild craft, rather than those of a scientific discipline, is to work through the evolution to be found in Atul Gawande’s essays.

Take a look at his first collection, Complications. In it, Gawande documents his observations of medical practice in the context of several attempts to impose some experimental and quantitative rigor on community or individual customs of treatment. Gawande subtitled the book “A Surgeon’s Notes on an Imperfect Science,” and in it he explores the line between the tradition of individual care, a patient by patient piece-work approach, and emergent attempts to apply more abstract scientific methods to the analysis and delivery of medical practice.

Several essays in that collection expose the tension in Gawande’s own mind between the two approaches. In one, “The Computer and the Hernia Factory,” after concluding that the data collection and analysis approach produced more consistently positive results than did a “doctor’s instinct,” master/apprentice approach to teaching and doing medicine, he still brings back the individual doctor to the center stage — not necessarily as the decider(er), but as the mediator between machine and patient: “Maybe machines can decide, but we still need doctors to heal.”

Years later, moving from resident to attending, Gawande is a little less patient. His essay, “The Checklist,” is essential reading for anyone who wants to know why discussions of health care costs are meaningless until we ask the question of how to drive the core ideas of science and engineering into American medical practice.*

In it, Gawande describes how the use of a simple checklist greatly improved quality of care and outcomes in Michigan ICUs, in a protocol designed by Dr. Peter Pronovost of Johns Hopkins. Items on the list were in some cases absurdly basic: the first item was to remind doctors to wash their hands with soap! I won’t go on — the article was extensively blogged and is there for anyone to read at the link above.

The point is that a simple bit of engineering/systems analysis — Gawande compares Pronovost’s insight to the approach that created the pilot’s checklist, which made flying a plane as complicated as the B-17 possible — demonstrated that decades of ICU habits of ad hoc care, responding to each patient and event as if each were new on each encounter, actually killed people who did not need to die. It also saved a ton of money.

That’s the good news: Gawande has come round to the view that I think is correct, that there is a whole lot of discipline needed in medicine that does not come from the traditional medical education. It’s a very hard problem, because there is so much specificity in the things that can go wrong with a human body. But there is a baseline of repeatable, analyzable common practice that suggests we could do hugely better than we do. (I’m not even going into the disparities of outcomes research — see this blog entry for a sample.)

All of which leads me to my own little story. This week I made the mistake of eating some really good seafood at breakfast that had passed its plausible sell-by date.

I got nastier all day, until, at about 5:30, I dragged my queasy, achy carcass to my car to pick up my son from his afterschool gig. About a quarter of a mile from my home, I had to pull the car over, lean on a cemetery fence (nice short commute there…), and then slowly sink to the ground.

Big drama. Middle aged guy collapsing on a public street in a quiet Boston suburb.

The good news: somehow, in this den of godless, value-free, post-modern liberalism, half a dozen people sprang up out of bare asphalt, as far as I could tell, to take care of me. The guy across the street saw me go down, and ushered me into his home. A woman wanted to help so badly that she went back to her house to bring me a glass of water — while I was sitting in the first guy’s kitchen! An older couple joined my first benefactor to help me stumble across the rather busy road, and so on. (All this by way of reconfirming that by most social welfare statistics, blue Massachusetts beats the Bible Belt on issues related to “values”.)

Someone called 911, and in a couple of minutes a cop, the EMTs, the fire department, and then finally an ambulance showed up. I was blood-pressured and poked and questioned again and again. (I finally answered, when asked my address for the third time, “yes, I am oriented to space and time.” They stopped… for about five minutes.) I tried to beg off — I had started to feel better, and I just wanted to go back home once a couple of calls made sure that my son was in good hands — but no one wanted to let me get back behind the wheel of a car (smart).

So it was onto the gurney and into the ambulance, and off to the local ER.

Now – to recap the state of play here. I had eaten a bad piece of fish at 8 a.m. By 6:30, it had declared its full fury, but I had gotten through the excitement and was in fact feeling better. I was drinking water by mouth, and slowly getting my color and my balance back. The EMTs strung me up with a liter of saline, and that was helping speed the case along.

From where I sat, the proper horses-rather-than-zebras approach to my care in the ER would have been to take my vitals (which they did); talk to me for a bit (which they did, impressively quickly); and then let me sit with more fluids and see how I did for an hour or so.

What they did instead was a great big pile of blood work — four or five vials, I lost count; an EKG (reasonable, actually, given that an almost-fifty-year-old, out-of-shape guy falling down on the street does lead to thoughts of heart attack); a chest X-ray (?); another liter of saline — because my nurse decided on her own that she wanted to pound more fluids in — and some Compazine by IV push.

There is no question that it all worked. By 9:30 they were done with me. I felt better. My nausea was gone — it had been even before the Compazine, but that sure helped keep the prairie dogs in their holes. I was hydrated. I went home and had no more effects from the fish.

Which is to say that after I had gotten rid of whatever was bugging me in my highly exciting collapse, my body had finished taking care of the problem itself. I did have to be careful rehydrating — but there will be, I’m sure, a couple of thousand bucks of extra billing for my insurer to take care of that could have been at least partly avoided if I had just been put in the corner with some ginger ale under a watchful eye for an hour or two.

That said: I’m glad I got the care, or rather, at the time, once in the embrace of the EMTs, I handed over my autonomy willingly to the doctors, nurses and techs (really, the other way round), who ran me through the mill.

So: what about science and health-care economics? Clearly, once I went down, my caretakers had to eliminate the possibility that my self-reporting masked the most obvious dangerous alternative, the heart-attack scenario. After that — I’d like to know what the data suggest about chest X-rays in these circumstances. If what I received is the standard protocol, has anyone stopped to ask whether it makes sense?

The individual nature of medical care — the fact that one person (me!) receives the treatment, given under orders from one other person — makes rational investigation of such questions very hard. Now, in hindsight, I question the care I got, not because it was bad but because it seems likely to have been overkill (nasty choice of word, that). In the moment, though — have at it, folks. Do what you need; I really did not like the feeling of sliding down that burial-ground fence.

I offer no real conclusion. This is really just a meditation on why science matters. We can do medicine much better, from the basic understanding that is coming from molecular- and genetic-level investigations of pathology and health, to the discovery of the simplest abstractions that make everyday medicine as systematic and error-anticipating as possible. That’s the branch of medical research, drastically underfunded, that could most likely (certainly Gawande makes a good case) save the most money and lives of any short- or medium-term innovation imaginable. It might even have gotten me home an hour or so earlier, and saved somebody a thousand bucks or so. Not bad, I say.

PS: A great shoutout to Belmont, MA emergency services, the ER staff on duty last Tuesday evening at Mt. Auburn Hospital, Cambridge, and to the truly wonderful strangers of Grove St. who made sure I didn’t lie there groaning in the dust.

*(It’s also must reading for anyone with a taste for bitter absurdity. The protocol described in the article — the use of a simple checklist covering the insertion of catheters in Michigan ICUs — was briefly banned, despite obvious, overwhelming success, for violating patient-consent rules for experimental procedures. It was then reinstated by bureaucratic fiat: the process was declared a clinical rather than a research practice. That sounds fine — except that in fact even seemingly obvious changes in protocol are experimental until tested. As this NEJM article points out, a proper review procedure was available but was bypassed in the light of the publicity the hasty banning produced.)

Images: Michiel Jansz van Mierevelt, “The Anatomy Lesson of Dr. Willem van der Meer,” early seventeenth century. Source: Wikimedia Commons.

Osias Beert, “Still Life with Oysters,” 1610. Source: Wikimedia Commons.