Archive for the ‘brain and mind’ category

With Apologies to …

July 16, 2008

….Brad DeLong (and to you readers, to whom this post was promised yesterday)…

UPDATE: Arrrgh. More apologies to all here. Brain bubbles affected my attribution of smallpox vaccination to Jonas Salk, who, of course, invented the first effective polio vaccine. Edward Jenner performed the first smallpox vaccinations with a cowpox preparation in 1796. I conflated the two in my head because I have been thinking about the difficulties faced in eradicating polio, compared with the success of the anti-smallpox campaign — which in fact formed the prompt for this post on the latest reported polio case in Pakistan. I regret the error.


Why oh why can’t we have a David Brooks-free press corps, at least when it comes to bloviating about science?

In his most recent column, Brooks writes (under the pretentious and meaning-free headline, “The Luxurious Growth”) that the research community has grown “more modest about what we are close to knowing and achieving.”

That is, Brooks is once again channeling what “science” thinks — and he’s wrong, of course.

Headline writers may have made the kinds of claims he decries, that genetics would soon explain all of human behavior, but I can’t recall any scientist involved in, say, the genetics of alcoholism, claiming a single gene-behavior connection. Instead, fifteen seconds on Google turns up lots of statements like this.

Alcoholism is a complex, genetically influenced disorder. Multiple phenotypes – measurable and/or observable traits or behavior – contribute to the risk of developing alcoholism, particularly disinhibition, alcohol metabolizing patterns, and a low level of response (LR) to alcohol.

In other words: scientists have long known, as they do their research, that individual studies of particular measurable and/or observable phenomena will not produce a synoptic view of any complex behavior. Brooks knows this too. After all, with a magisterial air of explaining the hard truths to resistant materialists, he writes that

It’s now clear that one gene almost never leads to one trait. Instead, a specific trait may be the result of the interplay of hundreds of different genes interacting with an infinitude of environmental factors.

Brooks must know this too — I can’t believe he’s that bloody ignorant, though perhaps I’m just too much of a Pollyanna here.

Again — this is a revelation only to those who haven’t been paying attention for years. And I do think that Brooks knows that as well. But if he does, that means he has an ulterior motive for claiming that once-arrogant science has learned humility — and he does have one, the usual motive that data-averse ideologues acquire: nasty scientists who seek material explanations are evil:

Starting in the late 19th century, eugenicists used primitive ideas about genetics to try to re-engineer the human race. In the 20th century, communists used primitive ideas about “scientific materialism” to try to re-engineer a New Soviet Man.

And Jonas Salk, that commie, used his “primitive ideas” to invent a polio vaccine, the key step in what may yet, I hope, become the second-ever eradication of a human viral pathogen….and so on; this is an old and stupid back-and-forth.

Brooks wants to say that there are other sources of insight into the human condition — that “novels and history can still produce insights into human behavior that science can’t match.”

I’m not sure what he means by “match” in this case. I suppose we don’t need science to say that happy families are all alike (you sure about that, Leo?) or that England’s Catholic King James II fell not simply due to his religion but because of his political ineptitude. But such insights, no matter how valuable, are of a different quality, a different explanatory timbre, than that of the science which has investigated, for example, something as material and as essential to the human condition as the evolution of tool use.

But again — I fear it gives Brooks too much credit to engage the debate at this level. His goal is not to examine honestly the power and limits of scientific inquiry into human nature. The goal is to devalue the enterprise to the point that inconvenient facts can be ignored. Brooks gives the game away about halfway through the piece. He writes that

There is the fuzziness of the words we use to describe ourselves. We talk about depression, anxiety and happiness, but it’s not clear how the words that we use to describe what we feel correspond to biological processes. It could be that we use one word, depression, to describe many different things, or perhaps depression is merely a symptom of deeper processes that we’re not aware of. In the current issue of Nature, there is an essay about the arguments between geneticists and neuroscientists as they try to figure out exactly what it is that they are talking about.

Brooks takes as evidence of ignorance the fact that different disciplines argue about terms. By that token, as of 1900, the state of play on the nature of matter would have led us to conclude the issue was intractable. Chemists had used the concept of atoms as real material objects to enormous theoretical and practical advantage since the days of Dalton and Berzelius — that is, for a century or so.

Histories written from a physicist’s point of view, by contrast, commonly date the confirmation of the reality of atoms from Einstein’s 1905 papers on molecular dimensions and on Brownian motion. So — I guess for a century all those chemists had no idea what they were talking about.

In fact, of course, there are valuable, vital working definitions of depression, and they are involved in the still imperfect but real body of knowledge that identifies clinical depression as a material illness of the brain. That understanding is what permits interventions — chemical and surgical — that dramatically reduce human suffering in many cases. Cherry-picking disciplinary debates may give the appearance of deep disagreement — but doing so, as Brooks does, is really just garden-variety intellectual dishonesty. Put it another way: acknowledging limits to knowledge is not the same thing as denying the power of the same body of knowledge up to that limit.

But, of course, that’s what Brooks needs to do if he is to make his real point:

This age of tremendous scientific achievement has underlined an ancient philosophic truth — that there are severe limits to what we know and can know; that the best political actions are incremental, respectful toward accumulated practice and more attuned to particular circumstances than universal laws.

Nice sleight of hand, eh? Brooks is back in his most comfortable role, masquerading as the honest broker, while anyone within hearing had better hang on to his or her wallet. The con takes place in incremental steps. Limits to knowledge become “severe” — that is, foreseeably insurmountable. Sez who? Sez Mr. Brooks, of course. Trust him — he speaks so nicely and has a marvelous tan.

And then…we are supposed to pass over the lack of logical connection…that due to such scientific lacunae, it is a philosophical truth (no limits to knowledge for those emerging from the cave, eh?) that political incrementalism is best.

This is more than a logical idiocy. It is historical nonsense as well. Incrementalism is good sometimes — perhaps most of the time. But consider: it would have been respectful, of course, not to dismiss the loving succour of King George III, but John Adams, no incrementalist at the moment of truth, persuaded his compatriots otherwise. Humans have owned slaves since earliest human memory; surely, respect for accumulated practice makes the 14th Amendment a travesty. Particular circumstances can be invoked to justify polygamy and child marriage — and yet it seems possible to object on a range of more abstract and universal grounds. And so on….

That is — Brooks wants to be able to pick and choose, based on criteria known only to him, what change meets some ill-defined criteria of respect and particularity. This is nothing more than a cartoon version of what some conservatives say conservatism is about (though the last few years might give an honest man pause about the incompatibility of this flavor of conservatism and power). Brooks would rather not have to defend it in detail (see Revolution, American, in the paragraphs above), so instead he comes up with a parody of scientism and hopes that it sounds grand enough to deflect scrutiny.

As DeLong says so often, why, oh why, can’t we do better than this codswallop?

That is all.

Image: Vincent van Gogh, “Sorrow,” 1882. Location: Walsall Museum and Art Gallery, the Garman Ryan Collection. Source: Wikimedia Commons.

More Tragedy: Brain and Mind, Iraq Suicides edition.

July 8, 2008

In this post, published here and over at Cosmic Variance, I looked through the story of Iraq veteran suicides to speculate on the implications of the spread, from the neuroscience profession to the public, of the idea that what we perceive as mind, as our selves, is actually a phenomenon of our material brains.

That’s an important notion, one taken as a commonplace by just about every neuro researcher I know, and one that will, I still think, have a profound cultural impact, potentially as great as that of the concept of the descent of man from prior forms.

But then this story appears. Another man gone, to remind me and anyone who reads this of the tragedy that is the reality, the hard ground of fact and loss.

I have no deeper scientific argument that I want to pursue here, and I am not going to express any of the political thoughts that this story does evoke in me.

This is just a pause, to think about Joseph Patrick Dwyer, and those whose loss should not simply be aggregated into the accumulating totals — both the official count of war dead, and those, like Dwyer, who have paid such a terrible price outside the neat categories of conflict casualties.

My deepest sympathy to the family and friends of PFC Dwyer.

Image: Francisco de Goya, “Desastre de la Guerra (Disasters of War)” 1810-11. Source: Wikimedia Commons.

More on Vacation Recovery: Traveling with children edition

June 30, 2008

From the invaluable xkcd:

This reminds me of the old, old joke about the problem with neuroscientists. If you had a planet on which, for some reason, brain scientists dominated and electrical engineering had never taken off — but which was nonetheless awash in EM signals — imagine what would happen when a passing spaceship somehow drops a transistor radio into the hands of the eager savants. What would they do? At some point they would twist the volume knob past the little click and turn the device on (assume, for the sake of the joke, intact, fully charged AAs on hand). Nothing — the device is tuned to a particular frequency, and happens to pick up no signal. So next, the experts would do what neuroscientists do — open the thing up and start taking bits off to see what would happen. First to go is a transistor, and all of a sudden the little plastic box emits a fearsome screech.

Knowledge! Holding up the transistor in a pair of sterile forceps, the team leader proudly instructs his loyal students: “Here we have,” the gender-uncertain alien says, “the screech suppressor.”

I do remember the shocking experience of walking out of the hospital with our infant son and wondering where the hell the damn manual had gone.

Talking the Mental Illness Talk–OK. Walking the Walk?…

June 10, 2008

…Not so much, according to Michelle S.

Responding to my recent post on the Iraq War suicides and what they can tell us about the question of brains-and-minds, one of my favorite commenters (and a former student, much admired and much missed) weighs in from a position of much greater knowledge than I possess on issues of brains and mental illness. Michelle knows what she’s talking about and says it better than I could — so here is what she has to say:


I would love to think that some hope is justified here. Unfortunately, I don’t think that’s the case.

While it’s a major development (no pun intended) that the military is finally starting to take PTSD and other brain-related maladies seriously, I don’t think they’re anywhere near the level of understanding or action that is necessary to make any real progress. It’s one thing to admit that something exists–or at least sort of admit that it exists–but another to really do something about it. Admittedly, some of the military leaders seem to be trying. On the other hand, a lot of soldiers are still afraid to admit that they might be suffering from a mental illness.

Hell, plain old civilians are afraid to admit it, and for good reason in some cases, I might add. Stigma is far from gone in the US. As a country we offer sympathy and support for anyone suffering from an illness of the heart, the lungs, the kidneys, whatever, but the brain is still different to us somehow. We manage to forget that it is still an organ, albeit a really darned complicated one.

Then there’s the problem of mental illness in general–in fact, nearly any brain-related problem–being grossly misunderstood. I actually had a young mother say to me once, in regard to her two year old, “He’s so moody! I’m just terrified that he might be bipolar!” Dear, your child is not bipolar. He’s two. His “mental illness” is that he is a two year old. (And, I’m sure, by the time he’s five she’ll have him on ritalin–but NO ONE wants me to get started down that road.)

The public needs good, solid information about what mental illness is. The MSM has yet to provide that, in my opinion. Meanwhile, schizophrenics in particular continue to be demonized, even while they’re forced to live in a society that only treats mental illness as an afterthought. Have you heard even one Presidential candidate address the near-crisis that is geriatric mental health in the US? Ok, I’m preaching on a topic that is dear to my heart here, I know, but come ON–experts in that field are shouting at the top of their lungs about this problem. They have been for years. Why is no one listening? Why is insurance coverage for mental health such a joke?

Perhaps things are starting to improve. Perhaps the brain as a whole, with all of its complex subtleties, capabilities and limitations, will get the attention it needs and deserves. I hope that your hope is justified. But I think (and therefore I am–maybe) we’re a long way off.


Image:  Francisco de Goya, detail from No. 62 in the series Los Caprichos, 1799. Source:  Wikimedia Commons.

From Megaflop to Petaflop … An alternate history of the last two decades.

June 10, 2008

This story was the first thing to catch my eye this week, partly because I’m a sucker for supercomputers — and more because, very belatedly, I remembered something I once knew quite well:

Each new computing milestone — just like every instrument, every tool we build — reveals an enormous amount about what we think is important in realms seemingly far removed from the hardcore passions of silicon fanatics. So the fact that IBM and Los Alamos National Laboratory put together a machine capable of petaflop speeds — that’s one quadrillion floating point operations per second — seems to me as important an event in our cultural history as it is in the technological record books. The overt goal is to create a machine capable of highly detailed simulations of the first instants of a fusion explosion…but the computer, dubbed Roadrunner, has an evolutionary descent and a surprising number of lines of connection to the rest of us in ways that have nothing to do with the bomb (or not much).

(I once actually was foolish enough to write a book on this theme. It’s now barely available, but it used various instruments (musical as well as scientific) to retell the classic story of the history of science from the Greeks forward. But even before I wandered into that maze, I had gotten seduced by the ambition that has pushed supercomputing for decades.)

My first brush with supercomputing came in the reporting that led to my first book, on climate science. Back in 1986, just as I gave up writing with chisel and slate, I went out to the National Center for Atmospheric Research to see why some of my climate buddies thought they could analyze problems like global warming with any degree of confidence. This was so long ago that NCAR still kept a couple of punch card readers ready to go. By law they were required to provide access to their systems to any qualified researcher who wanted to use them, and there were still a couple of old guys at some midwestern university who liked to batch process their decks of cards. (I think they programmed in Fortran, which was, for those who might not be familiar with languages popular during the age of steam-powered computing, the power-user choice.)

I was there at just the right moment to document the transition from their Cray 1A systems to a new Cray X-MP. The Cray 1 was an impressive machine, complete with its built-in upholstered bench and a design capable of executing a heroic 80 million instructions per second. There were some bottlenecks that could slow it down — not least the time it took a human being to hump one of the two-inch videotape reels on which data were stored. But still, it was the machine that made it possible to run some of the earliest plausible three-dimensional climate models in anything remotely like the time a researcher would be willing to wait.

The X-MP was heralded as a breakthrough — and a saviour. It was one of the first (if not the first — memory fades after a couple of decades) parallel vector processing supercomputers. Its four computing cores achieved speeds in excess of a billion operations a second, unbelievable at the time (I write this on a 1.67 GHz laptop — and while the numbers aren’t directly comparable, you get the idea).

In my encounter with the new machine I was struck by all the usual industrial extremes — the need to construct a new power substation, the network of people and lesser machines needed to keep the big dogs happy and so on. But most of all, I was fascinated — I remain so — by what in my naivete I described in this way:

“The fact that the pursuit of apparently simple questions like ‘Can we predict tomorrow’s weather today?’ has culminated in the creation of its own infrastructure, a vast, expensive, complex set of institutions…without which the machines could not function, the models could not run, and the scientists could not think.”

Hold that idea of an infrastructure, a social, institutional and technological network, and flash forward to the petaflop machine. (I haven’t even tried to count the orders of magnitude gained in speed in twenty years. In technical terms: a boatload.)
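For the curious, the back-of-the-envelope count is easy enough — taking the Cray 1’s 80 million operations per second and Roadrunner’s quadrillion at face value (a rough comparison, since instructions per second and floating point operations aren’t quite the same currency):

```python
import math

cray_1 = 80e6       # Cray 1: ~80 million operations per second, as cited above
roadrunner = 1e15   # Roadrunner: one petaflop

# How many powers of ten separate the two machines?
print(round(math.log10(roadrunner / cray_1), 1))  # -> 7.1
```

Call it seven orders of magnitude, give or take.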

First: check out the direction of the arrow of technology. Back in the Cray 1/X-MP days, supercomputers had a few uses, and depended on highly specialized, large, expensive, purpose-built components to achieve the results mostly government clients required for mostly military purposes. (Obviously, the fact that NCAR was a bleeding-edge, “we’ll debug the beast for you” customer tells you that there were more than bomb builders interested in big calculations. But the national weapons labs and NASA were clearly the driving clients for the business.)

The key point, though, is that in the 70s and 80s it was assumed that such applications were so specialized that there was in essence no contact between the needs of commercial, or heaven forfend, individual, customers and those who wanted to model global atmospheric dynamics or the behavior of the explosive lens in a fission trigger mechanism.

That notion was challenged by (among others) one of the noble failures of the eighties and early nineties, the Thinking Machines company founded by Danny Hillis and Sheryl Handler. The company built Connection Machines — massively parallel computing systems with as many as 64 thousand processors. By design, TM used the cheapest components it could find to populate such unprecedented arrays — the CM-5 used a standard commercial processor, Sun’s SPARC chip, to perform enormously involved calculations.

Thinking Machines, like most supercomputer makers before it, still lived and died by military contracts, and its path to bankruptcy was paved by (a) the complexity of programming efficiently for a massively parallel computation and, more important, (b) the drying-up of DARPA contracts. But its approach marked at least one milestone in the extraordinary explosion of computing in our culture. Most histories focus on the personal computer, and certainly that’s the visible social change. But consider at least one of the applications to which the CM was put. The fact that a supercomputer could be built out of off-the-shelf components made it cheap enough for Wall Street customers to consider it for use in working on novel kinds of securities — these things called mortgage-backed securities and derivatives.
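To see why such securities eat computation, here is a toy sketch — not anyone’s actual model, every number invented purely for illustration — of the basic trick: price a mortgage pool by averaging discounted cash flows over many simulated interest-rate paths, with prepayment behavior that shifts as rates wander.

```python
import random

def price_mbs(n_paths=2000, months=360, coupon=0.0065, seed=42):
    """Toy Monte Carlo price of a $1 mortgage pool.
    All parameters are invented for illustration, not calibrated."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        rate = 0.005                           # starting monthly short rate
        balance, discount, pv = 1.0, 1.0, 0.0
        for _ in range(months):
            rate = max(0.0, rate + rng.gauss(0.0, 0.0004))  # random-walk rates
            discount /= 1.0 + rate
            # borrowers refinance faster when rates fall below the coupon
            prepay = 0.02 if rate < coupon else 0.005
            cash = balance * coupon + balance * prepay  # interest + prepaid principal
            pv += discount * cash
            balance *= 1.0 - prepay
        pv += discount * balance               # principal still outstanding at maturity
        total += pv
    return total / n_paths

print(price_mbs())
```

The point is the shape of the workload: the paths are independent of one another, so they parallelize beautifully across thousands of processors — exactly what a Connection Machine offered — and real models multiply this toy by millions of paths and far richer prepayment behavior.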

No good deed goes unpunished.

Obviously, advances in computing aren’t to blame for the mortgage crisis; but once engineers and entrepreneurs figured out how to bring the cost per calculation down far enough to carry supercomputing, if not to the masses, then to large private endeavors, the solvent impact of quantification was bound to move through our society faster than its institutions could anticipate.

What has this to do with Roadrunner? The democratization of powerful computing has leapt ahead in the last fifteen years. The late Connection Machines used mass-produced processors — but the SPARC was still a high-end chip. Roadrunner, this record breaker, uses computation engines from video games. You know, the three-hundred-buck-or-so specialized computers that drive Grand Theft Auto around a screen. This machine is specialized, classified, and about to dive behind the armed perimeter at Los Alamos. But the underlying idea does not disappear, and game consoles are cheap. Look for ever more detailed simulations to come.

…which leads to simulations of what?

Anything you want. But in the context of this post (here, or here), let me steal what TM’s Danny Hillis used as the company motto: “We’re building a machine that will be proud of us.” It is, of course, very much an open question whether or not it will be possible to construct a plausible simulation of consciousness, or at least anything different observers would agree was such, from some accumulation of circuitry, software, and data.

But the aim of massive supercomputing is to create an abstracted version of reality whose correspondence to the external world is close enough to make events inside the computer predictive or descriptive of processes and outcomes out here.

Which is to say that the nuclear holocausts that will occur electronically inside Roadrunner are only one facet of our (hopefully never realized) experience; it and its kin will have much more to say, more to anticipate, about how we live now.

Image: Johannes Christiaan Schotel, “Stormy Weather off the Coast of Vlieland.” Source: Wikimedia Commons.

Burrowing into tragedy: a story behind the story of the Iraq War Suicides.

June 5, 2008

Cross Posted at Cosmic Variance (thanks Sean).

My thanks to all here who gave me such a warm welcome on Monday (and, again, to Sean for asking me here in the first place).

This post emerges out of this sad story of a week or so ago.

Over Memorial Day weekend this year there was a flurry of media coverage about the devastating psychological toll of the Iraq and Afghanistan wars. The single most awful paragraph in the round-up:

“According to the Army, more than 2,000 active-duty soldiers attempted suicide or suffered serious self-inflicted injuries in 2007, compared to fewer than 500 such cases in 2002, the year before the United States invaded Iraq. A recent study by the nonprofit Rand Corp. found that 300,000 of the nearly 1.7 million soldiers who’ve served in Iraq or Afghanistan suffer from PTSD or a major mental illness, conditions that are worsened by lengthy deployments and, if left untreated, can lead to suicide.”

(For details and a link to a PDF of the Army report – go here.)

This report, obviously, is simply the quantitative background to a surfeit of individual tragedy – but my point here is not that war produces terrible consequences.

Rather, the accounts of the Iraq War suicides — 115 current or former servicemen and women in 2007 – struck me for what was implied, but as far as I could find, not discussed in the mass media: the subtle and almost surreptitious way in which the brain-mind dichotomy is breaking down, both as science and as popular culture.

How so? It is, thankfully, becoming much more broadly understood within the military and beyond that “shell shock” is not malingering, or evidence of an essential weakness of moral fiber. PTSD is now understood as a disease, and as one that involves physical changes in the brain.

The cause and effect chain between the sight of horror and feelings of despair cannot, given this knowledge, omit the crucial link of the material substrate in which the altered and destructive emotions can emerge. PTSD becomes thus a medical, and not a spiritual pathology.

(This idea still faces some resistance, certainly. I launched my blog with a discussion of the attempt to court martial a soldier for the circumstances surrounding her suicide attempt. But even so, the Army is vastly further along in this area than it was in the Vietnam era and before.)

Similarly, depression is clearly understood as a disease with a physical pathology that underlies the malign sadness of the condition. (H/t the biologist Lewis Wolpert for the term, and for his somewhat oddly detached but fascinating memoir of depression.)

This notion of the material basis of things we experience as our mental selves is not just confined to pathology. So-called smart drugs let us know how chemically malleable our selves can be.

More broadly, the study of neuroplasticity provides a physiological basis for the common sense notion that experience changes who we perceive ourselves to be.

All this seems to me to be a good thing, in the sense that (a) the study of the brain is yielding significant results that now or will soon greatly advance human well being; and (b) that the public seems to be taking on board some of the essential messages. The abuses (overmedication, anyone?) are certainly there. But to me, it is an unalloyed good thing that we have left the age of shell shock mostly behind us.

At the same time, I’m a bit surprised that the implications of this increasingly public expression of an essentially materialist view of mind haven’t flared up as a major battle in the science culture wars.

Just to rehearse the obvious: the problem with cosmology for the other side in the culture war is that it conflicts with the idea of the omnipresent omnipotence of God. The embarrassment of evolutionary biology is that it denies humankind a special place in that God’s creation, destroying the unique status of the human species as distinct from all the rest of the living world.

Now along comes neuroscience to make the powerful case that our most intimate sense of participating in the numinous is an illusion.

Instead, the trend of current neuroscience seems to argue that the enormously powerful sense each of us has of a self as distinct from the matter of which we are made is false. Our minds, our selves may be real—but they are the outcome of a purely material process taking place in the liter or so of grey stuff between our ears.

(There are dissenters to be sure, those that argue against the imperial materialism they see in contemporary neuroscience. See this essay for a forceful expression of that view.)

I do know that this line of thought leads down a very convoluted rabbit hole, and that’s not where I am trying to go just now.

Instead, the reports of the Iraq suicides demonstrated for me the way the news of the materiality of mind is slipping into our public culture without actually daring (or needing) to speak its name.

That the problem of consciousness is still truly unsolved matters less in this arena than the fact of fMRI experiments that demonstrate the alterations in brain structure and metabolism associated with the stresses of war or the easing of the blank, black hole of depression. The very piecemeal state of the field helps mask its potentially inflammatory cultural implications.

To me this suggests two possibilities. One is that when the penny finally drops, we might see a backlash against technological interventions into the self, like the one that has impeded stem cell research in the U.S.

On the other hand, I don’t think that the public can be motivated or even bamboozled into blocking the basic science in this field. Too much rests on the work; any family that has experienced Alzheimer’s knows just how urgent the field may be — not to mention anyone with a loved one in harm’s way.

This actually gives me hope for a shift in the culture war. For all the time and energy wasted over the last several years defending the idea of science against attacks on evolution, with the cosmologists taking their lumps too, the science of mind could force a shift in the terms of engagement, decisively in the right direction.

Or I could be guilty of another bout of wishful thinking. Thoughts?

Image: Brain in a Vat, article illustration. Offered in homage to my friend and source of wisdom, Hilary Putnam, who introduced the brain-in-a-vat thought experiment in this book. Source: Wikimedia Commons.

Lt. Whiteside — sad update

January 31, 2008

This is an update to an update, and a sad one too. The Washington Post today reported that Lt. Elizabeth Whiteside attempted to kill herself for a second time. The report, which came as the lede to a larger story on the record level of attempted and completed suicides among active-duty soldiers in 2007, goes on to say that she did so on the day the Army finally announced that it was dropping charges against her for incidents during her first suicide attempt, during which she pointed her gun at fellow soldiers.

In this second attempt, Whiteside tried to overdose on antidepressants and other drugs. She left a note that read, in part, “I’m very disappointed with the Army. … Hopefully this will help other soldiers.”

Whiteside is now in stable condition, and is due for a discharge from the Army that will preserve her access to mental health benefits. But a central implication of her story is that many others don’t make it.

There is an invaluable review of different strands of that larger story at Mike Dunford’s The Questionable Authority blog.

More than just covering the numbers, Dunford looks at possible causes for the wave of suicides, and gives a brief, depressing introduction to the structural problems within the Army that obstruct attempts to improve the mental health care system available to American service men and women.

Phil Carter’s Intel Dump picks up on the story too, leading with the New England Journal of Medicine report on the link between Traumatic Brain Injury (a blow to or into the brain) and Post-Traumatic Stress Disorder. There is a vigorous comment thread there, mostly populated by current or retired service members, that gets at some of the problems in the military’s handling of soldiers and marines with mental injuries suffered in Iraq and Afghanistan.

Given all that, I’ll just make the point I’ve tried to argue in several other posts: that when we talk about problems in the public engagement with science, it seems to me that it isn’t a deficit in specific knowledge that matters. Rather, it’s the habit of mind with which scientists approach empirical questions.

Consider this, from the Army’s first response to Lt. Whiteside’s case:

“Military psychiatrists at Walter Reed who examined Whiteside after she recovered from her self-inflicted gunshot wound diagnosed her with a severe mental disorder, possibly triggered by the stresses of a war zone. But Whiteside’s superiors considered her mental illness “an excuse” for criminal conduct, according to documents obtained by The Washington Post.”

At the hearing, Wolfe, who had already warned Whiteside’s lawyer of the risk of using a “psychobabble” defense, pressed a senior psychiatrist at Walter Reed to justify his diagnosis.

If your default on neuroscience is that it is “psychobabble” then, Houston, we have a problem.

In particular, neuroscience understands – this is a true banality – that mind is a phenomenon of brain; dozens, hundreds of lines of evidence show that beneath disordered behavior lie physiological derangements of the brain. If you are in charge of veterans who have undergone all kinds of stress, you don’t need to know what the latest fMRI study shows – but you do need to know that much.

Image: Francisco de Goya, “The Charge of the Mamelukes,” 1814. The reproduction is part of a collection of reproductions compiled by The Yorck Project. The compilation copyright is held by Zenodot Verlagsgesellschaft mbH and licensed under the GNU Free Documentation License.