Archive for the ‘Mathematics’ category

What Does the Public Really Need To Know?: Science/Math edition.

July 14, 2008

So, last week I had the good fortune to junket in LA (thanks, History Channel — look for their latest Einstein documentary sometime between October and the new year) and, thus geographically advantaged, the chance to raise a glass or two with Sean Carroll and Jennifer (new digs) Ouellette (familiar haunt) — two of the brightest lights among those who blog the physical sciences.

Among much other discussion (how to do good science on television, whether there is any useful algorithm available to help navigate LA traffic) we drifted into that hardy perennial: what, really, does the general public need to know about science. Not for the greater good of science, not to secure more complaisant support for big accelerators or stem cell research, but for them/ourselves?

There are lots of facts that I think would give people pleasure — I love knowing that Albert Einstein patented a hearing aid (with Rudolf Goldschmidt); that chimpanzees fashion tools in the wild; that the first reaction written down in something like the modern form of a chemical formula was the one describing the fermentation of alcohol. There are ideas that are enormously powerful — and some of them are clearly of value as part of anyone’s mental apparatus in confronting daily life. (Natural selection offers insights well beyond the history of life, for example (though great care must be taken, as we know, to our sorrow), and as general a heuristic as Ockham’s Razor would help people deal with silly season stories like this one.)*

But while these and much more are part of what I think any education should provide, the question I asked over something-or-other in martini glasses last week,** and re-ask here, is what the minimal body of knowledge is that every adult should possess.

Regular readers of this blog will guess the answer I gave: the bare minimum is arithmetic, or more broadly, a grasp of quantitative reasoning and a set of simple rules to apply such reasoning in everyday life.

For example — these posts sought to illustrate the value of remembering to do something as basic as converting a cardinal number into a percentage, to make it possible to compare different data points.
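The habit is simple enough to show in a few lines of Python. The numbers here are made up for illustration — they do not come from the posts linked above:

```python
# Hypothetical incident counts for two cities of different sizes.
incidents_a, population_a = 1_200, 600_000   # invented city A
incidents_b, population_b = 900, 300_000     # invented city B

# The raw counts make A look worse; converting to a rate reverses that.
rate_a = incidents_a / population_a * 100    # incidents per hundred residents
rate_b = incidents_b / population_b * 100

print(f"A: {rate_a:.2f} per hundred; B: {rate_b:.2f} per hundred")
```

Same two data points, opposite conclusion, depending entirely on whether you remember to divide.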

Another example: the habit in this country of focusing on miles per gallon as a measure of fuel efficiency leads systematically to bad decision making. If we instead looked at gallons per mile (or per hundred miles), it would be clear that replacing a 16-mpg SUV with a 20-mpg station wagon is a much better choice than replacing a 34-mpg compact with a 50-mpg hybrid, assuming equal miles driven for each vehicle. No one reading this needs much help figuring out why — but for the details, listen to the NPR story from which this particular example came. (See — I had to say something nice about NPR after slagging them for their Shakespeare follies.)
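For anyone who wants to check the arithmetic, here it is, assuming (as the comparison does) equal annual mileage for each car; the 10,000-mile figure is an arbitrary choice for illustration:

```python
# Annual mileage is an assumption; any equal figure gives the same ranking.
miles = 10_000

def gallons(mpg):
    """Gallons burned per year at a given fuel economy."""
    return miles / mpg

suv_swap = gallons(16) - gallons(20)     # SUV -> station wagon
hybrid_swap = gallons(34) - gallons(50)  # compact -> hybrid

print(f"SUV swap saves {suv_swap:.0f} gallons a year")
print(f"hybrid swap saves {hybrid_swap:.0f} gallons a year")
```

The modest-sounding SUV upgrade saves 125 gallons a year; the glamorous hybrid upgrade saves about 94. Gallons per mile makes that obvious at a glance; miles per gallon hides it.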

In sum: I’ve been at the popular science game for a quarter of a century now. I’ve written about climate change and physics and cancer research and precision guided weapons and big telescopes and the origins of the pentatonic scale and I can’t remember it all now. I hope everything found some audience who got something out of it. But more and more now I look for stories that in their telling express some of the basic habits of scientific thinking — whatever the body of facts with which I may be dealing.

There is much more to such habits than a quantitative turn of mind — notions of observation, of framing answerable questions, and lots besides. But more and more the starting point seems to me to be conveying how much mastery of the world one can get from astonishingly simple acts of counting and comparing.

What do y’all think?

Update: See Chad Orzel’s recent post on John Allen Paulos’ Innumeracy for another swipe at the same problem. (h/t Bora)

*For an antidote to the “Who wrote Shakespeare” tomfoolery, you can begin here with James Shapiro’s latest — one of the best of a spate of Shakespeare-as-window-on-the-birth-of-the-modern books that have appeared recently.

**Fortunately, the waiter in the very chic bar in which the three of us chatted had never heard of what I tried to order, a French 75, which is the only reason I remained unfogged enough to have any kind of a conversation that night. Just the mention of it makes me feel a little shaky. Enjoy, but at your own risk.

Image: Codex Vigilanus, 976 C.E., in which Arabic numerals first appeared in a Western European manuscript. Source: Wikimedia Commons.

We Love Math, Electoral College Department.

June 12, 2008

Andrew Sullivan says that this question-and-answer is why he doesn’t do math.

That Sullivan is quantitatively challenged is, of course, no surprise to anyone who reads his blog on a regular basis.  But his excuse here is pretty lame.

The problem posed at FiveThirtyEight.com was “How many unique ways are there to acquire at least 270 electoral votes without any excess?”

The solution to that did indeed turn on a sophisticated application of combinatorial methods. According to the analysis by Isabel Lugo (posted at 5:22 p.m. on June 10), there are 51,199,463,116,367 different possible ways to accumulate 270, 271, 272 or 273 electoral votes.

Lugo’s solution does indeed demand both smarts and training, and she received her just due of praise in the comment thread. Though I can follow, gasping, the reasoning as she explained it, I certainly can’t claim any greater chance of cracking such a problem than Sullivan could — which is to say, none.

But Sullivan’s surrender (“it’s just too hard, and look at the cute, big number”) makes me crazy, and illustrates one of the persistent reasons why our discourse is so bad, why, as Brad DeLong keeps asking, we can’t get a better press corps.

That is: there is a difference between ignorance of advanced math (in which I take second place to no one), and an inability or unwillingness to master the basics of quantitative reasoning.

What’s remarkable is how far you can get with not that much, just a basic disciplined approach to simple concepts — estimation, use of ratios and so on.

And with such simple tools it is possible to get a handle, if not always a precise result, even for such subtle, complex problems as the electoral vote question that so flummoxed Sullivan.

As Lugo pointed out in introducing her analysis, her exact number was anticipated by a much simpler simulation posted by commenter Brian at 4:43. Even if you don’t follow Brian all the way through the simulation, his exercise begins with a simple piece of arithmetic that gives the first hint of the scale of a likely solution: with 51 jurisdictions there are 2 to the fifty-first power, or about 2.25 quadrillion, possible win/lose outcomes.

That’s enough to tell you from the start that you are dealing with a big number. The next steps take you further, and show how the simulation produces a plausible argument that the number of outcomes where the electoral vote totals hit the desired range (270-273) comes in at around two to three percent of that huge total number of outcomes — the same range as the 51.2 trillion outcomes that Lugo derived.

And my point is that whether or not you can imagine performing this bit of computer-mediated approximation, even the very first step, one that comes from high school math, is enough to get you into the right neighborhood, the right scale in which any answer will have to land.
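For the curious, here is a sketch of that kind of computer-mediated approximation. This is not Brian’s actual code, and the state-by-state electoral-vote list is my reconstruction of the 2008 allocation (51 jurisdictions, 538 votes), so treat both as assumptions:

```python
import random

# Electoral votes per jurisdiction, my reconstruction of the 2008 map.
ev = [55, 34, 31, 27, 21, 21, 20, 17, 15, 15, 15, 13, 12,
      11, 11, 11, 11, 10, 10, 10, 10, 9, 9, 9, 8, 8,
      7, 7, 7, 7, 6, 6, 6, 5, 5, 5, 5, 5,
      4, 4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3]
assert len(ev) == 51 and sum(ev) == 538

# Step one, the high-school arithmetic: the raw count of win/lose outcomes.
total_outcomes = 2 ** 51            # about 2.25 quadrillion

# Step two: sample outcomes uniformly and count how often one side's
# total lands exactly in the 270-273 "no excess" window.
random.seed(538)                    # fixed seed so the run is repeatable
trials = 200_000
hits = sum(
    270 <= sum(votes for votes in ev if random.random() < 0.5) <= 273
    for _ in range(trials)
)
share = hits / trials
print(f"roughly {share:.1%} of outcomes fall in the 270-273 window")
print(f"scaled up: on the order of {share * total_outcomes:.1e} such outcomes")
```

The point is not the exact answer — Lugo’s combinatorics delivers that — but that two steps of arithmetic plus brute sampling land you in the tens of trillions, the right neighborhood, in seconds.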

It’s a necessary skill for any reporter today, I think, really any citizen.  I won’t go here into the same riff I’ve blogged many times before.  I’ll outsource instead to my new blog humor BFF xkcd:

Welcome to Cosmic Variance folks — and a question

March 11, 2008

Welcome to all coming via Sean Carroll’s very kind shout out.

Come on in, look around, enjoy yourselves.

And if you have a moment, consider answering this prompt. In this post written a couple of weeks ago, I wrote a complaint about some lousy reporting on the housing crisis — but my larger point touched on one of the big themes of this blog, how applying even the simplest quantitative reasoning makes a huge difference to one’s ability to make sense of (detect the bullshit in) everyday experience. I argued that this was one of the foundations of what is often miscalled (IMHO) scientific literacy as it applies to the public. I pointed to a couple of examples, one from Freeman Dyson, and another by J.B.S. Haldane to show how such minimal math makes a difference in real science as well.

And then I made this request: Perhaps readers could be persuaded to post examples of what they think are elegant, simple insights about everyday experience such simple applications of math can give us?

Anyone want to belly up to the bar?

In any event — glad to have you all here.

Image: Hans Holbein, “Portrait of the Astronomer Nikolaus Kratzer (detail)” 1528. Image: Wikimedia Commons

I don’t know nuthin’ ’bout economics, but…: NPR/Henri Poincaré/Mortgage follies edition

February 25, 2008

Innumeracy is a problem I have come back to before and will come back to a lot here. But as I listen to more and more popular presentations of technical subjects, I still get astonished by the intersection of two structural problems in the media.

That is: many reporters — not so high a proportion of self-described science writers, though still plenty there — have trouble with even the most elementary uses of quantitative approaches to their stories because they just don’t think in numbers at all. That’s the negative way of framing the problem: journalists have a lack that inhibits their capacity to do good work in an ever more technically imbued world.

Then there’s the affirmative problem. Reporters establish stories by anecdote, by individual bits of data, single episodes. They’re called stories for a reason: the goal is to perform one of the most powerful acts of communication humans have figured out, to convey information that compels belief because its hearer can place themselves right into the narrative.

That’s why, to edge closer to the real subject of this post, so much of the reporting on the mortgage crisis (fiasco) centers on some family that’s about to lose a house, and spends little time on the meaning of the big numbers, like the implications of a repricing of US housing on a large scale. The point is that not only do many journalists not know a set of ideas that could help them figure out such things; what they do know leads them away from the kind of approach to their work that more mathematical sophistication would provoke.

But there’s a wonderful passage that bears on this from the great French mathematician Henri Poincaré in a collection of essays that greatly influenced the young Albert Einstein:

We can not know all facts, and it is necessary to choose those which are worthy of being known.

Choose? Worthy? Surely Poincaré is not going prematurely po-mo on us here?

Not really. The notion embedded in his deliberately provocative turn of phrase is that facts need form, some apparatus that can incorporate a given datum into a richer story — one with a meaning larger than that of a single incident. That apparatus is quantitative.

(BTW — I use the word “quantitative” rather than mathematical, because for a great deal of human experience, the math needed to make sense of what’s going on is not that complicated. It’s often a matter of counting, sorting, and extracting relationships within the formal limits of what you learn by the end of high school. I have posted on a couple of such examples from great scientists — Freeman Dyson, for one, and J.B.S. Haldane for another. There are lots more — perhaps readers could be persuaded to post examples of what they think are elegant, simple insights a bit of math can give us?)

All of this came to mind while I listened to NPR this morning.

This is the story that got me going — a short (1 minute, 10 seconds) reporter-voiced account of what seemed to the Morning Edition team to be something strange: even though the Fed is cutting interest rates, mortgage rates went up sharply last week. That ain’t how it’s supposed to be, according to the reporter, Adam Davidson, because when the Fed lowers its rates, other rates are supposed to drop.

The reason Davidson gave for what he saw as weird is not all wrong: he said that lenders are newly afraid of inflation, and hence want to charge a higher price for money that is going to be paid back over time.

But look at the unexamined assumption: that the Fed can control rates in general. That’s not true.

What’s missing here? An understanding of the real importance of time.

The Fed mostly exerts its influence on interest rates through the shortest of short-term instruments, the overnight federal funds rate — which is just the price banks pay for extremely brief loans required to keep their minimum reserves up to snuff.

But real people borrow money for houses on long time scales, most famously through 30-year mortgages. The enormous difference in the types and uncertainties of risk between those two scales of time serves at least partially to decouple the two rates — see the data to be retrieved here for a survey view of this.

So it is true that fear of inflation could push long-term rates up, whether or not the Fed was playing around with short-term rates. But so could lots of other things.

Perhaps the value of US real estate is unclear in a falling market, and thus lenders demand a risk premium before they lend against such difficult-to-value assets. Perhaps the overall creditworthiness of American real estate borrowers has dropped in the aggregate. Perhaps lenders fear that the secondary market for mortgages is going to get a bit less liquid. Lots of factors play into long-term interest rates that have nothing to do with the reasons the Fed makes its interest-rate decisions.

In other words: the NPR story was either meaningless or misleading. And it failed because the reporter glossed over, or did not fully understand, what the mortgage rate summarizes as a single number — all the complex calculations of risk and profit that underpin the decision of whether or not to make a loan.

What I would have loved to hear instead of a “this fact is strange” report would be that story: how do interest rates express, quantitatively, our ideas about the future? It’s still a good, fully human story: those numbers tell us a tale about what we think we know about what’s coming down the pike — and how much, in dollars and cents, we fear changes in our perception of what we don’t know.
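One toy way to see a rate as a bundle of beliefs: split it into the pieces lenders care about. Every number below is invented for illustration, and real loan pricing is far messier than a three-term sum:

```python
# All three inputs are assumptions, not market data.
real_return = 0.025         # what the lender wants after inflation
expected_inflation = 0.030  # belief about prices over the loan's life
risk_premium = 0.010        # default, liquidity, falling-collateral fears

# The single quoted number rolls all three beliefs together.
mortgage_rate = real_return + expected_inflation + risk_premium
print(f"quoted rate: {mortgage_rate:.1%}")
```

The Fed’s overnight rate bears directly on none of the last two terms, which is one simple way to see how long rates can rise while short rates fall.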

Image: Rembrandt van Rijn, “The Money Changer,” 1627. Source: Wikimedia Commons.

The half a percent solution: More on why we are losing/have lost the war in Iraq

February 22, 2008

Congressman Patrick Murphy of Pennsylvania gave a brutally clear interview on NPR’s On Point program yesterday. The whole thing is worth a listen, but a key comment came fairly early on.

Murphy was promoting his new memoir and talking about his experience as a member of a unit of the 82nd Airborne in the immediate aftermath of the invasion. As the occupation was beginning, a total of 3,500 soldiers from that division had responsibility for a district of Baghdad that was home to about 1.5 million Iraqis.

Bit of background here: Murphy’s dad was (is?) a Philadelphia cop. Murphy had expected to be one himself, but the twists and turns of a somewhat hell-raising youth led him to ROTC and a career in the US Army. But he knew from policing, and he’s got Philly in his bones. The key fact Murphy gave his listeners is that, from a policing point of view, Philadelphia is just about the same size as his area of operation in Baghdad.

How many cops does Philadelphia use to police its 1.5 million residents? 7,000. Oh — and a couple of other things: police officers in Philadelphia speak the local language, live in neighborhoods in (and, to be sure, around) the city, and many if not most have family roots that go back one or more generations into that community. The 82nd Airborne in Baghdad…not so much.

To Murphy this was just one more example of how badly conceived and led the Iraq operation was from the beginning. That’s certainly true, and the more important meaning of the comparison.

But to me, what stood out from that couple of sentences in an almost hour-long interview was the importance of scientific, and more precisely quantitative, reasoning in everyday life.

One of the great things about real quantitative reasoning is that it is a very efficient way to think about appropriate problems. Individual military engagements, of course, are all different; there are procedures, training and plans you can make to improve your odds of success, but there is no simple algorithm that is going to get your platoon through every contact with the enemy.

Warfare, however, does have some quantitative approaches that make individual successes more likely and minimize both the likelihood and the consequences of single setbacks. That’s the point behind the old cliché often attributed to Omar Bradley: “Amateurs study tactics, professionals study logistics.” (The quote turns up all over — see this for example.)

That is, actually getting right the calculation for the number of spare parts you need to keep a tank running across Russia makes a big difference to your chances of success — see Richard Overy’s excellent Why the Allies Won for details.

Coming back to Murphy’s anecdote, the other virtue of quantitative reasoning as applied to Iraq (besides being essential — i.e., we see what happens when our leaders ignore it) is that it is efficient.

It enables you to learn a lot about different courses of action, retrospectively or in prospect. And it does so very quickly. It turns out that in many situations you don’t need much knowledge to be able to infer a great deal more, with great confidence. Simple models based on relatively simple and easy to get data actually can do a lot of heavy lifting.

(As an aside — the PBS kids show Cyberchase takes this as its core theme. My seven-year-old son is addicted, and I’m glad.)

Try this one on for size: what are the minimums for policing large urban populations during an occupation? You could start by looking at a few large cities already at peace — Philadelphia, for example. We know, thanks to Captain-turned-Congressman Murphy, that Philly runs out at about one cop for every 200 citizens.

You can take that as a working average for cities with a diverse population with some identity divisions between them, a working civil government, an established rule of law, a common language, a shared history, and a fair number of common civic symbolic unifiers — a disdain for Santa Claus at Eagles games and so on.

In other words, 1/200 is your starting approximation for policing requirements when you begin to think about taking over the responsibility for order in an unfamiliar territory.

You would, if you were the least bit prudent (or if your own skin and the skins of the soldiers under your command were at stake), probably try to work out some of the factors that might alter that number: things like ethnic or sectarian divisions more intense than you were used to; language barriers; the absence of existing civil institutions; the lack of a history of the rule of law; and so on.

The bottom line is that there is no rational way to come up with a number smaller than that required to police a city at peace whose police force patrols with the active consent of the overwhelming majority of the policed.

Hence, the decision to station just 3,500 soldiers whose duties included but were not limited to maintaining civil order in an occupied city neighborhood as large as Philadelphia was an obvious error — one of a pattern of blunders that has cost so much for so long.

And the key lesson to draw out of all this belaboring of the obvious? It took only two numerical facts to reach that conclusion: the number of police in Philly and the number of troops in the neighborhoods of Baghdad. You don’t have to be brilliant to think clearly.
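The whole back-of-the-envelope calculation fits in a few lines, using only the two facts from the interview:

```python
# The only two inputs: Philadelphia's police force and the 82nd
# Airborne contingent, each responsible for about 1.5 million people.
philly_cops, philly_pop = 7_000, 1_500_000
troops, baghdad_pop = 3_500, 1_500_000

cop_ratio = philly_pop / philly_cops    # residents per officer, city at peace
troop_ratio = baghdad_pop / troops      # residents per soldier, occupied city

print(f"Philadelphia: one officer per {cop_ratio:.0f} residents")
print(f"Baghdad district: one soldier per {troop_ratio:.0f} residents")
print(f"shortfall: {troop_ratio / cop_ratio:.0f}x thinner than a city at peace")
```

Half the density of a peaceful city’s police, before adjusting for any of the factors that should push the requirement up, not down.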

You just have to choose to do so, and to employ the intellectual tools human beings have spent millennia developing.

I want a President who can count beyond “one, two, three, many.”

Image: Etienne Jeurat, “Prostitutes transported by the Police,” 1755. Source: (via Wikimedia Commons) The Yorck Project: 10.000 Meisterwerke der Malerei. DVD-ROM, 2002. Licensed under a GNU Free Documentation License.

On Being The Right Size (Hollywood edition).

January 25, 2008

I can’t believe it, but I am going to link to Gregg Easterbrook twice in one day without (too much) snark.

So, while his TMQ column for Monday (sic) did contain an elementary error (the planets move against a background of the “fixed” stars, not the other way round — which Easterbrook honorably corrected at the top of his next column), he gets something else quite right.

In a ramble through absurdities in the movie Cloverfield, he and his correspondents pause for a moment on the issue of the monster’s size:

TMQ’s estimate of 100,000 tons for the Cloverfield monster was based on the Empire State Building weighing 340,000 tons; TMQ assumed a biological object the size of that building might weigh less, containing no steel. Kendal Stitzel of Fort Collins, Colo., countered, “Therein lies the rub, for there is no known bony material that could support the weight of something that large without collapsing under the creature’s own mass. This is the famous square-cube problem: when a creature gets larger, its weight (which increases in proportion to volume) increases as the cube of the increased dimensions. The animal’s strength, however, can only increase in proportion to the square of the increase in dimension. Just as the Empire State is not supported by its masonry but by the steel and concrete structures inside, you would need some kind of similarly strong biological material to support any giant monster, be [it] Godzilla, Mothra or Cloverfield. There have been giant critters in the past, but no land mammal larger than the woolly mammoth. Whales are big, but their bodies are supported by water. Dinosaurs grew to be perfectly enormous; some were an order of magnitude larger than any other land creature since. Skeletal adaptations let them do this — but they were near the limit of what is possible for critters on our planet, and the largest dinosaurs reached only a fraction of the size of many movie monsters.”
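Stitzel’s square-cube argument takes only a few lines of arithmetic to check; the factor of ten here is an arbitrary choice for illustration:

```python
# Scale a creature's every linear dimension by k and track what
# happens to the load its bones must carry.
k = 10                        # ten times taller, longer, wider (arbitrary)

weight_factor = k ** 3        # weight grows with volume
strength_factor = k ** 2      # bone strength grows with cross-section area
stress_factor = weight_factor / strength_factor   # load per unit of bone

print(f"{k}x the size means {stress_factor:.0f}x the stress on every bone")
```

The stress on the skeleton grows linearly with size, which is why there is some size past which no bone, however cleverly adapted, can hold a land animal up.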

Readers with a taste for both great science writing and the history of modern biology probably know the ur-form of this idea as expressed by the great British biologist J. B. S. Haldane, in his classic essay, “On Being the Right Size.”

Read the whole thing. It’s smart, witty, elegantly written, and it contains one of the earliest popular accounts of perhaps the most important single change in the practice of biology in the last century. Haldane himself was one of the pioneers in the mathematical treatment of natural selection and evolutionary theory, and he introduced the general public to the virtues of applying even the simplest quantitative ideas in “On Being the Right Size,” a simple, virtuoso tour through the implications of scale for everything an organism might want to do.

And in making the point that Easterbrook’s correspondent, Kendal Stitzel, picks up, Haldane produced one of the truly great passages in all of science writing — the quotation of which is the reason for this entire post:

You can drop a mouse down a thousand-yard mine shaft; and, on arriving at the bottom, it gets a slight shock and walks away, provided that the ground is fairly soft. A rat is killed, a man is broken, a horse splashes.

Splashes!

That’s real writing. Once read, it is impossible to forget the idea within the image.

Image: The Darley Arabian (one of the three founding horses of English thoroughbred brood stock). After 1704. Source: Wikimedia Commons.