Posted tagged ‘nuclear weapons’

From Megaflop to Petaflop … An alternate history of the last two decades.

June 10, 2008

This story was the first thing to catch my eye this week, partly because I’m a sucker for supercomputers — and more because, very belatedly, I remembered something I once knew quite well:

Each new computing milestone — just like every instrument, every tool we build — reveals an enormous amount about what we think is important in realms seemingly far removed from the hardcore passions of silicon fanatics. So the fact that IBM and Los Alamos National Laboratory put together a machine capable of petaflop speeds — that’s one quadrillion floating point operations per second — seems to me as important an event in our cultural history as it is in the technological record books. The overt goal is to create a machine capable of highly detailed simulations of the first instants of a fusion explosion…but the computer, dubbed Roadrunner, has an evolutionary descent and a surprising number of lines of connection to the rest of us in ways that have nothing to do with the bomb (or not much).

(I once actually was foolish enough to write a book on this theme. It’s now barely available, but it used various instruments (musical as well as scientific) to retell the classic story of the history of science from the Greeks forward.) But even before I wandered into that maze, I had gotten seduced by the ambition that has pushed supercomputing for decades.

My first brush with supercomputing came in the reporting that led to my first book, on climate science. Back in 1986, just as I gave up writing with chisel and slate, I went out to the National Center for Atmospheric Research to see why some of my climate buddies thought they could analyze problems like global warming with any degree of confidence. This was so long ago that NCAR still kept a couple of punch card readers ready to go. By law they were required to provide access to their systems to any qualified researcher who wanted to use them, and there were still a couple of old guys at some midwestern university who liked to batch process their decks of cards. (I think they programmed in Fortran, which was, for those who might not be familiar with languages popular during the age of steam-powered computing, the power-user choice.)

I was there at just the right moment to document the transition from their Cray-1A systems to a new Cray X-MP. The Cray-1 was an impressive machine, complete with its built-in upholstered bench and a design capable of executing a heroic 80 million instructions per second. There were some bottlenecks that could slow it down — not least the time it took a human being to hump one of the two-inch video tape reels on which data were stored. But still, it was the machine that made it possible to run some of the earliest plausible three-dimensional climate models in anything remotely like the time a researcher would be willing to wait.

The X-MP was heralded as a breakthrough — and a saviour. It was one of the first (if not the first — memory fades after a couple of decades) parallel vector processing supercomputers. Its four computing cores achieved speeds in excess of a billion operations a second, unbelievable at the time (I write this on a 1.67 GHz laptop — and while the numbers aren’t directly comparable, you get the idea).

In my encounter with the new machine I was struck by all the usual industrial extremes — the need to construct a new power substation, the network of people and lesser machines needed to keep the big dogs happy and so on. But most of all, I was fascinated — I remain so — by what in my naivete I described in this way:

“The fact that the pursuit of apparently simple questions like “Can we predict tomorrow’s weather today?” has culminated in the creation of its own infrastructure, a vast, expensive, complex set of institutions…without which the machines could not function, the models could not run, and the scientists could not think.”

Hold that idea of an infrastructure, a social, institutional and technological network, and flash forward to the petaflop machine. (I haven’t even tried to count the orders of magnitude gained in speed in twenty years. In technical terms: a boatload.)
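For anyone who does want to count, the back-of-the-envelope arithmetic is short. Taking the figures quoted in this post (the Cray-1’s roughly 80 million operations per second against Roadrunner’s petaflop, and ignoring that instructions-per-second and flops aren’t strictly the same currency), the gap works out to about seven orders of magnitude:

```python
import math

cray_1 = 80e6      # Cray-1: ~80 million operations per second, as quoted above
roadrunner = 1e15  # Roadrunner: one petaflop (a quadrillion flops)

speedup = roadrunner / cray_1
orders = math.log10(speedup)
print(f"Speedup: {speedup:,.0f}x — about {orders:.1f} orders of magnitude")
```

A factor of 12.5 million, in a little over two decades.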

First: check out the direction of the arrow of technology. Back in the Cray-1/X-MP days, supercomputers had a few uses, and depended on highly specialized, large, expensive, purpose-built components to achieve the results that mostly government clients required, mostly for military purposes. (Obviously, the fact that NCAR was a bleeding-edge, “we’ll debug the beast for you” customer tells you that there were more than bomb builders interested in big calculations. But the national weapons labs and NASA were clearly the driving clients for the business.)

The key point, though, is that in the 70s and 80s it was assumed that such applications were so specialized that there was in essence no contact between the needs of commercial, or heaven forfend, individual, customers and those who wanted to model global atmospheric dynamics or the behavior of the explosive lens in a fission trigger mechanism.

That notion was challenged by (among others) one of the noble failures of the eighties and early nineties, the Thinking Machines company founded by Danny Hillis and Sheryl Handler. The company built Connection Machines — massively parallel computing systems with as many as 64 thousand processors. By design, Thinking Machines used the cheapest components it could find to populate such unprecedented arrays — the CM-5 used a standard commercial processor, Sun’s SPARC chip, to perform enormously involved calculations.

Thinking Machines, like most supercomputer makers before it, still lived and died by military contracts, and its path to bankruptcy was paved by (a) the complexity of programming efficiently for a massively parallel computation, and, more important, (b) the drying up of DARPA contracts. But their approach marked at least one milestone in the extraordinary explosion of computing in our culture. Most histories focus on the personal computer, and certainly that’s the visible social change. But consider at least one of the applications to which the CM was put. The fact that a supercomputer could be built out of off-the-shelf components made it cheap enough for Wall Street customers to consider it for use in working on novel kinds of securities — these things called mortgage-backed securities and derivatives.

No good deed goes unpunished.

Obviously, advances in computing aren’t to blame for the mortgage crisis; but once engineers and entrepreneurs figured out how to bring the cost per calculation down enough to bring supercomputing, if not to the masses, then to large private endeavors, the solvent impact of quantification had to move through our society faster than its institutions could anticipate.

What has this to do with Roadrunner? The democratization of powerful computing has leapt ahead in the last fifteen years. The late Connection Machines used mass-produced processors — but the SPARC was still a high-end chip. Roadrunner, this record breaker, uses computation engines from video game consoles. You know, the three-hundred-buck-or-so specialized computers that drive Grand Theft Auto around a screen. This machine is specialized, classified, and about to dive behind the armed perimeter at Los Alamos. But the underlying idea does not disappear, and game consoles are cheap. Look for ever more detailed simulations to come.

…which leads to simulations of what?

Anything you want. But in the context of this post (here, or here), let me steal what TM’s Danny Hillis used as the company motto: “We’re building a machine that will be proud of us.” It is, of course, very much an open question whether or not it will be possible to construct a plausible simulation of consciousness, or at least anything different observers would agree was such, from some accumulation of circuitry, software, and data.

But the aim of massive supercomputing is to create an abstracted version of reality whose correspondence to the external world is close enough to make events inside the computer predictive or descriptive of processes and outcomes out here.

Which is to say that the nuclear holocausts that will occur electronically inside Roadrunner are only one facet of our (hopefully never realized) experience; it and its kin will have much more to say, more to anticipate, about how we live now.

Image: Johannes Christiaan Schotel, “Stormy Weather off the Coast of Vlieland.” Source: Wikimedia Commons.

Program Notes: Who Patented the Bomb? Ask NPR.

March 30, 2008

Check this story out.

Here’s the backstory: in December 1938, Otto Hahn (without mentioning mentor/partner Lise Meitner) published the news that he and co-workers had identified the element barium in a sample of uranium that had been bombarded by neutrons. Meitner, of Jewish background, had, of necessity, abandoned her collaboration with Hahn and escaped to Stockholm earlier that year.

Still, she and her nephew, Otto Frisch, understood what had just happened: Hahn had achieved nuclear fission, the spectacularly unexpected splitting of uranium atoms.

By the happenstance of timing, this news came at almost the final moment for the next seven years that scientific communication would pass freely through the physics community. It was certainly almost the last time that a crucial result about the behavior of the atom would be so blithely broadcast to any and all…

…Or not quite, as the NPR broadcast linked above reveals. I’ve done a bit of reporting on atomic physics and the history of the bomb — not much, but not zero, either — and I never caught a whiff of the fact that the Manhattan Project filed something like 2,000 — two thousand!— patents on every angle they could find of design and engineering of the atomic bomb.

Patents are public documents, as the hero of the NPR story, Harvard graduate student Alex Wellerstein, noted. National security can intervene — but even when it does, a secret patent leaves traces behind, decay products as it were. As the story explains, should someone else — a German agent, say — want to know if America were working on a bomb, all he would have to do was file a patent application of his own on some aspect of nuclear weaponry, and a letter would come back saying, in essence, that the proposed invention had collided with a secret patent. Aha!

That never happened.

Do give the story a listen. It’s well done, and can be heard as a sidelight on the strangeness and the paranoia that accompanies every descent into a national security state.

But what gave me the most pleasure was hearing Philip Morrison remembered. Morrison had told Wellerstein that he had in fact filed a patent on the bomb (one that is still secret), and had signed his rights over to the US government for the princely sum of a buck a year — which was never paid.

I’m pretty sure that Morrison never tried to collect. I knew him a bit — never that well, but for a few years, his role as advisor to NOVA meant that I would see him and his wife Phyllis on regular occasions. He was a genuinely great man, and the one time the Morrisons came to my house for supper, I finally got my courage up to ask him what it was like to carry the plutonium core from Los Alamos to Alamogordo for the Trinity test.

He started speaking with a kind of a creak, as if he were resetting his mind to re-enter, and not just recall, the event. And then the story took over, and my wife and I just listened as the drive unfolded, and Morrison started bringing to life the feeling, the combination of youth (Morrison was all of twenty-nine years old), mastery, urgency — get the damn war done — and concern to make sure the damn thing worked.

Morrison was one of the unequivocally great figures I’ve had the good fortune to meet: smart, committed to right action, a small-d democrat in all his doings — he’d talk with pleasure to anyone who was willing to exercise their brains. He became a major figure in the physicists’ movement working to defang the nuclear threat.

But he never hid the fascination and the sheer intensity of emotion and experience that came with working on the Manhattan Project. Sitting there around a dinner table, just the four of us, listening to the journey re-imagined — the guts of the bomb in his hands. Amazing. It was a moment when being a historian seemed like the most fun it is possible to have, as so many lives and instants of place and time can, at lucky intervals, suddenly become imaginatively one’s own.

I’m still grateful to Phil (and Phyllis, who should never be left out of any memory of the Morrisons). He was kind to me and very helpful more than once. He had deeper relations with, and made a much greater impact on, lots of other folks, and I don’t want to claim more of an acquaintance, nor more influence, than was really there. But hearing a very nice bit of radio reminded me that I’d never acknowledged the real debt I owe him, and the great pleasure I took in the times I did get to hear what he had to say.

(Some other time, I’ll talk about an after dinner talk I heard him give to a very small and bumptious group of TV people who thought they knew about what mattered in 20th century science until they heard Phil’s defense of 1900.)

One last thing — a minor quibble with the NPR story. The story of the American patents on the bomb is, I think, genuinely new. But the broadcast did not mention something known for a while, and discussed in Richard Rhodes’ great book, The Making of the Atomic Bomb. Leo Szilard had been thinking about the possibility of nuclear chain reactions well before Hahn et al. achieved uranium fission. Living in Britain, he patented the idea in 1934 — and in 1936 assigned it to the Admiralty to make sure that the weapon implied (obviously, it seemed to him) by the phenomenon would remain secret.

Image: Albin Schmalfuss, Boletus luridus, 1897. Source: Wikimedia Commons