
Sunday Post on Crypto, Trust, and Political Action on the Web — Outsourced to David P. Reed

September 26, 2010

I’m a lurker (mostly) on a listserv for MIT’s Center for Future Civic Media (C4), which pops up some fascinating discussions about news, social networking, and political life on and through the web.

Recently, there was a flurry of posts on the announcement from the Haystack project that work had halted on its system designed to encrypt and obscure the source of internet communications in Iran.

That announcement was followed by the effective end of the project, which had aimed at providing political dissidents secure ways to communicate.

That sequence of events led to considerable back and forth among the C4 community, in part looking at the perennial problem of hype in the tech/software world outpacing reality.

The more significant strand of the conversation (it seemed to me) focused on something else: the underlying issue of whether it is possible to produce a genuinely secure set-up that could enable the kind of sunlight the Iranian dissidents sought (and needed) and their supporters outside Iran hoped to provide.

That’s something of obvious (again, to me) importance, especially in the context of the broad privacy-for-connection trade-off we are all committing ourselves to these days.

In that vein, computer scientist David P. Reed (of MIT and much else besides) weighed in with the crucial observation, which he kindly gave me permission to post below.

The shorter version, just to get you going: computer/information security depends on two factors, the technical/technological and the human.  The strength or weakness of one factor does not alter the qualities of the other.  Therefore, no purely technological approach to information security (on which, in the Iranian case and many others, lives depend) can provide genuine safety.

Key quote (from David’s conclusion):

Here’s the problem, then: we can’t even *talk about* the technology clearly, because we want to impute properties of perfection, goodness, morality, etc. to it.

And now to the whole thing:

Poking around a bit more on the [Haystack] controversy, let me suggest that it has roots back to (the original “Swedish” anonymous remailer).  I (not so publicly) questioned crypto-activist friends at the time about their promotion of that service, given that there was no way they could *personally* assure us it was not a trap placed carefully by one or more government or quasi-government agencies.

The response I got was that it was based on public key crypto, and the guy operating it was a “good guy”.

In other words – the crypto (which was undoubtedly strong, and open source) and the “goodness” of the guy were given equal weight, and both had to be working to ensure privacy of communications.  This despite the fact that most of these friends, well-known political activists, had never met the guy personally!

Here’s the problem, as I mentioned in part in my invited talk at USENIX Security this year:

Humans are prone to the “fallacy of composition”.  That is, there are certain properties of systems that don’t pass from the parts to the whole.  (The parts may all have property X while the system as a whole does not; or the whole can have X when none of the parts do.)  Yet it is common for the brain to reason: “because one or more of the parts have X, the whole has X”.

Security is a set of qualities that are not composable.  They just aren’t.
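A toy sketch (mine, not David’s — the component names and the logging relay are hypothetical) of what non-composability looks like in practice: a cipher that is confidential in isolation, and a relay that forwards bytes faithfully, compose into a system that still betrays the one thing a dissident needs hidden.

```python
def encrypt(plaintext: str, key: int) -> str:
    # Stand-in for a strong cipher (a trivial reversible shift here).
    # Assume the real primitive is unbreakable; the point lies elsewhere.
    return "".join(chr((ord(c) + key) % 0x110000) for c in plaintext)

class Relay:
    """Forwards ciphertext faithfully -- but logs sender addresses,
    a behavior invisible from the cipher's specification."""
    def __init__(self):
        self.log = []  # the operator's (or an infiltrator's) record

    def forward(self, sender: str, ciphertext: str) -> str:
        self.log.append(sender)  # the whole system now links identity to traffic
        return ciphertext

relay = Relay()
ct = relay.forward("dissident@example.org", encrypt("meet at noon", key=7))

# The cipher kept the message confidential; the relay delivered it intact.
# Each part did its job -- yet the composed system exposes who is talking:
print(relay.log)  # ['dissident@example.org']
```

Each component satisfies its own spec; “secure” simply does not survive the composition, which is the fallacy at work.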

We buy into the fallacy of composition because we (Hillary Clinton, the press, …) want to believe that we can fix a problem merely by using some wonderful “part” – in this case Haystack.

So where I’m going with this is that perhaps, before we start trying to assign “blame” in this hype-fest, we should start by asking the question:
is it possible for someone to supply “security” in the form of an Internet service OF ANY KIND (open source or not, tested or not) that meets the goals?

Because security is not composable, the answer is NO.

So why are we beating up Haystack?  It can’t do the job, and one can tell just by looking at it from the outside – recognizing that any such system entails the fallacy of composition in many, many ways.

Is Tor better?  Not really.  If it had been reported like Haystack, it probably would have been “exposed” in the same way to have weaknesses that are honestly expressed by its own developers.  Would the developers have succumbed to the temptation to provide the “money quotes” supporting the hype?

What if Tor had been used by Iranian dissidents?   Given its weaknesses, they would surely have been putting their lives at risk, just as if Haystack were used.

I’d suggest that there is very little light, and a lot of heat, in the blogosphere and the press about this technology-centric view of political action.

There’s something broken in a world where someone can say with a straight face the phrase “liberation technology”!   Technology cannot be measured in that dimension in general, and if we are talking about the “fallacy of composition”, it applies hugely to the dimension of “liberty” (which has become a right-wing word) or “liberation” (the left-wing word).

Here’s the problem, then: we can’t even *talk about* the technology clearly, because we want to impute properties of perfection, goodness, morality, etc. to it.

To put all this another way, there is an old spook joke about secrecy and security:

How can you tell if a secret is safe?  If only two people know it…

…And one of them is dead.

My thanks to David for his willingness to share these thoughts with an audience beyond the C4 gang.

Image:  Henri Regnault, “The Spy,” 1880.

New Frontiers in Computer Security

April 1, 2010

A British company takes a genuinely innovative view of network protection, combined with an ingenious use of crowdsourcing.

(h/t Andrew Whitacre, communications maven of the MIT C4CM project).