Jane Galt wades bravely into the debate on open vs. closed security, which Stephen den Beste and Aziz Poonawalla have also engaged in. (Oddly, den Beste's piece on the subject has vanished from his site; I'll change this to a specific link if it comes back.)
Jane highlights the central problem here:
So on the one hand, releasing potential security holes to the entire population and allowing the giant processor that is the hearts and minds of the American public [to work on them] is an order of magnitude more likely to produce an innovative way to plug your hole than is asking a small team of experts to come up with the solution.
But on the other hand, some security holes don't have fixes; or at least, many of the cures are worse than the disease… At that point, releasing the information doesn't enhance your security; it gives the terrorists ideas, without producing new solutions.
The problem is, there’s no way of telling in advance which problems have solutions that just haven’t been discovered yet, and which are insoluble. They’re all insoluble to your little team of experts.
I think Jane comes very close here, but slightly misses a key point about what open efforts are good at, because she doesn't distinguish clearly enough between the two distinct phases of problem solving: the identification of a problem and the resolution of that problem.
Jane seems to argue that the American public in an open effort is naturally better at solving problems than any one group of experts would be. And I think that's likely true. But I would argue that the huge advantage an open effort has in finding problems does not carry over nearly as strongly to solving them: open is still better, but the gulf is not nearly as dramatic as it is with problem identification.
I freely admit I'm extending my own expertise, software development, to a problem domain it may not perfectly apply to. But in my experience, finding the actual cause of a bug is the truly hard part. Most software defects turn out to be very, very simple at their core: a variable misassigned here, a missing bracket there. And many of them are flatly "wrong," meaning there was a clearly correct way to do something under the rules of the programming language being used, and those rules were violated.
What this means is that fixing a software defect, once it has been clearly identified, usually doesn't require much creativity. Anybody competent in the language and technologies involved can do it.
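To make the analogy concrete, here is a minimal, hypothetical sketch (my own illustration, not drawn from any of the posts above) of the kind of defect I mean: an off-by-one error that can take hours to locate but seconds to repair once found.

```c
#include <stdio.h>

/* Hypothetical example: sum the elements of an array.
 * The original defect was an off-by-one in the loop bound:
 *
 *     for (int i = 0; i <= count; i++)   // reads one past the end
 *
 * Finding that line is the hard, expensive part. The fix below is a
 * one-character change that any competent C programmer can apply. */
int sum(const int *values, int count) {
    int total = 0;
    for (int i = 0; i < count; i++) {  /* fixed: strict < bound */
        total += values[i];
    }
    return total;
}

int main(void) {
    int data[] = {1, 2, 3};
    printf("%d\n", sum(data, 3));  /* prints 6 */
    return 0;
}
```

The asymmetry is the whole point: the search for the defect is the costly, parallelizable work; the repair itself is mechanical.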
And this may point to a key distinction that makes arguing from software development principles dangerous (Aziz in particular argues from software to defend the open approach): when we're discussing the real-life system of "American security," I don't think most of the bugs are simple at all. There are no universal "rules" to check against. So a degree of creativity may well be of benefit.
That caveat granted, though, I still believe there is reason to conclude that an open approach to identifying problems yields a greater gain than an open approach to solving them. Mainly, that's because identifying problems in a wide-open domain benefits greatly from an inherently parallel approach: having lots and lots of people looking at the system increases the likelihood that you'll find more defects, simply because more people can cover more ground more thoroughly.
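A back-of-envelope model (mine, holding individual skill equal purely for illustration) shows why identification parallelizes so well: if each of N independent observers has even a small probability p of noticing a given flaw, the chance that at least one of them catches it is 1 - (1 - p)^N. With p at 1%, a thousand observers push that chance above 99.99%, while a ten-person team stays below 10%. Resolution doesn't compound the same way: a given fix is either found or it isn't, and adding more untrained solvers contributes far less.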
This benefit doesn't apply as much to solving problems, though. Once a specific problem is identified, expertise in its exact area becomes more valuable, and the untrained masses of an open effort correspondingly less useful. Not useless (I still agree they can provide benefit), but the gain isn't as dramatic as in the identification phase.
So what to do?
My recommendation is to try to get the best of both worlds: drive the problem-identification piece as an open effort, but then funnel all the identified problems into a closed effort to solve them. I'd like to see a formal push from the government encouraging every citizen to think about their workplace, their home town, their areas of expertise, and come up with possible terrorist threats. Then have them call an 800 number staffed by a task force that receives the ideas and routes them, securely, to the government agency that should have responsibility for solving each one.
Could this result in 'leakage' that gives terrorists new ideas? Quite possibly. But I think the benefit would outweigh the risk; it seems about as good a compromise as can be found. And it would have the added benefit of genuinely involving the public in fighting terrorism. Some might argue that no politician will suggest this plan, because it will scare the heck out of people when they actually start thinking this way.
To me, of course, that’s part of its appeal. We should be scared, and the sooner the public as a whole realizes that, the more serious we’ll get about actually doing what we can to fix our security problems — and taking the fight to the enemies who would exploit them.