"Thus the first ultraintelligent machine is the last invention that man need ever make."
-I. J. Good, 1965
Superintelligent machines are expected to usher in an age in which the pace of technological innovation outstrips our ability to comprehend it. This era is known as the Singularity.
You say this Singularity thing sounds dangerous? Give that man a cigar.
The key issue is whether we can defend ourselves against runaway intelligence and technological prowess.
The worst strategy is to try to ban AI technology. Enforcing such a ban would require a totalitarian regime worse than any rogue super AI.
Another strategy is to race to be the first to build a superintelligence, so that you can ensure its friendliness. Tricky, but not impossible; this is the goal of the Singularity Institute for Artificial Intelligence.
Today, it occurred to me that there might be another strategy: tie together many independent computers into a giant distributed supercomputer. The idea is that no single computer or subcluster could oppose the combined defensive computing might of this distributed system.
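To make the premise concrete, here is a toy sketch of my own (the function names and numbers are invented for illustration, not part of any real system): model each node's compute as a number, and check the swarm's core safety condition, namely that any hostile coalition controls less than half the total compute, so the remaining honest nodes can always out-compute it.

```python
def swarm_withstands(capacities, hostile):
    """Return True if honest nodes out-compute the hostile coalition.

    capacities: dict mapping node name -> compute capacity
    hostile: set of node names assumed compromised
    """
    total = sum(capacities.values())
    captured = sum(capacities[n] for n in hostile)
    # The honest side must be strictly stronger than the captured side.
    return captured < total - captured


def worst_case_safe(capacities, k):
    """True if the swarm survives even if its k biggest nodes defect."""
    biggest = sorted(capacities, key=capacities.get, reverse=True)[:k]
    return swarm_withstands(capacities, set(biggest))


# Hypothetical five-node swarm (total compute = 36).
nodes = {"a": 10, "b": 8, "c": 7, "d": 6, "e": 5}
print(swarm_withstands(nodes, {"a"}))  # biggest single node defects: True
print(worst_case_safe(nodes, 2))       # two biggest defect, 18 vs 18: False
```

The toy makes the fragility visible: the swarm is only as safe as the share of compute an attacker can plausibly capture, which is exactly the worry raised below.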
There are two dangers to this approach. The first is that, in acting to defend itself, the defensive network might supplant the threat. The hunted becoming the hunter, so to speak.
The second danger is that such a network might be pre-emptively hacked and controlled by a superintelligence.
So, it's not really a perfect plan, but I wonder whether there might be some workable variation on this "defensive swarm" theme.