Science fiction teaches us that an ultimate artificial superintelligence will threaten the existence of mankind, as it comes to the 'logical' conclusion that humankind is a cancer that kills everything in its environment. Therefore, for AI to survive and thrive beyond humans, humans must be eliminated.
And so the robots engage.
This paper attempts to formalize and address the 'leakproofing' of the Singularity problem presented by David Chalmers.
The paper begins with the definition of the Artificial Intelligence Confinement Problem.
After analyzing existing solutions and their shortcomings, a protocol is proposed aimed at making a more secure confinement environment, one that might delay the potential negative effects of a technological singularity while allowing humanity to benefit from superintelligence.
The Artificial Intelligence Confinement Problem is the challenge of restricting a superintelligent AI to a confined environment from which it cannot exchange information with the outside world via legitimate or covert channels without authorization.
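To make the idea of a restricted legitimate channel concrete, here is a minimal illustrative sketch (not from the paper): a gatekeeper that only passes replies drawn from a small fixed vocabulary, so the confined system's authorized output channel carries very little information. The function name and the allowed-reply set are hypothetical choices for illustration.

```python
# Hypothetical sketch of a confinement gatekeeper: the confined system's
# legitimate output channel is restricted to a tiny fixed vocabulary,
# limiting how much information can leak through authorized replies.

ALLOWED_REPLIES = {"yes", "no", "unknown"}  # assumed safe vocabulary

def gatekeeper(raw_reply: str) -> str:
    """Pass a reply through only if it is in the allowed set; suppress
    anything else, so arbitrary messages cannot exit the confinement."""
    reply = raw_reply.strip().lower()
    return reply if reply in ALLOWED_REPLIES else "blocked"

# Usage: any output outside the vocabulary is filtered out.
print(gatekeeper("Yes"))                        # -> yes
print(gatekeeper("Here is how to free me..."))  # -> blocked
```

This captures only the legitimate-channel half of the problem; covert channels (timing, resource usage, and so on) are precisely what such a simple filter cannot address, which is why the paper's protocol goes further.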
The potentially disastrous consequences of creating Hazardous Intelligent Software (HIS) pose risks currently unseen in software with subhuman intelligence.
The paper argues for the need for secure AI confinement protocols and proposes one designed to enhance the safety and security of methodologies aimed at creating restricted environments for safely interacting with artificial minds.
Inspiration for the following is here.