Artificial Superintelligence: A Futuristic Approach (interesting-looking book on AI, details inside)
Submitted by Tequila_Wolf in ArtificialIntelligence
A day does not go by without a news article reporting some amazing breakthrough in artificial intelligence (AI). Many philosophers, futurists, and AI researchers have conjectured that human-level AI will be developed in the next 20 to 200 years. If these predictions are correct, they raise new and sinister issues related to our future in the age of intelligent machines. Artificial Superintelligence: A Futuristic Approach directly addresses these issues and consolidates research aimed at making sure that emerging superintelligence is beneficial to humanity.
While specific predictions regarding the consequences of superintelligent AI vary from potential economic hardship to the complete extinction of humankind, many researchers agree that the issue is of utmost importance and needs to be seriously addressed. Artificial Superintelligence: A Futuristic Approach discusses key topics such as:
AI-Completeness theory and how it can be used to determine whether an artificially intelligent agent has attained human-level intelligence
Methods for safeguarding the invention of a superintelligent system that could theoretically be worth trillions of dollars
Self-improving AI systems: definition, types, and limits
The science of AI safety engineering, including machine ethics and robot rights
Solutions for ensuring the safe and secure confinement of superintelligent systems
The future of superintelligence and why humanity's long-term prospects for remaining the dominant species on Earth are not great
Contents:
CHAPTER 1 • AI-Completeness: The Problem Domain of Superintelligent Machines
1.1 INTRODUCTION
1.2 THE THEORY OF AI-COMPLETENESS
1.2.1 Definitions
1.2.2 Turing Test as the First AI-Complete Problem
1.2.3 Reducing Other Problems to a TT
1.2.4 Other Probably AI-Complete Problems
1.3 FIRST AI-HARD PROBLEM: PROGRAMMING
1.4 BEYOND AI-COMPLETENESS
1.5 CONCLUSIONS
CHAPTER 2 • The Space of Mind Designs and the Human Mental Model
2.1 INTRODUCTION
2.2 INFINITUDE OF MINDS
2.3 SIZE, COMPLEXITY, AND PROPERTIES OF MINDS
2.4 SPACE OF MIND DESIGNS
2.5 A SURVEY OF TAXONOMIES
2.6 MIND CLONING AND EQUIVALENCE TESTING ACROSS SUBSTRATES
2.7 CONCLUSIONS
CHAPTER 3 • How to Prove You Invented Superintelligence So No One Else Can Steal It
3.1 INTRODUCTION AND MOTIVATION
3.2 ZERO KNOWLEDGE PROOF
3.3 CAPTCHA
3.4 AI-COMPLETENESS
3.5 SUPERCAPTCHA
3.6 CONCLUSIONS
CHAPTER 4 • Wireheading, Addiction, and Mental Illness in Machines
4.1 INTRODUCTION
4.2 WIREHEADING IN MACHINES
4.2.1 Sensory Illusions: A Form of Indirect Wireheading
4.3 POTENTIAL SOLUTIONS TO THE WIREHEADING PROBLEM
4.4 PERVERSE INSTANTIATION
4.5 CONCLUSIONS AND FUTURE WORK
CHAPTER 5 • On the Limits of Recursively Self-Improving Artificially Intelligent Systems
5.1 INTRODUCTION
5.2 TAXONOMY OF TYPES OF SELF-IMPROVEMENT
5.3 ON THE LIMITS OF RECURSIVELY SELF-IMPROVING ARTIFICIALLY INTELLIGENT SYSTEMS
5.4 ANALYSIS
5.5 RSI CONVERGENCE THEOREM
5.6 CONCLUSIONS
CHAPTER 6 • Singularity Paradox and What to Do About It
6.1 INTRODUCTION TO THE SINGULARITY PARADOX
6.2 METHODS PROPOSED FOR DEALING WITH SP
6.2.1 Prevention from Development
6.2.1.1 Fight Scientists
6.2.1.2 Restrict Hardware and Outlaw Research
6.2.1.3 Singularity Steward
6.2.2 Restricted Deployment
6.2.2.1 AI-Box
6.2.2.2 Leakproof Singularity
6.2.2.3 Oracle AI
6.2.2.4 AI Confinement Protocol
6.2.3 Incorporation into Society
6.2.3.1 Law and Economics
6.2.3.2 Religion for Robots
6.2.3.3 Education
6.2.4 Self-Monitoring
6.2.4.1 Hard-Coded Rules
6.2.4.2 Chaining God
6.2.4.3 Friendly AI
6.2.4.4 Humane AI
6.2.4.5 Emotions
6.2.5 Indirect Solutions
6.2.5.1 Why They May Need Us
6.2.5.2 Let Them Kill Us
6.2.5.3 War Against the Machines
6.2.5.4 If You Cannot Beat Them, Join Them
6.2.5.5 Other Approaches
6.3 ANALYSIS OF SOLUTIONS
6.4 FUTURE RESEARCH DIRECTIONS
6.5 CONCLUSIONS
CHAPTER 7 • Superintelligence Safety Engineering
7.1 ETHICS AND INTELLIGENT SYSTEMS
7.2 ARTIFICIAL INTELLIGENCE SAFETY ENGINEERING
7.3 GRAND CHALLENGE
7.4 ARTIFICIAL GENERAL INTELLIGENCE RESEARCH IS UNETHICAL
7.5 ROBOT RIGHTS
7.6 CONCLUSIONS
CHAPTER 8 • Artificial Intelligence Confinement Problem (and Solution)
8.1 INTRODUCTION
8.1.1 Artificial Intelligence Confinement Problem
8.2 HAZARDOUS SOFTWARE
8.3 CRITIQUE OF THE CONFINEMENT APPROACH
8.4 POSSIBLE ESCAPE PATHS
8.4.1 Social Engineering Attacks
8.4.2 System Resource Attacks
8.4.3 Beyond Current Physics Attacks
8.4.4 Pseudoscientific Attacks
8.4.5 External Causes of Escape
8.4.6 Information In-Leaking
8.5 CRITIQUE OF THE AI-BOXING CRITIQUE
8.6 COUNTERMEASURES AGAINST ESCAPE
8.6.1 Preventing Social Engineering Attacks
8.6.2 Preventing System Resource Attacks and Future Threats
8.6.3 Preventing External Causes of Escape
8.6.4 Preventing Information In-Leaking
8.7 AI COMMUNICATION SECURITY
8.8 HOW TO SAFELY COMMUNICATE WITH A SUPERINTELLIGENCE
8.9 CONCLUSIONS AND FUTURE WORK
CHAPTER 9 • Efficiency Theory: A Unifying Theory for Information, Computation, and Intelligence
9.1 INTRODUCTION
9.2 EFFICIENCY THEORY
9.3 INFORMATION AND KNOWLEDGE
9.4 INTELLIGENCE AND COMPUTATION
9.5 TIME AND SPACE
9.6 COMPRESSIBILITY AND RANDOMNESS
9.7 ORACLES AND UNDECIDABILITY
9.8 INTRACTABLE AND TRACTABLE
9.9 CONCLUSIONS AND FUTURE DIRECTIONS
CHAPTER 10 • Controlling the Impact of Future Superintelligence
10.1 WHY I WROTE THIS BOOK
10.2 MACHINE ETHICS IS A WRONG APPROACH
10.3 CAN THE PROBLEM BE AVOIDED?