The Social Impact of Artificial Intelligence: End of the Human Era

By Matt Mahoney, Nov. 17, 2007
Once we are able to create machines smarter than humans, those machines will be able to do the same, but much faster. The result will be an explosion of intelligence and, according to Vernor Vinge, the end of the human era.

The Friendly AI Problem

The Singularity Institute was founded to counter the threat of unfriendly artificial intelligence, the danger that we lose control of the machines we build, but the problem remains unsolved. Shane Legg proved that a machine (such as your brain) cannot predict, and thus cannot control, a machine of greater algorithmic complexity, where algorithmic complexity bounds a formal measure of intelligence. Informally, we cannot tell what a smarter machine will do, because if we could, we would already be that smart (a toy sketch of the self-reference behind this appears at the end of this section). As a consequence, AI is an evolutionary process: each generation experimentally creates modified versions of itself without knowing which versions will be smarter.

Friendliness is hard to define. Do you want a smart gun that makes moral decisions about its target, or a gun that fires when you pull the trigger? Eliezer S. Yudkowsky proposed coherent extrapolated volition as a model of friendliness: a machine should predict what we would want if we were smarter. But this is only a definition. It does not say how we can program a machine to actually have the goal of granting our (projected) wishes, or how this goal can be reliably propagated through generations of recursive self-improvement in the face of evolutionary pressure favoring only rapid reproduction and the acquisition of computing resources.

An analogy is helpful. Your dog does not want to get a vaccination, but it does not want to get rabies either. How does your dog know whether you are acting in its best interests? Our problem is even harder. It is like asking the dog to choose an owner whose descendants will act in its best interest.

But in my view, the problem is even more fundamental. Retaining control over a superintelligence would be the worst thing we could do. Modern humans do not have a lower suicide rate than humans living in medieval squalor, or even than lower animals. In a utopian world where machines served our every need, answered all our questions, cured all disease, protected us from hazards, ended aging and death, and made us smarter by upgrading our brains with more computing power, would we be happier? If the brain is a machine that can be simulated on a computer, then it could also be reprogrammed. When a rat can electrically stimulate certain parts of its brain by pressing a lever, it will forgo food, water, and sleep until it dies. Do you really want a future where you can have everything you want, including the ability to change what you want?
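To see informally why no fixed machine can predict one as complex as itself, consider the classic diagonalization trick. The Python sketch below is only an illustration of that intuition, my own toy construction rather than Legg's actual proof (which is stated in terms of algorithmic complexity): any candidate predictor can be handed to an adversary that does the opposite of whatever is predicted.

    # A toy sketch of the diagonalization intuition behind
    # unpredictability results (an illustration only, not
    # Legg's formal argument).

    def naive_predictor(program):
        # A candidate "predictor": claims every program outputs 0.
        return 0

    def adversary(predictor):
        # Ask the predictor what this very function will output,
        # then output the opposite. No predictor passed in here
        # can be correct about adversary's behavior.
        return 1 - predictor(adversary)

    print(adversary(naive_predictor))  # prints 1, refuting the predicted 0

The same self-reference blocks any fixed machine, your brain included, from reliably forecasting the behavior of a strictly more complex one.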

Uploading

If you could not control a godlike intelligence, then could you become one? My view is that the outcome is the same either way. Suppose a computer simulated your brain, with all the same memories and goals, so that everyone who talked to it, including you, was convinced that it was you. Would such a machine be conscious? Would it be you? If you died, would your consciousness live on through this machine? If you stepped into a duplicating machine that produced a copy of you exact down to the last atom, and the original you was then killed, would the copy be you, or would it be a philosophical zombie? If the two copies are identical, does it matter which one dies? Chalmers' fading qualia argument asks: if the neurons in your brain were replaced one by one with artificial but functionally equivalent devices, at what point would you become a zombie?

These questions are hard to answer because they expose a conflict between logic, which says that there is no physical basis for consciousness, and your brain's hardwired belief that consciousness exists, that there is a "you" inside your brain that experiences your perceptions, feels pleasure and pain, and controls your thoughts and actions. But like the P-zombie,...

