Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This is the question that AI researchers are most interested in answering: it defines the scope of what machines will be able to do and guides the direction of AI research. It concerns only the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers; to answer this question, it does not matter whether a machine is really thinking (as a person thinks) or is just acting as if it were thinking.

The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth Conference of 1956: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."

Arguments against the basic premise must show that building a working AI system is impossible, either because there is some practical limit to the abilities of computers, or because there is some special quality of the human mind that is necessary for thinking and yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible.

The first step to answering the question is to define "intelligence" clearly.

Intelligence
Main article: Turing test
Alan Turing, in a famous and seminal 1950 paper, reduced the problem of defining intelligence to a simple question about conversation. He suggested that if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human.

Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes: "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks." Turing's test extends this polite convention to machines: if a machine acts as intelligently as a human being, then it is as intelligent as a human being.

Human intelligence vs. intelligence in general
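The pass criterion of the modern chat-room version can be sketched in a few lines of Python. This is an illustrative formalization, not Turing's own; the function name and the chance-level threshold are assumptions made here for concreteness:

```python
# Illustrative sketch of the chat-room pass criterion: for each conversation,
# a judge guesses which participant was the human. The program "passes" if
# the judges' guesses are no better than chance (50% accuracy).
def passes_turing_test(judge_guesses, true_labels):
    """judge_guesses and true_labels are parallel lists of
    'human'/'machine' labels, one pair per conversation."""
    correct = sum(g == t for g, t in zip(judge_guesses, true_labels))
    return correct / len(true_labels) <= 0.5

# Two correct guesses out of four conversations = chance level, so it passes.
print(passes_turing_test(["human", "machine", "human", "machine"],
                         ["machine", "human", "human", "machine"]))  # True
```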
One criticism of the Turing test is that it is explicitly anthropomorphic. If our ultimate goal is to create machines that are more intelligent than people, why should we insist that our machines must closely resemble people? Russell and Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'"

Recent AI research defines intelligence in terms of rational agents or intelligent agents. An "agent" is something that perceives and acts in an environment. A "performance measure" defines what counts as success for the agent. If an agent acts so as to maximize the expected value of a performance measure, based on past experience and knowledge, then it is intelligent.

Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for human traits that we may not want to consider intelligent, such as the ability to be insulted or the temptation to lie. They have the disadvantage that they fail to make the commonsense distinction between "things that think" and "things that don't". By this definition, even a thermostat has a rudimentary intelligence.

Arguments that a machine can display general intelligence
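The thermostat example can be made concrete as a minimal sketch of the agent definition above. The class and function names here are illustrative, not taken from any AI library; the performance measure (negative average deviation from a target temperature) is one plausible choice, assumed for the example:

```python
# A thermostat as a rudimentary rational agent: it perceives the current
# temperature and acts to keep it near a target.
class ThermostatAgent:
    def __init__(self, target, tolerance=1.0):
        self.target = target
        self.tolerance = tolerance

    def act(self, percept):
        """Map a percept (current temperature) to an action."""
        if percept < self.target - self.tolerance:
            return "heat_on"
        if percept > self.target + self.tolerance:
            return "heat_off"
        return "no_op"

def performance_measure(temperatures, target):
    """Success = small average deviation from the target temperature
    (higher is better, 0 is perfect)."""
    return -sum(abs(t - target) for t in temperatures) / len(temperatures)

agent = ThermostatAgent(target=20.0)
print(agent.act(17.5))  # heat_on
print(agent.act(23.0))  # heat_off
```

By the rational-agent definition, this trivial control loop counts as (rudimentarily) intelligent, which is exactly the commonsense difficulty the paragraph above notes.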
The brain can be simulated
Main article: artificial brain
[Image caption: An MRI scan of a normal adult human brain]
Marvin Minsky writes that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose...