OVERVIEW OF ARTIFICIAL INTELLIGENCE
Artificial intelligence is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy at Dartmouth College. Artificial Intelligence (AI) is also defined as a computer-based analytical process that exhibits behavior and actions considered "intelligent" by human observers. AI attempts to mimic the human thought process, including reasoning and optimization. Such systems also exhibit logic, reasoning, intuition, and the just-plain-common-sense qualities that we associate with human beings. Artificial intelligence includes:
* Game playing: programming computers to play games such as chess and checkers.
* Expert systems: programming computers to make decisions in real-life situations; for example, some expert systems help doctors diagnose diseases based on symptoms.
* Natural language: programming computers to understand natural human languages.
* Neural networks: systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains.
* Robotics: programming computers to see, hear, and react to other sensory stimuli.

Reasons Business Is Interested in Artificial Intelligence
* They take over routine and unsatisfying jobs held by people.
* They suggest solutions to specific problems that are too massive and complex to be analyzed by human beings in a short period of time.
* They store information in an active form as organizational memory, creating an organizational knowledge base that many employees can examine and preserving expertise that might be lost when an acknowledged expert leaves the firm.
* They create a mechanism that is not subject to human feelings such as fatigue and worry. This may be especially useful when jobs are environmentally, physically, or mentally dangerous to humans.
* They are also useful advisers in times of crisis.
Branches of AI
Logical AI

What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals.

Search
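As a minimal illustration of search, the sketch below exhaustively examines possibilities in a small hypothetical state space (the graph, state names, and goal are all invented for this example) using breadth-first search:

```python
from collections import deque

# Hypothetical toy state space: states and the moves available from each.
GRAPH = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
    "E": [],
}

def breadth_first_search(start, goal):
    """Systematically examine states, shallowest first, until the goal is found."""
    frontier = deque([[start]])   # paths still waiting to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in GRAPH[state]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                   # goal unreachable

print(breadth_first_search("A", "E"))  # ['A', 'B', 'D', 'E']
```

A chess program or theorem prover faces the same structure at vastly larger scale, which is why much AI research concerns pruning this search rather than performing it exhaustively.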
AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem-proving program. Discoveries are continually made about how to do this more efficiently in various domains.

Pattern recognition
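A minimal sketch of pattern matching, echoing the eyes-and-nose example: the scene and template below are invented for illustration, with a crude "face" encoded as characters in a grid.

```python
# Hypothetical scene and template: 'o' marks eyes, '^' marks a nose.
SCENE = [
    "..........",
    "..o.o.....",
    "...^......",
    "..........",
]
TEMPLATE = [
    "o.o",
    ".^.",
]

def match_at(scene, template, row, col):
    """True if the template matches the scene exactly at (row, col)."""
    for r, t_row in enumerate(template):
        for c, t_ch in enumerate(t_row):
            if scene[row + r][col + c] != t_ch:
                return False
    return True

def find_pattern(scene, template):
    """Slide the template over the scene; return the first matching position."""
    h, w = len(template), len(template[0])
    for row in range(len(scene) - h + 1):
        for col in range(len(scene[0]) - w + 1):
            if match_at(scene, template, row, col):
                return (row, col)
    return None

print(find_pattern(SCENE, TEMPLATE))  # (1, 2)
```

Real vision systems must cope with rotation, scale, and noise, which is why the more complex patterns mentioned here need quite different methods than exact template matching.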
When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.

Representation
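One simple way to represent facts, sketched below with hypothetical family-relation facts: each fact is a predicate tuple, a crude stand-in for a sentence of a logical language.

```python
# Hypothetical knowledge base: each fact is a (predicate, arguments...) tuple,
# standing in for a sentence such as parent(alice, bob).
facts = {
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
}

def holds(predicate, *args):
    """Check whether a fact is asserted in the knowledge base."""
    return (predicate, *args) in facts

print(holds("parent", "alice", "bob"))    # True
print(holds("parent", "alice", "carol"))  # False
```

Full logical languages add variables, connectives, and quantifiers on top of atomic facts like these, but the core idea is the same: knowledge stored as explicit sentences a program can examine.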
Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.

Inference
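A minimal sketch of default (non-monotonic) inference, using the classic birds-fly example with invented facts: the conclusion "it can fly" is drawn by default from "it is a bird," but withdrawn when contrary evidence (being a penguin) is present.

```python
def can_fly(animal, facts):
    """Default rule: birds fly, unless contrary evidence withdraws the conclusion."""
    if ("penguin", animal) in facts:   # contrary evidence defeats the default
        return False
    if ("bird", animal) in facts:      # default conclusion
        return True
    return False

facts = {("bird", "tweety"), ("bird", "opus"), ("penguin", "opus")}
print(can_fly("tweety", facts))  # True  (default holds)
print(can_fly("opus", facts))    # False (default withdrawn)
```

Note the non-monotonicity: adding the fact ("penguin", "tweety") would remove a conclusion that was previously drawn, which ordinary logical deduction can never do.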
From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is another form of non-monotonic reasoning.

Common sense knowledge and reasoning
This is the area in which AI is farthest from human level, in spite of the fact that it has been an active research area since the 1950s. The Cyc system contains a large but spotty collection of common sense facts.

Learning from experience
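As a minimal sketch of learning from experience (the learner, action name, and reward values are all hypothetical): a program keeps a running estimate of how good each action is and updates it after every observed outcome.

```python
class AverageLearner:
    """Estimate each action's value as the running average of observed rewards."""

    def __init__(self):
        self.totals = {}
        self.counts = {}

    def observe(self, action, reward):
        """Record one experience: taking `action` yielded `reward`."""
        self.totals[action] = self.totals.get(action, 0.0) + reward
        self.counts[action] = self.counts.get(action, 0) + 1

    def value(self, action):
        """Current estimate; 0.0 for actions never tried."""
        if action not in self.counts:
            return 0.0
        return self.totals[action] / self.counts[action]

learner = AverageLearner()
for reward in [1.0, 0.0, 1.0, 1.0]:       # hypothetical outcomes
    learner.observe("open_door", reward)
print(learner.value("open_door"))          # 0.75
```

This captures only the simplest kind of learning, estimating values from repeated trials; the methods mentioned in this section go much further.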
Programs do that. The...