INF103: Computer Literacy
Professor: Shane Lauber
November 4, 2012
Even though computers might one day take over the world, the study and advancement of artificial intelligence (AI) will shape how mankind lives in the future. With remarkable advancements in technology, the use of artificial intelligence is going to be everywhere. AI is generally considered a subfield of computer science concerned with simulating, extending, and expanding human intelligence. AI techniques are advanced controls that complement classical techniques for the modern industrial enterprise, which continually seeks to lower costs and shorten product development cycles. This document will provide a brief history of artificial intelligence and identify who is credited with coining the term. In addition, it will show where AI development stands now and discuss some of its current uses.
The history of artificial intelligence dates back to the 1950s. The nominal birth of AI is considered to have occurred at a conference held at Dartmouth College in the summer of 1956. The conference was organized by Marvin Minsky, who later helped found the AI Laboratory at MIT, went on to the MIT Media Laboratory, and is famous for his work The Society of Mind. John McCarthy, the creator of the LISP programming language, proposed the name "artificial intelligence" for the field for funding purposes at that time, so McCarthy is credited with coining the term. During the mid-1950s, Herbert Simon and Allen Newell had already implemented an automatic theorem-proving program at the RAND Corporation called the Logic Theorist. These four men are considered the grandfathers of AI (Shi & Zheng, 2006, p. 810).
In a 1977 article, the late AI pioneer Allen Newell foresaw a time when the entire man-made world would be permeated by systems that cushioned us from dangers and increased our abilities: smart vehicles, roads, bridges, homes, offices, appliances, even clothes. Systems built around AI components will increasingly monitor financial transactions, predict physical phenomena and economic trends, control regional transportation systems, and plan military and industrial operations. Artificial intelligence has a long history of producing valuable spin-off technologies. AI researchers tend to look very far ahead, crafting powerful tools to help achieve the daunting task of building intelligent systems. Laboratories whose focus was AI first conceived and demonstrated such well-known technologies as the mouse, time-sharing, high-level symbolic programming languages (Lisp, Prolog, Scheme), computer graphics, the graphical user interface (GUI), computer games, the laser printer, object-oriented programming, the personal computer, email, hypertext, symbolic mathematics systems (Macsyma, Mathematica, Maple, Derive), and most recently software agents, which are popular on the World Wide Web (Waltz, 1996). AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, operations research, economics, control theory, probability, optimization, and logic (Waltz, 1996). In addition to tools and insights, AI includes the technologies of artificial neural networks (ANNs), expert systems (ESs), fuzzy logic (FL), genetic algorithms (GAs), and others still to come. ANNs learn by training, ESs reason based on rules and the experiences of experts, and FL works with uncertainty and partial truth (Frank, 1997).
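The idea that ANNs "learn by training" can be illustrated with a minimal sketch: a single perceptron, one of the earliest neural-network models, adjusting its weights from labeled examples until it reproduces the logical AND function. The function names, learning rate, and epoch count below are illustrative choices, not drawn from any source cited in this paper.

```python
# Minimal sketch of "learning by training": a single perceptron
# taught the logical AND function from labeled examples.
# All names and parameters here are illustrative assumptions.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Repeatedly adjust weights toward reducing prediction error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron update rule: nudge weights in proportion
            # to the error and the input that produced it
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# AND truth table as training data: output is 1 only for (1, 1)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the perceptron classifies all four inputs correctly; the "knowledge" it acquired lives entirely in the learned weights, not in hand-written rules, which is the key contrast with expert systems.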
Moving on to where we are now in artificial intelligence's development, from the 1990s to 2012, the IBM Corporation has made huge strides in its research and development. On May 11, 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. The...