The technological field has advanced far beyond what people could have imagined just half a century ago. The technological revolution has changed the lifestyle of societies just as the Industrial Revolution changed the lifestyle of Europe. Who would have imagined the Internet and computers in most homes, back when a computer could barely fit inside an entire building, much less intelligent machines? Artificial intelligence is an intriguing technology that will shape the human lifestyle of the future. Restricting research and progress in the field is hardly feasible in today's world. More realistically, we should monitor the technology and keep its progression realistic and safe.
Artificial intelligence folklore has been traced back to the time of Ancient Egypt, but the "birth of artificial intelligence," as some would call it, came in 1956 at the Dartmouth Conference. The conference was built on two foundations: the principle of feedback and the Logic Theorist. The principle of feedback was observed by Norbert Wiener, who theorized that all intelligent behavior is the result of feedback mechanisms. An example is a temperature control system that checks the temperature of the room, compares the reading to the desired temperature, and adjusts the flow of heat to bring the room to that temperature. Then in 1955, Newell and Simon developed the Logic Theorist, a program that represented every problem as a tree and attempted to solve a problem by selecting the branch most likely to lead to the correct solution. In 1956, John McCarthy organized the Dartmouth Conference to draw interest and talent to the field of artificial intelligence. Almost a decade after the Dartmouth Conference, centers for artificial intelligence research began to form at Carnegie Mellon and MIT, and further advancements followed. The General Problem Solver (GPS) was developed based on Wiener's feedback principle and was capable of solving a greater range of common-sense problems. As the field progressed, the LISP language was created and became the language of choice among artificial intelligence developers. Then in 1963, the Department of Defense's Advanced Research Projects Agency (ARPA) gave MIT a 2.2-million-dollar grant for research into "machine-aided cognition," or artificial intelligence. This move by the US government was meant to ensure that the United States maintained a technological advantage over the Soviet Union. Over the next few decades, steady advancements were made.
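Wiener's feedback principle, as described above, can be illustrated with a short sketch. This is a minimal, hypothetical thermostat simulation (the function names `thermostat_step` and `run_feedback` and the `gain` parameter are illustrative, not from any historical system): each cycle measures the temperature, compares it to the setpoint, and applies a correction proportional to the error.

```python
# A minimal sketch of Wiener's feedback principle: measure, compare,
# and correct, repeatedly. All names here are illustrative.

def thermostat_step(current_temp, desired_temp, gain=0.5):
    """Return the heat adjustment for one feedback cycle."""
    error = desired_temp - current_temp   # compare reading to setpoint
    return gain * error                   # correct in proportion to the error

def run_feedback(current_temp, desired_temp, cycles=20):
    """Simulate repeated feedback cycles; the room converges on the setpoint."""
    for _ in range(cycles):
        current_temp += thermostat_step(current_temp, desired_temp)
    return current_temp

room = run_feedback(current_temp=15.0, desired_temp=21.0)
print(round(room, 2))  # the error halves each cycle, so this approaches 21.0
```

Each pass shrinks the remaining error, which is the sense in which the behavior looks "intelligent": the system responds to its own measurements rather than following a fixed schedule.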
Programs were able to solve algebraic story problems (STUDENT) and understand simple English sentences (SIR). The 1970s brought the advent of the expert system, which was capable of predicting the probability of a solution under a given set of conditions. Because sufficient storage space had become available, such a program could store a solution for each conditional rule. Machine vision was also developed in the 1970s: machines became able to differentiate between shapes, colors, shading, and textures, and by 1985 hundreds of companies offered machine vision systems to perform quality control on assembly lines. The 1980s showed that artificial intelligence technology had real-life uses. The US military put artificial-intelligence-based hardware to the test during Desert Storm, where the technology was used in missile systems and other areas of combat. The present state of the art can be found in MIT's humanoid robotics group. One example is Coco, the group's newest member. Coco is fully mobile, which helps in social interactions and intelligence, and its independence from "a human caregiver" allows it to exhibit behaviors "closer to their evolutionary origins." When avoidance is...