Advances in computer technology have led to the development of what is referred to as Artificial Intelligence (AI), which is now changing the way things are done in virtually all human endeavours, and, more slowly, the content and practice of education are beginning to follow suit. AI is contributing new approaches to education and learning. The hallmark of AI applications in education is that they attempt to explicitly represent some of the reasoning skills and knowledge of expert practitioners, and to exploit that expertise for teaching and learning. In business there is growing evidence that information technologies are leading to substantial improvements in productivity by automating routine activities (Zuboff, 1988). Similarly, it seems that if we can impart the basic cognitive skills of teachers to computers, we might delegate some teaching to machines and thus improve educational outcomes.

But what is AI? AI is the ability of a digital computer or computer-controlled robot (any automatically operated machine that replaces human effort) to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes or characteristics of humans (though such a system may not resemble a human being in appearance or perform functions in a humanlike manner), such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks – such as discovering proofs for mathematical theorems or playing chess – with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge.
On the other hand, some programs have attained the performance levels of human experts and professionals at certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition. The earliest substantial work in the field of AI was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing. In 1935 Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The actions of the scanner are dictated by a program of instructions that is also stored in the memory in the form of symbols. This is Turing's stored-program concept, and implicit in it is the possibility of the machine operating on, and so modifying or improving, its own program. Turing's conception is now known simply as the universal Turing machine. All modern computers are in essence universal Turing machines. Turing could not turn to the project of building a stored-program electronic computing machine until the cessation of hostilities in Europe in 1945. Nevertheless, during World War II he gave considerable thought to the issue of machine intelligence. This was made known in his earliest public lecture (London, 1947), where he said, "What we want is a machine that can learn from experience," and that the "possibility of letting the machine alter its own instructions provides the mechanism for this." One of Turing's original ideas was to train a network of artificial neurons to perform specific tasks, an approach described in the section Connectionism (an attempt to understand how the human brain works at the neural level and, in particular, how people learn and remember things).
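The machine Turing described – a tape of symbols, a scanner that reads and writes one cell at a time, and an instruction table stored as symbols itself – can be sketched in a few lines of code. The simulator and the binary-increment instruction table below are purely illustrative choices, not anything from Turing's own papers; they just make the tape/scanner/program mechanism concrete.

```python
def run_turing_machine(program, tape, state="start", head=0, max_steps=1000):
    """Simulate a simple Turing machine.

    `program` maps (state, symbol) -> (new_symbol, move, new_state),
    where move is "L" or "R". The machine stops in state "halt".
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")  # "_" stands for a blank cell
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol       # the scanner writes a symbol...
        head += 1 if move == "R" else -1  # ...and moves one cell
    # Read the tape back as a string, trimming blank cells at the ends
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# An illustrative instruction table: increment a binary number by one.
# Phase "start" walks right to the end of the number; phase "carry"
# turns trailing 1s into 0s until a 0 (or blank) becomes 1.
INCREMENT = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_turing_machine(INCREMENT, "1011"))  # prints "1100" (11 + 1 = 12)
```

Because the instruction table is itself just data, a machine could in principle rewrite its own table as it runs – which is exactly the "letting the machine alter its own instructions" that Turing pointed to as the mechanism for learning.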
In 1945 Turing predicted that computers would one day play very good chess, and just over 50 years later, in 1997, IBM's chess computer Deep Blue defeated the reigning world champion, Garry Kasparov.