1ST GENERATION (MACHINE CODE):
The first generation of code used to program a computer was called machine language, or machine code. It is the only language a computer really understands: a sequence of 0s and 1s that the computer's control unit interprets electrically as instructions. First-generation languages required writing long strings of binary digits to represent operations such as “add,” “subtract,” and “compare.” Later improvements allowed octal, decimal, or hexadecimal representation of the binary strings.
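To make this concrete, here is a minimal sketch, assuming a modern x86-64 processor: the two bytes below form one complete machine instruction that adds the contents of one register to another. A first-generation programmer would have keyed in the raw binary; the hexadecimal notation shown alongside is simply a more compact way of writing the same bits. (C is used here, and in the later sketches, only as a convenient container for the example.)

    /* One x86-64 machine instruction, "add eax, esi", stored as raw bytes.
       The values are given in hexadecimal, with the equivalent binary in
       the comments. */
    unsigned char add_instruction[] = {
        0x01,   /* binary 00000001 : opcode for "add register to register"   */
        0xF0    /* binary 11110000 : identifies the two registers (ESI, EAX) */
    };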
2ND GENERATION (ASSEMBLY LANGUAGE):
Because writing programs in machine language is impractical (it is tedious and error prone), symbolic, or assembly, languages, the second generation, were introduced in the early 1950s. Assembly language replaces the raw sequences of 0s and 1s with short, human-readable mnemonics such as A for “add” or M for “multiply,” which are translated into machine language by a computer program called an assembler. The resulting machine language programs, however, are specific to one type of computer and will usually not run on a computer with a different type of central processing unit (CPU).
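At its heart, that translation is a table lookup. The sketch below, written in C purely for illustration, shows a toy “assembler” that maps the one-letter mnemonics mentioned above to opcode bytes. The mnemonic letters come from the paragraph, but the opcode values and the overall structure are assumptions; a real assembler must also encode operands, addresses, and symbols.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical opcode table: one entry per mnemonic. The byte values
       are invented and do not belong to any real instruction set. */
    struct opcode { const char *mnemonic; unsigned char code; };

    static const struct opcode table[] = {
        { "A", 0x01 },   /* A = add      */
        { "M", 0x02 },   /* M = multiply */
        { "C", 0x03 },   /* C = compare  */
    };

    int main(void)
    {
        /* A three-instruction "source program" written in mnemonics. */
        const char *program[] = { "A", "M", "C" };

        /* The assembler's core job: replace each mnemonic with machine code. */
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                if (strcmp(program[i], table[j].mnemonic) == 0)
                    printf("%s -> 0x%02X\n", program[i], table[j].code);

        return 0;
    }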
3RD GENERATION (HIGH-LEVEL LANGUAGE):
The lack of portability between different computers led to the development of high-level languages, so called because they permit a programmer to ignore many low-level details of the computer's hardware. Hence, in the mid-1950s a third generation of languages came into use. These algorithmic, or procedural, languages are designed for solving a particular type of problem. Unlike machine or symbolic languages, they vary little between computers, but they must be translated into machine code by a program called a compiler or interpreter. The first high-level language, Fortran [Formula Translation], was developed (1953–57) for scientific and engineering applications by John Backus at the IBM Corp. A language that handled recursive algorithms better, LISP [List Processing], appeared soon afterward.
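As a rough illustration of the third generation (using C rather than Fortran only so that the same language appears in every sketch in this section), the short program below states the arithmetic in terms of the problem rather than the processor. A compiler translates it into machine instructions like the bytes shown in the first example, which is why the same source can be recompiled for computers with different CPUs.

    #include <stdio.h>

    int main(void)
    {
        /* One high-level statement; the compiler decides which machine
           instructions and registers to use on the target processor. */
        int a = 2, b = 3;
        int sum = a + b;

        printf("sum = %d\n", sum);
        return 0;
    }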