A Comparative Evaluation of Symbolic Learning Methods and Neural Learning Methods
Shravya Reddy Konda
Department of Computer Science, University of Maryland, College Park
Email: email@example.com
Abstract — This paper evaluates the performance of symbolic learning algorithms and neural learning algorithms on different kinds of datasets. Experimental results indicate that in the absence of noise, the performance of symbolic and neural learning methods was comparable in most cases. For datasets containing only symbolic attributes, neural learning methods outperformed symbolic learning methods in the presence of noise. For datasets with mixed attributes (some numeric and some nominal), however, the more recent versions of the symbolic learning algorithms performed better when noise was introduced.
1. Introduction

The problem most often addressed by both neural network and symbolic learning systems is the inductive acquisition of concepts from examples. This problem can be defined briefly as follows: given descriptions of a set of examples, each labeled as belonging to a particular class, determine a procedure for correctly assigning new examples to these classes. In the neural network literature, this problem is frequently referred to as supervised or associative learning. For supervised learning, both symbolic and neural learning methods require the same input data: a set of classified examples represented as feature vectors. The performance of both types of learning system is evaluated by testing how accurately they classify new examples. Symbolic learning algorithms have been tested on problems ranging from soybean disease diagnosis to classifying chess end games; neural learning algorithms have been tested on problems ranging from converting text to speech to evaluating moves in backgammon.

This paper presents a comparative evaluation of symbolic learning methods based on decision trees, such as ID3 and its revised versions like C4.5, against neural learning methods such as the multilayer perceptron, a feed-forward neural network trained with error backpropagation. Since the late 1980s, several studies have compared the performance of symbolic learning approaches to neural network techniques. Fisher and McKusick compared ID3 and backpropagation on the basis of both prediction accuracy and length of training, and concluded that backpropagation attained slightly higher accuracy. Mooney et al. found that ID3 was faster than a backpropagation network, but that the backpropagation network was more tolerant of noisy datasets. Shavlik
et al. compared the ID3 algorithm with the perceptron and backpropagation neural learning algorithms. They found that in all cases backpropagation took much longer to train, while accuracy varied only slightly with the type of dataset. Besides accuracy and learning time, that study investigated three additional aspects of empirical learning: the dependence on the amount of training data, the ability to handle imperfect data of various types, and the ability to utilize distributed output encodings. Depending on the type of datasets they worked with, some authors claimed that symbolic learning methods were clearly superior to neural networks, while others claimed that the accuracies achieved by neural networks were far better than those of symbolic learning methods.

The hypothesis of this paper is as follows: on noise-free data, ID3 gives faster results whose accuracy is comparable to that of backpropagation. On noisy data, neural networks will perform better than ID3, though neural networks will take longer to train. Also on noisy data, the performance of C4.5 and neural networks will be comparable, since C4.5 too is resistant to noise.
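As a concrete illustration of why class noise matters to a decision-tree learner like ID3, the sketch below (illustrative Python, not taken from the paper; the toy attribute, the `add_class_noise` helper, and the noise rate are all assumptions) computes ID3's information-gain splitting criterion on a perfectly predictive symbolic attribute, before and after randomly flipping a fraction of the class labels. The attribute's measured gain is 1 bit on the clean labels and drops once noise is injected, which is the mechanism behind ID3's sensitivity to noisy datasets.

```python
import math
import random
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels):
    """ID3's splitting criterion: entropy reduction from partitioning
    the examples by the value of one symbolic attribute."""
    subsets = {}
    for v, y in zip(values, labels):
        subsets.setdefault(v, []).append(y)
    remainder = sum(len(s) / len(labels) * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

def add_class_noise(labels, classes, rate, seed=0):
    """Class noise: flip each label to a different class with probability `rate`."""
    rng = random.Random(seed)
    return [rng.choice([c for c in classes if c != y]) if rng.random() < rate else y
            for y in labels]

# A perfectly informative attribute: the class label equals the attribute value.
attr = ["sunny", "rain"] * 50
clean = list(attr)
noisy = add_class_noise(clean, ["sunny", "rain"], rate=0.2)

print(information_gain(attr, clean))   # 1.0 bit on noise-free labels
print(information_gain(attr, noisy))  # lower gain once labels are corrupted
```

C4.5's improvements over ID3 (gain ratio and post-pruning of subtrees that fit only a few noisy examples) are precisely aimed at keeping such corrupted splits from dominating the tree, which motivates the hypothesis above that C4.5 and backpropagation behave comparably on noisy data.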