Learning DFA from Simple Examples

TR 97-07
Rajesh Parekh and Vasant Honavar
March 18, 1997

ACM Computing Classification System Categories (1991): I.2.6 [Artificial Intelligence]: Learning - language acquisition, concept learning; F.1.1 [Theory of Computation]: Models of Computation - Automata; F.1.3 [Theory of Computation]: Complexity Classes - Machine-independent complexity.

Keywords: grammar inference, regular grammars, finite state automata, PAC learning, Kolmogorov complexity, simple distributions, universal distribution, language learning, polynomial-time learning algorithms.

Artificial Intelligence Research Group, Department of Computer Science, 226 Atanasoff Hall, Iowa State University, Ames, IA 50011-1040, USA

Learning DFA from Simple Examples

Rajesh Parekh and Vasant Honavar
Department of Computer Science, 226 Atanasoff Hall, Iowa State University, Ames, IA 50011, U.S.A.
March 18, 1997
Abstract

We present a framework for learning DFA from simple examples. We show that efficient PAC learning of DFA is possible if the class of distributions is restricted to simple distributions where a teacher might choose examples based on the knowledge of the target concept. This answers an open research question posed in Pitt's seminal paper: Are DFAs PAC-identifiable if examples are drawn from the uniform distribution, or some other known simple distribution? Our approach uses the RPNI algorithm for learning DFA from labeled examples. In particular, we describe an efficient learning algorithm for exact learning of the target DFA with high probability when a bound on the number of states (N) of the target DFA is known in advance. When N is not known, we show how this algorithm can be used for efficient PAC learning of DFAs.

{parekh|honavar}@cs.iastate.edu
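As background for the RPNI algorithm mentioned in the abstract: RPNI starts from the prefix tree acceptor (PTA) of the positive sample and then merges states in a fixed order, rejecting any merge that would accept a negative example. The sketch below shows only that first step, building a PTA; the function name and the sample strings are illustrative assumptions, not taken from the paper.

```python
def build_pta(positives):
    """Build the prefix tree acceptor of a positive sample.

    States are identified with the distinct prefixes of the sample
    strings; the empty prefix "" is the start state, and the accepting
    states are exactly the sample strings themselves.
    """
    states = {""}            # start state: the empty prefix
    delta = {}               # transitions keyed by (state, symbol)
    for w in positives:
        for i in range(len(w)):
            prefix, symbol = w[:i], w[i]
            delta[(prefix, symbol)] = w[:i + 1]
            states.add(w[:i + 1])
    accepting = set(positives)
    return states, delta, accepting

# Illustrative positive sample (an assumption for this sketch).
states, delta, accepting = build_pta(["a", "ab", "bb"])
# The PTA has one state per prefix: "", "a", "ab", "b", "bb".
```

RPNI proper would then attempt to merge pairs of these states in lexicographic order, keeping a merge only if the resulting automaton still rejects every negative example.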

1 Introduction
The problem of learning a DFA with the smallest number of states that is consistent with a given sample (i.e., a DFA that accepts each positive example and rejects each negative example) has been actively studied.
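To make the consistency requirement concrete, here is a minimal sketch: a DFA is consistent with a labeled sample exactly when it accepts every positive example and rejects every negative one. The helper names and the two-state example DFA (accepting strings over {a, b} with an even number of a's) are assumptions chosen for illustration, not the paper's target automaton.

```python
def accepts(delta, start, accepting, string):
    """Run the DFA with transition table `delta` on `string`."""
    state = start
    for symbol in string:
        state = delta[(state, symbol)]
    return state in accepting

def consistent(delta, start, accepting, positives, negatives):
    """True iff the DFA accepts all positive and rejects all negative examples."""
    return (all(accepts(delta, start, accepting, w) for w in positives)
            and not any(accepts(delta, start, accepting, w) for w in negatives))

# Example DFA: accepts strings containing an even number of 'a's.
delta = {("q0", "a"): "q1", ("q0", "b"): "q0",
         ("q1", "a"): "q0", ("q1", "b"): "q1"}

positives = ["", "b", "aa", "aba"]   # even number of a's
negatives = ["a", "ba", "aaa"]       # odd number of a's

print(consistent(delta, "q0", {"q0"}, positives, negatives))  # -> True
```

Finding *some* consistent DFA is easy (a prefix tree acceptor always works); the hardness results cited below concern finding a consistent DFA with the fewest states.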



References:

Angluin, D. (1981). A Note on the Number of Queries Needed to Identify Regular Languages. Information and Control, 51, 76-87.

Angluin, D. (1987). Learning Regular Sets from Queries and Counterexamples. Information and Computation, 75, 87-106.

Chomsky, N. (1956). Three Models for the Description of Language. IRE Transactions on Information Theory, 2, 113-124.

Denis, F., D'Halluin, C., & Gilleron, R. (1996). PAC Learning with Simple Examples. Pages 231-242 of: STACS '96 - Proceedings of the 13th Annual Symposium on the Theoretical Aspects of Computer Science.

Dupont, P. (1996). Incremental Regular Inference. In: Proceedings of the Third International Colloquium on Grammatical Inference (ICGI-96). Springer-Verlag.

Gold, E. M. (1978). Complexity of Automaton Identification from Given Data. Information and Control, 37, 302-320.

Hopcroft, J., & Ullman, J. (1979). Introduction to Automata Theory, Languages, and Computation. Reading, MA: Addison-Wesley.

Kearns, M., & Valiant, L. (1989). Cryptographic Limitations on Learning Boolean Formulae and Finite Automata. Pages 433-444 of: Proceedings of the 21st Annual ACM Symposium on Theory of Computing. ACM, New York.

Lang, K. J. (1992). Random DFA's Can Be Approximately Learned from Sparse Uniform Examples. In: Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory.

Li, M., & Vitanyi, P. (1993). An Introduction to Kolmogorov Complexity and Its Applications. New York: Springer-Verlag.

Pao, T., & Carr, J. (1978). A Solution of the Syntactical Induction-Inference Problem for Regular Languages. Computer Languages, 3, 53-64.

Parekh, R., & Honavar, V. (1993). Efficient Learning of Regular Languages Using Teacher Supplied Positive Examples and Learner Generated Queries. Pages 195-203 of: Proceedings of the Fifth UNB Conference on AI.

Pitt, L. (1989). Inductive Inference, DFAs and Computational Complexity. Pages 18-44 of: Analogical and Inductive Inference, Lecture Notes in Artificial Intelligence 397. Springer-Verlag.

Pitt, L., & Warmuth, M. K. (1988). Reductions Among Prediction Problems: On the Difficulty of Predicting Automata. Pages 60-69 of: Proceedings of the 3rd IEEE Conference on Structure in Complexity Theory.

Pitt, L., & Warmuth, M. K. (1989). The Minimum Consistent DFA Problem Cannot Be Approximated Within Any Polynomial. Pages 421-432 of: Proceedings of the 21st ACM Symposium on the Theory of Computing. ACM.

Trakhtenbrot, B., & Barzdin, Ya. (1973). Finite Automata: Behavior and Synthesis. Amsterdam: North-Holland Publishing Company.

Valiant, L. (1984). A Theory of the Learnable. Communications of the ACM, 27, 1134-1142.
