Fast and Robust Fixed-Point Algorithms for Independent Component Analysis

Aapo Hyvärinen
Helsinki University of Technology
Laboratory of Computer and Information Science
P.O. Box 5400, FIN-02015 HUT, Finland
Email: aapo.hyvarinen@hut.fi
IEEE Trans. on Neural Networks, 10(3):626-634, 1999.
Abstract

Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. In this paper, we use a combination of two different approaches for linear ICA: Comon's information-theoretic approach and the projection pursuit approach. Using maximum entropy approximations of differential entropy, we introduce a family of new contrast (objective) functions for ICA. These contrast functions enable both the estimation of the whole decomposition by minimizing mutual information, and estimation of individual independent components as projection pursuit directions. The statistical properties of the estimators based on such contrast functions are analyzed under the assumption of the linear mixture model, and it is shown how to choose contrast functions that are robust and/or of minimum variance. Finally, we introduce simple fixed-point algorithms for practical optimization of the contrast functions. These algorithms optimize the contrast functions very fast and reliably.
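For orientation (the precise form is derived in the body of the paper), the maximum entropy approximation of negentropy that underlies these contrast functions can be written, for a zero-mean, unit-variance variable y, as

\[
J_G(y) \propto \left[ E\{G(y)\} - E\{G(\nu)\} \right]^2,
\]

where G is a non-quadratic function and \nu is a standardized Gaussian variable. Maximizing J_G over unit-norm projections y = w^T x of the (whitened) data then yields one independent component at a time.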

1 Introduction

A central problem in neural network research, as well as in statistics and signal processing, is finding a suitable representation or transformation of the data. For computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. Let us denote by x = (x_1, x_2, ..., x_m)^T a zero-mean m-dimensional random variable that can be observed, and by s = (s_1, s_2, ..., s_n)^T its n-dimensional transform. Then the problem is to determine a constant (weight) matrix W so that the linear transformation of the observed variables, s = Wx, has some suitable properties.
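Although the fixed-point algorithms themselves are derived later in the paper, the one-unit update that the abstract alludes to can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's reference implementation: it assumes NumPy, a data matrix X that has already been centered and whitened (rows are dimensions, columns are samples), and the tanh nonlinearity as g; the function name fastica_one_unit is mine.

```python
import numpy as np

def fastica_one_unit(X, max_iter=200, tol=1e-6, seed=0):
    """Estimate one independent component from whitened data X (d x T).

    Sketch of the fixed-point update w <- E{x g(w^T x)} - E{g'(w^T x)} w,
    followed by renormalization of w to unit length.
    """
    rng = np.random.default_rng(seed)
    d, _ = X.shape
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        y = w @ X                            # projections w^T x, one per sample
        g = np.tanh(y)                       # nonlinearity g
        g_prime = 1.0 - g ** 2               # its derivative g'
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)       # project back onto the unit sphere
        if abs(abs(w_new @ w) - 1.0) < tol:  # converged (up to sign flip)
            return w_new
        w = w_new
    return w
```

Note that, unlike gradient-based ICA algorithms, an update of this form involves no learning-rate parameter to tune, which is one of the practical advantages the paper claims for its fixed-point scheme.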
