

NEURAL NETWORKS

Ivan F Wilde

Mathematics Department

King’s College London

London, WC2R 2LS, UK

ivan.wilde@kcl.ac.uk

www.jntuworld.com

www.jntuworld.com

www.jwjobs.net

Contents

1 Matrix Memory
2 Adaptive Linear Combiner
3 Artificial Neural Networks
4 The Perceptron
5 Multilayer Feedforward Networks
6 Radial Basis Functions
7 Recurrent Neural Networks
8 Singular Value Decomposition
Bibliography


Chapter 1

Matrix Memory

We wish to construct a system which possesses so-called associative memory. This is definable generally as a process by which an input, considered as a "key" to a memory system, is able to evoke, in a highly selective fashion, a specific response associated with that key at the system output. The signal-response association should be "robust": a "noisy" or "incomplete" input signal should nonetheless invoke the correct response, or at least an acceptable one. Such a system is also called a content addressable memory.

Figure 1.1: A content addressable memory (stimulus → mapping → response).

The idea is that the association should not be defined so much between the individual stimulus-response pairs, but rather embodied in the whole collection of such input-output patterns: the system is a distributed associative memory (the input-output pairs are "distributed" throughout the system memory, rather than each particular pair being represented individually in some separate part of the system).

To attempt to realize such a system, we shall suppose that the input key (or prototype) patterns are coded as vectors in $\mathbb{R}^n$, say, and that the responses are coded as vectors in $\mathbb{R}^m$. For example, the input might be a digitized photograph comprising a picture with 100 × 100 pixels, each of which may assume one of eight levels of greyness (from white (= 0) to black



(= 7)). In this case, by mapping the screen to a vector via raster order, say, the input is a vector in $\mathbb{R}^{10000}$ whose components take values in the set $\{0, \dots, 7\}$. The desired output might correspond to the name of the person in the photograph. If we wish to recognize up to 50 people, say, then we could give each a binary code name of 6 digits, which allows up to $2^6 = 64$ different names. Then the output can be considered as an element of $\mathbb{R}^6$.
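The encodings described above can be sketched as follows. This is a minimal illustration in NumPy, not part of the notes; the variable names and the random image are invented for the example.

```python
import numpy as np

# 100 x 100 image with grey levels 0..7, flattened in raster order,
# gives an input key vector in R^10000.
image = np.random.default_rng(2).integers(0, 8, size=(100, 100))
x = image.flatten()
assert x.shape == (10000,)

# A 6-bit binary code name allows up to 2**6 = 64 people; the code for
# one (hypothetical) person becomes an output vector in R^6.
person_id = 37
y = np.array([(person_id >> k) & 1 for k in range(6)])
assert y.shape == (6,)
```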

Now, for any pair of vectors $x \in \mathbb{R}^n$, $y \in \mathbb{R}^m$, we can effect the map $x \mapsto y$ via the action of the $m \times n$ matrix
$$M^{(x,y)} = y\,x^T,$$
where $x$ is considered as an $n \times 1$ (column) matrix and $y$ as an $m \times 1$ matrix. Indeed,
$$M^{(x,y)}\,x = y\,x^T x = \alpha\,y,$$
where $\alpha = x^T x = \|x\|^2$, the squared Euclidean norm of $x$. The matrix $y\,x^T$ is called the outer product of $x$ and $y$. This suggests a model for our "associative system".
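The outer-product map can be checked numerically. A minimal sketch in NumPy (the dimensions and random vectors are arbitrary choices for illustration):

```python
import numpy as np

n, m = 4, 3
rng = np.random.default_rng(0)
x = rng.standard_normal(n)   # key pattern in R^n
y = rng.standard_normal(m)   # response pattern in R^m

# M^{(x,y)} = y x^T, an m x n matrix.
M = np.outer(y, x)

# Applying M to the key recovers the response, scaled by
# alpha = x^T x = ||x||^2.
alpha = x @ x
assert np.allclose(M @ x, alpha * y)
```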

Suppose that we wish to consider $p$ input-output pattern pairs $(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \dots, (x^{(p)}, y^{(p)})$. Form the $m \times n$ matrix
$$M = \sum_{i=1}^{p} y^{(i)}\,x^{(i)T}.$$
$M$ is called the correlation memory matrix (corresponding to the given pattern pairs). Note that if we let $X = (x^{(1)} \cdots x^{(p)})$ and $Y = (y^{(1)} \cdots y^{(p)})$ be the $n \times p$ and $m \times p$ matrices with columns given by the vectors $x^{(1)}, \dots, x^{(p)}$ and $y^{(1)}, \dots, y^{(p)}$, respectively, then the matrix $\sum_{i=1}^{p} y^{(i)}\,x^{(i)T}$ is just $Y X^T$.

Indeed, the $jk$-element of $Y X^T$ is
$$(Y X^T)_{jk} = \sum_{i=1}^{p} Y_{ji}\,(X^T)_{ik} = \sum_{i=1}^{p} Y_{ji}\,X_{ki} = \sum_{i=1}^{p} y_j^{(i)}\,x_k^{(i)},$$
which is precisely the $jk$...
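The identity between the sum of outer products and $Y X^T$ can be verified directly. A short NumPy sketch (dimensions and random patterns chosen only for illustration):

```python
import numpy as np

n, m, p = 5, 3, 4
rng = np.random.default_rng(1)
xs = [rng.standard_normal(n) for _ in range(p)]   # key patterns x^(i)
ys = [rng.standard_normal(m) for _ in range(p)]   # response patterns y^(i)

# Correlation memory matrix as a sum of outer products.
M_sum = sum(np.outer(y, x) for x, y in zip(xs, ys))

# The same matrix as Y X^T, with the patterns stacked as columns.
X = np.column_stack(xs)      # n x p
Y = np.column_stack(ys)      # m x p
assert np.allclose(M_sum, Y @ X.T)
```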
