Hopfield Memory


One of the earliest recurrent neural networks reported in the literature was the auto-associator,
independently described by Anderson (Anderson, 1977) and Kohonen (Kohonen, 1977) in 1977.
It consists of a pool of neurons with connections between each pair of units i and j, i ≠ j.


Figure: The auto-associator network. All neurons are both input and output neurons, i.e., a
pattern is clamped, the network iterates to a stable state, and the output of the network consists of
the new activation values of the neurons.

All connections are weighted.
In 1982, Hopfield (Hopfield, 1982) brought together several earlier ideas concerning these
networks and presented a complete mathematical analysis based on Ising spin models (Amit,
Gutfreund, & Sompolinsky, 1986). It is for this reason that this network, which we describe in
this chapter, is generally referred to as the Hopfield network.

The Hopfield network consists of a set of N interconnected neurons which update
their activation values asynchronously and independently of the other neurons. All neurons are
both input and output neurons. The activation values are binary. Originally, Hopfield chose
activation values of 1 and 0, but using the values +1 and -1 presents some advantages discussed
below. We will therefore adhere to the latter convention.

The state of the system is given by the activation values y = (y_k). The net input s_k(t+1)
of a neuron k at cycle t+1 is a weighted sum

    s_k(t+1) = \sum_{j \neq k} y_j(t) \, w_{jk} + \theta_k,

where w_{jk} is the weight of the connection from neuron j to neuron k and \theta_k is a bias (threshold) term.

A simple threshold function is applied to the net input to obtain the new activation
value y_k(t + 1) at time t + 1:

    y_k(t+1) = \mathrm{sgn}\big(s_k(t+1)\big),

i.e., +1 if the net input is positive and -1 if it is negative; when the net input is exactly zero,
the neuron keeps its previous activation.
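As a minimal sketch, these update dynamics can be written in a few lines of NumPy. The bias vector `theta` (often taken to be all zeros) and the convention that a zero net input leaves the activation unchanged are the only choices made here.

```python
import numpy as np

def update_async(y, W, theta, rng):
    """One asynchronous update cycle of a Hopfield network.

    Each neuron k, visited in random order, recomputes its net input
    s_k = sum_{j != k} w_jk * y_j + theta_k and applies the sign
    threshold; a net input of exactly zero leaves y_k unchanged.
    (W has a zero diagonal, so W[k] @ y excludes the self-term.)
    """
    y = y.copy()
    for k in rng.permutation(len(y)):
        s = W[k] @ y + theta[k]
        if s != 0:
            y[k] = 1 if s > 0 else -1
    return y

# A pattern stored by the Hebb rule is a fixed point of these dynamics:
x = np.array([1, -1, 1, -1])
W = np.outer(x, x)
np.fill_diagonal(W, 0)          # no self-connections
theta = np.zeros(4)
assert np.array_equal(update_async(x, W, theta, np.random.default_rng(0)), x)
```

Because each neuron only reads the current state of the others, the random visiting order does not matter for whether a stored pattern stays fixed.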

Hopfield network as associative memory

A primary application of the Hopfield network is as an associative memory. In this case, the
weights of the connections between the neurons must be set in such a way that the states of the system
corresponding to the patterns which are to be stored in the network are stable. These states
can be seen as ‘dips’ in energy space. When the network is cued with a noisy or incomplete test
pattern, it restores the incorrect or missing data by iterating to a stable state which is in
some sense ‘near’ to the cued pattern.
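The ‘energy’ in question is Hopfield's energy function, which every asynchronous update obeying the threshold rule can only decrease or leave unchanged; with bias terms \theta_k it takes the standard form

```latex
E = -\frac{1}{2} \sum_{j \neq k} y_j \, y_k \, w_{jk} - \sum_k \theta_k \, y_k
```

Since E is bounded below and strictly decreases with every state change, the dynamics must terminate in a local minimum of E: one of the ‘dips’.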

The simplest way to set the weights is the Hebb rule: to store a set of patterns x^p, take

    w_{jk} = \sum_p x_j^p \, x_k^p \quad (j \neq k),

i.e., if x_j^p and x_k^p are equal, w_jk is increased by one, otherwise decreased by one (note that, in the original
Hebb rule, weights only increase). It appears, however, that the network gets saturated very
quickly, and that about 0.15N memories can be stored before recall errors become severe.
There are two problems associated with storing too many patterns:

  1. the stored patterns become unstable;
  2. spurious stable states appear (i.e., stable states which do not correspond to stored
    patterns).
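To make the storage prescription and the recall behaviour concrete, here is a small sketch. Note that for brevity the recall loop below updates all neurons at once, whereas the text prescribes asynchronous updates; the fixed points of the sign rule are the same either way.

```python
import numpy as np

def hebb_store(patterns):
    """Hebb rule: w_jk = sum_p x_j^p x_k^p, with no self-connections."""
    X = np.asarray(patterns)
    W = X.T @ X
    np.fill_diagonal(W, 0)
    return W

def recall(W, cue, n_cycles=20):
    """Iterate the sign rule from a (possibly noisy) cue to a stable state."""
    y = np.asarray(cue).copy()
    for _ in range(n_cycles):
        y_new = np.where(W @ y >= 0, 1, -1)
        if np.array_equal(y_new, y):    # reached a fixed point
            break
        y = y_new
    return y

# Store one pattern, cue with one bit flipped, and recover it:
x = np.array([1, 1, -1, -1, 1, 1, -1, -1])
W = hebb_store([x])
noisy = x.copy()
noisy[0] = -noisy[0]
assert np.array_equal(recall(W, noisy), x)
```

Repeating the experiment with many random patterns stored in the same W reproduces the saturation effect: well below 0.15N patterns recall is reliable; beyond it, cues increasingly end up in wrong or spurious states.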

The first of these two problems can be solved by an algorithm proposed by Bruce et al. (Bruce,
Canning, Forrest, Gardner, & Wallace, 1986):

Algorithm 1. Given a starting weight matrix, for each pattern x^p to be stored and
each element x_k^p in x^p define a correction ε_k such that

    ε_k = 0 if y_k is stable when x^p is clamped, and ε_k = 1 otherwise.

Now modify w_jk by Δw_jk = y_j y_k (ε_j + ε_k) if j ≠ k. Repeat this procedure until all patterns are
stable.

It appears that, in practice, this algorithm usually converges. There exist cases, however, where
the algorithm remains oscillatory (try to find one!).
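Under one reading of the algorithm (ε_k flags the units that are unstable when x^p is clamped, and the weight change is Δw_jk = x_j^p x_k^p (ε_j + ε_k)), the procedure can be sketched as follows. This is an illustrative interpretation, not a verbatim transcription of Bruce et al.

```python
import numpy as np

def stabilise(W, patterns, max_sweeps=100):
    """Repeatedly correct W until every stored pattern is a fixed point.

    Sketch of one reading of the procedure: clamp each pattern, flag the
    unstable units (eps_k = 1), and reinforce the corresponding Hebb
    terms by delta_w_jk = x_j x_k (eps_j + eps_k).  May fail to
    terminate for pattern sets that admit no stable solution.
    """
    W = np.asarray(W, dtype=float).copy()
    for _ in range(max_sweeps):
        all_stable = True
        for x in patterns:
            # eps_k = 1 where the sign rule would flip the clamped bit
            eps = (np.where(W @ x >= 0, 1, -1) != x).astype(float)
            if eps.any():
                all_stable = False
                delta = np.outer(x, x) * (eps[:, None] + eps[None, :])
                np.fill_diagonal(delta, 0)   # keep w_kk = 0
                W += delta
        if all_stable:
            break
    return W
```

One way to construct the oscillatory case hinted at above: two patterns that differ in exactly one bit can never both be stable, since the offending unit sees identical inputs but opposite targets, so the corrections for the two patterns keep undoing each other.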