Therefore, the Lyapunov function always decreases as the system evolves according to Equations 5, 6, and 7.

Since the Lyapunov function is bounded from below, the stable states of the system are described by its local minima. The cost function for the threshold-linear neuronal activation function g(u) = [u − θ]⁺ shown in Figure 5A can be approximated by a linear function when the firing rates of the GCs are not too large:

C(a) ≈ θa,  a ≥ 0.  (Equation 11)

For negative values of the firing rate a, the cost function is infinite, reflecting the fact that negative firing rates are not attainable. For our model to have a Lyapunov function, a condition more general than Wmi = εW̃im may hold. Indeed, a sufficient condition for the existence of a Lyapunov function is that the network weight matrix
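The linear form in Equation 11 can be made explicit if one assumes the standard construction of the cost term as the integral of the inverse activation function (a Hopfield-style assumption; the excerpt does not state the exact definition of C). For g(u) = [u − θ]⁺ with unit gain, the inverse on a > 0 is g⁻¹(a) = a + θ, so

\[
C(a) \;=\; \int_0^{a} g^{-1}(a')\,da' \;=\; \int_0^{a} (a' + \theta)\,da' \;=\; \frac{a^{2}}{2} + \theta a \;\approx\; \theta a ,
\]

with the quadratic term negligible whenever a ≪ 2θ, i.e., precisely when the GC firing rates are not too large.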

Gik = ∑m W̃im Wmk is symmetric (Hertz et al., 1991). This is true if, for example, Wmi = ∑n W̃in Enm, where Enm is an arbitrary symmetric M-by-M matrix. Thus, the condition Wmi = εW̃im is sufficient but not necessary for the network to have a Lyapunov function. However, we argue that this condition is necessary for the system to be described by a Lyapunov function of the form of Equation 2. We will distinguish two types of Lyapunov functions. First, we consider the homogeneous case, in which the Lyapunov function has the following form:

L0(a⃗) = [1/(2ε)] ∑m=1…M (xm − ∑i Wmi ai)².  (Equation 12)

The homogeneous case corresponds to a vanishingly small threshold for the activation of the GCs. In this case, we still constrain the firing rates to be nonnegative. The inhomogeneous Lyapunov function is

L(a⃗) = L0(a⃗) + ∑i θi ai,  (Equation 13)

with the same constraint
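The symmetry condition can be illustrated numerically. The sketch below (network sizes, ε, and the random weights are illustrative assumptions, not values from the text) builds a network satisfying Wmi = εW̃im, checks that Gik = ∑m W̃im Wmk is symmetric, and evaluates the two Lyapunov functions:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, eps = 5, 20, 0.1            # M mitral cells, N granule cells (illustrative)

W_tilde = rng.random((N, M))      # GC receptive fields W~_im (row i is vector W_i)
W = eps * W_tilde.T               # MC weights under the condition W_mi = eps * W~_im

# Effective GC-GC interaction G_ik = sum_m W~_im W_mk; its symmetry
# guarantees the existence of a Lyapunov function (Hertz et al., 1991).
G = W_tilde @ W
assert np.allclose(G, G.T)

def L0(a, x):
    """Homogeneous Lyapunov function (Equation 12)."""
    r = x - W @ a                 # mitral-cell responses r_m
    return 0.5 / eps * np.dot(r, r)

def L(a, x, theta):
    """Inhomogeneous Lyapunov function (Equation 13)."""
    return L0(a, x) + np.dot(theta, a)
```

With a symmetric but non-scalar Enm the matrix G would remain symmetric, which is why the condition Wmi = εW̃im is sufficient but not necessary.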

ai ≥ 0. We prove here the two theorems that limit the number of coactive GCs (i.e., those for which ai ≠ 0). The response of the MCs is rm = xm − ∑i Wmi ai. Theorem 1: Assume that the M-dimensional receptive fields of the GCs, W⃗i, are vectors in general position; i.e., every subset of M vectors W⃗i is linearly independent. Then, at a minimum of the homogeneous Lyapunov function (Equation 12), either rm = 0 for all m (i.e., the MCs do not respond, and the GC representation is complete) or fewer than M GCs are active. In the former case (rm = 0), any number of GCs may be active. Proof: Assume that M or more GCs are simultaneously active. Let us slightly vary the activity of a single active GC by a small amount Δak of either sign (both signs are admissible because ak > 0). The corresponding variation of the Lyapunov function is ΔL0 = −(r⃗⋅W⃗k)Δak/ε + O(Δak²). Because we are considering a minimum of the Lyapunov function, the linear term must vanish for both signs of Δak, so all of the scalar products r⃗⋅W⃗k over active GCs must be zero. This is possible only if r⃗ = 0 or the number of vectors W⃗k is less than M, since any M vectors in general position span the entire M-dimensional space. Consider now the set of N (M + 1)-dimensional vectors Ω⃗i = (W⃗i, θi). Assume that these vectors are also in general position; i.e., any subset of M + 1 of these vectors is linearly independent.
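The bound on the number of coactive GCs can be checked numerically. The sketch below (illustrative sizes, a uniform threshold θ, and a generic projected-gradient minimizer are all assumptions, not the paper's procedure) minimizes the inhomogeneous Lyapunov function over a⃗ ≥ 0 and counts the active GCs; for weights in general position the count generically does not exceed M:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, eps = 5, 20, 0.1                   # illustrative sizes
theta = 0.5                              # uniform GC threshold (assumption)

W_tilde = rng.random((N, M))             # GC receptive fields in general position
W = eps * W_tilde.T                      # W_mi = eps * W~_im
x = rng.random(M)                        # odorant input to the MCs

def grad(a):
    """Gradient of L(a) = (1/(2*eps)) * ||x - W a||^2 + theta * sum(a)."""
    r = x - W @ a                        # mitral-cell responses
    return -(W.T @ r) / eps + theta

# Projected gradient descent: clamp a >= 0 after every step.
a = np.zeros(N)
step = eps / np.linalg.norm(W.T @ W, 2)  # 1 / Lipschitz constant of grad
for _ in range(20000):
    a = np.maximum(0.0, a - step * grad(a))

active = int(np.sum(a > 1e-6))
print(f"{active} of {N} GCs active (M = {M})")
```

At the minimum, the KKT condition W̃i⋅r⃗ = θ must hold for every active GC; since r⃗ lives in an M-dimensional space, generically at most M such equalities can be satisfied simultaneously, which is what the count reflects.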
