
Chapter 4

Emergence of Small-World Structure in Networks of Spiking Neurons Through STDP Plasticity

Gleb Basalyga, Pablo M. Gleiser, and Thomas Wennekers

G. Basalyga, Centre for Robotics and Neural Systems (CRNS), University of Plymouth, Plymouth, PL4 8AA, UK. e-mail: gbasalyga@plymouth.ac.uk

Abstract In this work, we use a complex network approach to investigate how a neural network
structure changes under synaptic plasticity. In particular, we consider a network of conductance-
based, single-compartment integrate-and-fire excitatory and inhibitory neurons. Initially the neurons
are connected randomly with uniformly distributed synaptic weights. The weights of excitatory con-
nections can be strengthened or weakened during spiking activity by the mechanism known as spike-
timing-dependent plasticity (STDP). We extract a binary directed connection matrix by thresholding
the weights of the excitatory connections at every simulation step and calculate its major topological
characteristics such as the network clustering coefficient, characteristic path length and small-world
index. We numerically demonstrate that, under certain conditions, a nontrivial small-world structure
can emerge from a random initial network subject to STDP learning.

4.1 Introduction

In recent years there has been a growing interest in modeling the brain as a complex network of
interacting dynamical systems [1–4]. Data on anatomical and functional connectivity shows that the
brain, at many different levels, exhibits a small-world structure, characterized by the presence of
highly clustered modules and short distances between nodes [5, 6]. This complex network structure
is hypothesized to be an optimal configuration that allows functions, such as vision or audition, to be
localized in specific brain areas, a concept known as functional segregation. At the same time, such
a network structure is thought to maximize information flow across different areas; the latter property
is termed functional integration [7, 8]. The brain's ability to rapidly and efficiently combine
specialized information from different brain areas is called the information integration property and is
even considered to be crucial for consciousness [9, 10].
The complex network approach [11–13] applies the mathematical methods of graph theory to the
analysis of brain networks. Two statistical measures are often used to characterize the functional
properties of a complex network [14]: the clustering coefficient C and the characteristic path
length L. The clustering coefficient measures how densely neighboring neurons are connected in the
network and can therefore be used to quantify the functional segregation property of the network.
The clustering coefficient of an individual neuron is the fraction of pairs of the neuron's neighbors
that are also connected to each other [14, 15]. The network clustering coefficient is defined as the
mean clustering coefficient averaged over all neurons in the network. The characteristic path length
(or average shortest distance) is defined as the shortest path length between two neurons, averaged
over all pairs of neurons in the network [14], and can therefore be used to quantify
the integration property of the network. Highly integrated networks have a short characteristic path length.
For example, neurons arranged in a ring form a regular network with a high clustering coefficient
but a long characteristic path length, so activity needs a longer time to be integrated by the network. In
contrast, random networks have a short characteristic path length, which leads to fast spreading of activity
over the entire network. However, random networks have a small clustering coefficient and, therefore,
a limited ability to segregate and specialize. Small-world networks are formally defined as networks
whose characteristic path length is of the same order as that of a random network, but which are
significantly more clustered than random networks [13]. In order to quantify this property, we use the small-world
index S, which is defined for a given network as follows [16]:
$$S = \frac{C/C_r}{L/L_r}, \qquad (4.1)$$
where C and L are the clustering coefficient and characteristic path length of a given network; Cr
and Lr are the clustering coefficient and characteristic path length of a random reference network.
This reference network is usually created from the given network by a random rewiring algorithm that
keeps the degree distribution and connection probability the same [12, 17]. For a random network,
C = Cr and L = Lr, so clearly S = 1. A small-world network is more clustered
than a random network, so C > Cr, while its mean distance satisfies L ≈ Lr; as a consequence, S is
greater than 1. Experimental data on functional cortical connectivity estimate the small-world index
of cat visual cortex to be in the range from 1.1 to 1.8 (see [6], Table 1). For anatomical connectivity
of cat and macaque cortices, the estimated range of S is from 1.4 to 2.6 (see [7], Table 1).
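To make the definition concrete, the following minimal sketch computes C, L and S for a binary directed network with NetworkX. This is an illustration only, not the code used in this work (the analysis here relied on the Brain Connectivity Toolbox [12]); the function name and the assumption of a strongly connected graph are ours.

```python
import networkx as nx

def small_world_index(G: nx.DiGraph, G_ref: nx.DiGraph) -> float:
    """Small-world index S = (C/Cr)/(L/Lr) of Eq. (4.1).

    Assumes both graphs are strongly connected, so the characteristic
    path length is finite; isolated nodes would need special handling.
    """
    C = nx.average_clustering(G)        # directed clustering, cf. Fagiolo [15]
    C_r = nx.average_clustering(G_ref)
    L = nx.average_shortest_path_length(G)
    L_r = nx.average_shortest_path_length(G_ref)
    return (C / C_r) / (L / L_r)
```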
Understanding the basic processes that allow these nontrivial cortical structures to emerge can
provide much insight into the study of the brain. In particular, we follow a line of research which
indicates that a small-world structure can evolve from a random network as a result of specific
synaptic plasticity [18–20]. This plasticity, known as spike-timing-dependent plasticity (STDP)
[21, 22], manifests itself in changes in the synaptic weights that depend on the spiking activity of
the network. In this work we use a complex network approach [12, 13] to study how the underlying
network topology changes during STDP learning. As quantitative measures
to characterize the time evolution of the network structure, we consider the distribution of synaptic
weights, the clustering coefficient and the mean path length at every simulation step. We observe that
under specific conditions a nontrivial small-world structure emerges from a random initial network.

4.2 Model
The model consists of 100 conductance-based single-compartment leaky integrate-and-fire (LIF) neu-
rons (80% excitatory and 20% inhibitory), connected randomly with a connection probability of 10%.
Figure 4.1(a) presents a visualization of the network in neuroConstruct [23, 24].
The equation for the membrane potential Vm for each neuron is:
$$C_m \frac{dV_m}{dt} = -g_L (V_m - E_L) + S(t) + G(t), \qquad (4.2)$$
where Cm = 1 µF/cm² is the specific capacitance, gL = 5 × 10⁻⁴ S/cm² is the leak conductance density
and EL = −60 mV is the leak reversal potential. The total surface area of a neuron is 1000 µm². The
function S(t) represents the spiking mechanism, which is based on the NEURON implementation
of conductance-based LIF cells as described in Brette et al. [25]. A spike is emitted when Vm
reaches a threshold of −50 mV. After firing, the membrane potential is reset to −60 mV. The function
G(t) in (4.2) represents the conductance-based synaptic interactions:

$$G(t) = -\sum_j g_{ji}(t)\,(V_i - E_j), \qquad (4.3)$$

Fig. 4.1 (a) Three-dimensional visualization of the model in neuroConstruct. The excitatory neurons are shown in
blue, the inhibitory neurons are shown in red. Links between the nodes represent the synaptic connections. (b) Raster
plot of the spiking activity of 80 excitatory neurons during 5 seconds of STDP learning, driven by a 50 Hz Poissonian
spiking input

where Vi is the membrane potential of neuron i, gji(t) is the synaptic conductance of the synapse
from neuron j to neuron i, and Ej is the reversal potential of that synapse. We set Ej = 0 mV for
excitatory synapses and Ej = −60 mV for inhibitory synapses. Each cell in the model has three
types of synapses: fixed excitatory synapses, plastic (STDP) excitatory synapses and fixed inhibitory
synapses. The fixed excitatory synapses receive Poissonian random spike inputs and are described by a
double-exponential “alpha” function with maximum conductance g_max^input = 0.0005 µS, rise time
constant τ_rise = 1 ms, and decay time constant τ_decay = 5 ms. The inhibitory synapses are not plastic;
the inhibitory conductances gji are described by a double-exponential “alpha” function with fixed
maximum conductance g_max^inh = 0.067 µS, rise time constant τ_rise = 1 ms, and decay time constant
τ_decay = 10 ms.
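For illustration, here is a minimal forward-Euler sketch of a single neuron obeying (4.2) with one excitatory conductance. It is not the NEURON implementation used in this work; the single-exponential synapse, the time step and the size of the conductance kick per input spike are simplifying assumptions.

```python
import numpy as np

# Parameters from the text; all conductances are per unit membrane area.
Cm, gL, EL = 1.0, 0.5, -60.0               # uF/cm^2, mS/cm^2 (= 5e-4 S/cm^2), mV
V_th, V_reset, E_exc = -50.0, -60.0, 0.0   # threshold, reset, exc. reversal (mV)
dt, T = 0.025, 200.0                       # time step and duration (ms)
tau_decay, g_kick = 5.0, 0.05              # synapse decay (ms); kick size assumed
rate = 50.0                                # input Poisson rate (Hz)

rng = np.random.default_rng(0)
V, g_syn, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    g_syn *= np.exp(-dt / tau_decay)       # conductance decays exponentially
    if rng.random() < rate * dt * 1e-3:    # Poisson input spike in this step?
        g_syn += g_kick
    # Eq. (4.2) with G(t) = -g_syn (V - E_exc), cf. Eq. (4.3):
    V += dt * (-gL * (V - EL) - g_syn * (V - E_exc)) / Cm
    if V >= V_th:                          # threshold crossing: emit a spike
        spikes.append(step * dt)
        V = V_reset                        # reset after firing
```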
The excitatory recurrent connections in the network are plastic and the synaptic conductances
change at every firing event in the following way:
$$g_{ij} = g_{\max}^{exc}\, w_{ij}(t), \qquad (4.4)$$
where g_max^exc = 0.006 µS is the maximum excitatory conductance;
$$w_{ij} = w_{ij} + \Delta w_{ij}(t), \qquad (4.5)$$
and the amount of the synaptic modification Δwij is defined by the STDP function [22, 26], which
depends on the time difference between pre- and postsynaptic spikes, Δt = t_post − t_pre:

$$\Delta w_{ij}(\Delta t) = \begin{cases} A_p \exp(-\Delta t/\tau_p) & \text{if } \Delta t \ge 0, \\ -A_d \exp(\Delta t/\tau_d) & \text{if } \Delta t < 0. \end{cases} \qquad (4.6)$$
In order to avoid weight divergence during learning, the synaptic weights wij are bounded in the
range 0 ≤ wij ≤ wmax, with wmax = 2. The constants Ap and τp set the amount and duration of long-
term potentiation, and the constants Ad and τd set the amount and duration of long-term depression.
We set Ap = Ad = 0.1. Experiments suggest that the STDP potentiation time window is typically shorter
than the depression time window: for example, τp = 17 ± 9 ms and τd = 34 ± 13 ms [21]. Therefore,
we set τp = 10.0 ms and τd = 20.0 ms. Further details on the implementation of the synaptic interactions
can be found in [24].
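A schematic implementation of the update rules (4.4)–(4.6) might look as follows; the helper names are ours, and a full simulation would apply the update at every pre/post spike pairing, as the NEURON-based model does.

```python
import numpy as np

A_p = A_d = 0.1                  # LTP / LTD amplitudes
tau_p, tau_d = 10.0, 20.0        # potentiation / depression windows (ms)
w_max, g_max_exc = 2.0, 0.006    # hard weight bound; max conductance (uS)

def stdp_delta_w(delta_t: float) -> float:
    """STDP function of Eq. (4.6); delta_t = t_post - t_pre in ms."""
    if delta_t >= 0:
        return A_p * np.exp(-delta_t / tau_p)   # pre before post: potentiation
    return -A_d * np.exp(delta_t / tau_d)       # post before pre: depression

def apply_stdp(w: float, t_post: float, t_pre: float) -> float:
    """Eq. (4.5) with the bounds 0 <= w <= w_max to prevent divergence."""
    return float(np.clip(w + stdp_delta_w(t_post - t_pre), 0.0, w_max))

def conductance(w: float) -> float:
    """Eq. (4.4): the effective synaptic conductance in uS."""
    return g_max_exc * w
```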
The model was constructed using the neuroConstruct software [23, 24] and simulated using NEU-
RON v6.2 [27]. Complex network analysis was performed in Matlab (The MathWorks) using the
Brain Connectivity Toolbox [12].

Fig. 4.2 The distribution of weights, wij/wmax, of STDP synapses at the beginning (a) and at the end (b) of STDP learning. The corresponding adjacency matrices Aij (c)–(d) are obtained by thresholding the connection matrix wij(t) with wc = 0.01

4.3 Results

We start from a randomly connected network of 100 LIF neurons, stimulated by 50 Hz Poissonian
spike trains. The spiking activity of the excitatory neurons is illustrated in Fig. 4.1(b). Initially, the
coupling strengths are uniformly distributed (see Fig. 4.2(a)). However, after a certain period of STDP
learning, some synapses are strengthened to the maximum weight value, wmax, while the majority of
the synapses are weakened to near zero. Therefore, as shown in Fig. 4.2(b), the resulting distribution
of synaptic weights becomes bimodal.
A binary directed adjacency matrix Aij(t) can be constructed at every simulation step t by thresholding
the real values of the connection matrix wij(t). If the synaptic weight of the connection between
cells i and j is larger than a threshold value wc, then the connection is regarded as functional and
Aij(t) is set to 1. On the other hand, if wij(t) is less than wc, then Aij(t) is set to 0. In Fig. 4.2(c)–(d)
we present the networks corresponding to the adjacency matrices obtained by thresholding with
wc = 0.01 at the beginning (t = 0 s) and at the end (t = 5 s) of the simulation. The figures clearly
show that, after STDP learning, the network becomes sparser. This effect can be quantified by
measuring the average connection density kden, defined as the fraction of all possible connections that
are present in Aij, kden = K/Kmax (where K is the number of connections and Kmax = N² − N for
a directed graph without self-connections). Figure 4.3(a) shows how kden drops quickly during STDP
learning and reaches a minimum value near 0.01. However, neurons that appear to be completely
disconnected from the network at a particular time may reconnect at other times due to STDP weight
modification.
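A minimal sketch of this thresholding step and of the density measure is given below; the actual analysis used the Brain Connectivity Toolbox in Matlab [12], and the function names here are ours.

```python
import numpy as np

def threshold_adjacency(w: np.ndarray, w_c: float = 0.01) -> np.ndarray:
    """Binarize the N x N weight matrix w: A_ij = 1 iff w_ij > w_c."""
    A = (w > w_c).astype(int)
    np.fill_diagonal(A, 0)      # self-connections are excluded
    return A

def connection_density(A: np.ndarray) -> float:
    """k_den = K / K_max, with K_max = N^2 - N for a directed graph."""
    N = A.shape[0]
    return A.sum() / (N * N - N)
```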
The temporal evolution of the network clustering coefficient is shown in Fig. 4.3(b). The network
weights wij(t) are sampled every Δt = 50 ms and, after thresholding with wc = 0.01, the clustering
coefficient C(t) is calculated from the resulting adjacency matrix Aij(t). A random reference network
is generated from Aij(t) by arbitrarily rewiring the connections while keeping the degree distribution
and connection probability the same [12, 17]. In order to reduce the statistical fluctuations due to
rewiring, we generate 50 reference networks from a given network Aij(t) and calculate the mean values
of Cr(t) and Lr(t), averaged over all generated reference networks.
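The degree-preserving rewiring can be sketched as a Maslov–Sneppen edge swap [17]. The version below is an illustrative Python equivalent, not the Brain Connectivity Toolbox routine actually used.

```python
import numpy as np

def rewire_directed(A: np.ndarray, n_swaps: int = 1000,
                    rng=np.random.default_rng()) -> np.ndarray:
    """Randomize A while preserving every node's in- and out-degree [17].

    Repeatedly picks two directed edges (a, b) and (c, d) and rewires them
    to (a, d) and (c, b), rejecting swaps that would create self-loops or
    duplicate edges. The edge count, and hence the connection probability,
    is unchanged.
    """
    A = A.copy()
    for _ in range(n_swaps):
        edges = np.argwhere(A == 1)
        i, j = rng.choice(len(edges), size=2, replace=False)
        (a, b), (c, d) = edges[i], edges[j]
        if a != d and c != b and A[a, d] == 0 and A[c, b] == 0:
            A[a, b] = A[c, d] = 0
            A[a, d] = A[c, b] = 1
    return A
```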
Fig. 4.3 The temporal evolution of the connection density kden(t) (a) and the network clustering coefficient C(t) (b) during STDP learning. The input spike frequency is 50 Hz and all connection delays in the network are set to 5 ms. The values of the clustering coefficient of the corresponding random network, Cr(t), are averaged over 50 reference networks every 50 ms

Fig. 4.4 The temporal evolution of the complex network measures during STDP learning for the same model as in Fig. 4.3. (a) The temporal evolution of the ratios for the clustering coefficient, C(t)/Cr(t), and the characteristic path length, L(t)/Lr(t). The values Cr(t) and Lr(t) are averaged over 50 random reference networks every 50 ms. (b) The temporal evolution of the small-world index, S(t). The data points are sampled every 50 ms and averaged over every 10 sample points. The mean value of S, averaged over the entire simulation time, is 1.86

As we see from Fig. 4.3(b) and
Fig. 4.4(a), during STDP learning the clustering coefficient of the network becomes larger than the
typical value calculated from the random reference networks, and the ratio C(t)/Cr(t) grows to
exceed 1. At the same time, the characteristic path length becomes smaller than or similar to that of a
random network, as shown in Fig. 4.4(a). Therefore, the small-world index S(t) grows above 1, as
illustrated in Fig. 4.4(b), and the functional structure organized by STDP becomes more small-world-like.
The simulations for longer times (up to 50 seconds) show that the small-world index fluctuates
significantly during learning. Figure 4.5(a) shows the mean values of S, averaged over the entire
simulation time, for different input spike frequencies. One can see that the values of S are greater
than 1 only in the intermediate range of input spike frequencies, from 10 to 50 Hz. This can be explained
in the following way. For low input frequencies (< 10 Hz), there are not enough spikes to reinforce
small-world connectivity. For high input frequencies (> 60 Hz), there are so many spikes that the
emerging small-world connectivity is quickly destroyed by noisy spikes, and the small-world index
just fluctuates around 1 during the entire simulation time. The numerical simulations also indicate that
the emergence of small-world structure depends on the choice of model parameters such as the
connection delays. For the model in Fig. 4.5(a), all connection delays are fixed at 5 ms. Figure 4.5(b)
shows that the effect is different for a model with randomly distributed connection delays.

Fig. 4.5 The mean values of the small-world index (averaged over 50 s of simulation time) as a function of the input spike frequency for two models with different connection delay distributions. (a) The standard model described in Sect. 4.2 with all connection delays fixed at 5 ms. (b) The model with delays drawn from a random uniform distribution in the range from 1 ms to 10 ms

4.4 Discussion

In this paper, we analyzed how a neural network structure evolves under spike-timing-dependent
plasticity using the complex network approach. We started from a typical random neural network and
demonstrated that a small-world structure can emerge through STDP learning under certain conditions.
However, the numerical simulations indicate that this emergence is sensitive to the choice of model
parameters. The input statistics can interact with the time constants of neurons and synapses so that,
during STDP, the small-world index simply fluctuates around 1 and the network structure temporarily
becomes small-world-like but then tends to return to a random organization. Also, as demonstrated
in similar studies [18], the balance between excitation and inhibition in the model is important for
achieving the effect. Further studies are required to establish the relationship between the formation of
a nontrivial network structure and the dynamical properties of a neural network.
It would be interesting to measure the temporal evolution of the network's information integration
capacity during STDP learning. However, currently available methods for calculating information
integration measures are valid only for small networks of 8 to 12 nodes [28, 29]. New algorithms for
estimating information integration in large, realistic neural networks need to be developed in the
future to address this issue.

Acknowledgement This work was supported by an EPSRC research grant (Ref. EP/C010841/1).

References
1. Sporns, O., Chialvo, D.R., Kaiser, M., Hilgetag, C.C.: Organization, development and function of complex brain
networks. Trends Cogn. Sci. 8, 418–425 (2004)
2. Reijneveld, J.C., Ponten, S.C., Berendse, H.W., Stam, C.J.: The application of graph theoretical analysis to complex
networks in the brain. Clin. Neurophysiol. 118, 2317–2331 (2007)
3. Bullmore, E., Sporns, O.: Complex brain networks: graph theoretical analysis of structural and functional systems.
Nat. Rev. Neurosci. 10(3), 186–198 (2009)
4. Gomez Portillo, I.J., Gleiser, P.M.: An adaptive complex network model for brain functional networks. PLoS ONE
4(9), e6863 (2009). doi:10.1371/journal.pone.0006863
5. Sporns, O., Honey, C.J.: Small worlds inside big brains. Proc. Natl. Acad. Sci. USA 103(51), 19219–19220 (2006)
6. Yu, S., Huang, D., Singer, W., Nikolic, D.: A small world of neuronal synchrony. Cereb. Cortex 18(12), 2891–2901
(2008)
7. Bassett, D.S., Bullmore, E.: Small-world brain networks. Neuroscientist 10, 512–523 (2006)
8. Sporns, O., Tononi, G., Edelman, G.M.: Connectivity and complexity: the relationship between neuroanatomy and
brain dynamics. Neural Netw. 13(8–9), 909–922 (2000)
9. Tononi, G., Edelman, G.M., Sporns, O.: Complexity and coherency: integrating information in the brain. Trends
Cogn. Sci. 2, 474–484 (1998)
10. Tononi, G.: An information integration theory of consciousness. BMC Neurosci. 5, 42 (2004)
11. Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., Hwang, D.-U.: Complex networks: Structure and dynamics.
Phys. Rep. 424, 175–308 (2006)
12. Rubinov, M., Kotter, R., Hagmann, P., Sporns, O.: Brain connectivity toolbox: a collection of complex network
measurements and brain connectivity datasets. NeuroImage 47(Suppl 1), 39–41 (2009)
13. Rubinov, M., Sporns, O.: Complex network measures of brain connectivity: Uses and interpretations. NeuroImage
52(3), 1059–1069 (2010)
14. Watts, D.J., Strogatz, S.H.: Collective dynamics of ‘small-world’ networks. Nature 393, 440–442 (1998)
15. Fagiolo, G.: Clustering in complex directed networks. Phys. Rev. E, Stat. Nonlinear Soft Matter Phys. 76(2), 026107
(2007)
16. Humphries, M.D., Gurney, K.: Network ‘small-world-ness’: A quantitative method for determining canonical net-
work equivalence. PLoS ONE 3(4), e0002051 (2008). doi:10.1371/journal.pone.0002051
17. Maslov, S., Sneppen, K.: Specificity and stability in topology of protein networks. Science 296(5569), 910–913
(2002)
18. Shin, C.-W., Kim, S.: Self-organized criticality and scale-free properties in emergent functional neural networks.
Phys. Rev. E 74(4), 045101 (2006)
19. Kato, H., Kimura, T., Ikeguchi, T.: Self-organized neural network structure depending on the STDP learning rules.
In: Visarath, X., et al. (eds.) Applications of Nonlinear Dynamics. Model and Design of Complex Systems. Under-
standing Complex Systems, pp. 413–416. Springer, Berlin (2009)
20. Kato, H., Ikeguchi, T., Aihara, K.: Structural analysis on STDP neural networks using complex network theory. In:
Artificial Neural Networks—ICANN 2009. Lecture Notes in Computer Science, vol. 5768, pp. 306–314. Springer,
Berlin (2009)
21. Bi, G., Poo, M.: Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic
strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472 (1998)
22. Song, S., Miller, K.D., Abbott, L.F.: Competitive Hebbian learning through spike-timing-dependent synaptic plas-
ticity. Nat. Neurosci. 3, 919–926 (2000)
23. Gleeson, P., Steuber, V., Silver, R.A.: neuroConstruct: a tool for modeling networks of neurons in 3D space. Neuron
54(2), 219–235 (2007)
24. Gleeson, P., Crook, S., Cannon, R.C., Hines, M.L., Billings, G.O., Farinella, M., Morse, T.M., Davison, A.P., Ray,
S., Bhalla, U.S., Barnes, S.R., Dimitrova, Y.D., Silver, R.A.: NeuroML: A language for describing data driven
models of neurons and networks with a high degree of biological detail. PLoS Comput. Biol. 6(6), e1000815 (2010).
doi:10.1371/journal.pcbi.1000815
25. Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J., Diesmann, M., Morrison, A., Goodman,
P., Harris, F., Zirpe, M., Natschlager, T., Pecevski, D., Ermentrout, B., Djurfeldt, M., Lansner, A., Rochel, O.,
Vieville, T., Muller, E., Davison, A., El Boustani, S., Destexhe, A.: Simulation of networks of spiking neurons: A
review of tools and strategies. J. Comput. Neurosci. 23, 349–398 (2007)
26. Billings, G., van Rossum, M.C.W.: Memory retention and spike-timing-dependent plasticity. J. Neurophysiol. 101,
2775–2788 (2009)
27. Carnevale, T., Hines, M.: The NEURON Book. Cambridge University Press, Cambridge (2006)
28. Tononi, G., Sporns, O.: Measuring information integration. BMC Neurosci. 4, 31–51 (2003)
29. Balduzzi, D., Tononi, G.: Integrated information in discrete dynamical systems: Motivation and theoretical frame-
work. PLoS Comput. Biol. 4(6), e1000091 (2008). doi:10.1371/journal.pcbi.1000091
