Chapter 6

Neuronal Networks: A Discrete Model

Winfried Just*, Sungwoo Ahn and David Terman, *Department of Mathematics, Ohio University, Athens, OH 45701, USA, Department of Mathematical Sciences, Indiana University-Purdue University Indianapolis, IN 46202, USA, Department of Mathematics, Ohio State University, Columbus, OH 43210, USA, [email protected]

6.1 Introduction and Overview

What do brains, ant colonies, and gene regulatory networks have in common? They are networks of individual agents (neurons, ants, genes) that interact according to certain rules. Neurons interact through electric and chemical signals at synapses, ants interact by means of olfactory and tactile signals, and products of certain genes regulate the expression of other genes. These interactions may cause a change of the state of an agent. It often seems possible to model such networks by considering only finitely many states for each agent. A neuron may fire or be at rest, an ant may be in the mood to forage, defend the nest, or tend to the needs of the queen, and a gene may or may not be expressed at any given time. If one also assumes that time progresses in discrete steps, then one can build a discrete dynamical system model for such networks.

Anyone who has ever closely looked at an ant colony will realize that, in general, there may be a significant amount of randomness in the agents’ interactions, and for this reason one may want to conceptualize many biological networks as stochastic dynamical systems, as in the agent-based models of Chapter 4 in this volume. But if there is sufficient predictability in the interactions, a mathematically simpler deterministic model may be adequate. In this chapter we study in detail one such deterministic model for networks of neurons; more examples of models of this type are considered in several other chapters of this volume.

In a deterministic discrete dynamical system, the state of each agent at the next time step is uniquely determined by its current state and the current states of all agents it interacts with, according to the rules that determine the dynamics. The resulting sequence of network states is called a trajectory. If the system has only finitely many states, each trajectory must eventually enter a set of states that it will visit infinitely often. This set is called its attractor. The sequence of states visited prior to reaching the attractor is called the transient part of the trajectory, and the set of all initial states from which a given attractor will be reached is called its basin of attraction.

The network connectivity specifies who interacts with whom. In neuronal networks the connectivity corresponds to the wiring of the brain and can often be assumed to be fixed over time scales of interest. Our focus in this chapter is on how the network connectivity and two intrinsic parameters of individual neurons, the refractory period and firing threshold, influence the network dynamics, in particular the lengths and number of attractors, the sizes of their basins of attraction, and the lengths of transients.

The chapter is organized as follows. Section 6.2 reviews some basic facts about neuronal networks as well as some experimentally observed phenomena that motivated our work. In Section 6.3 we formally define our discrete models and review their basic properties. In Section 6.4 we study provable restrictions on the network dynamics for certain special connectivities, and Section 6.5 reviews some results on the dynamics of typical networks with given parameters. In Section 6.6 we consider an alternative interpretation of our models in terms of disease dynamics and discuss some general issues of choosing an appropriate mathematical model for a biological system. In Section 6.7 we discuss whether our models may be applicable to actual problems in neuroscience and review a result that guarantees an exact correspondence between our discrete models and more detailed ordinary differential equation (ODE) models of certain neuronal networks. In Section 6.8 we describe how the material presented here fits into the larger picture of some current research on the cutting edge of mathematical neuroscience.

We recommend that the reader attempt the exercises included in the main text right away while reading it. They are primarily intended to help the reader gain familiarity with the concepts that we introduce. The additional exercises in the online supplement [1] are less crucial for understanding the material and can be deferred until later. The online supplement also contains some additional material that is related to the main text and, most importantly, several projects for open-ended exploration. Most of these projects are structured in such a way that they start with relatively easy exercises and gradually lead into unsolved research problems.

6.2 Neuroscience in a Nutshell

6.2.1 Neurons, Synapses, and Action Potentials

It is commonly believed that everything the brain does, and therefore everything we, as humans, do—from cognitive tasks such as thinking, planning, and learning to motor tasks such as walking, breathing, and eating—is the result of the collective electrical activity of neurons. There are roughly $10^{12}$ neurons in the human brain. Neurons communicate with other neurons at synapses, and a brain has approximately $10^{15}$ synaptic connections; that is, on average, each neuron receives input from approximately 1000 other neurons. Whether or not a neuron fires an electrical signal, or action potential, depends on many factors. These include electrical and chemical processes within the neuron itself, properties of the synaptic connections, and the underlying network architecture. A fundamental issue in neuroscience is to understand how these three factors interact to generate the complex activity patterns of populations of neurons that underlie all brain functions.

A schema of a neuron is shown in Figure 6.1. Most neurons consist of dendrites, an axon, and a cell body (or soma). The dendrites spread out from the cell body in a tree-like manner and detect incoming signals from other neurons. In response to these incoming signals, the neuron may (or may not) generate an action potential (or nerve impulse) that propagates away from the soma along the axon. Many axons develop side branches that help bring information to several parts of the nervous system simultaneously.


Figure 6.1 A schematic drawing of a neuron. More pictures of neurons can be found at [2].

All living cells are surrounded by a cell membrane that maintains a resting potential between the outside and the inside of the cell. In response to a signal, the membrane potential of a neuron may undergo a series of rapid changes, corresponding to the action potential. In order to generate an action potential, the initial stimulus must be above some threshold amount. Properties of the nerve impulse, including its shape and propagation velocity, are often independent of the initial (superthreshold) stimulus. Once a neuron fires an action potential, there is a so-called refractory period. During this time, it is impossible for the neuron to generate another action potential.

6.2.2 Firing Patterns

The firing pattern of even a single neuron may be quite complicated. An individual cell may, for example, fire action potentials in a periodic fashion, or it may exhibit bursting oscillations in which periods of rapid firing alternate with periods of quiescent behavior. More complicated patterns, including chaotic dynamics, are also possible. There has been a great deal of effort among mathematicians in trying to classify the types of firing patterns that can arise in single neuron models. For references, see [3].

In this chapter, we are primarily concerned with the collective behavior of a neuronal population. Examples of population rhythms include synchrony, in which every cell in the network fires at the same time, and clustering, in which the entire population breaks up into subpopulations or clusters; every cell within a single cluster fires synchronously, and different clusters are desynchronized from one another. By dynamic clustering we mean that there are distinct episodes in which some subpopulation of cells fires synchronously; however, membership within this subpopulation may change over time. That is, two neurons may fire together during one episode but not during a subsequent episode. Of course, more complicated population rhythms are also possible. Neurons within the network may be totally desynchronized from each other, or activity may propagate through the network in a wave-like manner.

6.2.3 Olfaction

These types of population rhythms have been implicated in many brain processes and it is critically important to understand how they depend on parameters corresponding to the intrinsic properties of cells, synaptic connections, and the underlying network architecture. Our example is concerned with olfaction; that is, the sense of smell. In the insect and mammalian olfactory systems, any odor will activate a subset of receptor cells, which then project this sensory information to a neural network in the brain: the antennal lobe (AL) of insects or the olfactory bulb (OB) of mammals. Processing in this network transforms the sensory input to give rise to dynamic spatiotemporal firing patterns. These patterns typically exhibit dynamic clustering. Any given neuron might synchronize with different neurons at different stages, or episodes, and this sequential activation of different groups of neurons gives rise to a transient series of output patterns that converges over time to a stable activity pattern.

The role of these activity patterns in odor perception remains controversial. Distinct odors will stimulate different neurons, thereby generating different spatiotemporal firing patterns. It has been suggested that these firing patterns help the animal to discriminate between similar odors. That is, it may be difficult initially for an insect’s brain to distinguish between similar odors, because similar odors stimulate similar subsets of neurons within the insect’s AL. However, because of dynamic clustering, the subsets of neurons that fire during subsequent episodes change, so that the neural representations of the odors—that is, the subsets of neurons that fire during a specific episode—become more distinct. This makes it easier for the insect’s brain to distinguish between the neural representations of the two odors.

Mechanisms underlying these firing patterns are poorly understood. One reason for this is that many biological properties of the AL and OB are still not known; these include intrinsic properties of neurons, as well as details of the underlying network architecture. Many of these details are extremely difficult to determine experimentally and computational modeling, along with mathematical analysis of these models, can potentially be very useful in generating and testing hypotheses concerning mechanisms underlying the observed patterns. In particular, a critical role for theoreticians is to classify those networks that exhibit population rhythms consistent with known biology and determine how activity patterns may change with network parameters.

This presents numerous very challenging problems for mathematicians. Any biological system is very complicated and it is not at all clear what details should be included in the model so that the model accounts for the key biological processes but is still amenable to mathematical or computational analysis. For a given network, one would like to classify all the attracting states and, since the olfactory system needs to be able to distinguish among many odors, it would be useful to understand under what conditions, on network architecture perhaps, a network exhibits a large number of attractors. Recall that in response to a given odor, the AL or OB responds with a transient series of dynamic clusters that converges to a stable firing pattern. It is possible that the animal recognizes and makes a decision on how to respond to the odor while this trajectory is still in its transient state, having not yet reached the stable firing pattern. This raises the mathematical question: How do properties of transients and attractors, including the length or time duration of each of these, depend on network properties such as the intrinsic properties of cells within the network and the network architecture? This question motivates most of the work presented in this chapter.

6.2.4 Mathematical Models

The seminal model in neuroscience is due to Hodgkin and Huxley and was published in 1952 [4]. This model describes the propagation of action potentials in the giant axon of a squid. The principles underlying the derivation of this model form the basis for the modeling of other cells throughout the nervous system. The Hodgkin-Huxley model consists of four differential equations. One of these is a partial differential equation that describes the evolution of the membrane potential. The other three equations are ordinary differential equations that are related to the opening of ionic channels in the cell membrane. It is the opening and closing of these channels that are responsible for changes in the membrane potential and, therefore, the generation of the action potential.

The Hodgkin-Huxley model is quite complicated and very difficult to analyze mathematically, especially if one considers networks of interacting neurons. For this reason, simpler models have been proposed. In the simplest models, each neuron has two states and time is discrete. A neuron is either “on” (firing an action potential) or “off” (at rest). There are then rules that describe when a neuron switches between these states and how the firing of one neuron impacts the state of other neurons in the network. Artificial neuronal network models of this type are appealing because they often allow for rigorous mathematical analysis and are easy to implement numerically as this chapter will outline in detail. But because of the models’ simplicity and lack of biological detail, it is often not clear how to interpret results from such models biologically. A primary goal of the research program described in this chapter is to develop more realistic discrete-time models that are more closely related to the underlying biology. Section 6.8 describes some alternative models of this kind.

6.3 The Discrete Model

Our notation will be mostly standard. The number of elements of a finite set A will be denoted by $|A|$, the symbol $\lfloor x \rfloor$ denotes the largest integer that does not exceed x, the least common multiple will be abbreviated by lcm, the greatest common divisor by gcd. For a positive integer n the symbol $[n]$ will be used as shorthand for the set $\{1, 2, \ldots, n\}$.

Definition 6.1

A directed graph or digraph is a pair $D = \langle V_D, A_D \rangle$, where $A_D \subseteq V_D \times V_D$. The set $V_D$ is called the set of vertices or nodes of D, and the set $A_D$ is called the set of arcs of D.

Digraphs are convenient models for all sorts of networks of interacting agents. The agents are elements of image, while an arc image signifies that agent v may influence the state of agent w. Note that the interactions may not be symmetric; therefore directed graphs are usually more accurate models of networks than undirected graphs. Figure 6.2 gives a visual representation of two digraphs.


Figure 6.2 (a) A digraph with $V_D = [4]$ and (b) a digraph with $V_D = [7]$. Arcs image are represented by arrows image.

An arc of the form image is called a loop; a digraph that does not contain such arcs is called loop-free. In our intended interpretation, the set image stands for the set of neurons in the network, while an arc image signifies that neuron v may send synaptic input to neuron w. Since we are not considering neuronal networks in which a neuron may send synaptic input to itself, the corresponding digraphs will be loop-free.

Moreover, it will be convenient for our purposes to consider only digraphs such that $V_D = [n]$ for some positive integer n.

Each of our network models will have an associated digraph D that specifies the connectivity of the network. While the connectivity is assumed here to remain fixed at all times, we are interested in the dynamics of our networks, that is, how the state of the network changes over time. Time t is assumed to progress in discrete steps or episodes, so that we will have a state image of the network for each image. The state image of the network at time t will simply be the vector image, where image is the state of node image at time t. According to what was said in Section 6.2, at any given time step, a neuron i may either fire or be at rest. We will assume that firing lasts exactly one episode, and neuron i needs a refractory period of image time steps before it can fire again, where image is a positive integer. We will allow the refractory periods image to differ from neuron to neuron. Each of our models will have a parameter image, which specifies the refractory periods of all neurons. The state image of each neuron needs to specify whether or not the neuron fires at time t and whether it has rested long enough to fire again. We code this information as follows: image means that the neuron fires at time t; more generally, a state image signifies that neuron i fired image time steps earlier, that is, in episode number image and is not yet ready to fire in episode image; and image signifies that neuron i has reached the end of its refractory period and is ready to fire in episode image.

When will a neuron i that is ready to fire image actually fire in episode image? At time t, neuron i receives firing inputs from all neurons j with image and image, and it will fire at time image if and only if there are sufficiently many such neurons. The exact meaning of “sufficiently many” may again differ from neuron to neuron and is conceptualized by a positive integer image that we call the ith firing threshold. The vector image will be another parameter of our models.

Now we are ready for a formal definition.

Definition 6.2

A neuronal network is a triple image, where image is a loop-free digraph and image and image are vectors of positive integers. The state image of the system at time t is a vector image, where image for all image. The state space of N will be denoted by image. The dynamics of N is defined as follows:

• If image, then image.

• If image and there exist at least image different image with image and image, then image.

• If image and there are fewer than image different image with image and image, then image.

A state image at time 0 will be called an initial state. Each initial state uniquely determines a trajectory image of successive states for all times t.
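
Readers who want to experiment with these rules on a computer may find the following sketch useful. It is written in Python with our own, hypothetical conventions (nodes numbered 0 through n-1, arcs given as a set of ordered pairs (j, i) meaning that node j sends input to node i, and p[i] and th[i] the refractory period and firing threshold of node i); these names are not notation used elsewhere in the chapter. A state is a tuple s with s[i] in {0, 1, ..., p[i]}, where s[i] == 0 means that node i fires.

```python
def step(s, arcs, p, th):
    """One update step of the dynamics of Definition 6.2 (sketch).

    s    -- current state, a tuple with s[i] in {0, 1, ..., p[i]}
    arcs -- set of ordered pairs (j, i): node j sends input to node i
    p    -- list of refractory periods; th -- list of firing thresholds
    """
    n = len(s)
    firing = {i for i in range(n) if s[i] == 0}             # nodes that fire at time t
    nxt = []
    for i in range(n):
        if s[i] < p[i]:                                      # still in its refractory period
            nxt.append(s[i] + 1)
        elif sum(1 for j in firing if (j, i) in arcs) >= th[i]:
            nxt.append(0)                                    # ready and enough firing inputs: fire
        else:
            nxt.append(p[i])                                 # ready but not enough inputs: keep waiting
    return tuple(nxt)
```

Iterating step from an initial state produces the trajectory; the sketches later in this chapter build on this function.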

Example 6.3

Consider the neuronal network image, where D is the digraph of Figure 6.2a, image. Find image for the following initial states: (a) image, (b) image, (c) image.

Solution

a. If image, node 1 fires and thus cannot fire in the next time step, node 2 will fire in the next step since it is at the end of its refractory period and receives firing input from image node, node 3 does not receive any firing input and will stay in state image, and node 4 will reach the end of its refractory period image. Thus we get image. Now nodes 3 and 4 receive firing input from the required number of nodes, while nodes 1 and 2 do not receive any such input. Thus image. Now node 1 receives firing input from image nodes, while node 2 does not receive such input and nodes 3 and 4 must enter their refractory periods. It follows that image.

b. If image, then in the next step node 1 receives firing input from image nodes and will fire in step 1. Thus image, and it follows from our calculations for part (a) that image and image.

c. If image, then image. Now node 1 receives firing input from only image node and will not fire in the next step, but node 3 receives firing input from image node and will fire. Thus image. Similarly image.

Example 6.3 illustrates a number of important phenomena in the dynamics of our networks. First of all, notice that in part (a) we got image. Thus image, and so on. The trajectory will cycle indefinitely through the set of states image. The states in AT are called persistent states because every trajectory that visits one of these states will return to it infinitely often, and the set AT itself is called the attractor of image.

The meaning of the word “attractor” will become clear when we consider the solution to part (b). Notice that image is in the same set AT as in the previous paragraph, so the trajectory will again cycle indefinitely through this set. In particular, the initial state image will never be visited again along the trajectory; it is a transient state. By Dirichlet’s Pigeonhole Principle, this kind of thing must always happen in a discrete dynamical system with a finite-state space: Every trajectory will eventually return to a state it has already visited and will from then on cycle indefinitely through its attractor. The states that a trajectory visits before reaching its attractor occur only once. They form the transient (part) of the trajectory. Note that in part (b) of Example 6.3 the length of the transient, i.e., the number of its transient states, is 1; whereas in part (a) the length of the transient is 0.

The trajectories of the initial states in parts (a) and (b) will be out of sync by one time step; nevertheless, they both end up in the same attractor AT that has a length of 3.

In part (c) of Example 6.3, we get image. In this state, every neuron has reached the end of its refractory period and no neuron fires. Thus no neuron fires at any subsequent time, the length of the transient is 3, and we get image It follows that the set image is an attractor of length 1. Attractors of length 1 are called steady state attractors and their unique elements are called steady states. In contrast, attractors of length image are periodic attractors.

Exercise 6.1

Prove that image is the only steady state in any network image.

The set of all initial states whose trajectories will eventually reach a given attractor AT is called the basin of attraction of AT. The initial states in parts (a) and (b) of Example 6.3 belong to the same basin of attraction, while the initial state in part (c) belongs to a different basin of attraction, namely the basin of the steady state.
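
Computationally, the splitting of a trajectory into its transient and its attractor can be sketched as follows, reusing the hypothetical step function from the previous sketch: iterate until some state is revisited; everything before the first occurrence of that state is the transient, and the rest is the attractor.

```python
def transient_and_attractor(s0, arcs, p, th):
    """Follow the trajectory of s0 until a state repeats (sketch)."""
    seen = {}                      # state -> time of its first visit
    traj = []
    s = s0
    while s not in seen:
        seen[s] = len(traj)
        traj.append(s)
        s = step(s, arcs, p, th)
    first = seen[s]                # the repeated state was first visited at this time
    return traj[:first], traj[first:]          # (transient, attractor)
```

Two initial states then lie in the same basin of attraction exactly when the attractor parts of their trajectories run through the same set of states.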

Exercise 6.2

Let image be a neuronal network and let image be a state such that image for all image. Prove that image is in the basin of attraction of the steady state attractor and determine the length of the transient.

Example 6.3(c) shows that the condition on image in Exercise 6.2 is sufficient, but not necessary, for image to be in the basin of attraction of the steady state attractor.

Definition 6.4

Let image be a digraph. A directed path in D from image to image is a sequence image of vertices image such that image for all image, and all vertices in the sequence, except possibly image and image, are pairwise distinct. A directed cycle in D is a directed path image with image. The length of a directed path (or cycle) is the number of arcs that connect its successive vertices.

For example, in the digraph of Figure 6.2b, the sequence (1, 5, 2, 7, 3) is a directed path of length 4 and the sequence (2, 7, 6, 4, 2) is a directed cycle of length 4. Two directed cycles will be considered disjoint if their sets of vertices are disjoint. In particular, the directed cycles (7, 3, 5, 7) and (2, 7, 6, 4, 2) in this digraph are not disjoint despite the fact that they do not use any common arcs. We will usually consider directed cycles that differ by a cyclic shift as identical; for example, (7, 3, 5, 7), (3, 5, 7, 3), and (5, 7, 3, 5) would be considered three representations of the same directed cycle.

Exercise 6.3

Find all directed cycles in the digraph of Figure 6.2a. What is the maximum length of a directed path in this digraph?

For any neuronal network image we can define another digraph image whose vertices are the states and whose arcs indicate the successor state for each state, that is, image is an arc in this digraph if and only if image implies image. This digraph will be called the state transition digraph of the network. Note that if image, then we can represent each state of a network N very conveniently by the set of all nodes that fire in this state. If the number of nodes is not too large, this allows us to actually draw image. Figure 6.3 gives an example.


Figure 6.3 State transition digraph for the network image with connectivity D as in Figure 6.2(b). Adapted from [5].

Notice that the state transition digraph image of a network image is different from the network connectivity D. For example, image has many more nodes than D ($2^7 = 128$ vs. 7 in the example of Figures 6.3 and 6.2b). Moreover, while we are always assuming that the network connectivity D is loop-free, the state transition digraph contains the loop image.

If image the state transition digraph image must be drawn in such a way that each state is actually represented by its state vector. This may still be possible for very small n, as in Exercise 6.5.

State transition digraphs, as long as we can draw them, give a complete visual picture of the network dynamics. Let us take a closer look at Figure 6.3. The steady state is represented by the empty set of its firing nodes and forms a loop image, that is, a directed cycle of length one. There are six directed cycles of length 2 each, and there is one directed cycle of length 5. These directed cycles of lengths larger than 1 correspond to periodic attractors of the network. All states outside of the union of these directed cycles are transient states. For each transient state image there exists a unique directed path from image to a state in some attractor that visits no other persistent states. The basin of attraction of each attractor consists of all states from which the attractor can be reached via a directed path. The attractors {(124), (3567)}, {(147), (2356)}, {(157), (2346)}, and the steady state attractor image form their own basins of attraction; the basins of attraction of the remaining four attractors are much larger and contain several transient states each. Possible transients show up as directed paths outside of the union of the attractors that end in a state that is succeeded by a state in the attractor. For example, {(34567), (12), (4567), (23)} and {(12), (4567), (23)} are transients for the trajectories with initial states coded by (34567) and (12), respectively, but {(34567), (12), (4567)} cannot be the transient of any trajectory.

Exercise 6.4

What is the maximum length of a transient for the system of Figure 6.3? List all initial states for which transients have this maximum length.

Exercise 6.5

a. Draw the state transition digraph for the neuronal network N of Example 6.3. Hint: The network has 24 distinct states.

b. Find all attractors, their lengths, and the sizes of the basins of attraction for the neuronal network N of Example 6.3.

c. What is the maximum length of a transient in this network?

All our networks have exactly one steady state attractor. When does a network also have other periodic attractors? We have seen that such attractors correspond to directed cycles of length bigger than one in the state transition digraph image. Moreover, for the two examples with periodic attractors that we have explored so far, we also found directed cycles in the network connectivity D. Is the existence of directed cycles in D necessary for the existence of periodic attractors? Is it sufficient? Note that in essence these are questions about how the network connectivity D is related to the network dynamics. We will study this type of question in the next two sections. In Section 6.4 we will try to derive provable bounds on certain features of the dynamics such as maximum possible lengths of transients and attractors when D belongs to some special classes of digraphs. In Section 6.5 we will derive bounds that are true “on average” for connectivities D that are randomly drawn from certain probability distributions.

Numerical simulations are another powerful tool for exploring network dynamics. For a given network one could in principle use software to track the trajectory of every possible initial state until it visits the same state twice, and record all transients and their lengths, all attractors and their lengths, and the sizes of their basins of attraction. This is possible in practice when the number of nodes n is relatively small, but it quickly becomes computationally infeasible, since the number of states grows at least exponentially in n.
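
A sketch of such an exhaustive survey, reusing the step and transient_and_attractor functions from the earlier sketches (attractors are stored as sets of states, so trajectories that enter the same cycle at different points are counted only once):

```python
from itertools import product

def survey(arcs, p, th):
    """Brute-force survey of all initial states (feasible only for small n)."""
    basin_size = {}                # frozenset of attractor states -> size of its basin
    max_transient = 0
    for s0 in product(*(range(pi + 1) for pi in p)):         # every possible initial state
        transient, attractor = transient_and_attractor(s0, arcs, p, th)
        key = frozenset(attractor)
        basin_size[key] = basin_size.get(key, 0) + 1
        max_transient = max(max_transient, len(transient))
    lengths = {at: len(at) for at in basin_size}              # attractor lengths
    return basin_size, lengths, max_transient
```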

Exercise 6.6

Suppose image is a network with n nodes. Show that the size of the state space is

image (6.1)

Conclude that if image is constant, then image, and for arbitrary image we always have imageent

Thus even networks with 30 nodes and all refractory periods equal to 1 contain about a billion different states. For larger networks one can still explore the trajectories of a few randomly chosen initial states, but it may be computationally infeasible to run the simulation until the same state is reached twice. Exercise 6.17 of the online supplement [1] gives an illustration that even networks with very few nodes can have very long transients and attractors. Thus rigorous results about the dynamics of all but the smallest networks will require proving theorems. However, computer simulations can be very useful for generating conjectures for such results and giving us insight into the mechanisms that are responsible for interesting features of the dynamics. The online supplement for this chapter contains a number of projects that involve such numerical explorations.

6.4 Exploring the Model for Some Simple Connectivities

Recall that in Exercise 6.5 we explored the following numerical characteristics of network dynamics: lengths of the attractors, number of different attractors, sizes of their basins of attraction, and maximum lengths of transients. In this section we will derive bounds on these numbers for networks whose connectivities D belong to one of several important classes of digraphs. These bounds will be expressed as functions of the number n of neurons. We will usually need to make some assumptions on image and image. Throughout this chapter image denotes image and image denotes image. We are primarily interested in bounds that hold for all n and fixed values of image and image.

6.4.1 Acyclic Digraphs

A digraph D is acyclic if D does not contain any directed cycle. While none of the digraphs in Figures 6.2 and 6.3 is acyclic, Figure 6.4 gives an example of an acyclic digraph.
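
Whether a given connectivity is acyclic can be checked with a standard algorithm (Kahn's algorithm: repeatedly remove nodes of indegree 0); the following sketch uses the same arc-set representation as the earlier sketches.

```python
def is_acyclic(n, arcs):
    """Return True if the digraph on nodes 0..n-1 with the given arcs has no directed cycle."""
    indeg = [0] * n
    out = [[] for _ in range(n)]
    for (j, i) in arcs:
        out[j].append(i)
        indeg[i] += 1
    stack = [v for v in range(n) if indeg[v] == 0]
    removed = 0
    while stack:
        v = stack.pop()
        removed += 1
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return removed == n            # every node can be removed iff there is no directed cycle
```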

Lemma 6.5

Let image be a neuronal network whose connectivity D is acyclic. Then image is the only attractor in N.

Proof

Let image be as in the assumptions and assume image. Suppose there exists a periodic attractor AT in N, and let image be a state in this attractor. Move the system forward to time image. By Exercise 6.2, some node fires in image, that is, there must exist image with image. Fix such image. By the rules of the network dynamics, there must exist another node image with image such that image. By recursion we can now construct a sequence of nodes image such that image for all image and image for all image. Since there are only n nodes total, there must exist image such that image and image is a directed cycle in D, which contradicts the assumption on D.


Figure 6.4 An acyclic digraph on [7]. Arcs image are represented by arrows image.

Lemma 6.5 makes networks with acyclic connectivities rather uninteresting from our point of view, but investigating the class of acyclic digraphs will allow us to lay some groundwork for subsequent explorations. First of all, notice that Lemma 6.5 completely characterizes all four features of the dynamics that we are investigating here in that it implies points (a)–(d) of the following result.

Theorem 6.6

Let image be a network whose connectivity D is acyclic. Then

a. the maximum length of each attractor in N is 1,

b. the maximum number of different attractors in N is 1,

c. the size of the basin of attraction of the steady state attractor is image,

d. the transients have length at most image.

Exercise 6.7

Prove part (d). Hint: Use arguments similar to the proof of Lemma 6.5 and the solution of Exercise 6.2.

The wording of Theorem 6.6 certainly looks overly formal for such a simple result, but it provides a nice template for the kinds of results we might hope to be able to prove for certain classes of digraphs. The bound in point (d) is sharp in the class of all networks with acyclic connectivities, which can be easily seen by considering the trivial case of a digraph with just one node and no arcs. However, this bound can be improved under some additional assumptions on the connectivity. Project 1 of the online supplement [1] gives some guidelines for readers who wish to prove stricter bounds under additional assumptions.

Let us take another look at Lemma 6.5. It tells us that the existence of a directed cycle in the connectivity D is a necessary condition for the existence of a periodic attractor in a network image, but it does not answer the question whether the existence of a directed cycle in D is a sufficient condition. This type of question is best explored by looking first at the simplest possible connectivities that contain a directed cycle. Our next subsection will investigate the possible dynamics for these types of connectivities.

6.4.2 Directed Cycle Graphs

Let n be a positive integer. For convenience, we will write image for image. Let image be the directed cycle graph on image with arc set image. Throughout this subsection we will tacitly assume that we are given a neuronal network image with image for some positive integer n.

In such networks, node i gets input only from node image. Thus, if image is such that image, then image for all image and each trajectory will eventually reach the steady state image. Therefore, we will focus here exclusively on the case image which allows for more interesting dynamics.

Let image be a given initial state. For all image let

image (6.2)

be the set of neurons that fire at time t. We call image the active set at time t.

Note that for cyclic connectivities, a firing of node i at time t can induce at most one firing at time image, and it follows that image along any trajectory, which in turn implies that image is constant on any attractor. Thus throughout any attractor a firing of node i at time t always will induce a firing of node image at time image. If image is such that image for all image, then for all image we have

image (6.3)

Exercise 6.8

Prove that there exists a divisor m of n such that image.

For node i to fire at time t, the rules of the dynamics require that it must be in state image at time image. After node i has fired at time t (formally: image), its state will increase until it reaches the end of its refractory period: image. This implies that image for m as in Exercise 6.8, that is, the distance between two firing cells is at least image. In particular, this will be true for i of maximum refractory period image. These observations prove the following.

Proposition 6.7

Let image be a neuronal network with image. Then the length image of any attractor is a divisor of n and image for all image. Moreover image if image then image.

How many different attractors are there in N?

It follows from our proof of Proposition 6.7 that along any attractor the state image of any node will be uniquely determined by its refractory period and the set image. We may consider each attractor with fixed image as a necklace with k red beads and image black beads. Pick a state image in the attractor. Each red bead corresponds to a block of image positions of image of the form image, where the wildcards image correspond to the stages in the refractory period of the corresponding neuron, but not to firing. This type of block represents one firing together with the mandatory waiting period that must precede the next firing. Black beads correspond to optional extensions of the minimal waiting period between two consecutive elements of image. There is a one-to-one correspondence between periodic attractors in N and this type of necklace. Now the well-known formula for the number of different such necklaces (e.g., Theorem 1 in [6]) plus the fact that there exists exactly one steady state attractor implies the following result.

Theorem 6.8

Let image be a neuronal network with image. Then the number of different attractors is

image (6.4)

where image is Euler’s phi function.

In particular, the number of attractors in neuronal networks image grows exponentially in n. (See Corollary 2 in [7].) In order to get a feel for the dependence of this number on n, the reader may want to do some numerical explorations as suggested in Exercise 6.18 [1].
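
As a starting point for such explorations, the following sketch counts the attractors of small directed-cycle networks by brute force, reusing the survey function from the earlier sketch. The refractory periods and thresholds used here (all equal to 1) are illustrative choices only.

```python
def cycle_network_attractors(n, p_val=1, th_val=1):
    """Count the attractors of the network on the directed cycle graph with n nodes (sketch)."""
    arcs = {(i, (i + 1) % n) for i in range(n)}              # node i sends its arc to node i+1 (mod n)
    basin_size, lengths, _ = survey(arcs, [p_val] * n, [th_val] * n)
    return len(basin_size), sorted(lengths.values())

for n in range(3, 9):
    count, lens = cycle_network_attractors(n)
    print(n, count, lens)          # every attractor length should divide n, as in Proposition 6.7
```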

How about the lengths of the transients? Recall that image is a nonincreasing function of image. Thus, for any trajectory there must be a smallest time image such that image for all image. We can get a bound for the lengths of transients by carefully considering how image is related to the refractory periods.

Exercise 6.9

Let image be a neuronal network with image. Consider a trajectory and let image be defined as above. Prove that:

a. The length of the transient is at most image.

b. If image, then there exists a node i with image and image.

c. If image, then for all nodes i as in (b) we must have image.

If all nodes have the same refractory period, then the scenario of point (c) of Exercise 6.9 cannot occur, and point (b) implies that image. In view of point (a), this in turn implies the following result of [7].

Theorem 6.9

Let image be a neuronal network with image, and image is constant. Then the length of any transient is at most image.

If image is not a constant vector, then the scenario of point (c) of Exercise 6.9 can occur, which adds significant complications. Exercise 6.19 [1] invites the reader to explore one such example and Theorem 3 of [7] gives a precise upper bound of the length of transients for all networks image.

How about the sizes of the basins of attraction? At the time of this writing no complete characterization for all cases was known. We invite our readers to explore this problem on their own. Project 2 [1] gives some guidance.

6.4.3 Complete Loop-Free Digraphs

Both directed cycles and acyclic digraphs are characterized by “small” arc sets in the sense that directed cycle graphs contain just enough arcs to allow for a directed cycle that visits all nodes, and acyclic digraphs are missing the arcs that would close a directed cycle. However, the word “small” here does not mean the same thing as “few in number.”

Exercise 6.10

Show that for every image there exists an acyclic digraph image with arc set of size image, which is exactly half of the maximum number of arcs in any loop-free digraph with n vertices.

At the other extreme are complete loop-free digraphs image whose arc set image contains every pair image with image such that image. In this subsection we will explore the possible dynamics for networks of the form image. As before, let us first consider the simplest case when image.

Exercise 6.11

Let image for some n.

a. Show that along any trajectory we always have image, unless image, in which case the system has reached the steady state.

b. Use the result of (a) to find the maximum length of any transient and any attractor, the number of different attractors, and the size of each basin of attraction.

When we allow for image, things become more interesting. The result of Exercise 6.11(a) generalizes as follows:

Proposition 6.10

Let image for some n. Then along any trajectory we always have image if image and image whenever image.

Proposition 6.10 allows for an easy characterization of all features of the dynamics when image is constant and image.

Exercise 6.12

Let image for some n, where image is constant.

a. Show that the basin of attraction of the steady state consists of all vectors image such that for at least one image we have image.

b. Show that every periodic attractor has length image and forms its own basin of attraction.

c. Find a formula that relates the number of different attractors to the number of functions that map image onto image.

d. Find a formula for the size of the basin of attraction of the steady state attractor in the spirit of (c) and also find a formula for the maximum length of any transient.

Let us introduce here some terminology that we will use later in this and the next section.

Definition 6.11

Let image be a network, let image be a node, and let image be an initial state. We say that i is minimally cycling (in the trajectory of image) if for every image the implication image holds.

Let AT be an attractor in N. We say that node i is minimally cycling in AT if it is minimally cycling in every trajectory that starts in AT. A node i is active in AT if it fires in at least one state of AT. We say that the attractor AT is minimal if every active node in AT is minimally cycling, and we call AT fully active if every node of the network is active in AT.
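
Once an attractor has been computed, these properties are easy to test in simulations. The sketch below reuses the earlier step function and assumes the reading of Definition 6.11 under which a node is minimally cycling if, whenever it is at the end of its refractory period, it fires at the very next step.

```python
def active_and_minimal(attractor, arcs, p, th):
    """Return (active nodes, minimally cycling nodes) for a given attractor (sketch)."""
    n = len(p)
    minimal = set(range(n))
    active = set()
    for s in attractor:
        nxt = step(s, arcs, p, th)
        for i in range(n):
            if s[i] == 0:
                active.add(i)                       # node i fires in at least one state of the attractor
            if s[i] == p[i] and nxt[i] != 0:
                minimal.discard(i)                  # node i was ready to fire but waited
    return active, minimal

# The attractor is minimal if active <= minimal, and fully active if active == set(range(n)).
```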

The following exercise gives a nice and useful condition on initial states whose trajectories contain at least one minimally cycling node.

Exercise 6.13

Let image and let image.

a. Prove that if there exists a directed cycle image in D such that image for all image, then there exists a minimally cycling node in the trajectory of image.

b. Is the condition in point (a) also necessary for the existence of a minimally cycling node?

It is easy to see that Proposition 6.10 implies the following:

Corollary 6.12

Let image for some n. Then every periodic attractor is fully active and minimal.

As long as image is constant, Exercise 6.12(b) implies that every periodic attractor has length image, which is the minimum possible length of any periodic attractor in networks with constant refractory periods and arbitrary connectivity. The latter observation explains why we call such attractors minimal. If image is not constant though, as the next exercise shows, minimal attractors are longer than that.

Exercise 6.14

Let image for some n. Show that the length of any periodic attractor AT is equal to image.

One can deduce from the above that in contrast to Proposition 6.7 for directed cycle graphs, there does not exist a universal upper bound for the attractor length of networks with complete loop-free connectivities. Exercise 6.20 [1] gives more details and also shows that the same remark applies to the number of different attractors.

If image, then it may no longer be true that every periodic attractor is minimal or fully active; see Exercise 6.21 [1]. The example in this exercise is somewhat pathological though: For networks image with n sufficiently large relative to image and image, a typical initial state will still belong to a fully active minimal attractor. Moreover, the same is true for a typical network image whenever the connectivity digraph has sufficiently many arcs. We will prove these results and spell out a precise meaning of “typical” in Section 6.5.

6.4.4 Other Connectivities

There are a number of other interesting classes of digraphs for which one might explore the possible dynamics of corresponding networks. This leads to more challenging problems than for the classes of acyclic, directed cycle, and complete loop-free digraphs that we have discussed so far. This territory remains largely unexplored. The Wikipedia page [8] contains a gallery of named graphs, and the reader may want to explore these questions for selected connectivities that are digraph counterparts of some of these graphs.

In this subsection we attempt to give a flavor of this area of research and suggest projects for open-ended exploration. As a sample from the large menu of possible research directions we will focus on two problems.

When does the existence of a directed cycle in the network connectivity imply the existence of a periodic attractor in the network? By Lemma 6.5 the existence of a directed cycle in D is necessary. But it is not all by itself sufficient; the directed cycle must also be long enough.

Exercise 6.15

Let image. Deduce from the proof of Proposition 6.7 that N has a periodic attractor only if the length n of the directed cycle is at least image. Show that the latter condition is also sufficient for the existence of a periodic attractor in this type of network.

There is a nice generalization of the second part of Exercise 6.15 to arbitrary networks image with image constant: These networks have a periodic attractor if and only if the connectivity D has a directed cycle of length at least image (see Theorem 14 of [9]). In particular, if image, then the existence of any directed cycle in D implies the existence of a periodic attractor in N. But the proof is rather more difficult than Exercise 6.15.

Exercise 6.35 [1] shows that the straightforward generalization of this result to networks with unequal refractory periods fails. When we allow for firing thresholds image, then for many interesting classes of digraphs the existence of a directed cycle of any length no longer will be sufficient for the existence of a periodic attractor in N, even if image. The problem of characterizing all networks that have at least one periodic attractor under the most general assumptions on image and image remains open. Project 4 [1] invites the reader to embark on an open-ended exploration of this problem, starting with some interesting special cases.

The second problem we want to discuss here is finding optimal bounds for the maximal length of attractors. For the classes of connectivities we have studied so far, all attractors were relatively short (the maximal length is 1 for acyclic connectivities, n for directed cycle graphs; for complete digraphs when image it is lcm image, which is typically much less than n when there are only a few distinct types of neurons and n is large). But in general, attractors can be much longer than n; some examples are given in Exercise 6.17 and Project 5 [1]. What is the upper bound on attractor length in the class of all networks image with n nodes, image and image, where image are fixed? This is an open question even for image. Project 5 [1] reviews some partial results and gives guidance for readers who wish to explore this exciting problem.
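
To give a flavor of what such an exploration might look like in code, the following brute-force sketch searches all loop-free digraphs on a very small number of nodes for the longest attractor, reusing the survey function from the earlier sketches. The parameter choices (n = 4, all refractory periods and thresholds equal to 1) are purely illustrative; the number of digraphs grows like 2^(n(n-1)), so this approach does not scale.

```python
from itertools import combinations

def longest_attractor(n=4, p_val=1, th_val=1):
    """Length of the longest attractor over all loop-free digraphs on n nodes (sketch)."""
    potential = [(j, i) for j in range(n) for i in range(n) if j != i]
    best = 1
    for r in range(len(potential) + 1):
        for subset in combinations(potential, r):            # every possible arc set
            _, lengths, _ = survey(set(subset), [p_val] * n, [th_val] * n)
            best = max(best, max(lengths.values()))
    return best

print(longest_attractor(4))
```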

6.4.5 Discussion: Advantages and Limitations of the Approach in this Section

The results of this section and the corresponding parts of [1] show that the study of relations between network connectivity and network dynamics is a fertile ground for exciting mathematical problems at the intersection of several areas of discrete mathematics, including graph theory, enumerative combinatorics, and number theory. The problems in our exercises and projects range from very easy ones to unsolved research problems. One can see that the problem of completely characterizing all possible dynamics quickly becomes very hard if we allow connectivities beyond the simplest ones. This may be good news for mathematicians, because solving these questions may well require the development of new mathematical tools. It has even been claimed that this type of investigation will lead to an entirely new kind of science [10]. While the latter claim seems exaggerated, the hyperbole with which it has been promoted should not deter mathematicians’ efforts to develop new tools for studying the dynamics of discrete-time finite-state networks.

The question arises to what extent the approach taken in this section is likely to yield results that are relevant to neuroscience. After all, with very few exceptions the connectivity of neuronal networks in actual organisms is not known. The one organism for which we have a complete wiring diagram of the neuronal network is the little roundworm Caenorhabditis elegans. Its hermaphrodite form has 302 neurons. The website [11] has a lot of fascinating information about this organism and its nervous system. For other organisms we have wiring diagrams of important parts of the nervous system, such as the stomatogastric ganglia of some crustaceans [12], which contain between 20 and 30 neurons.

The connectivities of networks that have been mapped are much more complicated than acyclic, directed cycle, or complete digraphs, and our results do not apply to them directly. However, let us assume that we have a trustworthy discrete-time finite-state model for the dynamics of such a network (we will discuss this assumption in some detail in Section 6.7). If the network is very small (as in the ganglia of crustaceans), one could use computer simulations as in the present section to completely characterize the dynamics. This will no longer be possible in a network like the one for C. elegans with its 302 neurons, since there would be at least $2^{302}$ states to consider. But simulations could still be used to explore the dynamics for a large sample of initial states. Theoretical investigations like the ones in this section are still useful for this type of exploration: Let, for example, image be an initial state in a network image, where D is strongly connected and has a few hundred nodes. Suppose that in simulations of the trajectory for, say, 1000 steps, we do not yet find an attractor, but we discover a node for which the result of Exercise 6.13 guarantees that it will be minimally cycling. While extending the simulation until the trajectory visits the same state twice may not be computationally feasible, by Proposition 12 of [7] we would already know that all nodes in the attractor will be minimally cycling and the attractor will have length 2.

What if the neuronal network in the organism of interest is too large to map all the connections? In that case, we cannot directly investigate the dynamics of the true network, but we can explore the “typical” dynamics of a network that is randomly drawn from the class of all networks that share certain properties for which we do have empirical confirmation. Two such properties that are often known are an estimate of the total number n of neurons and an estimate of the average number of connections that a given neuron makes to other neurons. In the next section and Projects 6 and 7 [1], we will introduce some methods for exploring “typical” dynamics. As it will become clear, such methods rely on theoretical results of the kind that we proved in the present section.

6.5 Exploring the Model for Some Random Connectivities

6.5.1 Erdős-Rényi Random Digraphs

The notion of a “typical” connectivity can be made mathematically rigorous by assuming that D is a digraph with vertex set image that is randomly drawn from a given probability distribution image. If Ch is a characteristic property of digraphs, then we can say that a typical digraph drawn from these distributions has property Ch if the probability that a digraph drawn from image has property Ch approaches 1 as image. We can talk about typical dynamics of networks with connectivities drawn from the distributions image in a similar fashion.

The most straightforward implementation of this idea is obtained by assuming that image is the uniform distribution on the set of all loop-free digraphs image. For this distribution, consider a potential arc image with image and image. Notice that image is contained in the arc set of exactly half of all digraphs in image. Thus image. Similarly, if image are pairwise distinct potential arcs, then

image (6.5)

Thus the events image are independent. This formula is very useful. For example, it immediately implies the following:

Proposition 6.13

In a typical loop-free digraph drawn from the uniform distribution, every two nodes will be connected by a path of length 2; in particular, the digraph will be strongly connected.

Proof

If image are different nodes, then

image (6.6)

Note that Eq. (6.6) gives the probability of the event that there does not exist a directed path of length 2 from i to j. Now consider the event F that for some image with image there does not exist a path of length 2 from i to j. There are image ordered pairs image with image, and the probability of a union of events is always bounded from above by the sum of their probabilities. It follows from Eq. (6.6) that image.

Note that image. Thus image, where image denotes the complement of event F.

It follows from Eq. (6.5) that image is exactly the probability distribution of random digraphs on image that can be constructed in the following way: For each potential arc image, flip a fair coin. Include image in image if and only if the coin comes up heads. One can generalize this construction to the case where the coin is not fair and comes up heads with probability p, which can be an arbitrary number with image. The corresponding probability distributions will be denoted by image. These distributions give the classes of so-called Erdős-Rényi random loop-free digraphs.
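
The coin-flip construction translates directly into code. The sketch below (function names are our own) draws a digraph from this distribution and, as a quick empirical companion to Proposition 6.13, checks whether every ordered pair of distinct nodes is joined by a directed path of length 2 when the coin is fair.

```python
import random

def erdos_renyi_digraph(n, prob, rng=random):
    """Loop-free digraph on nodes 0..n-1: each potential arc is included with probability prob."""
    return {(j, i) for j in range(n) for i in range(n) if j != i and rng.random() < prob}

def length_two_connected(n, arcs):
    """Check whether every ordered pair of distinct nodes is joined by a directed path of length 2."""
    out = {v: {w for (u, w) in arcs if u == v} for v in range(n)}
    return all(any(w in out[u] for u in out[v])
               for v in range(n) for w in range(n) if v != w)

print(length_two_connected(50, erdos_renyi_digraph(50, 0.5)))   # True for almost every draw
```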

Definition 6.14

Let image be a digraph and let image. The indegree of v, denoted by image, is the number of image with image. The outdegree of v, denoted by image, is the number of image with image.

For example, in the digraph of Figure 6.2a, we have image, while image. In a directed cycle graph image each node has indegree and outdegree 1; in a complete loop-free digraph image each node has both indegree and outdegree equal to $n - 1$.

For nodes v in random digraphs, both image and image are random variables. It is easy to calculate the expected values of indegrees and outdegrees in Erdős-Rényi digraphs.

Proposition 6.15

Let D be randomly drawn from image and let image. Then

image (6.7)

Proof

This proof exemplifies a technique that is extremely useful in studying random structures. For every potential arc image, define a random variable image that takes the value 1 if image and takes the value 0 if image. Then image. Moreover,

image (6.8)

Since the expected value of a sum of random variables is the sum of their expected values, we get

image (6.9)

as required.

The average indegrees and outdegrees correspond to the average number of connections per neuron that can be estimated from empirical results even for those networks where we do not know the actual connectivity graph. Notice that since the uniform distribution image is equal to image, nodes in a random loop-free digraph that is drawn from this distribution will have average indegrees and outdegrees equal to $(n-1)/2$. However, empirical results indicate that in neuronal networks of actual organisms each neuron usually forms far fewer connections than that. For example, as we mentioned in Section 6.2, the human brain contains roughly $10^{12}$ neurons, while individual neurons receive on average input from approximately 1000 other neurons. This makes distributions image with relatively small p more promising candidates for predicting the dynamics of such networks than the uniform distribution. The probability p may in general depend on n, and we will often write image instead of image to emphasize this dependence.

There is a substantial literature on the study of Erdős-Rényi random graphs, which are objects similar to digraphs, but with undirected edges image instead of directed arcs image. These results tend to have straightforward translations into results on Erdős-Rényi random digraphs. Project 6 [1] gives some references to the literature on the subject. Many of these results go as follows: Suppose image is a function of the number of nodes n such that image, and Ch is a property of (di) graphs. Then there exists a threshold function image such that whenever image approaches zero faster than image we have image for D drawn from image (that is, property Ch fails almost certainly for such random D), and whenever image approaches zero more slowly than image we have image for D drawn from image (that is, almost all such random D have property Ch). This phenomenon is referred to as a phase transition with threshold function image. The precise meaning of “faster” and “more slowly” may differ somewhat from property to property, and for any given property there are actually infinitely many functions that could serve as a threshold function in the above sense. One of them will be the function image such that image whenever D is randomly drawn from the distribution image, and we will refer to it as “the threshold function for Ch.” It is usually impossible to find a precise formula for this function; one has to restrict oneself to the more modest goal of estimating how quickly it approaches zero as image.

For example, if Ch is the property of being strongly connected, then a threshold function is $t(n) = \frac{\ln n}{n}$ (see the discussion at the end of Project 6 [1]). Since $0.5 > \frac{\ln n}{n}$ for all n, Proposition 6.13 is a special case of this more general result.
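Such a phase transition can also be probed numerically. The sketch below (an illustration only, not code from the chapter or from Project 6 [1]) uses the networkx library to estimate, for fixed n, the probability that a digraph drawn from $\mathcal{D}(n, p)$ is strongly connected as p varies; the value of p at which the estimated probability crosses 0.5 gives an empirical estimate of the threshold.

    import networkx as nx

    def prob_strongly_connected(n, p, samples=200, seed=0):
        # Estimate P(D is strongly connected) for D drawn from D(n, p)
        # by Monte Carlo sampling.
        hits = 0
        for k in range(samples):
            D = nx.gnp_random_graph(n, p, seed=seed + k, directed=True)
            if nx.is_strongly_connected(D):
                hits += 1
        return hits / samples

    n = 100
    for p in (0.02, 0.04, 0.06, 0.08, 0.10):
        print(p, prob_strongly_connected(n, p))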

One can try to derive similar results for properties of the dynamics of networks N with connectivity D, where D is randomly drawn from $\mathcal{D}(n, p(n))$. Results of this type were obtained in [13]; here we will briefly review the most important of these and then explore two other properties that were not studied in [13]. For ease of exposition we will restrict ourselves to the case where all refractory periods and all firing thresholds are equal to 1.

The dynamical properties we are interested in concern the trajectories of a "typical" initial state in such networks. Thus we assume that we draw a random digraph from $\mathcal{D}(n, p(n))$, and then we draw an initial state $\vec{s}(0)$ (independently of the choice of D) from the set of all possible initial states with the uniform distribution.

Let FAMA be the property that $\vec{s}(0)$ belongs to a fully active minimal attractor. In [13], we were able to show that if $p(n)$ approaches zero sufficiently fast, then almost no initial states will belong to a fully active minimal attractor (actually, not even to a minimal one), and if $p(n)$ approaches zero sufficiently slowly, then almost all initial states will belong to a fully active minimal attractor. This indicates the existence of a threshold function $t_{FAMA}(n)$ for this property. Numerical simulations confirm that the phase transition is very steep; see Figure 6.5a. They also suggest estimates of the location of this threshold for large n; see Figure 6.5b.
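Phase transitions of this kind are easy to explore by simulation. The sketch below is a simplified illustration and not the code behind Figure 6.5; it assumes the special case considered here (all refractory periods and firing thresholds equal to 1) and an update rule of the following form, which is one natural reading of the model: a node fires (state 0) at the next step exactly when it is currently rested (state 1) and at least one of its presynaptic neighbors is currently firing; a firing node always returns to the rested state at the next step. Under these assumptions, an initial state lies in a fully active minimal attractor exactly when its trajectory has period 2 from the start and every node fires during the first two steps.

    import random

    def step(state, in_nbrs):
        # One update with p_i = th_i = 1; state[i] = 0 means node i fires,
        # state[i] = 1 means node i is rested.
        new = []
        for i, s in enumerate(state):
            if s == 0:
                new.append(1)                       # just fired -> rested
            elif any(state[j] == 0 for j in in_nbrs[i]):
                new.append(0)                       # rested with a firing input -> fires
            else:
                new.append(1)                       # rested, no firing input -> stays rested
        return new

    def in_fully_active_minimal_attractor(s0, in_nbrs):
        s1 = step(s0, in_nbrs)
        s2 = step(s1, in_nbrs)
        fully_active = all(a == 0 or b == 0 for a, b in zip(s0, s1))
        return s2 == s0 and fully_active

    def fraction_fama(n, p, samples=100, seed=0):
        rng = random.Random(seed)
        hits = 0
        for _ in range(samples):
            in_nbrs = [[j for j in range(n) if j != i and rng.random() < p]
                       for i in range(n)]
            s0 = [rng.randint(0, 1) for _ in range(n)]
            hits += in_fully_active_minimal_attractor(s0, in_nbrs)
        return hits / samples

    n = 100
    for avg_deg in (2, 4, 6, 8, 10, 12):
        print(avg_deg, fraction_fama(n, avg_deg / (n - 1)))

Sweeping the average number of connections per node in this way produces, qualitatively, the kind of steep transition curve shown in Figure 6.5a, although the exact numbers will depend on n and on the details of the update rule.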


Figure 6.5 (a) Location of the phase transitions for networks of different sizes n. The connection probability is scaled so that the average number of connections per node is displayed on the horizontal axis. (b) Empirical estimates of the locations of these phase transitions for different n. Adapted from [13].

This makes networks with highly connected Erdős-Rényi connectivities, in which the average indegrees and outdegrees (see Proposition 6.15) grow faster than the critical average degree for FAMA, very similar to networks with connectivities that are complete loop-free digraphs (compare with Corollary 6.12), and rather uninteresting, since in almost all trajectories every node will fire as soon as it reaches the end of its refractory period.

For slightly sparser connectivities, a typical initial state will not be in a minimal attractor, but will have a property that we label AUTO. It means that there exists an autonomous set (see [13] for a precise definition) of minimally cycling nodes. Results in [13] indicate that AUTO undergoes a phase transition as well, with a threshold function $t_{AUTO}(n)$ that corresponds to a smaller average number of connections per node than the threshold for FAMA. Again, numerical simulations confirmed that the phase transition is very steep, as in Figure 6.5a. They also suggest estimates of its location for large n; see Figure 6.5b. Thus for networks whose average indegrees and outdegrees exceed this critical value, in the trajectory of almost every initial state the (vast) majority of nodes will be minimally cycling.

One should expect more interesting dynamics for connectivities whose average indegrees and outdegrees lie below the critical value for AUTO. Consider the property MNODE that the trajectory of $\vec{s}(0)$ contains at least one minimally cycling node, and the property NS that $\vec{s}(0)$ does not belong to the basin of attraction of the steady state attractor. Then FAMA implies AUTO, AUTO implies MNODE, and MNODE implies NS, and none of these implications can be reversed. It follows that for sufficiently large n the corresponding threshold functions should satisfy the inequalities

$t_{NS}(n) \leq t_{MNODE}(n) \leq t_{AUTO}(n) \leq t_{FAMA}(n).$

In particular, one would expect that some minimally cycling nodes appear already at connectivities sparser than the AUTO threshold. Project 6 [1] invites the reader to explore the properties NS and MNODE. The project is structured in such a way that the reader will learn along the way some basic proof techniques for exploring random structures, in particular the so-called second-moment method.

6.5.2 Discussion: Other Models of Random Digraphs

How relevant are results on random networks for neuroscience and real-world applications in general? As we already mentioned, usually we will not know the exact connectivity of a real-world network, but we may know some of its properties, such as the average number of connections per node. Studying random networks with the empirically observed properties at least gives us some idea of what to expect. Moreover, structures such as the neuronal networks of actual organisms have been shaped by evolution (a stochastic process) over millions of years, so we should expect their connectivity to be at least somewhat random. But there is no absolute notion of randomness; this mathematical concept is always relative to some underlying probability distribution. However, the distributions $\mathcal{D}(n, p)$ that define Erdős-Rényi random digraphs may not be the most realistic ones.

Recall our derivation of formula (6.9) for the average indegrees and outdegrees in Erdős-Rényi digraphs. It was based on sums of independent random variables. Thus, when the expected degrees are large, concentration of such sums (in the spirit of the Central Limit Theorem) implies that the ratio between the smallest and the largest indegree in the digraph will be very close to 1 with probability arbitrarily close to 1; similarly for outdegrees. This is not what we observe in many real-world networks. In particular, some empirical studies have found evidence that the degree distributions in some actual neuronal networks are so-called power law or scale-free distributions, with a small number of highly connected neurons [14,15]. Erdős-Rényi digraphs are the wrong kind of random digraphs for modeling such networks.

Project 6 [1] gives a brief introduction to power law distributions and to methods for randomly generating digraphs with such degree distributions. It invites the reader to explore how the dynamics of networks with the resulting connectivities depends on the parameters of the degree distribution.
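As a concrete starting point for such experiments, one simple way (among many) to generate a digraph with a heavy-tailed degree distribution is sketched below: draw each node's out-degree from a power law truncated to $\{1, \dots, n-1\}$ and then choose the targets of its out-arcs uniformly at random. This construction is only illustrative and is not the one specified in Project 6 [1].

    import random

    def power_law_digraph(n, alpha=2.5, seed=0):
        # Out-degrees follow P(k) proportional to k^(-alpha), truncated to
        # 1 <= k <= n - 1; targets are chosen uniformly at random, without
        # self-loops or parallel arcs.
        rng = random.Random(seed)
        ks = list(range(1, n))
        weights = [k ** (-alpha) for k in ks]
        arcs = set()
        for i in range(n):
            out_deg = rng.choices(ks, weights=weights)[0]
            others = [j for j in range(n) if j != i]
            for j in rng.sample(others, out_deg):
                arcs.add((i, j))
        return arcs

    n = 200
    arcs = power_law_digraph(n)
    out_deg = [0] * n
    for (i, _) in arcs:
        out_deg[i] += 1
    print("max out-degree:", max(out_deg), "mean out-degree:", sum(out_deg) / n)

Even for moderate n, a run of this sketch typically shows a few nodes with out-degrees far above the mean, in contrast to the nearly homogeneous degrees of Erdős-Rényi digraphs.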

6.6 Another Interpretation of the Model: Disease Dynamics

While our models are motivated by a problem in neuroscience and while we refer to our models N as “neuronal networks,” there is nothing inherently “neuronal” about these structures. They are just mathematical objects. In general, a given mathematical object, such as a dynamical system, may serve as a model for any number of very different natural systems.

Imagine an infectious disease that spreads in a population of n individuals. At any given time, an individual can be in one of three categories: infected and prone to infect others (in the set I), healthy but susceptible to the infection (in the set S), or recovered and (temporarily or permanently) immune to the disease (in the set R). Infection may result from a contact between an infectious and a susceptible individual. These assumptions lead to the classical SIR models of disease dynamics (see [16] for a review). Notice that infectiousness is similar to the firing of a neuron: It can induce the same state at a subsequent time in another individual. In this section we will explore whether our networks might be suitable models for the dynamics of certain diseases.

The correspondence between a mathematical model and the underlying natural system is never perfect; mathematical modeling is always based on simplifying assumptions. The so-called art of mathematical modeling is in essence a knack for making simplifying assumptions that lead to models which are simple enough to allow exploration either by computer simulations or mathematical methods and yet incorporate enough detail to make realistic predictions about a natural system. The first question a modeler needs to address is which aspects of the natural system the model is supposed to predict. Here we will be interested only in the question of how the proportion of infected individuals in the population changes over time.

SIR models come in a variety of flavors; in particular, there are a lot of details to consider that differ from disease to disease. In actual modeling, these details are inferred from the available data and the model is constructed by deriving suitable assumptions from the data. Here we do not have the space to consider actual data for a disease; instead let us consider some modeling assumptions that one might have deduced:

1. An infected individual will be infectious for a time period $\Delta_{inf}$ with mean value $\overline{\Delta}_{inf}$ and very small standard deviation.

2. The time lag between infection and becoming infectious is very small relative to $\overline{\Delta}_{inf}$, so that it may be neglected.

3. After ceasing to be infectious, an individual will remain immune to the disease for a time period $\Delta_{imm}$ with $\overline{\Delta}_{imm} = p\,\overline{\Delta}_{inf}$, where p is a positive integer given by the data, and the standard deviation of $\Delta_{imm}$ is small.

4. Contact between an infected and a susceptible individual during the time interval of infectiousness will result in a new infection with probability q.

These are all the assumptions we will be using here. But of course, these assumptions already gloss over a lot of details that might (or might not) significantly influence the disease dynamics. Before reading about how we build a model from these assumptions, you may want to take a few minutes to do the following:

Exercise 6.16

List some details that are being ignored by the assumptions above but may significantly influence how the disease will spread in the population.

Since the length of time an individual remains infectious has "very small standard deviation" and the time lag between the moment of transmission and becoming infectious may be neglected, we might try to work with a discrete-time model where the unit of time is chosen as $\overline{\Delta}_{inf}$. Now consider a network N with all firing thresholds equal to 1 and constant refractory periods $p_i = p$. Let us interpret each node i as an individual of a fixed population, interpret state $s_i(t) = 0$ as individual i being infectious at time t, state $s_i(t) = p$ as individual i being susceptible to infection, and states $0 < s_i(t) < p$ as individual i being immune to infection. An arc $\langle i, j\rangle \in A_D$ indicates that individual i interacts with individual j. It is natural to assume here that D is symmetric, that is, either both arcs $\langle i, j\rangle$ and $\langle j, i\rangle$ are in $A_D$ or neither of them is, but as we will see in Project 8 [1] this assumption is not actually needed.

Will the dynamics of N correspond to the dynamics of the disease? Possibly. Notice that $s_i(t) = 0$ implies $s_i(t+k) \neq 0$ for $k = 1, \dots, p$, which nicely corresponds to being immune from the disease for a time period of $p\,\overline{\Delta}_{inf}$.

However, in the above interpretation of N an individual j that is susceptible at time t will become infectious at time $t+1$ whenever there is an arc $\langle i, j\rangle \in A_D$ with individual i being infectious at time t. This would imply that the transmission probability q is 1 and that two given individuals i and j either always interact during a time interval of length $\overline{\Delta}_{inf}$ (when $\langle i, j\rangle \in A_D$) or never interact (when $\langle i, j\rangle \notin A_D$). These assumptions are too extreme to be realistic. In practice, there will always be an element of stochasticity both in the structure of contacts and in the actual transmission of the disease that results from a given contact. We can build these effects into our model by altering it as follows. Assume that during each time period of length $\overline{\Delta}_{inf}$ any two given individuals interact with probability $\tilde{p}$, and these interactions are independent. If a given interaction involves a susceptible and an infectious individual, a transmission will result with probability q. Thus we can model the dynamics of the disease by a random discrete dynamical system $N^r$ whose updating rules are identical to those of the model N above, except that the digraph will not be fixed but will be randomly drawn anew at each time step from the distribution $\mathcal{D}(n, \tilde{p}q)$ of Erdős-Rényi random digraphs (an arc is present exactly when a contact occurs that would transmit the disease).
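The following Python sketch is a minimal illustration of the randomized model just described (here denoted $N^r$), with all of the above caveats; in particular, it hard-wires the assumption that $\overline{\Delta}_{imm}/\overline{\Delta}_{inf}$ equals the integer p, and the parameter values at the bottom are arbitrary placeholders rather than values fitted to any disease.

    import random

    def disease_step(state, p, p_tilde, q, rng):
        # state[i] = 0: infectious; 1 <= state[i] < p: immune; state[i] = p: susceptible.
        infectious = [i for i, s in enumerate(state) if s == 0]
        new = []
        for i, s in enumerate(state):
            if s < p:
                new.append(s + 1)     # infectious or immune: progress toward susceptibility
            else:
                # susceptible: each infectious individual makes contact with
                # probability p_tilde; a contact transmits with probability q
                infected = any(rng.random() < p_tilde and rng.random() < q
                               for _ in infectious)
                new.append(0 if infected else p)
        return new

    def run(n=1000, p=4, p_tilde=0.005, q=0.3, steps=40, seed=1):
        rng = random.Random(seed)
        state = [0] * 10 + [p] * (n - 10)   # start with 10 infectious individuals
        for t in range(steps):
            print(t, round(state.count(0) / n, 3))
            state = disease_step(state, p, p_tilde, q, rng)

    run()

The printed column is the proportion of infectious individuals at each time step, which is exactly the quantity we said we wanted the model to predict.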

Notice that $N^r$ treats the ratio $\overline{\Delta}_{imm}/\overline{\Delta}_{inf}$ as exactly an integer and treats the "small standard deviation" of $\Delta_{inf}$ that we get from the data as practically zero. This may be permissible, but one needs to be careful about such simplifications. Project 8 [1] invites the reader to explore whether or not these simplifications will lead to false predictions. More precisely, the project proposes a suite of progressively more elaborate and apparently more realistic models of the same natural system. In particular, the constructions used for the more elaborate models in this suite incorporate the standard deviation of $\Delta_{inf}$ that is ignored by $N^r$, and allow us to deal with situations where the empirically observed ratio $\overline{\Delta}_{imm}/\overline{\Delta}_{inf}$ is only approximately, but not exactly, an integer, as will almost certainly be the case for real data sets. The more elaborate models are more difficult to study, though. We will explore which of these models, if any, incorporates just the right level of detail.

Without giving away the correct answer for Project 8 [1], let us assume for the sake of argument that our simulations for one of these models confirm the predictions of well-established models from the literature. Would this imply that the model is "good enough" to make accurate predictions about the course of an actual disease? This kind of question cannot be answered in the affirmative with certainty; Nature may always hold some surprises in the form of hidden features of the real system that influence the dynamics but that a modeler may have overlooked. But it might be possible to prove mathematical theorems to the effect that a given simpler model, such as our discrete model $N^r$, will give the exact same answer to a given question as a more detailed one, such as an ODE version of the corresponding SIR model. We will not address this question for disease models, but in the next section we will describe such a theorem for models of certain neuronal networks. Theorems of this type would be extremely valuable, since discrete models are often easier to study, at least by simulations. For example, in disease dynamics the underlying network of contacts plays an important role, with some individuals having frequent contact with many others, whereas others may be more socially isolated. This phenomenon can easily be modeled in our framework by drawing D from a distribution with unequal probabilities for different potential arcs $\langle i, j\rangle$, but it is more difficult to incorporate into a differential equations framework.

We want to point out, though, that such theorems could only address the suitability of a given simplification for a specified set of questions about model dynamics. Notice that there is at least one aspect of disease dynamics in which all our models are blatantly wrong: Assume, for example, that $\overline{\Delta}_{inf}$ is one week, which corresponds to one time step in our discrete models. Now, if one treats them too literally, our models would appear to say that each individual either gets sick on Monday and recovers on Sunday, or stays healthy the whole week. This is nonsense. Our models become useless for disease dynamics at time scales below one week.

This last observation brings us full circle back to neuroscience. Our models were inspired by the phenomenon of dynamic clustering, where time appears to be partitioned into discrete episodes of roughly equal length. Some neurons fire while other neurons remain at rest during any given episode, and membership in the clusters of firing neurons changes from episode to episode. Notice that this phenomenon looks exactly like the dynamics of a hypothetical disease where groups of people get sick on Mondays and recover on Sundays, with different groups being sick during different weeks and group membership changing over time. This sort of thing does not happen with diseases, but, as mentioned in Section 6.2, it has been empirically observed in some recordings from actual neurons. It is a puzzling phenomenon, and one naturally wants to know what might account for it.

A literal reading of our discrete model would predict this phenomenon, but this does not mean that our discrete model explains dynamic clustering, because this pattern is already built into the assumptions of discrete time steps for our models. If one wants to understand why dynamic clustering will occur in some, but not in other, neuronal networks, one needs to consider models based on coupled differential equations which do not have a built-in assumption of discrete episodes and study conditions on these models under which the phenomenon will occur. The next section describes some results of this type and how they relate to the broader context of neuroscience.

6.7 More Neuroscience: Connection with ODE Models

A natural question to ask at this point is: What does anything we have presented so far have to do with neuroscience? Put another way: How can the discrete-time dynamical system, and results concerning its dynamics, help us understand the workings of the brain? The discrete-time system is, after all, a rather simple model in which each “neuron” is represented by a finite number of states and there are rather simple rules for how the “firing” of one neuron impacts on the firing of other neurons. Real neurons, on the other hand, are very complicated cells with complex spatial geometries, the firing of an action potential involves both electrical and chemical processes and communication between neurons at synapses is surely more complicated than our discrete-time model would suggest.

There are many challenges facing mathematicians who wish to make an impact on our understanding of the brain. It is not clear, for example, how to model a neuronal system at an appropriate level of detail. If the model takes into account everything that is currently known about neurons, including their complex dendritic geometries, then the model would be too complicated to simulate numerically, let alone analyze mathematically. For this reason, simpler models have been proposed. These models usually take the form of systems of ordinary or partial differential equations; they take into account only those biological details that are believed to be important in generating the phenomena of interest. Even these models can be extremely difficult to analyze. They are highly nonlinear and involve numerous parameters (many of which are unknown experimentally). Moreover, if one considers any reasonably sized network, then the model will involve hundreds (if not thousands) of differential equations. Questions such as how the emergent population rhythm depends on network architecture present very difficult, yet exciting, challenges for mathematicians and other theoreticians.

A main focus of this chapter has been dynamic clustering, in which subpopulations of neurons fire in synchrony during distinct episodes and the membership of each subpopulation may change from episode to episode. Recall that this phenomenon has been observed in the antennal lobe (AL) of insects and olfactory bulb (OB) of mammals. Detailed differential equations models have been proposed for this phenomenon and these models have been quite useful in understanding biophysical mechanisms underlying this behavior. We note, however, that many (if not most) of the biophysical properties of these systems are not known; these include the intrinsic properties of neurons in the AL and OB, as well as details of the network connectivity. Computational studies have been useful in identifying what cellular and network properties may account for the observed behavior, leading to predictions that can, at times, be tested experimentally. However, as described above, a systematic study of these ODE models—one that can identify and classify those neuronal properties and network architectures that lead to some network behavior—has been lacking. There are simply too many nonlinear equations with too many unknown parameters.

Here we have described a discrete-time model, motivated by the phenomenon of dynamic clustering. This model is considerably simpler than the ODE models that arise in neuroscience and allows for a rigorous mathematical study of its dynamics. The question still remains, however: How can one translate results obtained from the discrete model to the biological system? This question is considerably easier to answer for the ODE models, because there is a more direct link between the biology and the ODE models: The ODE models are derived from physical principles and experimental measurements, so there is a direct link between, for example, changing a parameter in the model and changing some physical quantity in the system. In order for the discrete-time model to be useful, we would need, at a minimum, a direct correspondence between the discrete model and an ODE neuronal model, so that parameters in the discrete model, such as the lengths of refractory periods, correspond directly to parameters in the ODE model and, therefore, to properties of the physical system. Ideally, we would like to prove a theorem stating that, given an ODE model M, over a wide range of parameter values and initial conditions there is a discrete-time model N such that M and N exhibit the same dynamics; that is, the corresponding discrete-time model N will make the same predictions as the ODE model M.

The previous sentence needs some clarification. Of course the discrete model N cannot literally make the same predictions as M because it can, at best, be a much simpler approximation of M and does not allow us to even consider such variables as actual membrane potentials. As we explained in the previous section, N can at best give the same answers to a narrowly specified set of questions. The question we are primarily interested in here is:

Question 6.1

Which neurons in the network will fire during which episode along a given trajectory?

In [5], we considered a general class of ODE neuronal models and proved rigorous results for when there is a direct correspondence between the ODE model, M, and a discrete model, N. More precisely, we demonstrated that dynamic clustering occurs for all trajectories of M that start in a certain region U of the state space of M and that there is a discrete-time model N that correctly predicts which neurons will fire during each episode, throughout any ODE trajectory that starts in U. We refer to such a situation by saying that N is consistent with M on U. Moreover, we may expect the set U, on which the two models are consistent, to be open and large enough to contain, for each possible initial state $\vec{s}(0)$ of N, a state of M that maps to $\vec{s}(0)$. In this case we will say that M realizes N on U. In [5] we proved the following:

Theorem 6.16

There exists a class of ODE models M for neuronal networks such that every model in this class realizes a corresponding discrete model on an open subset U of its state space and, conversely, every discrete network model N with constant refractory period and firing threshold vectors is realized by some model M in this class.

Figure 6.6 illustrates an example of an ODE model that realizes a network N whose connectivity D is the digraph of Figure 6.2b. This network contains seven cells, and the top left panel shows the membrane potentials of each of these cells for one particular solution. The middle left panel is a grey scale representation of this solution. Two other solutions are shown in the right panel. In order to generate the three solutions shown in Figure 6.6, we chose different initial conditions. Note that each cell's membrane potential typically lies in one of two "states"; the elevated or active state corresponds to the firing of an action potential and is represented by the dark rectangles in the grey scale representations. Moreover, there are discrete episodes in which some subpopulation of cells lies in the elevated state. These subpopulations change from episode to episode and the model exhibits dynamic clustering. The discrete model completely predicts which cells fire during which episode for a large class of initial conditions. The corresponding trajectories in the discrete model are represented by the active cells in each episode.
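To compare an ODE simulation with the discrete model in the sense of Question 6.1, one typically reduces the voltage traces to an episode-by-episode firing raster: choose a voltage threshold separating the elevated (active) state from the rest state, partition time into episodes, and record which cells are active during each episode. The sketch below is a generic illustration of this reduction; the threshold value, episode length, and synthetic input data are placeholders and are not taken from [5] or [17].

    import numpy as np

    def firing_raster(voltages, dt, episode_len, v_threshold=-30.0):
        # voltages: array of shape (num_cells, num_timepoints), sampled every
        # dt time units. Entry (i, k) of the result is 1 if cell i enters the
        # elevated state at some point during episode k.
        num_cells, num_pts = voltages.shape
        pts_per_episode = int(round(episode_len / dt))
        num_episodes = num_pts // pts_per_episode
        raster = np.zeros((num_cells, num_episodes), dtype=int)
        for k in range(num_episodes):
            window = voltages[:, k * pts_per_episode:(k + 1) * pts_per_episode]
            raster[:, k] = (window.max(axis=1) > v_threshold).astype(int)
        return raster

    # toy usage with synthetic data: 7 cells, 1000 sample points
    rng = np.random.default_rng(0)
    fake_v = -65.0 + 50.0 * (rng.random((7, 1000)) > 0.9)
    print(firing_raster(fake_v, dt=0.1, episode_len=10.0))

The discrete model N is then consistent with the ODE trajectory on such a run exactly when the columns of the raster coincide, episode by episode, with the firing sets that N predicts.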


Figure 6.6 Simulations of an ODE model M that realizes a network N whose connectivity D is the digraph of Figure 6.2b. The upper left panel shows the time courses of the membrane potentials of the seven cells in the network. The grey scale representations illustrate the emergence of discrete episodes with rather sharp boundaries. The corresponding trajectories in the discrete model N are represented by the active cells in each episode. Adapted from [17].

In the online supplement, we give computer code for the ODE model that generates the solutions shown in Figure 6.6. This code uses the software XPPAUT, which can be freely downloaded from the webpage [18]. The code is flexible enough that one can explore ODE models that realize different networks. A discussion of why the model is designed as it is—that is, what the various dependent variables and parameters correspond to—is well beyond the scope of this chapter. The interested reader should consult either [3] or [5], where this model is described in detail.

As a consequence of Theorem 6.16, the results presented in this chapter are relevant for a general class of neuronal models. An important conclusion of our study of discrete-time models is that properties of the population rhythm, including the number and size of attractors and length of transients, depend on both the intrinsic properties of the individual neurons, such as their refractory periods or firing thresholds, and the network architecture. A misguided assumption in much of the neuroscience literature is that if one understands how neurons are connected to each other in a given neuronal system, then one can determine the network’s population dynamics. Here we have shown that this is typically not the case; the intrinsic properties of cells in the network may be equally important. Because of Theorem 6.16, this result translates immediately to biologically based ODE models for neuronal activity.

6.8 Directions of Further Research

Much of the theory described in this chapter has been motivated by experimentally observed firing patterns in an insect’s antennal lobe. However, most of the dynamic behaviors described in this chapter, such as synchrony and dynamic clustering, arise in many other neuronal systems. For example, these types of firing patterns have been observed in a brain region known as the basal ganglia in humans suffering from Parkinson’s disease. See, for example [19,20]. In this case, too much synchrony among neurons within the basal ganglia represents a pathological state. In healthy patients, there is very little correlation or synchrony among neurons within the basal ganglia; however, in Parkinson patients, there is a significant increase in synchrony among these neurons.

Another brain region where these types of firing patterns arise is the thalamus, which plays an important role in sensory processing. Certain neurons within the thalamus are thought to be involved in the transition between sleeping and waking states [21,22]. Experiments have demonstrated that as we fall deeper into sleep, certain neurons within the thalamus become more synchronized with each other. Moreover, computational models for sleep rhythms suggest that neurons within the thalamus exhibit clustering during certain stages of sleep.

Finally, synchronous activity has been observed in cortical networks involved in working memory [23]. It is tempting to view the brain (or some brain region) as a large-scale dynamical system and a memory as an attractor of this dynamical system. If this is the case, then it would be important to understand how the number of attractors, and their basins of attraction, depend on network properties, including the intrinsic properties of agents (or neurons) in the system and network architecture. In particular, which networks are capable of exhibiting a very large number of attractors (since one would like to store a very large number of memories) and how do properties of attractors depend on changes in parameters that may, for example, occur during learning?

We note that the neuronal systems described above share many properties, including certain features of the neurons, synaptic connections, and network architecture. Of course, there must be some important differences between these systems, because they play such different functional roles in brain processing. However, from a mathematical viewpoint, it may be possible to consider a general class of neuronal networks that encompass each of these systems and develop an analytic framework that allows us to characterize the firing patterns that emerge.

Our hope is that the analysis presented in this chapter is a first step toward developing such a framework. However, many challenges remain. Here, we characterized a neuron by just two parameters: the firing threshold and the length of refractory period. Obviously neurons are much more complicated biological objects with a myriad of complex cellular processes. Whether or not a neuron fires an action potential may depend on many factors. The firing threshold, for example, may depend on an internal “state” of the neuron, such as the concentration of some ionic species which may change over time depending on how often the neuron fires. The strengths of synaptic connections may also change over time, again depending on the firing rates of neurons. It is not clear which of these details to include in a mathematical model, how to incorporate these details into the theoretical framework described here, and how difficult it would be to mathematically analyze discrete dynamical systems that do include these details. Some progress in this direction is presented in [17].

It is important to emphasize that significant progress can be made only if there is close dialog between mathematicians and experimental neuroscientists. It hardly seems likely that a single mathematical theory can be developed that takes into account every possible detail that may go into a neuronal network model, including complex properties of cells (such as their complex geometry) and all possible network architectures. Collaborations with biologists help guide mathematicians to better understand what details are important and what dynamic properties of the system may be of functional significance. Without some appreciation of the underlying biology, it is often difficult, if not impossible, to ask the most interesting questions and provide the most meaningful answers.

Students interested in pursuing work in mathematical neuroscience may want to start with reading [3], which gives a detailed description of how methods from nonlinear dynamics have been used to address problems in neuroscience. The online supplement to this chapter also gives some additional information on other discrete models of neuronal networks and how they relate to ours.

6.9 Supplementary Materials

The online appendix, additional files, and computer code associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/B978-0-12-415780-4.00024-7 and from the volume’s website http://booksite.elsevier.com/9780124157804

References

1. Just W, Ahn S, Terman D. Neuronal Networks: A Discrete Model - Online Supplement (Appendix 6). In: Robeva R, Hodge T, eds. Mathematical Concepts and Methods in Modern Biology: Using Modern Discrete Models. Elsevier; 2013. Available at: http://booksite.elsevier.com/9780124157804.

2. Neuron, Wikipedia. http://en.wikipedia.org/wiki/Neuron.

3. Ermentrout G, Terman D. Mathematical foundations of neuroscience. Springer 2010.

4. Hodgkin A, Huxley A. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol. 1952;117:500–544.

5. Terman D, Ahn S, Wang X, Just W. Reducing neuronal networks to discrete dynamics. Physica D. 2008;237:324–338.

6. Elashvili A, Jibladze M, Pataraia D. Combinatorics of necklaces and hermite reciprocity. J Algebraic Comb. 1999;10:173–188.

7. Ahn S, Just W. Digraphs vs dynamics in discrete models of neuronal networks. DCDS-B. 2012;17:1365–1381.

8. Gallery of named graphs, Wikipedia. http://en.wikipedia.org/wiki/Gallery_of_named_graphs.

9. Just W, Ahn S, Terman D. Minimal attractors in digraph system models of neuronal networks. MBI Technical Report 68; 2007. http://mbi.osu.edu/publications/reports2007.html.

10. Wolfram S. A new kind of science. Wolfram Media; 2002.

11. Wormatlas, http://www.wormatlas.org/.

12. Stomatogastric ganglion, Scholarpedia. http://www.scholarpedia.org/article/Stomatogastric_ganglion.

13. Just W, Ahn S, Terman D. Minimal attractors in digraph system models of neuronal networks. Physica D. 2008;237:3186–3196.

14. Cecchi GA, Rao AR, Centeno MV, Baliki M, Apkarian AV, Chialvo DR. Identifying directed links in large scale functional networks: application to brain fMRI. BMC Cell Biology. 2007;8.

15. Varshney LR, Chen BL, Paniagua E, Hall DH, Chklovskii DB. Structural properties of the Caenorhabditis elegans neuronal network. PLoS Comput Biol. 2011;7(2):e1001066. doi 10.1371/journal.pcbi.1001066.

16. Hethcote HW. The mathematics of infectious diseases. SIAM Rev. 2000;42:599–653.

17. Ahn S, Smith B, Borisyuk A, Terman D. Analyzing neuronal networks using discrete-time dynamics. Physica D. 2010;239:515–528.

18. XPPAUT. http://www.math.pitt.edu/~bard/xpp/xpp.html.

19. Bevan M, Magill P, Terman D, Bolam J, Wilson C. Move to the rhythm: oscillations in the subthalamic nucleus-external globus pallidus network. Trends Neurosci. 2002;25:525–531.

20. Terman D, Rubin J, Yew A, Wilson C. Activity patterns in a model for the subthalamopallidal network of the basal ganglia. J Neurosci. 2002;22:2963–2976.

21. Saper C, Chou T, Scammell T. The sleep switch: hypothalamic control of sleep and wakefulness. Trends Neurosci. 2001;24:726–731.

22. Steriade M, McCormick D, Sejnowski T. Thalamocortical oscillations in the sleeping and aroused brain. Science. 1993;262:679–685.

23. Durstewitz D, Seamans J, Sejnowski T. Neurocomputational models of working memory. Nature Neurosci. 2000;3:1184–1191.
