Evolutionary Computation in Gene Regulatory Network Research
by Nasimul Noman, Hitoshi Iba
Evolutionary Computation in Gene Regulatory Network Research
PREFACE
ACKNOWLEDGMENTS
CONTRIBUTORS
I PRELIMINARIES
CHAPTER 1 A BRIEF INTRODUCTION TO EVOLUTIONARY AND OTHER NATURE-INSPIRED ALGORITHMS
1.1 INTRODUCTION
1.2 CLASSES OF EVOLUTIONARY COMPUTATION
1.3 ADVANTAGES/DISADVANTAGES OF EVOLUTIONARY COMPUTATION
1.4 APPLICATION AREAS OF EC
1.5 CONCLUSION
REFERENCES
CHAPTER 2 MATHEMATICAL MODELS AND COMPUTATIONAL METHODS FOR INFERENCE OF GENETIC NETWORKS
2.1 INTRODUCTION
2.2 BOOLEAN NETWORKS
2.3 PROBABILISTIC BOOLEAN NETWORK
2.4 BAYESIAN NETWORK
2.5 GRAPHICAL GAUSSIAN MODELING
2.6 DIFFERENTIAL EQUATIONS
2.7 TIME-VARYING NETWORK
2.8 CONCLUSION
NOTES
REFERENCES
CHAPTER 3 GENE REGULATORY NETWORKS: REAL DATA SOURCES AND THEIR ANALYSIS
3.1 INTRODUCTION
3.2 BIOLOGICAL DATA SOURCES
3.3 TOPOLOGICAL ANALYSIS OF GENE REGULATORY NETWORKS
3.4 GRN INFERENCE BY INTEGRATION OF MULTI-SOURCE BIOLOGICAL DATA
3.5 CONCLUSIONS AND FUTURE DIRECTIONS
ACKNOWLEDGMENT
REFERENCES
II EAs FOR GENE EXPRESSION DATA ANALYSIS AND GRN RECONSTRUCTION
CHAPTER 4 BICLUSTERING ANALYSIS OF GENE EXPRESSION DATA USING EVOLUTIONARY ALGORITHMS
4.1 INTRODUCTION
4.2 BICLUSTER ANALYSIS OF DATA
4.3 BICLUSTERING TECHNIQUES
4.4 EVOLUTIONARY ALGORITHMS BASED BICLUSTERING
4.5 CONCLUSION
REFERENCES
CHAPTER 5 INFERENCE OF VOHRADSKÝ’S MODELS OF GENETIC NETWORKS USING A REAL-CODED GENETIC ALGORITHM
5.1 INTRODUCTION
5.2 MODEL
5.3 INFERENCE BASED ON BACK-PROPAGATION THROUGH TIME
5.4 INFERENCE BY SOLVING SIMULTANEOUS EQUATIONS
5.5 REXstar/JGG
5.6 INFERENCE OF AN ARTIFICIAL NETWORK
5.7 INFERENCE OF AN ACTUAL GENETIC NETWORK
5.8 CONCLUSION
ACKNOWLEDGEMENTS
REFERENCES
CHAPTER 6 GPU-POWERED EVOLUTIONARY DESIGN OF MASS-ACTION-BASED MODELS OF GENE REGULATION
6.1 INTRODUCTION
6.2 EVOLUTIONARY COMPUTATION FOR THE INFERENCE OF BIOCHEMICAL MODELS
6.3 METHODS
6.4 DESIGN METHODOLOGY OF GENE REGULATION MODELS BY MEANS OF CGP AND PSO
6.5 RESULTS
6.6 DISCUSSION
6.7 CONCLUSIONS AND FUTURE PERSPECTIVES
NOTES
REFERENCES
CHAPTER 7 MODELING DYNAMIC GENE EXPRESSION IN STREPTOMYCES COELICOLOR: COMPARING SINGLE AND MULTI-OBJECTIVE SETUPS
7.1 INTRODUCTION
7.2 REGULATORY NETWORKS AND GENE EXPRESSION DATA
7.3 OPTIMIZATION USING EVOLUTIONARY ALGORITHMS
7.4 MODELING GENE EXPRESSION
7.5 RESULTS
7.6 DISCUSSION
7.7 CONCLUSIONS
REFERENCES
CHAPTER 8 RECONSTRUCTION OF LARGE-SCALE GENE REGULATORY NETWORK USING S-SYSTEM MODEL
8.1 INTRODUCTION
8.2 REVERSE ENGINEERING GRN WITH S-SYSTEM MODEL AND EVOLUTIONARY COMPUTATION
8.3 THE PROPOSED FRAMEWORK FOR INFERRING LARGE-SCALE GRN
8.4 EXPERIMENTAL RESULTS
8.5 DISCUSSIONS
8.6 CONCLUSION
ACKNOWLEDGMENTS
REFERENCES
III EAs FOR EVOLVING GRNs AND REACTION NETWORKS
CHAPTER 9 DESIGN AUTOMATION OF NUCLEIC ACID REACTION SYSTEM SIMULATED BY CHEMICAL KINETICS BASED ON GRAPH REWRITING MODEL
9.1 INTRODUCTION
9.2 NUCLEIC ACID REACTION SYSTEM
9.3 SIMULATION BY CHEMICAL KINETICS
9.4 AUTOMATIC DESIGN OF NUCLEIC ACID REACTION SYSTEM
9.5 DISCUSSION AND CONCLUSION
REFERENCES
CHAPTER 10 USING EVOLUTIONARY ALGORITHMS TO STUDY THE EVOLUTION OF GENE REGULATORY NETWORKS CONTROLLING BIOLOGICAL DEVELOPMENT
10.1 INTRODUCTION
10.2 COMPUTATIONAL APPROACHES FOR THE EVOLUTION OF DEVELOPMENTAL GRNS
10.3 USING EVOLUTIONARY COMPUTATIONS TO INVESTIGATE BIOLOGICAL EVOLUTION
10.4 CONCLUSIONS
ACKNOWLEDGEMENTS
REFERENCES
CHAPTER 11 EVOLVING GRN-INSPIRED IN VITRO OSCILLATORY SYSTEMS
11.1 INTRODUCTION
11.2 PEN DNA TOOLBOX
11.3 RELATED WORK
11.4 FRAMEWORK FOR EVOLVING REACTION NETWORKS (ERNE)
11.5 ERNE FOR THE DISCOVERY OF OSCILLATORY SYSTEMS
11.6 DISCUSSION
11.7 CONCLUSION
REFERENCES
IV APPLICATION OF GRN WITH EAs
CHAPTER 12 ARTIFICIAL GENE REGULATORY NETWORKS FOR AGENT CONTROL
12.1 INTRODUCTION
12.2 COMPUTATION MODEL
12.3 VISUALIZING THE GRN ABILITIES
12.4 GROWING MULTICELLULAR ORGANISMS
12.5 DRIVING A VIRTUAL CAR
12.6 REGULATING BEHAVIORS
12.7 CONCLUSION
NOTES
REFERENCES
CHAPTER 13 EVOLVING H-GRNS FOR MORPHOGENETIC ADAPTIVE PATTERN FORMATION OF SWARM ROBOTS
13.1 INTRODUCTION
13.2 PROBLEM STATEMENT
13.3 H-GRN MODEL WITH REGION-BASED SHAPE CONTROL
13.4 EVOLVING H-GRN USING NETWORK MOTIFS
13.5 CONCLUSIONS AND FUTURE WORK
ACKNOWLEDGMENT
APPENDIX
REFERENCES
CHAPTER 14 REGULATORY REPRESENTATIONS IN ARCHITECTURAL DESIGN
14.1 INTRODUCTION
14.2 BACKGROUND
14.3 THE NEED FOR REGULATORY REPRESENTATIONS
14.4 DEVELOPMENTAL MAPPING
14.5 ROBUSTNESS AND EVOLUTIONARY ADAPTATION IN BIOLOGICAL SYSTEMS
14.6 CONCLUSIONS AND DISCUSSION
ACKNOWLEDGMENTS
REFERENCES
CHAPTER 15 COMPUTING WITH ARTIFICIAL GENE REGULATORY NETWORKS
15.1 INTRODUCTION
15.2 BIOLOGICAL GRNs
15.3 COMPUTATIONAL MODELS
15.4 MODELING DECISIONS
15.5 COMPUTATIONAL PROPERTIES OF AGRNs
15.6 AGRN MODELS AND APPLICATIONS
15.7 FUTURE RESEARCH DIRECTIONS
15.8 CONCLUSIONS
REFERENCES
INDEX
SERIES
EULA
List of Tables
Chapter 4
Table 4.1
Table 4.2
Table 4.3
Table 4.4
Table 4.5
Table 4.6
Table 4.7
Chapter 5
Table 5.1
Table 5.2
Table 5.3
Table 5.4
Table 5.5
Chapter 7
Table 7.1
Chapter 9
Table 9.1
Table 9.2
Chapter 11
Table 11.1
Chapter 15
Table 15.1
Table 15.2
List of Illustrations
Chapter 1
Figure 1.1
One-point crossover in a genetic algorithm.
Figure 1.2
Mutation in a genetic algorithm.
Figure 1.3
Schematic of BLX-α.
Figure 1.4
Crossover in genetic programming.
Figure 1.5
Mutation in genetic programming.
Figure 1.6
Generation alternation in differential evolution (Reprinted with permission from Ref. [23]).
Figure 1.7
Crossover and mutation in differential evolution (Reprinted with permission from Ref. [30]).
Chapter 2
Figure 2.1
Inference of a genetic network from gene expression time series data.
Figure 2.2
Example of a Boolean network.
Figure 2.3
State transition diagram for the Boolean network in Figure 2.2.
Figure 2.4
Example of a probabilistic Boolean network.
Figure 2.5
Examples of Bayesian networks.
Figure 2.6
Inference of a genetic network using partial correlations. Edges (shown by dotted lines) with small partial correlations are (repeatedly) eliminated in the graphical Gaussian model.
Figure 2.7
Illustration of network completion.
Figure 2.8
Illustration of inference of time-varying networks. In this problem, it is required to identify both change points and network structures.
Chapter 3
Figure 3.1
The directed acyclic graph induced from the GO term S phase of meiotic cell cycle (GO: 0051332), wherein at the bottom-most level is the GO term of interest itself, and at the upper levels are all its ancestors, adapted from QuickGO GO Browser (http://www.ebi.ac.uk/ego/). GO, Gene ontology.
Figure 3.2
The gene transcriptional regulatory program. The gene transcriptional regulatory program can be simplified at two levels. At the factor–gene binding level, the “activated” transcription factors bind to their specific conserved sequence motifs, called transcription factor binding sites. When the binding process is completed, the regulation mechanism initiates gene transcription from the transcriptional start site (DNA to mRNA), the first part of the central dogma of molecular biology. This figure was adapted from Zhang et al. [46].
Figure 3.3
Schematic overview of the computational framework used for the gene regulatory module inference. PPI, protein–protein interaction; PDI, protein–DNA interaction.
Chapter 4
Figure 4.1
An illustrative example where conventional clustering fails but biclustering works. (a) A data matrix, which appears random visually even after hierarchical clustering. (b) A hidden pattern embedded in the data is uncovered if the rows and columns are permuted appropriately [15].
Figure 4.2
Conceptual difference between (a) cluster analysis and (b) bicluster analysis, where biclusters correspond to arbitrary subsets of rows and columns.
Figure 4.3
Examples of different bicluster patterns: (a) constant values, (b) constant rows, (c) constant columns, (d) additive coherent values, (e) multiplicative coherent values, and (f) linear coherent values [15].
Figure 4.4
Different geometries (lines or planes) in the 3D data space for corresponding bicluster patterns. (a) A bicluster with constant values is represented by one of the lines that are parallel to the y-axis and lie in the plane x = z (the T-plane). (b) A bicluster with constant rows is represented by the T-plane. (c) A bicluster with constant columns is represented by one of the lines parallel to the y-axis. (d) A bicluster with additive coherent values is represented by one of the planes parallel to the T-plane. (e) A bicluster with multiplicative coherent values is represented by one of the planes that include the y-axis. (f) A bicluster with linear coherent values is represented by one of the planes that are parallel to the y-axis [15].
Figure 4.5
(a) A 6 × 6 data matrix with hidden biclusters, (b) bicluster with constant values, (c) bicluster with constant rows, (d) bicluster with constant columns, (e) bicluster of additive model, where O3 = O4 − 5 = O5 − 10 = O6 − 15 and A3 = A4 + 3 = A5 − 5 = A6 − 20, (f) bicluster of multiplicative model, where O1 = 0.2 × O6 and A5 = 0.7 × A6.
Figure 4.6
Types of biclusters with coherent evolution. Considering the entries of a data matrix as symbols, (a) an overall coherent evolution, (b) a coherent evolution on the rows, (c) a coherent evolution on the columns, and (d) a coherent sign change across rows.
Figure 4.7
A plane formed by points in a bicluster in the three-dimensional data space. The gray dots are data located on the plane.
Figure 4.8
Proportion of biclusters significantly enriched by Gene Ontology biological process terms for different algorithms, where α is the adjusted significance level for the biclusters.
Figure 4.9
Graphical representation of a bicluster containing linear patterns whose relationship is c1 = 0.5c2 − 1.5c3 + 2c4 + 3c5 − c6, as shown in the table above.
Figure 4.10
Pseudo-code of linear biclusters detection based on the genetic algorithm.
Figure 4.11
Individual chromosome encoding.
Chapter 5
Figure 5.1
The computation of the objective function (5.11).
Figure 5.2
Generation of offspring by REXstar.
Figure 5.3
The structure of the artificial network.
Figure 5.4
A sample of time-series data used for inferring the artificial genetic network.
Figure 5.5
The SOS DNA repair system in Escherichia coli.
Figure 5.6
The network structure extracted from the estimated parameters in the analysis of the SOS DNA repair network in Escherichia coli. Bold lines represent biologically plausible regulations.
Chapter 6
Figure 6.1
Architecture of CUDA’s threads and memory hierarchy. Left side: thread organization. A kernel is invoked from the CPU (the host) and is executed in multiple threads on the GPU (the device). Threads are organized in three-dimensional structures named blocks, which are, in turn, organized in three-dimensional grids. The programmer must decide the dimensions of blocks and grids before the kernel launch. Right side: memory hierarchy. Threads can access data from multiple kinds of memories, all with different scopes and characteristics. Registers and local memories are private to each thread; shared memory lets threads belonging to the same block communicate, and has low access latency; all threads can access the global memory, which suffers from high latencies but has been cached since the introduction of the Fermi architecture; texture and constant memory can be read from any thread and are equipped with a cache as well. cupSODA takes advantage of the memory hierarchy of the GPU by allocating the most frequently updated data (e.g., the state of the system) in the shared memory, and the unvarying data (e.g., the set of reactions) in the constant memory. Adapted from Nvidia’s CUDA programming guide [55].
Figure 6.2
Schematic description of the memory hierarchy in Fermi and Kepler architectures. GPUs based on these architectures are equipped with a two-level data cache and a read-only data cache. The shared memory and the L1 cache share the same on-chip 64 KB memory banks. The amount of memory can be reconfigured by the user, according to the specific needs of the application. Adapted from Nvidia’s Kepler GK110 whitepaper [56].
Figure 6.3
Example of the conversion of a CP genotype into the corresponding GRM.
Figure 6.4
Fitness value of the best solution of the ED process obtained by varying the settings for nr in CGP. The best setting for the two-genes system under investigation is nr = 10, while smaller and larger values yield worse results. The plot highlights the presence of fluctuations due to the stochasticity of PSO, emphasizing how a proper choice for the grid size is fundamental for the convergence of cuGENED.
Figure 6.5
Fitness value of the best solution of the ED process obtained by varying the settings for ρ in CGP. The best setting for the two-genes system under investigation is ρ = 0.2, that is, about 20% of the genome must be modified when producing new offspring during each generation of the CGP.
Figure 6.6
Comparison of the target dynamics (dots) with the simulated dynamics of the GRM inferred by cuGENED (lines). The GRM η2γ achieves a perfect fitting of the desired behavior.
Figure 6.7
The interaction diagram shows the best GRM η2γ inferred by cuGENED to reproduce the desired behavior shown in Figure 6.6. Circular nodes represent the chemical species involved in the network, while rectangular nodes represent the reactions. The gray nodes denote the chemical species (i.e., mRNA) whose dynamics is considered as target.
Figure 6.8
Comparison of the target dynamics (dots) with the simulated dynamics of the GRM inferred by cuGENED (lines). The GRM η3γ achieves an almost perfect fitting of the desired behavior.
Figure 6.9
The interaction diagram shows the best GRM η3γ inferred by cuGENED to reproduce the desired behavior shown in Figure 6.8. Circular nodes represent the chemical species involved in the network, while rectangular nodes represent the reactions. The gray nodes denote the chemical species (i.e., mRNA) whose dynamics is considered as target for the ED. Dashed lines highlight reactions and chemical species that have no effective role in the system dynamics.
Chapter 7
Figure 7.1
A gene regulatory network of a three-gene system interacting over several levels. This demonstrates how the complexity of modeling increases rapidly when considering multiple levels of interactions. Illustrated here are gene-level (solid line), protein-level (dashed line), and metabolite-level (dotted line) interactions. Lines with arrow heads and bar ends represent activating and repressive regulations, respectively.
Figure 7.2
Dependence of antibiotic production on phosphate levels. The figure is reproduced, with the addition of curve labels, from Ref. [55] © 2010 Nieselt et al. under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0). Each line on the graph has a label of the same color that indicates the measured variable. The antibiotics produced, undecylprodigiosin (Red), actinorhodin (γ-Act), and total blue pigment (TBP), are labeled on the right side of the figure.
Figure 7.3
Unprocessed (raw) gene expression for the PyrR, WhiB, RedD, and CdaR networks [69]. The number of regulatory targets in each network is given in parentheses.
Figure 7.4
Normalized gene expression for the PyrR, WhiB, RedD, and CdaR networks; raw data from Ref. [69]. The number of regulatory targets in each network is given in parentheses.
Figure 7.5
(a) The Pareto front of non-dominated solutions to the multi-objective problem in Equations (7.7) and (7.8). Illustrated here is a front of k solutions for a network of M genes. (b) Pareto solutions and the corresponding sum squared errors for individual genes (see the text for details). The overall best solution, sb, is obtained by selecting the lowest f value for each gene in turn, yielding, in general, a network of mixed regulation types.
Figure 7.6
Pareto front of non-dominated solutions to the multi-objective problem, Equations (7.11) and (7.12), with a decoupled arrangement. Solution s1 corresponds to the best activating connection, and s2 corresponds to the best repressive solution; evaluating the minimum of these two thus yields the optimal parameters for each connection.
Figure 7.7
Box plots for the sub-networks for the single and multi-objective setups for both normalized and un-normalized (raw) gene expression data. Box plots show the distribution of sum squared error over 50 independent simulations of each setup. Labels on the x-axis give the sub-network name and the number of target genes within the network. Outliers are given as circles and averages are median values. The dashed line indicates when the problem becomes underdetermined, that is, when the number of genes exceeds the number of data points. SO, single objective; MO, multi-objective.
Figure 7.8
Box plots of sum squared error distributions over 50 simulations for the decoupled arrangement, plotted as in Figure 7.7. A significant difference between the normalized and raw data is clear here, with a separation between distributions of around 2.5 orders of magnitude. SO, single objective; MO, multi-objective.
Figure 7.9
Box plots from Figure 7.8 separated for clarity: the performance of both the single objective setup and the multi-objective setup (a) on the raw gene expression data and (b) on the normalized gene expression data. SO, single objective; MO, multi-objective.
Chapter 8
Figure 8.1
Different layers of processes in a typical biological system (adapted from Ref. [6]).
Figure 8.2
(a) A sample GRN of 10 genes with 11 interactions. (b) A large-scale GRN of 100 genes with 193 interactions. The networks are generated using GeneNetWeaver tool [42], where panels (a) and (b) are taken from Dream_Challenges → Dream3_In-Silico_Size_10 → InsilicoSize10-Ecoli1 and Dream_Challenges → Dream4_In-Silico_Size_100 → InsilicoSize100_5, respectively.
Figure 8.3
Receiver operating characteristic (ROC) points shown in ROC graphs for different methods (proposed, REGARD [12], ALG [31], and BANJO [57]) in various noise conditions.
Figure 8.4
Receiver operating characteristic (ROC) points shown in ROC graphs for Net-2 and Net-3 for proposed method and BANJO [57] in various noise conditions. The bigger circle and the smaller circle inside the graph show the positions of the ROC points for the proposed method and BANJO, respectively.
Figure 8.5
Comparison of the proposed method with BANJO [57] in terms of precision and F-score.
Figure 8.6
Target and inferred expression profiles for four target genes of Net-2 with five different levels of noise. T/I after the gene names in the graphs implies target/inferred. The horizontal and vertical axes in the graphs represent time and expression levels, respectively.
Chapter 9
Figure 9.1
Domain level representation is explained in a step-by-step manner using DNA catalytic gate [31] as an example. (a) All sequences of nucleotides and hydrogen bonds are shown using double helical arrows and lines. (b) Single stranded DNA is represented as a straight arrow. (c) Letters are allocated to domains that are units of reactions.
Figure 9.2
Hydrogen bond reactions of the catalytic gate [31]. One cycle of reaction starts from the top and continues in the anticlockwise direction. The gate consists of the substrate (S) and the fuel (F), while the catalyst (C) serves as an input to the gate. S and C first hybridize together, resulting in an intermediate structure. By branch migration and denaturation, the structure separates into the signal (G) and intermediate 1 (I1). I1 and F turn into intermediate 2 (I2) and the output (O) by another sequential hybridization and branch migration. Finally, I2 releases C and becomes the waste product (W) by denaturation.
Figure 9.3
Enzymatic reactions of the AND gate of RTRACS [32]. The gate consists of a DNA primer and a converter, while the inputs and output are ssRNA. After the primer and input 1 hybridize together, the primer is extended by an RNA-directed, primer-dependent DNA polymerase. A ribonuclease then decomposes only input 1. The remaining ssDNA hybridizes to input 2 for another round of polymerization and decomposition, which results in a partially double-stranded DNA. This structure hybridizes to the converter and forms a fully double-stranded DNA by polymerization. Since the promoter region (indicated by “T” in the figure) becomes double stranded, the downstream portion of the dsDNA is transcribed to the output by a primer-independent RNA polymerization.
Figure 9.4
Graph-based models of the (a) catalytic gate [31], (b) hairpin loop, and (c) bulge loop and pseudoknot. Domain level representations and the corresponding graph-based models are shown in the top and bottom of the figure, respectively. Domain “c” of (b), domain “b” of (c), and domains “d” through “F” of (c) are the hairpin loop, the bulge loop, and the pseudoknot, respectively. We surround each connected graph with a rectangle because a connected graph corresponds to one molecule.
Figure 9.5
Transition of the graph-based model of the catalytic gate [31] by applying graph rewriting rules. The figure is arranged in the same way as Figure 9.2. Although each reaction corresponds to a transition of graphs, the branch migration transition is applied twice for the reaction on the right, to displace both “b” and “d” at once. A graph can separate into multiple connected graphs as a result of graph transitions such as branch migration and denaturation.
Figure 9.6
Simulation result of the catalytic gate [31]. The x and y axes are time and the concentration of the output strand, respectively. The changes in the first 2000 time units were caused by making the gate. Input was added at 2000 time units, which leads to an increase in the concentration of output. The graph legends indicate the concentration of input. When the concentration of input was 0.01, that of the output did not increase because the input concentration was too small for an intermediate structure to exceed the threshold.
Figure 9.7
Simulation result of the AND gate of RTRACS [32]. The x and y axes are time and the concentration of the output strand, respectively. The inputs were added at 1000 time units. The concentration of the output began to increase from about 2000 time units when both of the inputs were added. The graph legends indicate the combination of inputs.
Figure 9.8
Flowchart of automatic design algorithm. After generating a set of random initial candidates, the algorithm iterates the process of simulation, evaluation, selection, and generation until the terminal condition is satisfied.
Figure 9.9
Designed enzyme-free OR gate. The obtained genotypes of the gate were “cab” and “A”, while those of the inputs were “BA” and “BAD”. (a) The two transitions that produce the output with either input are illustrated. Domains “c” and “D” were unused in the reaction. (b) The simulation result of the OR gate is shown. The x and y axes are time and the concentration of the output, respectively. The concentration of the output increased if at least one input was added at 1000 time units.
Figure 9.10
Designed enzyme-free AND gate. The obtained genotypes of the gate were “A,” “ba,” and “dca,” while those of the inputs were “AB” and “AC.” (a) The transitions that produce the output with both inputs are illustrated. Domain “d” was unused in the reaction. (b) The simulation result of the AND gate is shown. The x and y axes are time and the concentration of the output, respectively. The concentration of the output increases if and only if both inputs are added at 1000 time units.
Figure 9.11
Evolution of fitness values of the AND gate. The x and y axes are the generation and the fitness value, respectively. Each line connects a sequence of points, reflecting the transition of the highest fitness value among the candidates during the evolutionary computation. All of the trials obtained relatively high fitness values.
Figure 9.12
Designed AND gate that utilizes enzymatic reactions. The obtained genotypes of the gate were “CTA” and “c”, while those of the inputs were “Bat” and “B”. (a) The gate and its transitions to produce the output with both inputs are illustrated. (b) The simulation result of the AND gate is shown. The x and y axes are time and the concentration of the output, respectively. The concentration of the output increased if and only if both inputs were added.
Figure 9.13
Designed automaton. The obtained genotypes of the gate were “aABC”, “dacb”, and “CAe”, while the domains to be blocked by inputs were “a” and “b.” (a) The transitions that produce the output with the two orderings of inputs are illustrated. Domains “d” and “e” were unused in the reaction. (b) The simulation result of the automaton is shown. The x and y axes are time and the concentration of the output, respectively. The first and second inputs are added at 1000 and 2000 time units, respectively. The concentration of the output increased if input 2 was added before input 1.
Figure 9.14
Evolution of fitness values of the automaton. The x and y axes are the generation and the fitness value, respectively. Each line connects a sequence of points, reflecting the transition of the highest fitness value among the candidates during the evolutionary computation. Trials 1, 2, and 5 obtained relatively high fitness values, whereas trials 3 and 4 did not.
Figure 9.15
Chemical experimental result of the automatically designed enzyme-free AND gate. The x and y axes are time in seconds and fluorescence intensity in arbitrary units, respectively. To read out the output, fluorophore and quencher molecules were attached to the output and the gate. Fluorescence intensity was normalized by the maximum intensity recorded in advance. Inputs were added at 0 s. For technical reasons, unnecessary domains in the system were omitted from the experiment.
Chapter 10
Figure 10.1
Overview of the evolutionary computation approach. (a) General cycle of evaluation, selection, reproduction, and mutation to create spatial gene expression patterns (see the text). F-values represent the relative fitness for each spatial pattern. (b) A more detailed schematic of this process, for an initial population of GRNs (Step 1), evaluation for fitness to produce a particular spatial gene expression pattern (Step 2), selection and reproduction of the fittest GRNs (Step 3), and GRN mutation (Step 4), changing parameters (kinetics) or the network connections themselves, as shown here (after Ref. [3]).
Figure 10.2
Representation of a GRN via the connectivity or regulation matrix W = W_{i←j} (the W element represents the action of the jth gene on the ith gene; it can also be written as Wij). (a) Matrix representation of gene regulation by transcription factors (TFs) encoded by the genes. In biological terms, the wij values represent both the TF binding strength and the transcriptional activation (or repression) ability of the jth factor on the ith gene. wij values are relative. The ith row of W corresponds to the entire cis-regulatory region of gene Gi. The matrix W = (wij) corresponds to all regulatory DNA elements relevant to regulatory interactions among network genes. The more zero entries W has, the fewer regulatory interactions exist among network genes. (b) Each gene (horizontal arrow) is regulated by the products of the other genes (boxes). Strength and direction of regulation (depicted as different color saturation levels) are a function of both the regulatory element and the abundance of its corresponding gene product. Genotype is represented as the matrix W, and phenotype is the vector TF.
Figure 10.3
An example of modeling evolution of a GRN controlling spatially dependent gene expression, in this case anterior–posterior segmentation of the fly body plan. (a) The population of individuals, each with a GRN. (b) GRN topology specifies the gene expression response to a maternal signaling gradient (modeled continuously with DEs). This produces variations on a striped expression pattern. (c) GRNs are mutated. (d) The reproduction chances of individuals are fitness based; fitness increases with the segment number. Reprinted with permission from Ref. [19], copyright 2013 Springer Science and Business Media.
Figure 10.4
Representation of the hb regulatory region (after Ref. [39]). (a) Schematic of two of the hb cis-regulatory modules (CRMs), showing binding sites (BSs) for specific TFs (visualized with Genamics Expression software). (b) Representation of the BSs for hb TFs (Bcd—B, Cad—C, Ems—E, Gt—G, Hb—H, Kr—K, Kni—N, Tll—T, etc.) as strings of characters, neglecting distance along the DNA between BSs. Asterisks denote spacers (non-BS DNA) separating CRMs. (c) Strength of an activating BS is calculated by a three-step algorithm which sums activation (including co-activation) and repression strengths dependent on neighboring TFs (a short action radius of three BSs is used): (1) local activation strength is tallied; (2) neighboring activation is added; (3) neighboring repression is added. The summed action strength is shown at the bottom of the figure.
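The three-step summation described in panel (c) can be sketched in code. This is only an illustration of the stated procedure: the TF weight table, the function name `bs_strength`, and the sign conventions are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the three-step BS strength algorithm of Figure 10.4c.
# Weights and names are illustrative assumptions.

RADIUS = 3  # short action radius of three BSs, as in the caption

def bs_strength(sites, weights, index):
    """Summed action strength of the activating BS at sites[index].

    sites   -- list of one-letter TF codes, e.g. ['B', 'C', 'H', ...]
    weights -- dict mapping TF code to (activation, repression) strengths
    """
    act, _ = weights[sites[index]]
    # Step 1: local activation strength is tallied.
    strength = act
    lo = max(0, index - RADIUS)
    hi = min(len(sites), index + RADIUS + 1)
    for j in range(lo, hi):
        if j == index:
            continue
        n_act, n_rep = weights[sites[j]]
        # Step 2: neighboring activation (co-activation) is added.
        strength += n_act
        # Step 3: neighboring repression is added (as a negative term).
        strength -= n_rep
    return strength
```
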
Figure 10.5
Modeling co-option of a new gene into a network during evolution. (a) Initial n-gene network (G1, … , Gn), producing n TFs. (b) A new gene (GC1), producing a new factor, can be incorporated by chance and retained if it maintains or improves fitness (gray colored row and column). The initial n-gene network can become completely rewired by co-opting GC1.
Figure 10.6
hb gene expression patterns driven from each of three evolved CRMs. As the overall regulation strength, Ra, is increased, the expression domains go from fully redundant (a) to fully distinct (e). Patterns 3–5 (c)–(e) show anterior–posterior separation of domains, as seen biologically. Pattern 3 (c) is closest to the biological domain–CRM correspondence, especially the middle domain, with no anterior expression.
Chapter 11
Figure 11.1
Idealized working of a template with its activator.
1. Hybridization of the input with the template.
2. Elongation by polymerase: the enzyme reads the template and extends the input using dNTPs (monomeric DNA) present in the buffer.
3. The newly formed duplex contains the recognition sequence of the nuclease, triggering the enzyme to cut right between the input and output.
4. The nicked structure is not stable at the working temperature, eventually releasing both input and output.
Figure 11.2
The modules of the PEN toolbox: activation, autocatalysis and inhibition.
Figure 11.3
(a) Black box representation of a template. (b) The general form of the transfer function for this box, with α the optimal speed of the module, [.] the concentration of DNA compounds, and β and βi the Michaelis parameters of the input and inhibitor, respectively. (c) The reactions involving the exonuclease: each unprotected compound (anything but templates) is degraded over time. (d) The contribution of the exonuclease to the equation describing the evolution of the compound s over time. exo_s, the activity of the exonuclease with respect to s, usually depends on the length of s. In general, this means that two values are possible, one for signals and one for inhibitors (slightly longer).
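The exonuclease contribution in panel (d) can be sketched as a per-compound degradation term. The sketch below assumes simple first-order kinetics with one rate per species class, as the caption suggests; the rate constants and names are illustrative, not values from the chapter.

```python
# Minimal sketch of the exonuclease term of Figure 11.3d, assuming
# first-order degradation. Rate constants are illustrative assumptions.

EXO = {"signal": 0.05, "inhibitor": 0.03}  # exo_s per species class (assumed)

def degradation_term(concentration, species_class):
    """Contribution to d[s]/dt from the exonuclease: -exo_s * [s].

    Templates are protected and are not degraded.
    """
    if species_class == "template":
        return 0.0
    return -EXO[species_class] * concentration
```
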
Figure 11.4
Left: a large evolved system sensing its environment (three nodes at the top) and computing a new internal state. Right: various recurring patterns generated during various runs. One may consider them “standard” subroutines that are combined together to make larger systems.
Figure 11.5
Left: Reactions related to activation. Non-enzymatic reactions (hybridization and denaturation) are reversible and as such are represented by a double-headed arrow. Enzymatic reactions (polymerization and nicking) are not reversible. Right: Reactions related to inhibition. Because an inhibitor always leaves a short toehold on the left and on the right of the template, it is possible for both the input and the output to invade and remove the inhibitor. Note that this occurs at a slower speed than hybridization, with a slowdown dependent on the number of free bases [42].
Figure 11.6
A saturation-based oscillator. Left: the topology of the system. Note that both parts are identical; the only symmetry-breaking element is the initial concentrations of the signal compound. Right, top: without saturation, the system can only perform damped oscillations. The green curve has been offset to improve readability. Right, bottom: when enzymatic saturation is added to the model, the system can oscillate, as each side alternately reaches a high state where the polymerase gets saturated, decreasing the generation of inhibitor for the other side.
Figure 11.7
Examples of substrates that can saturate the various enzymes present in the system.
Figure 11.8
A graph representation and corresponding ERNe encoding of the Oligator. Nodes represent sequences while arrows represent templates; bar-headed arrows represent inhibition. a and b are signal sequences, whereas the green node Iaa is an inhibiting sequence; it inhibits the template a→a (i.e., the self-activation of a). The system has three sequences and three templates. Thus, there are three node genes and three connection genes in its ERNe encoding.
Figure 11.9
Different types of mutation used in ERNe. Note that although mutate disable template and mutate switch template belong to parameter mutation, they actually change the structure as well.
Figure 11.10
Switch template mutation. The template b→a is disabled, an inhibition node Iaa is added, then a template b→Iaa is added with a new innovation number.
Figure 11.11
Add sequence mutation, with (a) a sequence c added in the middle of the existing template a→a, (b) a sequence c added to inhibit template a→b, and (c) a sequence c added to activate sequence b.
Figure 11.12
Add activation mutation. The template b→b is added.
Figure 11.13
Add inhibition mutation. The template b→a is disabled, an inhibiting sequence Iaa is added, and a template b→Iaa is added.
Figure 11.14
Two-point crossover of the two individuals Parent 1 and Parent 2. The template genes are lined up in the order of innovation number. The red lines show the two crossover points. The child takes genes 1, 2, 5, 6, 7, and 8 from Parent 1, and genes 3 and 4 from Parent 2. As a result, we obtain a new topology that combines parts of both parents.
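The innovation-number alignment described above can be sketched as follows. The gene representation (a dict keyed by innovation number) and the function name are simplifications for illustration, not the ERNe data structures.

```python
# Illustrative two-point crossover over genes lined up by innovation
# number, in the spirit of Figure 11.14. Representation is assumed.

def two_point_crossover(parent1, parent2, cut1, cut2):
    """Return a child taking genes outside positions [cut1, cut2) from
    parent1 and genes inside [cut1, cut2) from parent2.

    parent1/parent2 map innovation number -> gene.
    """
    # Line the genes up in the order of innovation number.
    innovations = sorted(set(parent1) | set(parent2))
    child = {}
    for pos, innov in enumerate(innovations):
        donor = parent2 if cut1 <= pos < cut2 else parent1
        if innov in donor:  # a parent may lack some innovations
            child[innov] = donor[innov]
    return child
```
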
Figure 11.15
How the first peak, second peak, and last peak are defined.
Figure 11.16
Two derivatives of the long oligator, having the tracking sequence as the first (a) and the last (b) node of the feedback loop.
Figure 11.17
Fitness plotted against number of nodes in the feedback loop.
Figure 11.18
Amplitudes plotted against number of nodes in the feedback loop.
Figure 11.19
Step sizes plotted against number of nodes in the feedback loop.
Figure 11.20
Original (a) and pruned (b) versions of the best solution for the Fast-strong oscillator.
Chapter 12
Figure 12.1
Graphical representation of a GRN. Nodes are proteins and edges represent enhancing and inhibiting affinities between two proteins. The thicker the edge, the closer the proteins.
Figure 12.2
Encoding of a GRN in a genome: the first chromosome is a set of indivisible proteins and the second chromosome contains the dynamics coefficients.
Figure 12.3
Crossover and mutation operators applied to the protein chromosome. A crossover (on the left-hand side) can only occur between two proteins, and a mutation (on the right-hand side) consists of adding, removing, or changing a protein. Reprinted with permission from Ref. [22], copyright 2014 Springer Science and Business Media.
Figure 12.4
GRNs can be used to generate pictures and videos. To do so, pixel coordinates are connected to input proteins and color components are connected to output proteins.
Figure 12.5
Examples of pictures generated with a gene regulatory network. Reprinted with permission from Ref. [10], copyright 2014 The MIT Press.
Figure 12.6
Examples of videos generated with a gene regulatory network.
Figure 12.7
Each cell of the organism has a copy of the GRN encoded in the genome. This GRN takes environmental factors (concentrations of morphogens, nutrients, etc.) and internal factors (mechanical pressure, level of energy, current specialization state, etc.) as inputs to regulate the cell's behavior, both the cell lifecycle actions (division, quiescence, apoptosis, specialization state) and chemical product regulation (production of energy, diffusion of morphogens and ambrosia, etc.).
Figure 12.8
Three developmental stages emerge from evolution. During the early stage of evolution (generation 5), the organisms are not organized: they produce many nutritive cells with no protection. Then (generation 20), they organize into clusters in which nutritive cells are surrounded by defensive ones. Later in the evolutionary process (generation 20), they start moving out of the center of the environment, where particles are more concentrated. The movement emerges from an asymmetrical proliferation of the cells, directed by a morphogen gradient created by dying cells.
Figure 12.9
Successive steps of the development of one of the best individuals. Nutritive cells (in orange) surround themselves with a shield of protective cells (in blue) which they feed. Movement toward the outside of the environment can also be noticed (in the direction of the magenta morphogen gradient). Harmful particles are intentionally made denser and denser until no organism can possibly survive. This allows for a constant constraint increase throughout the simulation. Reprinted with permission from Ref. [10], copyright 2014 The MIT Press.
Figure 12.10
Development of the best organism of the experiment, showing a four-step developmental strategy. Nutritive cells are orange, defensive cells are green, and storage cells are blue. The striped area indicates where nutrients appear. The greener the cells, the higher their energy.
Figure 12.11
Sensors of the car connected to the GRN. Plain red arrows are track sensors used by the GRN, whereas gray dashed ones are track sensors also available in the simulator but not used by the GRN. The plain arrows Speed X and Speed Y are the longitudinal and the transversal car speeds, respectively.
Figure 12.12
The GRN uses nine track sensors and longitudinal and transversal speeds to compute the car steering, acceleration, and brake.
Figure 12.13
Organization of the protein chromosome and link to the car sensors and actuators: protein tags (in red) are evolved by the genetic algorithm whereas protein types (in green) are fixed and always connected to the same car sensors (for input proteins) and the same car actuators (for output proteins).
Figure 12.14
Tracks used to train the GRN. All of them are provided by TORCS.
Figure 12.15
Evolution of the car behavior during the three different stages of evolution. The left-hand plot represents the car position on the track along the distance from the start line: 0 means the car is on the track centerline, −1 means the car is on the right edge of the track, and 1 means the car is on the left edge. The right-hand plot is the steering output value along the distance from the start: −1 means the steering is fully rotated to the right and 1 means fully rotated to the left.
Figure 12.16
Integration of the high-level behaviors and of the GRN to the three-layer behavioral architecture.
Figure 12.17
Strategies that emerge from independent training of the GRN. From left to right, the first consists of creating a “wing” between the target to defend and the threat. In the second strategy, defenders are distributed on the line between the target and the threat. The third strategy consists of sending one defender at a time to intercept the threat; if it fails, other defenders are available to be sent as backup. Finally, the last strategy consists of dividing the space into sectors and having one defender in each sector. These sectors are dynamic: they change according to the threat's position, dangerousness, and direction. This improves the capacity of the defenders to intercept the threats.
Figure 12.18
Comparison of the best GRN approach (left-hand side) and the scripted approach (right-hand side) on 30 random scenarios composed of 100 threats to intercept with various fixed speed ratios (abscissa). The minimum success rate (worst-case scenario, in red) drops faster with the scripted approach than with the GRN (vertical dashed line): the GRN adapts better to higher speed ratios than the scripted approach.
Figure 12.19
Example of system adaptation to the team size: the same GRN is used to handle teams composed of three, four, or five defenders. The team covers the environment according to the team size.
Chapter 13
Figure 13.1
Illustration of a two-layer hierarchical gene regulatory network structure for target entrapping. Reprinted with permission from Ref. [27], copyright 2014 Springer Science and Business Media.
Figure 13.2
Examples of a target entrapping pattern from protein concentration g3 generated by the upper layer of the H-GRN according to different thresholds θ3, with fixed θ1 = 0.25 and θ2 = 0.3, where a target is located at (x, y) = (10, 10).
Figure 13.3
Arc segments with contact and intermediate point to connect neighboring points. Two arc configurations are possible with the same (orientation 2) or opposite (orientation 1) sign of curvature.
Figure 13.4
Splinegon representation procedure from a target entrapping pattern generated by the upper layer where targets are represented as a blue star.
Figure 13.5
Geometric relations for the region-based shape control logic. Reprinted with permission from Ref. [27], copyright 2014 Springer Science and Business Media.
Figure 13.6
One hundred robots (blue points) forming a target shape denoted as a ring and avoiding moving obstacles, which are denoted by a red and a green small circle.
Figure 13.7
Forming a ring-shaped region while tracking a moving target.
Figure 13.8
Position and velocity estimation results using the decentralized extended information filter.
Figure 13.9
Entrapping of multiple static targets with neighborhood size adaptation. By adjusting the neighborhood size through iterations using the bumper range and the maximum sensor range, the final size of 0.4133 m is obtained at the eighth iteration.
Figure 13.10
Entrapping of multiple moving targets with a changing complex region. As the targets move away from each other, the entrapping shape changes accordingly, and consequently the robots reorganize themselves inside the shape.
Figure 13.11
Illustration of an evolving two-layer H-GRN structure for target entrapping.
Figure 13.12
Non-uniform rational B-splines (NURBS) representation of a target entrapping pattern.
Figure 13.13
Illustration of an evolved two-layer H-GRN structure for target entrapping.
Figure 13.14
Entrapping of two targets with a moving obstacle.
Figure 13.15
Comparison between different obstacle avoidance methods.
Figure 13.16
Entrapping pattern separation as targets move away from each other.
Chapter 14
Figure 14.1
(a) Evolved canopy, (b) structural analysis, and (c) solar analysis. Reprinted with permission from Ref. [55], copyright 2012 Association for Computing Machinery, Inc.
Figure 14.2
Model setup. A three-dimensional volume contains a fixed number of node objects. These nodes are encoded with simple growth instructions, which are used during the developmental representation to grow three-dimensional designs. Reprinted with permission from Ref. [55], copyright 2012 Association for Computing Machinery, Inc.
Figure 14.3
Genotype structure: (a) Each node is described by four genes, which define four aspects of its development: ROI, M, G, and [X, Y, Z]. (b) The range of influence (ROI), a radial dimension, described by a two-digit gene (in the range 0–99), within which nodes can perceive, communicate, and connect to other nodes. (c) The position of the node within 3D space as described by a six-digit gene (each digit in the range 0–9) that defines the Cartesian coordinates [XX, YY, ZZ]. (d–g) The library of geometry types, G, which the node uses to construct structural connections within its ROI. This is specified using a one-digit gene (in the range 0–9) to select either G1, G2, or G3. Reprinted with permission from Ref. [55], copyright 2012 Association for Computing Machinery, Inc.
Figure 14.4
Developmental growth. (a) A random arrangement of nodes with empty memory spaces. (b) Node 1 grows G1 connections between itself and nodes within its ROI. (c) Node 2 grows G2 connections between surrounding nodes, omitting Node 1 as it already has a connection. (d) Nodes 3 and 4 have been repressed by Node 2's earlier growth. Reprinted with permission from Ref. [55], copyright 2012 Association for Computing Machinery, Inc.
Figure 14.5
Phenotype variability. Small changes to node ROI or geometry type can dramatically influence phenotype formation through positive and negative regulation. (a) Phenotype produced from Figure 14.4. (b) Mutation of Node 1 geometry type: G1 to G2. (c) Mutation of Node 2 geometry type: G2 to G1. (d) Mutation of Node 2 ROI: ROI = 7 to ROI = 2. Reprinted with permission from Ref. [55], copyright 2012 Association for Computing Machinery, Inc.
Figure 14.6
Pruning process. (a) Following the developmental growth process, if any components intersect, the last component to be formed is deleted. (b) Following the intersection test, a connectivity test is used to identify and remove any components that are disconnected from the main structure. Reprinted with permission from Ref. [55], copyright 2012 Association for Computing Machinery, Inc.
Figure 14.7
Growth and differentiation of gene regulatory networks in two dimensions. Networks begin with small disconnected connections (a–b) and through evolution become larger functional structures (c–d). Nodes are positioned using Blondel et al.'s [3] method in Python using Thomas Aynaud's “community” module for NetworkX. Nodes are then sized according to their eigenvector centrality.
Figure 14.8
Genotype–phenotype mapping process. (a) The genotype is a fixed string genome where all characters are integers between 0 and 9. (b) Directed three-dimensional network of regulated connections between genes (here shown in two dimensions). (c) Network connections that are unbuildable are removed (shown as red dotted lines). (d) The phenotype represents all buildable three-dimensional connections of the regulated gene network (here shown in two dimensions for clarity).
Figure 14.9
Changes in locality and mutational robustness. (a) Low locality in random solutions. No correlation between mutation innovation and eigenvector centrality of genes in random solutions. (b) Increased locality in evolved solutions. Correlation between mutation innovation and eigenvector centrality in evolved solutions. (c) Increased mutational robustness in evolved solutions. Cumulative mutation innovation in both random and evolved solutions.
Figure 14.10
Typical canalization of a genome. Canalization typically occurs within 100–150 generations and produces noticeable hubs in the gene network. Canalized genomes are better able to retain phenotypic traits following gene knockouts.
Figure 14.11
(a) Neutral mutations improve evolutionary search. Comparative fitness of model with and without neutral mutations. (b) Hamming distance of model with neutral mutations. This graph shows the average Hamming distance of consecutive genotypes which increase fitness throughout evolutionary runs. (c) Phenotypically neutral mutations continue to adapt gene network topology. This graph shows the average cumulative change in eigenvector centrality of the gene networks over generational time.
Figure 14.12
Changes in eigenvector centrality of gene networks during phenotypic stasis. (a) Model without neutral mutations. Each line represents a gene within the genotype. (b) Model with neutral mutations. This graph shows neutral “tinkering” of the gene networks during phenotypic stasis. (c) Model with neutral mutations showing how eigenvector centrality deviates through phenotypic stasis in relation to individual genes.
Figure 14.13
(a) Cumulative beneficial adaptations of specific gene attributes. (b) This plot shows all mutations which end periods of phenotypic stasis during the evolutionary runs. (c) Cumulative neutral mutations of specific gene attributes. (d) This plot shows all neutral mutations which occur in periods of stasis, during evolutionary runs.
Figure 14.14
Typical evolutionary run for generations G = 2 to G = 280. This figure shows that in the early stages of evolution, changes to genes tend to introduce more “innovation” (or difference) into the phenotype. Significantly, this allows the evolutionary algorithm to explore novel solutions during the early stages and discover useful “fundamental” properties which can be protected (by canalization) and subsequently optimized.
Figure 14.15
Typical evolutionary run for generations G = 300 to G = 580. This figure illustrates the latter stages of an evolutionary run (following canalization). Here the architectural structure is adapted by exploiting hidden genetic variation to make smaller changes than shown in the early stages of the evolutionary process. Significantly, this allows the solutions to become slowly optimized over time to produce well-performing architectural structures (see G = 580).
Figure 14.16
This diagram shows that neutral mutations can be integral to evolutionary innovations. (a) shows that when neutral “tinkering” is removed from the genome, beneficial mutations are canalized. (b) shows that the inclusion of evolved neutral “tinkering” enables an evolutionary adaptation to occur which improves the fitness of the solution: F(Xnm).
Figure 14.17
Changes made to the genome over generations. Each genome is represented as a horizontal line of 30 spheres (representing each gene). Size and shade of each sphere is defined by an attribute of the gene it represents. Size represents the ROI attribute of each gene. Shades represent different geometry types and material properties of the connections (analogous to weights of connections). Note that some genes do not change very much following canalization—these usually represent “hubs” of regulation in the gene network. (a) Entire selected genomes from a typical evolutionary run. (b) Zoomed-in view of a period in the “fossil record” which showed phenotypic stasis.
Figure 14.18
Evolved solution that does not contain geometric regularities.
Figure 14.19
Solutions evolved within a simple physics simulation. Nodes are positioned within Cartesian space and contain growth instructions. During growth, nodes are subjected to gravity and “settle” to form a network solution.
Chapter 15
Figure 15.1
A Boolean network. Each gene has a binary state (shown in black or white), inputs from other genes, and a Boolean regulatory function (shown for G0) that determines its next state based upon the states of its inputs.
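The synchronous update rule this caption describes can be sketched directly. The three-gene wiring and the Boolean functions below are invented for illustration; only the update scheme itself comes from the caption.

```python
# Minimal synchronous Boolean network in the style of Figure 15.1:
# each gene's next state is a Boolean function of its inputs' states.
# The wiring and rules here are illustrative assumptions.

def step(state, inputs, functions):
    """One synchronous update: every gene reads the current states of
    its inputs and applies its Boolean regulatory function."""
    return {
        gene: functions[gene](*(state[src] for src in inputs[gene]))
        for gene in state
    }

inputs = {"G0": ("G1", "G2"), "G1": ("G0",), "G2": ("G0", "G1")}
functions = {
    "G0": lambda a, b: a and not b,  # example regulatory function for G0
    "G1": lambda a: not a,
    "G2": lambda a, b: a or b,
}
```
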
Figure 15.2
Example of an artificial genome model, loosely based on the model described in Ref. [63]. Genes are identified by the pattern TATA. The three letters to the right of this are interpreted as a gene product, and the six to the left are the regulatory region, divided into an activator and a repressor. Gray letters are comparable to non-coding DNA. Note that two genes produce the transcription factor AGC. Assuming all genes are initially expressed at the same level, this will lead to a relatively high concentration of AGC. Note also how external inputs can be encoded in the concentrations of transcription factors (TAC in this example) and how external outputs can be read from the concentrations of designated gene products (AGC).
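The gene-extraction rule in the caption (a TATA marker, six regulatory letters to its left, three product letters to its right) can be sketched as a parser. The split of the regulatory region into activator-then-repressor halves, and the dict layout, are assumptions for illustration.

```python
# Sketch of gene extraction from an artificial genome, per Figure 15.2:
# 'TATA' marks a gene; six letters before it form the regulatory region
# and three letters after it encode the gene product. The order of the
# activator/repressor halves is an assumption.

def parse_genes(genome):
    """Extract genes marked by 'TATA' from an artificial genome string."""
    genes = []
    start = 0
    while True:
        i = genome.find("TATA", start)
        if i == -1:
            break
        # A valid gene needs six regulatory letters before the marker
        # and three product letters after it.
        if i >= 6 and i + 7 <= len(genome):
            genes.append({
                "activator": genome[i - 6:i - 3],
                "repressor": genome[i - 3:i],
                "product": genome[i + 4:i + 7],
            })
        start = i + 1
    return genes
```
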
Figure 15.3
Example of how the AGRN from Figure 15.2 might be used to carry out artificial development: (a) maternal cell containing an AGRN; (b) when a designated gene product reaches a certain concentration, the cell divides; (c) different gene products can cause growth in different directions and using different cell types; and (d) AGRNs in different cells can affect each other using diffusive gene products.
Figure 15.4
Example of an artificial biochemical network model in which an AGRN is coupled to an artificial metabolic network (AMN). The AMN models how a group of enzyme-mediated reactions modulate the concentrations of a group of chemical species. The AGRN, in turn, modulates the expression of the functional components of the AMN (i.e., its enzyme analogs), in effect switching different metabolic pathways on and off; see Ref. [53] for more details.
Figure 15.5
Diverse behaviors exhibited by coupled artificial biochemical networks evolved to control bipedal robots. All four controllers were found within the final population of a single multiobjective evolutionary algorithm run in which the objectives were to maximize distance covered and minimize energy used; see Refs. [55, 53] for more information about how ABNs have been used to control legged robots.
Figure 15.6
Expressive behaviors of a discrete map coupled artificial biochemical network controlling a four-legged robot with three degrees of freedom per leg. The objective was for the robot to move around as much as possible without moving away from its starting position, promoting movements that resemble dance; see: http://youtu.be/WzCmTUwtC3s.