by David M. J. Tax, Dick de Ridder, Ferdinand van der Heijden, Yaobin Zou, Ming Feng
Classification, Parameter Estimation and State Estimation, 2nd Edition
Preface
Note
Acknowledgements
About the Companion Website
1 Introduction
1.1 The Scope of the Book
1.2 Engineering
1.3 The Organization of the Book
1.4 Changes from First Edition
1.5 References
Note
2 PRTools Introduction
2.1 Motivation
2.2 Essential Concepts
2.3 PRTools Organization Structure and Implementation
2.4 Some Details about PRTools
2.5 Selected Bibliography
3 Detection and Classification
3.1 Bayesian Classification
3.2 Rejection
3.3 Detection: The Two-Class Case
3.4 Selected Bibliography
Exercises
4 Parameter Estimation
4.1 Bayesian Estimation
4.2 Performance Estimators
4.3 Data Fitting
4.4 Overview of the Family of Estimators
4.5 Selected Bibliography
Exercises
Notes
5 State Estimation
5.1 A General Framework for Online Estimation
5.2 Infinite Discrete-Time State Variables
5.3 Finite Discrete-Time State Variables
5.4 Mixed States and the Particle Filter
5.5 Genetic State Estimation
5.6 State Estimation in Practice
5.7 Selected Bibliography
Exercises
6 Supervised Learning
6.1 Training Sets
6.2 Parametric Learning
6.3 Non-parametric Learning
6.4 Adaptive Boosting – Adaboost
6.5 Convolutional Neural Networks (CNNs)
6.6 Empirical Evaluation
6.7 Selected Bibliography
Exercises
Note
7 Feature Extraction and Selection
7.1 Criteria for Selection and Extraction
7.2 Feature Selection
7.3 Linear Feature Extraction
7.4 References
Exercises
8 Unsupervised Learning
8.1 Feature Reduction
8.2 Clustering
8.3 References
Exercises
Note
9 Worked Out Examples
9.1 Example on Image Classification with PRTools
9.2 Boston Housing Classification Problem
9.3 Time-of-Flight Estimation of an Acoustic Tone Burst
9.4 Online Level Estimation in a Hydraulic System
9.5 References
A Topics Selected from Functional Analysis
A.1 Linear Spaces
A.2 Metric Spaces
A.3 Orthonormal Systems and Fourier Series
A.4 Linear Operators
A.5 Selected Bibliography
Notes
B Topics Selected from Linear Algebra and Matrix Theory
B.1 Vectors and Matrices
B.2 Convolution
B.3 Trace and Determinant
B.4 Differentiation of Vector and Matrix Functions
B.5 Diagonalization of Self-Adjoint Matrices
B.6 Singular Value Decomposition (SVD)
B.7 Selected Bibliography
Note
C Probability Theory
C.1 Probability Theory and Random Variables
C.2 Bivariate Random Variables
C.3 Random Vectors
C.4 Selected Bibliography
Notes
D Discrete-Time Dynamic Systems
D.1 Discrete-Time Dynamic Systems
D.2 Linear Systems
D.3 Linear Time-Invariant Systems
Selected Bibliography
Index
EULA
Index
a
Acceptance boundary
Adaboost
Algorithm
backward
condensation
forward
forward–backward
Viterbi
Allele
Autoregressive, moving average models
b
Batch processing
Bayes estimation
Bayes' theorem
Bayesian classification
Bhattacharyya upper bound
Bias
Binary classification
Binary measurements
Boosting
Branch-and-bound
c
Chernoff bound
Chi-square test
Chromosome
Classifier
Bayes
Euclidean distance
least squared error
linear
linear discriminant function
Mahalanobis distance
maximum a posteriori (MAP)
minimum distance
minimum error rate
nearest neighbour
perceptron
quadratic
support vector
Clustering
average-link
characteristics
complete-link
hierarchical
K-means
single-link
Completely
controllable
observable
Computational complexity
Computational issues
Condensation algorithm (conditional density propagation)
Condensing
Confusion matrix
Consistency checks
Continuous state
Control law
Control vector
Controllability matrix
Controller
Convolutional Neural Networks (CNNs)
Cost
absolute value
function
matrix
quadratic
uniform
Covariance
Covariance model (CVM) based estimator
Covariance models
Cross-validation
Crossover
Curve
calibration
fitting
d
Datafiles
Datasets
Decision boundaries
Decision function
Degrees of freedom (Dof)
Dendrogram
Design set
Detection
Discrete
algebraic Riccati equation
Kalman filter (DKF)
Lyapunov equation
Riccati equation
state
Discriminability
Discriminant function
generalized linear
linear
Dissimilarity
Distance
Bhattacharyya
Chernoff
cosine
Euclidean
inter/intraclass
interclass
intraclass
Mahalanobis
probabilistic
Distribution
Gamma
Drift
Dynamic stability
e
Editing
Elitism
Entropy
Ergodic Markov model
Error correction
Error covariance matrix
Error function
Error rate
Estimation
maximum a posteriori (MAP)
maximum likelihood
minimum mean absolute error (MMAE)
minimum mean squared error (MMSE)
minimum variance
Estimation loop
Evaluation
Evaluation set
Experiment design
Extended Kalman filter (EKF)
f
Face classification
Feature
Feature extraction
Feature reduction
Feature selection
generalized sequential forward
Plus l – take away r
selection of good components
sequential forward
Feed-forward neural network
Fisher approach
Fisher's linear discriminant
Fitness function
Fudge factor
g
Gain matrix
Gene
Generation
Generative topographic mapping
Genetic operators
Goodness of fit
Gradient ascent
h
Hidden Markov model (HMM)
Hidden neurons
Hierarchical clustering
Hill climbing algorithm
Histogramming
Holdout method
i
i.i.d.
Image classification
Image compression
Importance sampling
Incomplete data
Indicator variables
Infinite discrete-time model
Innovation(s)
matrix
Input vector
k
Kalman
filtering
form
Kalman filter
discrete
extended
iterated extended
linearized
Kalman gain matrix
Kernel
Gaussian
PCA (KPCA)
polynomial
radial basis function (RBF)
trick
K-means clustering
K-nearest neighbour rule
Kohonen map
l
Labeled data
Labelling
Labels
Lagrange multipliers
Latent variable
Learning
least squared error
non-parametric
parametric
perceptron
supervised
unsupervised
Learning data
Learning rate
Least squared error (LSE)
Leave-one-out method
Left–right model
Level estimation
Likelihood
function
ratio
Linear
dynamic equation
plant equation
state equation
system equation
Linear feature extraction
Linear feedback
Linear-Gaussian system
Linear system equation
Log-likelihood
Loss function
m
Mahalanobis distance
Mahalanobis distance classifier
MAP estimation
Mappings
Margin
Markov condition
Matched filtering
Maximum likelihood
Maximum likelihood estimation
Mean square error
Measure
divergence
Matusita
Minimum error rate
Minimum risk classification
Missing data
Mixture
of Gaussians
of probabilistic PCA
MMSE estimation
Mode estimation
Model selection
Monte Carlo simulation
Moving average models
Multidimensional scaling
Multiedit algorithm
Mutation
n
Nearest neighbour rule
Neuron
Nominal trajectory
Non-linear operation
Normalized
estimation error squared
importance weights
innovation squared
o
Objects
Observability
Gramian
matrix
Observation
Observer
Online estimation
Optimal filtering
Optimization criterion
Outlier clusters
Outliers
Overfitting
p
Parameter vector
Particle filter
Particle filtering
Particles
Parzen estimation
Perceptron
Periodogram
Place coding
Population
Predicted measurement
Principal
component analysis
components
directions
Principle of orthogonality
Probabilistic dependence
Probability
posterior
prior
Probability density
conditional
posterior
Process noise
Proposal density
q
Quadratic decision function
Quantization errors
r
Random walk
Regression curve
Regularization
Reject rate
Rejection
class
Resampling by selection
Residual(s)
Retrodiction
Riccati loop
Risk
average
conditional
Robust error norm
Robustness
Root mean square (RMS)
s
Sammon mapping
Sample
covariance
mean
Sampling
Scatter matrix
between-scatter matrix
within-scatter matrix
Selection
Self-organizing map
Signal-to-noise ratio
Silhouette classification
Single sample processing
Smoothing
Stability
State
augmentation
estimation
offline
online
mixed
Statistical linearization
Steady state
Steepest ascent
Stress measure
Subspace
dimension
structure
Sum of squared differences (SSD)
Support vector
System
identification
noise
matrix
t
Target vector
Test set
Topology
Training set
Trait
Transfer function
Transition probability density
u
Unbiased
absolutely
Unit cost
Unlabeled data
Untrained mapping
v
Validation set
Viterbi algorithm
w
Weak classifier
Weak hypothesis
Weak learners
Weight distribution
White random sequence
Winning neuron
Wishart distribution