Robot Learning by Visual Observation
by Farrokh Janabi-Sharifi, Aleksandar Vakanski
Table of Contents
Cover
Title Page
Preface
List of Abbreviations
1 Introduction
1.1 Robot Programming Methods
1.2 Programming by Demonstration
1.3 Historical Overview of Robot PbD
1.4 PbD System Architecture
1.5 Applications
1.6 Research Challenges
1.7 Summary
References
2 Task Perception
2.1 Optical Tracking Systems
2.2 Vision Cameras
2.3 Summary
References
3 Task Representation
3.1 Level of Abstraction
3.2 Probabilistic Learning
3.3 Data Scaling and Aligning
3.4 Summary
References
4 Task Modeling
4.1 Gaussian Mixture Model (GMM)
4.2 Hidden Markov Model (HMM)
4.3 Conditional Random Fields (CRFs)
4.4 Dynamic Motion Primitives (DMPs)
4.5 Summary
References
5 Task Planning
5.1 Gaussian Mixture Regression
5.2 Spline Regression
5.3 Locally Weighted Regression
5.4 Gaussian Process Regression
5.5 Summary
References
6 Task Execution
6.1 Background and Related Work
6.2 Kinematic Robot Control
6.3 Vision‐Based Trajectory Tracking Control
6.4 Image‐Based Task Planning
6.5 Robust Image‐Based Tracking Control
6.6 Discussion
6.7 Summary
References
Index
End User License Agreement
List of Tables
Chapter 05
Table 5.1 LBG algorithm.
Table 5.2 Mean values and standard deviations of the computation times for learning the trajectories from Experiment 1 and Experiment 2.
Table 5.3 Mean values and standard deviations of the computation times for learning the trajectories from Experiment 1 and Experiment 2 by applying the GMM/GMR approach.
Table 5.4 Means and standard deviations of the classification rates obtained by CRF for the painting task in Experiment 1.
Table 5.5 Means and standard deviations of the classification rates obtained by CRF for the peening task in Experiment 2.
Table 5.6 Means and standard deviations of the classification rates obtained by HMM and CRF for the painting task from Experiment 1.
Table 5.7 Means and standard deviations of the classification rates obtained by HMM and CRF for the peening task from Experiment 2.
Chapter 06
Table 6.1 Evaluation of the trajectory tracking with IBVS under intrinsic camera parameter errors ranging from 0 to 80%.
Table 6.2 Evaluation of the trajectory tracking without vision‐based control under intrinsic camera parameter errors ranging from 0 to 80%.
Table 6.3 Coordinates of the object position at the end of the task expressed in the robot base frame (in millimeters) under extrinsic camera parameter errors ranging from 0 to 80%.
List of Illustrations
Chapter 01
Figure 1.1 Classification of robot programming methods.
Figure 1.2 The user demonstrates the task in front of a robot learner, and is afterward actively involved in the learning process by moving the robot’s arms during the task reproduction attempts to refine the learned skills (Calinon and Billard, 2007a).
Figure 1.3 Block diagram of the information flow in a general robot PbD system.
Figure 1.4 Kinesthetic teaching of feasible postures in a confined workspace. During kinesthetic teaching the human operator physically grabs the robot and executes the task.
Figure 1.5 The PbD setup for teaching peg‐in‐hole assembly tasks includes a teleoperated robot gripper and the objects manipulated by a human expert. Tracking is done using magnetic sensors.
Figure 1.6 Teleoperation scheme for PbD—master arm (on the left) and slave arm (on the right) used for human demonstrations.
Figure 1.7 AR training of an assembly task using adaptive visual aids (AVAs).
Figure 1.8 Mobile AR component including a haptic bracelet.
Figure 1.9 Sensory systems used for PbD task observation.
Figure 1.10 Learning levels in PbD.
Figure 1.11 Learning at a symbolic level of abstraction by representing the decomposed task into a hierarchy of motion primitives.
Figure 1.12 A humanoid robot is learning and reproducing trajectories for a figure‐8 movement from human demonstrations.
Figure 1.13 Control of a 19 DoFs humanoid robot using PbD.
Figure 1.14 A kitchen helping robot learns the sequence of actions for cooking from observation of human demonstrations.
Figure 1.15 The experimental setup used for teaching (on the left) includes an ultrasound machine, an ultrasound phantom model, and a handheld ultrasound transducer with force sensing and built‐in 3D position markers for the optical tracking system. The robotically controlled ultrasound scanning is also shown (on the right).
Figure 1.16 Robot grasp planning application.
Chapter 02
Figure 2.1 (a) Position camera sensors of the optical tracking system Optotrak Certus; (b) optical markers attached on a tool are tracked during a demonstration of a “figure 8” motion.
Chapter 03
Figure 3.1 (a) Two sequences with different numbers of measurements: a reference sequence of 600 measurement data points and a test sequence of 800 measurement data points; (b) the test sequence is linearly scaled to the same number of measurements as the reference sequence; and (c) the test sequence is aligned with the reference sequence using DTW.
Chapter 04
Figure 4.1 Graphical representation of an HMM. The shaded nodes depict the sequence of observed elements, and the white nodes depict the sequence of hidden states.
Figure 4.2 Graphical representation of a CRF with linear chain structure.
Chapter 05
Figure 5.1 An example of initial selection of trajectory key points with the LBG algorithm. The key points are indicated using circles. The following input features are used for clustering: (a) normalized position coordinates; (b) normalized velocities; and (c) normalized positions and velocities.
Figure 5.2 Illustration of the weighted curve fitting. Clusters with low variance of the key points are assigned high weights for the spline fitting, whereas clusters with high variance of the key points are assigned low weights, which results in looser fitting.
Figure 5.3 Diagram representation of the presented approach for robot PbD. The solid lines depict automatic steps in the data processing. For a set of observed trajectories X_1, …, X_M, the algorithm automatically generates a generalized trajectory X_gen, which is transferred to a robot for task reproduction.
Figure 5.4 (a) Experimental setup for Experiment 1: panel, painting tool, and reference frame; (b) perception of the demonstrations with the optical tracking system; and (c) the set of demonstrated trajectories.
Figure 5.5 Distributions of (a) velocities, (b) accelerations, and (c) jerks of the demonstrated trajectories by the four subjects. The bottom and top lines of the boxes plot the 25th and 75th percentiles of the distributions, the bands in the middle represent the medians, and the whiskers display the minimum and maximum of the data.
Figure 5.6 Initial assignment of key points for the trajectory with minimum distortion.
Figure 5.7 Spatiotemporally aligned key points from all demonstrations. For the parts of the demonstrations that correspond to the tool approaching and departing from the panel, the clusters of key points are more scattered than for the painting part of the demonstrations.
Figure 5.8 (a) RMS errors for the clusters of key points, (b) weighting coefficients with threshold values of 1/2 and 2 standard deviations, and (c) weighting coefficients with threshold values of 1/6 and 6 standard deviations.
Figure 5.9 Generalization of the tool orientation from the spatiotemporally aligned key points. Roll angles are represented by a dashed line, pitch angles are represented by a solid line, and yaw angles are represented by a dash–dotted line. The dots in the plot represent the orientation angles of the key points.
Figure 5.10 Generalized trajectory for the Cartesian x–y–z position coordinates of the object.
Figure 5.11 Distributions of (a) velocities, (b) accelerations, and (c) jerks for the demonstrated trajectories by the subjects and the generalized trajectory.
Figure 5.12 (a) The part used for Experiment 2, with the surfaces to be painted bordered with solid lines and (b) the set of demonstrated trajectories.
Figure 5.13 (a) Generalized trajectory for Experiment 2 and (b) execution of the trajectory by the robot learner.
Figure 5.14 Generated trajectory for reproduction of the task of panel painting for Experiment 1, based on Calinon and Billard (2004). The most consistent trajectory (dashed line) corresponds to the observation sequence with the highest likelihood of being generated by the learned HMM.
Figure 5.15 RMS differences for the reproduction trajectories generated by the presented approach (X_G1), the approaches proposed in Calinon and Billard (2004) (X_G2) and Asfour et al. (2006) (X_G3), and the demonstrated trajectories (X_1–X_12). As the color bar on the right side indicates, lighter nuances of the cells depict greater RMS differences.
Figure 5.16 Cumulative sums of the RMS differences for the reproduction trajectories generated by the presented approach (X_G1) and the approaches proposed in Calinon and Billard (2004) (X_G2) and Asfour et al. (2006) (X_G3). (a) Demonstrated trajectories (X_1–X_12) from Experiment 1 and (b) demonstrated trajectories (X_1–X_5) from Experiment 2.
Figure 5.17 Generalized trajectory obtained by the GMM/GMR method (Calinon, 2009) for (a) Experiment 1 and (b) Experiment 2, in Section 4.4.4.
Figure 5.18 (a) Experimental setup for Experiment 1 showing the optical tracker, the tool with attached markers and the object for painting. (b) One of the demonstrated trajectories with the initially selected key points. The arrow indicates the direction of the tool’s motion. (c) Demonstrated trajectories and the generalized trajectory.
Figure 5.19 (a) Plot of a sample demonstrated trajectory for the peening task from Experiment 2 and a set of initially selected key points, and (b) generalized trajectory for the peening experiment.
Chapter 06
Figure 6.1 Response of classical IBVS: feature trajectories in the image plane, camera trajectory in Cartesian space, camera velocities, and feature errors in the image plane.
Figure 6.2 Response of classical PBVS: feature trajectories in the image plane, camera trajectory in Cartesian space, camera velocities, and feature errors in the image plane.
Figure 6.3 The learning cell, consisting of a robot, a camera, and an object manipulated by the robot. The assigned coordinate frames are: camera frame ℱ_c (O_c, x_c, y_c, z_c), object frame ℱ_o (O_o, x_o, y_o, z_o), robot base frame ℱ_b (O_b, x_b, y_b, z_b), and robot’s end‐point frame ℱ_e (O_e, x_e, y_e, z_e). The transformation between a frame i and a frame j is given by a position vector and a rotation matrix.
Figure 6.4 (a) The eigenvectors of the covariance matrix, ê_1 and ê_2, for three demonstrations at times k = 10, 30, and 44; (b) observed parameters for feature 1; the vector is required to lie in the region bounded by η_min and η_max.
Figure 6.5 (a) Three demonstrated trajectories of the object in the Cartesian space. For one of the trajectories, the initial and ending coordinate frames of the object are shown, along with the six coplanar features; (b) projected demonstrated trajectories of the feature points onto the image plane of the camera; (c) reference image feature trajectories produced by Kalman smoothing and generalized trajectories produced by the optimization model; and (d) object velocities from the optimization model.
Figure 6.6 (a) Demonstrated Cartesian trajectories of the object, with the feature points, and the initial and ending object frames; (b) demonstrated and reference linear velocities of the object for the x- and y-coordinates of the motions; (c) reference image feature trajectories from the Kalman smoothing and the corresponding generalized trajectories from the optimization; and (d) the demonstrated and retrieved generalized object trajectories in the Cartesian space. The initial state and the ending state are depicted with square and cross marks, respectively.
Figure 6.7 (a) Experimental setup showing the robot in the home configuration and the camera. The coordinate axes of the robot base frame and the camera frame are depicted; (b) the object with the coordinate frame axes and the features.
Figure 6.8 Sequence of images from the kinesthetic demonstrations.
Figure 6.9 (a) Feature trajectories in the image space for one sample demonstration; (b) demonstrated trajectories, Kalman-smoothed (reference) trajectory, and corresponding planned trajectory for one feature point (feature no. 2); (c) demonstrated linear and angular velocities of the object and the reference velocities obtained by Kalman smoothing; (d) Kalman-smoothed (reference) image feature trajectories and the generalized trajectories obtained from the optimization procedure; and (e) demonstrated and generated Cartesian trajectories of the object in the robot base frame.
Figure 6.10 (a) Desired and robot-executed feature trajectories in the image space; (b) tracking errors for the pixel coordinates (u, v) of the five image features in the image space; (c) tracking errors for the x-, y-, and z-coordinates of the object in the Cartesian space; (d) translational velocities of the object from the IBVS tracker.
Figure 6.11 Task execution without optimization of the trajectories: (a) desired and robot-executed feature trajectories in the image space; (b) tracking errors for the pixel coordinates (u, v) of the five image features in the image space; (c) tracking errors for the x-, y-, and z-coordinates of the object in the Cartesian space; (d) translational velocities of the object from the IBVS tracker.
Figure 6.12 (a) Demonstrated trajectories of the image feature points, superimposed with the desired and robot-executed feature trajectories; (b) desired and executed trajectories when the trajectories are slowed down.
Figure 6.13 (a) Image feature trajectories for one of the demonstrations; (b) demonstrated trajectories, reference trajectory from the Kalman smoothing, and the corresponding generalized trajectory for one of the feature points; (c) desired and robot-executed image feature trajectories; (d) translational velocities of the object from the IBVS tracker; (e) tracking errors for the pixel coordinates (u, v) of the five image features in the image space; and (f) tracking errors for the x-, y-, and z-coordinates of the object in the Cartesian space.
Figure 6.14 Projected trajectories of the feature points in the image space with (a) errors of 5, 10, and 20% introduced for all camera intrinsic parameters; (b) errors of 5% introduced for the focal length scaling factors of the camera.