updated from one time step to the next. We see that we no longer need the excessively expensive $\frac{dQ}{du}$ term in order to compute the derivative. If we can find these so-called adjoint states $\hat{Q}$, we can use Equation (8.4) to obtain the gradient in an efficient way.
The overall optimization is achieved using an iterative two-step process. In every iteration, we compute a gradient that we use to update our best estimate of the control force sequence. We start the method by initializing the control forces. The simplest approach is to initialize them to zero forces. A more elaborate option is to use a different strategy to produce an estimate of the control forces and use that as the initial guess. These forces are then refined using gradient descent optimization.
To summarize, once we have a control force vector to start with, we perform the following
two steps in a loop until we converge or decide that we’ve spent enough time on this problem.
1. Gradient computation starts by running a standard forward cloth simulation, applying the current best guess of the sequence of optimal forces. We will refer to this step as the forward simulation.
2. The second step of the gradient computation consists of a simulation backward in time in which the adjoint states are computed. Once all adjoint states have been computed over the optimized frames, they are mathematically mapped to the gradient. We refer to this step as the backward simulation.
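The two-step loop above can be sketched in code. The outline below is a minimal sketch in Python; the `forward_simulate` and `backward_simulate` callbacks are hypothetical placeholders standing in for the actual cloth simulator and the adjoint computation, not part of the published method.

```python
import numpy as np

def optimize_control_forces(q0, num_frames, num_iters, step_size,
                            forward_simulate, backward_simulate):
    """Iteratively refine a control force sequence via adjoint gradients.

    forward_simulate(q0, forces) -> list of simulated states [q^0, ..., q^N]
    backward_simulate(states, forces) -> gradient w.r.t. the forces
    Both callbacks are placeholders for a real cloth simulator.
    """
    # Simplest initialization: start from zero control forces.
    forces = np.zeros((num_frames, q0.size))
    for _ in range(num_iters):
        # Step 1: forward simulation with the current best guess.
        states = forward_simulate(q0, forces)
        # Step 2: backward (adjoint) simulation maps states to a gradient.
        gradient = backward_simulate(states, forces)
        # Gradient descent update of the control force sequence.
        forces -= step_size * gradient
    return forces
```

In practice the loop would also monitor the objective value to decide convergence; here a fixed iteration count keeps the sketch short.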
8.3 ADJOINT STATE COMPUTATION
In this section, we will have a look at how these adjoint states are actually computed. The adjoint states $\hat{q}^n = \langle \hat{x}^n, \hat{v}^n \rangle$ at time $n$ are computed using the following equation:
$$
\hat{Q}^n = \left( \frac{\partial F}{\partial Q} \right)^T \hat{Q}^{n+1} + \left( \frac{\partial \Phi}{\partial Q} \right)^T ,
\tag{8.5}
$$
where $\Phi$ is the objective function.
The derivation of this formula can be found in Wojtan et al. [2006]. Here, we'll just assume the authors weren't lying and accept this as the one true formula. Note that in this chapter, $\hat{q}^n$ refers to the adjoint state and not the normalized vector. Once computed for all time steps, these adjoint states are then mapped to the gradient using the equation given in (8.4).
One funny thing about this formulation is that the prior adjoint state depends on the next one. We will therefore have to run a simulation backward in time in order to solve this for all time steps. We start by initializing the final adjoint state and then work our way back to the beginning of the simulation. This is why we named this phase the backward simulation step.
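The backward sweep described above is just a reverse loop over the time steps. The sketch below assumes two hypothetical callbacks supplied by the simulator: `dF_dQ_T(n, q_hat)`, which applies the transposed Jacobian of the state transition to a vector, and `dPhi_dQ_T(n)`, which evaluates the transposed objective derivative at step `n`; both names are illustrative, not from the original text.

```python
def compute_adjoint_states(num_steps, dF_dQ_T, dPhi_dQ_T):
    """Solve the adjoint recurrence of Equation (8.5) backward in time.

    dF_dQ_T(n, q_hat) -> (dF/dQ)^T applied to q_hat at step n
    dPhi_dQ_T(n)      -> (dPhi/dQ)^T evaluated at step n
    Both callbacks are hypothetical placeholders for the simulator.
    """
    adjoints = [None] * (num_steps + 1)
    # The final adjoint state only sees the objective term.
    adjoints[num_steps] = dPhi_dQ_T(num_steps)
    # Work backward from the end of the simulation to the beginning.
    for n in range(num_steps - 1, -1, -1):
        adjoints[n] = dF_dQ_T(n, adjoints[n + 1]) + dPhi_dQ_T(n)
    return adjoints
```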
The equation given in (8.5) is still a little bit vague. What is $\frac{\partial F}{\partial Q}$ supposed to be? Let's look into it some more here. We mentioned earlier that $F$ is what takes the particle states from one time step to the next. In our standard forward simulation, we used the linearized backward Euler
integration scheme to accomplish this. Mathematically, computing the adjoint states based on the linearized scheme is the right thing to do in order to compute the correct gradient. However, Wojtan et al. [2006] state that doing so leads to a dimensional explosion in the derivatives, which would make the method computationally intractable.
This issue can be overcome by computing the adjoint states corresponding to the backward Euler scheme instead of its linearized version. This means that we're no longer computing the exact gradients for our simulations. However, we can compute a very similar gradient at a much cheaper cost. For computer graphics purposes, this is definitely worth the trade-off.
Recall that the backward Euler scheme is given by
$$
q^{n+1} = q^n + h V(q^{n+1})
\tag{8.6}
$$
with $\frac{dq}{dt} = V(q)$. Remember that $h$ was the time duration with which we advance the simulation in a single simulation step. If we substitute this into Equation (8.5) for computing the adjoint states, we find
$$
\hat{q}^n = \hat{q}^{n+1} + h \left( \left. \frac{\partial V}{\partial q} \right|_n \right)^T \hat{q}^n + \left( \frac{\partial \Phi}{\partial q^n} \right)^T .
\tag{8.7}
$$
This adjoint state computation is linear because the Jacobian $\left. \frac{\partial V}{\partial q} \right|_n$ is known from the corresponding time step in the forward simulation. So, if we apply the following backward Euler scheme to Equation (8.7)
$$
\begin{aligned}
v^{n+1} &= v^n + h M^{-1} f^{n+1} \\
x^{n+1} &= x^n + h v^{n+1} ,
\end{aligned}
\tag{8.8}
$$
we get
$$
\begin{aligned}
\hat{v}^n &= \hat{v}^{n+1} + h M^{-1} \left( \left. \frac{\partial f}{\partial v} \right|_n \right)^T \hat{v}^n + h \hat{x}^n + \left( \frac{\partial \Phi}{\partial v^n} \right)^T \\
\hat{x}^n &= \hat{x}^{n+1} + h M^{-1} \left( \left. \frac{\partial f}{\partial x} \right|_n \right)^T \hat{v}^n + \left( \frac{\partial \Phi}{\partial x^n} \right)^T .
\end{aligned}
\tag{8.9}
$$
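Since $\hat{v}^n$ appears on both sides of Equation (8.9), each backward step requires a linear solve, analogous to the forward backward Euler step. One way to resolve the coupling, sketched below under the assumption of small dense matrices (real cloth code would use sparse matrices and an iterative solver), is to substitute the $\hat{x}^n$ equation into the $\hat{v}^n$ equation and solve the resulting system; all argument names are illustrative.

```python
import numpy as np

def adjoint_step(v_hat_next, x_hat_next, h, M_inv, Jv, Jx, dPhi_dv, dPhi_dx):
    """One backward-in-time step of the adjoint updates in Equation (8.9).

    Jv = df/dv|_n and Jx = df/dx|_n are the force Jacobians stored during
    the forward simulation; dPhi_dv and dPhi_dx are the transposed
    objective derivatives at step n. Dense arrays are used for clarity.
    """
    dim = v_hat_next.size
    # Substituting the x-hat equation into the v-hat equation gives a
    # linear system in v_hat^n, mirroring the forward backward Euler solve:
    #   (I - h M^{-1} Jv^T - h^2 M^{-1} Jx^T) v_hat^n = rhs
    A = np.eye(dim) - h * M_inv @ Jv.T - h * h * M_inv @ Jx.T
    rhs = v_hat_next + h * (x_hat_next + dPhi_dx) + dPhi_dv
    v_hat = np.linalg.solve(A, rhs)
    # With v_hat^n known, the x-hat update is explicit.
    x_hat = x_hat_next + h * M_inv @ Jx.T @ v_hat + dPhi_dx
    return v_hat, x_hat
```

Running this step for every frame, from the last back to the first, produces the full set of adjoint states used in the gradient mapping of Equation (8.4).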