updated from one time step to the next. We see that we no longer need the excessively expensive
$\frac{dQ}{du}$ term in order to compute the derivative. If we can find these so-called adjoint states $\hat{Q}$, we
can use Equation (8.4) to obtain the gradient in an efficient way.
The overall optimization is achieved using an iterative two-step process. In every iteration,
we compute a gradient that we use to update our best estimate of the control force sequence.
We start the method by initializing the control forces. This could be anything you want. The
simplest approach is to initialize them to zero forces. A more elaborate alternative is to use a
different strategy to obtain an estimate of the control forces and use those as an initial guess.
These are then refined using gradient descent optimization.
To summarize, once we have a control force vector to start with, we perform the following
two steps in a loop until we converge or decide that we’ve spent enough time on this problem.
1. Gradient computation starts by running a standard forward cloth simulation, applying
the current best guess of the sequence of optimal forces. We will refer to this step as the
forward simulation.
2. The second step of the gradient computation consists of a simulation backward in time
where the adjoint states are computed. Once all adjoint states have been computed over
the optimized frames, the states are mathematically mapped to the gradient. We refer to
this step as the backward simulation.
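The two-step loop above can be sketched in code. The snippet below is a minimal illustration, not the actual cloth solver: it swaps the cloth dynamics for a hypothetical linear step $Q^{n+1} = A Q^n + B u^n$ and a simple quadratic objective on the final state, so that the forward and backward passes stay short while keeping the same structure.

```python
import numpy as np

# Toy stand-ins for the cloth solver: these matrices, the target, and the
# quadratic objective are illustrative assumptions, not the actual dynamics.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])         # position/velocity update per time step
B = np.array([[0.0], [0.1]])       # control force acts on the velocity
target = np.array([1.0, 0.0])      # desired final state
N = 20                             # number of simulated time steps

def forward_simulation(u):
    """Step 1: run the simulation forward with the current control forces."""
    Q = [np.zeros(2)]
    for n in range(N):
        Q.append(A @ Q[n] + B @ u[n])
    return Q

def backward_simulation(Q):
    """Step 2: propagate adjoint states backward and map them to the gradient."""
    q_hat = Q[-1] - target         # adjoint of the final state
    grad = np.zeros((N, 1))
    for n in reversed(range(N)):
        grad[n] = B.T @ q_hat      # contribution of u[n] to the objective
        q_hat = A.T @ q_hat        # adjoint recursion; the objective only
                                   # touches the final state here
    return grad

u = np.zeros((N, 1))               # simplest initial guess: zero forces
for _ in range(200):               # gradient descent on the control forces
    u -= 0.5 * backward_simulation(forward_simulation(u))
```

After the loop, the final state of `forward_simulation(u)` ends up close to the target. The cloth version replaces the linear step with the real solver and the constant Jacobians with the solver's own derivatives, but the forward/backward structure is identical.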
8.3 ADJOINT STATE COMPUTATION
In this section, we will have a look at how these adjoint states are actually computed. The adjoint
states $\hat{q}^n = \langle \hat{x}^n, \hat{v}^n \rangle$ at time $n$ are computed using the following equation:
$$\hat{Q}^n = \left( \frac{\partial F}{\partial Q} \right)^T \hat{Q}^{n+1} + \left( \frac{\partial \phi}{\partial Q} \right)^T. \qquad (8.5)$$
The derivation of this formula can be found in Wojtan et al. [2006]. Here, we'll just assume the
author wasn't lying and accept this as the one true formula. Note that in this chapter, $\hat{q}^n$ refers to
the adjoint state and not the normalized vector. Once computed for all time steps, these adjoint
states are then mapped to the gradient using the equation given in (8.4).
One funny thing about this formulation is that the prior adjoint state depends on the next
one. We will have to run a simulation backward in time in order to solve this for all time steps.
We start by initializing the final adjoint state and then work our way back to the beginning of
the simulation. is is why we named this phase the backward simulation step.
The equation given in (8.5) is still a little bit vague. What is $\frac{\partial F}{\partial Q}$ supposed to be? Let's look
into it some more here. We mentioned earlier that F is what takes the particle states from one