The angle θ between two vectors u and v in ε3 is defined by
cos θ = (u · v)/(|u| |v|),
where |u| is the natural norm, i.e., |u| = (u · u)^(1/2). Then the vector product in ε3 is defined by
u × v = |u| |v| sin θ n,   |n| = 1,   u · n = v · n = 0.
As such, the vector product accepts two vectors u and v as inputs, and provides a single vector in the direction of n as output. Note that n is the unit normal to the plane containing u and v; thus, two vector products are possible in ε3. We single out the one for which {u, v, n} form a right-handed set. We also demand that the basis {e1, e2, e3} is right-handed, so
e1 × e2 = e3,   e2 × e3 = e1,   e3 × e1 = e2,
e1 × e3 = −e2,   e2 × e1 = −e3,   e3 × e2 = −e1,
e1 × e1 = 0,   e2 × e2 = 0,   e3 × e3 = 0.
Using indicial notation, we can write the nine equations in (2.56) as the single expression
ei×ej=εijkek,
where εijk is the permutation symbol, defined such that
εijk = +1 if ijk = 123, 231, or 312,
εijk = −1 if ijk = 132, 213, or 321,
εijk = 0 otherwise.
Then, it can be shown (refer to Problem 2.41) that
u×v=εijkuivjek,
i.e.,
u × v = (u2v3 − u3v2)e1 + (u3v1 − u1v3)e2 + (u1v2 − u2v1)e3
      = | e1  e2  e3 |
        | u1  u2  u3 |
        | v1  v2  v3 |.
Note that the vector product is anticommutative, i.e.,
u×v=−v×u.
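The indicial formula u × v = εijk ui vj ek can be checked numerically. The sketch below (using NumPy; the vectors u and v are arbitrary illustrative values) builds the permutation symbol as a 3 × 3 × 3 array and compares the indicial cross product against the built-in one, including anticommutativity.

```python
import numpy as np

# Levi-Civita permutation symbol: +1 for even permutations of (0, 1, 2),
# -1 for odd permutations, 0 whenever an index repeats.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def cross_indicial(u, v):
    """(u x v)_k = eps_ijk u_i v_j."""
    return np.einsum('ijk,i,j->k', eps, u, v)

u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 2.0])

w1 = cross_indicial(u, v)   # should agree with np.cross(u, v)
w2 = cross_indicial(v, u)   # anticommutativity: should be -w1
```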
The scalar triple product [u v w] of three vectors is defined by
[uvw]=u·(v×w).
It can be shown that
[uvw]=[vwu]=[wuv].
The absolute value of [u v w] is the volume of the parallelepiped determined by u, v, and w. Also,
det S = [Su Sv Sw]/[u v w].
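Both the cyclic property of the triple product and the determinant identity above are easy to verify numerically. A minimal NumPy sketch (the vectors and the tensor S are illustrative choices, with [u v w] ≠ 0):

```python
import numpy as np

def triple(u, v, w):
    """Scalar triple product [u v w] = u . (v x w)."""
    return np.dot(u, np.cross(v, w))

u = np.array([1.0, 0.0, 1.0])
v = np.array([0.0, 2.0, 0.0])
w = np.array([1.0, 1.0, 3.0])
S = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

# det S = [Su Sv Sw] / [u v w]
lhs = np.linalg.det(S)
rhs = triple(S @ u, S @ v, S @ w) / triple(u, v, w)
```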
Corresponding to each skew tensor W∈L is an axial vector w∈ε3, i.e.,
Wv=w×v
for any v∈ε3. Because of this correspondence, the set of all skew tensors is isomorphic to the three-dimensional inner product space ε3 (refer to Problem 2.42).
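The correspondence Wv = w × v pins down the components of W. Written out (this component form follows directly from the definition, with w = (w1, w2, w3)), a quick NumPy check:

```python
import numpy as np

def skew_from_axial(w):
    """Skew tensor W satisfying Wv = w x v for every v."""
    return np.array([[0.0,   -w[2],  w[1]],
                     [w[2],   0.0,  -w[0]],
                     [-w[1],  w[0],  0.0]])

# Illustrative axial vector and test vector
w = np.array([1.0, -2.0, 0.5])
W = skew_from_axial(w)
v = np.array([3.0, 1.0, 4.0])
```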
The presentation of the conceptual material in this section again follows [2]. In general, a tensor S maps a vector u to a vector Su. The vector Su typically has a length and orientation different from those of u. However, for special unit vectors a called eigenvectors, the tensor S maps a to a vector Sa that is parallel to a, i.e.,
Sa=αa.
The scalar α is referred to as an eigenvalue of S corresponding to the eigenvector a.
It can be shown that if S is positive definite, then its eigenvalues are strictly positive (refer to Problem 2.43). Further, if S is both symmetric and positive definite, then it can be shown that det S > 0, so S−1 exists (refer to Problem 2.44).
If S is symmetric, its eigenvectors {a1, a2, a3} constitute an orthonormal basis for ε3 (the corresponding eigenvalues are α1, α2, and α3). Additionally, any symmetric S in L can be written with respect to a basis of its eigenvectors {ai ⊗ aj} as
S = α1 a1 ⊗ a1 + α2 a2 ⊗ a2 + α3 a3 ⊗ a3 = Σ(i=1 to 3) αi ai ⊗ ai,
or, in matrix form,
[S] = | α1  0   0  |
      | 0   α2  0  |
      | 0   0   α3 |.
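The spectral representation can be demonstrated with NumPy's symmetric eigensolver (the matrix S is an illustrative symmetric example): the eigenvectors come out orthonormal, and summing αi ai ⊗ ai recovers S.

```python
import numpy as np

S = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])     # symmetric

alphas, A = np.linalg.eigh(S)       # eigenvalues and orthonormal eigenvectors (columns)

# Rebuild S from its spectral representation S = sum_i alpha_i a_i (x) a_i
S_rebuilt = sum(alphas[i] * np.outer(A[:, i], A[:, i]) for i in range(3))
```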
If B is a symmetric, positive-definite tensor, then there exists a unique symmetric, positive-definite tensor V such that
V² ≡ VV = B.
That is, V=√B.
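A minimal numerical sketch of this square root (NumPy; the tensor B is an illustrative SPD example): since B is symmetric positive definite, its eigenvalues are positive, and taking their square roots in a spectral decomposition yields the unique SPD root V.

```python
import numpy as np

def tensor_sqrt(B):
    """Unique symmetric positive-definite square root of an SPD tensor B."""
    beta, Q = np.linalg.eigh(B)                # eigenvalues beta are > 0 for SPD B
    return Q @ np.diag(np.sqrt(beta)) @ Q.T

B = np.array([[5.0, 2.0, 0.0],
              [2.0, 4.0, 1.0],
              [0.0, 1.0, 3.0]])                # symmetric positive definite
V = tensor_sqrt(B)
```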
If F is an invertible tensor (det F ≠ 0), then
F=RU=VR,
where U and V are symmetric, positive-definite tensors, and R is an orthogonal tensor. The multiplicative decomposition (2.69) is referred to as the polar decomposition of F. The tensors U, V, and R are unique, with
U = √(FᵀF),   V = √(FFᵀ),   R = FU⁻¹.
The terms in (2.70) make sense, because FᵀF and FFᵀ can be shown to be symmetric and positive definite (refer to Problem 2.45), so we may use (2.68); also, det F ≠ 0 implies that det U ≠ 0, so U⁻¹ exists. Note that if det F > 0, then R is proper orthogonal.
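The recipe in (2.70) translates directly into code. The sketch below (NumPy; F is an arbitrary invertible example with det F > 0) computes U as the square root of FᵀF via an eigendecomposition and recovers R = FU⁻¹, which comes out proper orthogonal.

```python
import numpy as np

def polar(F):
    """Right polar decomposition F = R U with U = sqrt(F^T F), R = F U^{-1}."""
    beta, Q = np.linalg.eigh(F.T @ F)          # F^T F is symmetric positive definite
    U = Q @ np.diag(np.sqrt(beta)) @ Q.T       # unique SPD square root
    R = F @ np.linalg.inv(U)
    return R, U

F = np.array([[1.2, 0.3, 0.0],
              [0.1, 0.9, 0.2],
              [0.0, 0.1, 1.1]])                # det F > 0
R, U = polar(F)
```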
A nontrivial solution of the eigenvalue problem (2.66) requires
det(S−αI)=0.
It can be shown that
det(S − αI) = −α³ + α²I1(S) − αI2(S) + I3(S),
so (2.71) becomes
−α³ + α²I1(S) − αI2(S) + I3(S) = 0,
where
I1(S) = tr S,   I2(S) = (1/2)[(tr S)² − tr(S²)],   I3(S) = det S
are called the principal invariants of S. Equation (2.73) is called the characteristic equation of S. According to the Cayley-Hamilton theorem, every tensor S satisfies its own characteristic equation, i.e.,
−S³ + S²I1(S) − SI2(S) + I3(S)I = 0.
Note that if S is symmetric, then it follows from (2.67b) and (2.74) that
I1(S) = α1 + α2 + α3,   I2(S) = α1α2 + α2α3 + α1α3,   I3(S) = α1α2α3.
Thus, two symmetric tensors with the same eigenvalues have the same principal invariants.
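Both the invariant formulas and the Cayley-Hamilton theorem can be checked numerically. A NumPy sketch (S is an illustrative symmetric example): the residual of the characteristic polynomial evaluated at S should vanish, and the invariants should match the elementary symmetric functions of the eigenvalues.

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])     # symmetric

I1 = np.trace(S)
I2 = 0.5 * (np.trace(S)**2 - np.trace(S @ S))
I3 = np.linalg.det(S)

# Cayley-Hamilton: -S^3 + I1 S^2 - I2 S + I3 I = 0
residual = -S @ S @ S + I1 * (S @ S) - I2 * S + I3 * np.eye(3)
```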
We call linear transformations from ε3 to L, or from L to ε3, third-order tensors. Thus, a third-order tensor D(3) is a linear map that assigns to each vector u∈ε3 a second-order tensor D(3)u∈L such that
D(3)(u + v) = D(3)u + D(3)v,   D(3)(αv) = α(D(3)v),
or a linear map that assigns to each second-order tensor T∈L a vector D(3)T∈ε3 such that
D(3)(S + T) = D(3)S + D(3)T,   D(3)(αT) = α(D(3)T)
for any second-order tensors S,T∈L, vectors u,v∈ε3, and scalars α∈R. The set of all third-order tensors is denoted by L(3).
If a, b, and c are vectors in ε3, we define the third-order tensor a ⊗ b ⊗ c by
(a ⊗ b ⊗ c)v = (c · v)(a ⊗ b),   (a ⊗ b ⊗ c)(v ⊗ w) = (c · v)(b · w)a
for any vectors v,w∈ε3.
The Cartesian components Dijk of a third-order tensor D(3) are defined by
Dijk=(ei⊗ej)·(D(3)ek),
and we have
D(3)=Dijkei⊗ej⊗ek.
The set {ei ⊗ ej ⊗ ek} is a basis for L(3), a 27-dimensional inner product space.
We call linear transformations from ε3 to L(3), L to L, or L(3) to ε3 fourth-order tensors. Thus, a fourth-order tensor C(4) is a linear map that assigns to each vector u∈ε3 a third-order tensor
D(3)=C(4)u,
or to each second-order tensor T∈L a second-order tensor
S=C(4)T,
or to each third-order tensor ε(3)∈L(3) a vector
v=C(4)ε(3).
The set of all fourth-order tensors is denoted by L(4).
If a, b, c, and d are vectors in ε3, we define the fourth-order tensor a ⊗ b ⊗ c ⊗ d by
(a ⊗ b ⊗ c ⊗ d)v = (d · v)(a ⊗ b ⊗ c),
(a ⊗ b ⊗ c ⊗ d)(v ⊗ w) = (d · v)(c · w)(a ⊗ b),
(a ⊗ b ⊗ c ⊗ d)(v ⊗ w ⊗ z) = (d · v)(c · w)(b · z)a
for any vectors v,w,z∈ε3.
The Cartesian components Cijkl of C(4) are defined by
Cijkl = (ei ⊗ ej) · [C(4)(ek ⊗ el)],
and we have
C(4)=Cijklei⊗ej⊗ek⊗el.
The set {ei ⊗ ej ⊗ ek ⊗ el} is a basis for L(4), an 81-dimensional inner product space.
Let φ be a scalar-valued function of a tensor T, vector x, and scalar t, i.e.,
φ=φ(T,x,t).
We define the partial derivatives of φ by
∂/∂t φ(T, x, t) = lim(α→0) (1/α)[φ(T, x, t + α) − φ(T, x, t)],
[∂/∂x φ(T, x, t)] · u = lim(α→0) (1/α)[φ(T, x + αu, t) − φ(T, x, t)],
[∂/∂T φ(T, x, t)] · S = lim(α→0) (1/α)[φ(T + αS, x, t) − φ(T, x, t)]
for all vectors u in ε3 and all tensors S in L. Note that α is a real number. Also note that ∂φ/∂t is a scalar, ∂φ/∂x is a vector, and ∂φ/∂T is a tensor.
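The limit defining ∂φ/∂T can be probed numerically with a difference quotient. In the sketch below (NumPy; the function φ(T) = tr(T²) and its closed-form derivative 2Tᵀ are our illustrative choice, not from the text), the Gateaux quotient for small α matches the inner product (∂φ/∂T) · S.

```python
import numpy as np

# Illustrative test function phi(T) = tr(T^2), whose derivative is 2 T^T
phi = lambda T: np.trace(T @ T)

T = np.array([[1.0, 2.0, 0.0],
              [0.5, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
S = np.array([[0.3, -0.1, 0.2],
              [0.0,  0.4, 0.1],
              [0.2,  0.0, -0.3]])

dphi_dT = 2.0 * T.T                            # closed-form derivative

alpha = 1e-6
numerical = (phi(T + alpha * S) - phi(T)) / alpha    # Gateaux quotient
exact = np.sum(dphi_dT * S)                    # (dphi/dT) . S = tr((dphi/dT)^T S)
```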
Let v be a vector-valued function of tensor T, vector x, and scalar t, i.e.,
v=v(T,x,t).
We define the partial derivatives of v by
∂/∂t v(T, x, t) = lim(α→0) (1/α)[v(T, x, t + α) − v(T, x, t)],
[∂/∂x v(T, x, t)]u = lim(α→0) (1/α)[v(T, x + αu, t) − v(T, x, t)],
[∂/∂T v(T, x, t)]S = lim(α→0) (1/α)[v(T + αS, x, t) − v(T, x, t)],
for all vectors u in ε3 and all tensors S in L. Note that ∂v/∂t is a vector and ∂v/∂x is a tensor. The quantity ∂v/∂T maps a tensor S to a vector, and is called a third-order tensor (refer to Section 2.4).
Let A be a tensor-valued function of tensor T, vector x, and scalar t, i.e.,
A=A(T,x,t).
The partial derivatives of A are defined by
∂/∂t A(T, x, t) = lim(α→0) (1/α)[A(T, x, t + α) − A(T, x, t)],
[∂/∂x A(T, x, t)]u = lim(α→0) (1/α)[A(T, x + αu, t) − A(T, x, t)],
[∂/∂T A(T, x, t)]S = lim(α→0) (1/α)[A(T + αS, x, t) − A(T, x, t)],
for all vectors u in ε3 and all tensors S in L. Note that ∂A/∂t is a tensor. The quantity ∂A/∂x is a third-order tensor, i.e., it maps vectors to tensors; the fourth-order tensor ∂A/∂T maps tensors to tensors (refer to Section 2.4).
Recall from Section 2.3 that the principal invariants of a tensor A are
I1(A) = tr A,   I2(A) = (1/2)[(tr A)² − tr(A²)],   I3(A) = det A.
It can be shown that (refer to Problems 2.46–2.48)
dI1(A)/dA = I,   dI2(A)/dA = I1(A)I − Aᵀ,   dI3(A)/dA = I3(A)A⁻ᵀ.
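The formula dI3(A)/dA = I3(A)A⁻ᵀ can be checked against the defining directional derivative. A NumPy sketch (A and the direction S are illustrative invertible/arbitrary choices):

```python
import numpy as np

A = np.array([[2.0, 0.5, 0.0],
              [0.1, 3.0, 0.4],
              [0.0, 0.2, 1.5]])    # invertible
S = np.array([[0.2, -0.3, 0.1],
              [0.0,  0.1, 0.4],
              [0.3,  0.2, -0.1]])  # arbitrary direction

I3 = np.linalg.det
dI3_dA = I3(A) * np.linalg.inv(A).T            # claimed derivative I3(A) A^{-T}

alpha = 1e-6
numerical = (I3(A + alpha * S) - I3(A)) / alpha  # directional difference quotient
exact = np.sum(dI3_dA * S)                     # (dI3/dA) . S
```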
If T and x in (2.88), (2.90), and (2.92) also depend on t, then the chain rule implies that
d/dt φ(T(t), x(t), t) = ∂φ/∂T · dT/dt + ∂φ/∂x · dx/dt + ∂φ/∂t,
d/dt v(T(t), x(t), t) = (∂v/∂T)dT/dt + (∂v/∂x)dx/dt + ∂v/∂t,
d/dt A(T(t), x(t), t) = (∂A/∂T)dT/dt + (∂A/∂x)dx/dt + ∂A/∂t.
If the vector x in (2.88), (2.90), and (2.92) is a position vector, we define gradients of the scalar-valued function φ, vector-valued function v, and tensor-valued function A by
grad φ = ∂/∂x φ(T, x, t),   grad v = ∂/∂x v(T, x, t),   grad A = ∂/∂x A(T, x, t).
Note that the gradients of scalars, vectors, and tensors are vectors, tensors, and third-order tensors, respectively.
The divergence of a vector-valued function v of position is defined by
divv=tr(gradv),
which is a scalar. The divergence of a tensor-valued function A of position is defined through
(divA)·a=div(ATa)
for any vector a. The divergence of a tensor is a vector. It can be shown (refer to Problems 2.49–2.51) that
grad(φv) = φ grad v + v ⊗ grad φ,
div(φv) = φ div v + v · grad φ,
grad(v · w) = (grad v)ᵀw + (grad w)ᵀv,
div(Aᵀv) = A · grad v + v · div A,
grad(1/φ) = −(1/φ²) grad φ,
div(φA) = φ div A + A grad φ,
div(v ⊗ w) = v div w + (grad v)w.
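Identities such as div(φv) = φ div v + v · grad φ are easy to spot-check with finite differences. The sketch below (NumPy; the scalar field φ, the vector field v, and the evaluation point are hypothetical smooth examples) builds central-difference gradients and compares both sides of the product rule at a point.

```python
import numpy as np

h = 1e-5

# Hypothetical smooth fields chosen for the check
phi = lambda x: x[0]**2 * x[1] + np.sin(x[2])
v = lambda x: np.array([x[1] * x[2], x[0]**2, x[0] + x[2]**2])

def grad_scalar(f, x):
    """Central-difference gradient of a scalar field."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def grad_vector(f, x):
    """Central-difference gradient of a vector field: G[i, j] = dv_i/dx_j."""
    G = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        G[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return G

x = np.array([1.0, 2.0, 0.5])
div_v = np.trace(grad_vector(v, x))            # div v = tr(grad v)

# Product rule: div(phi v) = phi div v + v . grad phi
lhs = np.trace(grad_vector(lambda y: phi(y) * v(y), x))
rhs = phi(x) * div_v + np.dot(v(x), grad_scalar(phi, x))
```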
The curl of a vector v is defined by
(curlv)×a=[gradv−(gradv)T]a.
We can show that the divergence of the curl of a vector v vanishes, i.e.,
div(curlv)=0.
Also,
curl(v×w)=(gradv)w−(gradw)v+v(divw)−w(divv).
Note that a useful property of the gradient, divergence, and curl is distributivity over vector and tensor addition, e.g.,
curl(v+w)=curlv+curlw,
div(A+B)=divA+divB.
If R is an open region (volume) bounded by a closed surface ∂R, and φ, v, and A are smooth functions of position, then, according to the divergence theorem,
∫R grad φ dv = ∫∂R φn da,
∫R div v dv = ∫∂R v · n da,
∫R div A dv = ∫∂R An da,
where dv is the volume element of R, da is the area element of ∂R, and n is the outward unit normal on ∂R. It follows that (refer to Problem 2.52)
∫∂R v · An da = ∫R (A · grad v + v · div A) dv.
The Cartesian component form of the chain rule (2.95) is
d/dt φ(T(t), x(t), t) = (∂φ/∂Tij)(dTij/dt) + (∂φ/∂xi)(dxi/dt) + ∂φ/∂t,
d/dt vi(T(t), x(t), t) = (∂vi/∂Tjk)(dTjk/dt) + (∂vi/∂xj)(dxj/dt) + ∂vi/∂t,
d/dt Aij(T(t), x(t), t) = (∂Aij/∂Tkl)(dTkl/dt) + (∂Aij/∂xk)(dxk/dt) + ∂Aij/∂t.
It can be shown (refer to Problems 2.53–2.57) that
(grad φ)i = ∂φ/∂xi ≡ φ,i,
(grad v)ij = ∂vi/∂xj ≡ vi,j,
(grad A)ijk = ∂Aij/∂xk ≡ Aij,k,
div v = ∂vi/∂xi = vi,i,
(div A)i = ∂Aij/∂xj = Aij,j.
Then, it follows that the Cartesian component form of the divergence theorem is
∫R φ,i dv = ∫∂R φni da,   ∫R vi,i dv = ∫∂R vini da,   ∫R Aij,j dv = ∫∂R Aijnj da.
The Cartesian component form of the curl of a vector v is
(curlv)i=εijkvk,j.
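The component form (curl v)i = εijk vk,j can be implemented directly with the permutation symbol and a finite-difference gradient. A NumPy sketch (the vector field and evaluation point are hypothetical choices; for this field the analytic curl is (2xy + x, y − y², −2z)):

```python
import numpy as np

# Levi-Civita permutation symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

h = 1e-5
# Hypothetical smooth vector field chosen for the check
v = lambda x: np.array([x[1] * x[2], -x[0] * x[2], x[0] * x[1]**2])

def grad_vector(f, x):
    """Central-difference gradient: G[k, j] = dv_k/dx_j."""
    G = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        G[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return G

def curl(f, x):
    """(curl v)_i = eps_ijk v_k,j."""
    return np.einsum('ijk,kj->i', eps, grad_vector(f, x))

x = np.array([1.0, 2.0, 3.0])
c = curl(v, x)
```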