4
Functional Spaces for Engineers

Functional spaces are Hilbert spaces formed by functions. For engineers, these spaces represent quantities having a spatial or temporal distribution, i.e. fields. For instance, the field of temperatures on a region of space, the field of velocities of a continuous medium and the history of the velocities of a particle are all functions. As previously observed, these spaces have some particularities connected to the fact that they are infinite dimensional.

From the beginning of the 20th Century, the works of Richard Courant, David Hilbert, Kurt Friedrichs and other mathematicians of Göttingen pointed out the necessity of a redefinition of derivatives and of the functional spaces involving derivatives.

Otton Nikodym introduced a class named “BL” (the Beppo Levi class), involving (u, v) = ∫∇u.∇v dx [NIK 33]. He constructed a theory about this class and brought it into the framework of Hilbert spaces two years later [NIK 35], but the complete theory arrived later with the works of Serguei Sobolev, a Russian mathematician who introduced functional spaces suited to variational methods.

Sobolev was closely connected to Jacques Hadamard, who faced the mathematical difficulties arising in fluid mechanics when studying the Navier–Stokes equations [HAD 03a, HAD 03b]. Far from Hadamard, the Russian Nikolai Gunther, a former PhD student of Andrey Markov, was interested in potential theory and, particularly, in extensions of the Kirchhoff formula and in derivatives of irregular functions by a new smoothing method. Gunther supervised, in collaboration with Vladimir Smirnov, a promising PhD student – Serguei Sobolev. Hadamard was a frequent visitor of Sobolev and had been in contact with him since 1930. At that time, Sobolev was working at the Seismological Institute of the Academy of Sciences in Leningrad (now Saint Petersburg), under the direction of Smirnov, and studied the propagation of waves in inhomogeneous media [SOB 30, SOB 31]. During this work, he formulated a method of generalized solutions [SOB 34a, SOB 34b, SOB 35a, SOB 35b], which was rapidly diffused in France by Jacques Hadamard and Jean Leray. Leray then applied the nascent – and imperfect – theory to fluid mechanics and obtained a fundamental result by using a new notion of derivative [LER 34].

World War II interrupted the advances of variational methods, notably the works of Laurent Schwartz and Israel Gelfand, who popularized the works of Sobolev among mathematicians and considerably extended their applications. Their most interesting works appeared after the end of the war. For instance, the fundamental work of Schwartz appeared in 1945 [SCH 45] and the complete theory was published in 1950–1951 [SCH 50, SCH 51]. Gelfand published his major works in 1958–1962 [GEL 64a, GEL 68b, GEL 63c, GEL 64b, GEL 66]. The works of Schwartz and Gelfand diffused the new theory worldwide and led to fantastic developments in the solution of differential equations – we will study this aspect in the next chapter.

These works did not signal the end of the development of functional spaces connected to variational methods. Namely, the works of Jean-François Colombeau [COL 84] led to a response to the objections of Schwartz in [SCH 54], and the theory of hyperfunctions generalized his works [SAT 59a, SAT 60]. These new developments have still to find engineering applications and will thus not be studied in this book.

4.1. The L2(Ω) space

The construction of complete function spaces is complex: as shown in the preceding examples, there is a contradiction between the assumptions of regularity used in order to verify that (u, u) = 0 ⇒ u = 0 and the assumption that Cauchy sequences have a limit in the space.

Such a difficulty has been illustrated, for instance, in example 3.5, with V = {v : Ω → ℝ : v is continuous on Ω and (v, v) < ∞} and

(u, v) = ∫Ω uv dx.

In this example, we have shown a sequence {un : n ∈ ℕ*} such that un → u strongly, but u ∉ V. In order to overcome this difficulty, the method of completion has been introduced. Completion consists of generating a new vector space, referred to as L2(Ω) and formed by sets of Cauchy sequences on V (see [SOU 10]). L2(Ω) may be interpreted as follows: let V be the set of continuous square summable functions:

images

and

images

Then

images

is a Hilbert space (see [SOU 10]). Indeed, in this case,

images

In general, in order to alleviate the notation, the square brackets are not written and [v] is denoted by v. When using this simplified notation, we must keep in mind that v (i.e. [v]) contains infinitely many elements, so that u(x) has no meaning. For instance, let us consider:

images

Then, ∀α ∈ ℝ, uα ∈ [0]. So, [uα] = [0] and uα(x) = α, so that the class of the null function contains an infinite number of elements and the value of [0] at a point has no meaning. When denoting [0] by 0, we must keep in mind that the values of 0 at a particular point vary. For these reasons, elements of L2(Ω) are sometimes referred to as generalized functions, and completion implies a loss of information: point values are lost when passing to the complete space L2(Ω). In order to help the reader keep in mind the difference between u and [u], we will use the notation:

images

Thus, the index zero recalls that the elements in the scalar products are not standard functions, but classes, for which point values are not defined.

It must be noted that the construction of L2(Ω) may be carried out using other spaces images such as, for instance,

images

or

images

images is often referred to as the set of test functions. A square summable function may be approximated by elements of these spaces, namely by using mollifiers introduced by Sobolev [SOB 38] and Friedrichs [FRI 44, FRI 53]. Moreover, these spaces are dense in L2(Ω).
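As a quick numerical illustration of this smoothing (a Python sketch, since the book's listings are in Matlab; the grid, the width ε = 0.1 and the function sign(x) are choices made here for illustration), the discontinuous function sign(x) can be regularized by convolution with a Friedrichs mollifier:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]

def mollifier(x, eps):
    # Friedrichs mollifier: smooth, supported on (-eps, eps), unit mass.
    t = np.clip(1.0 - (x / eps) ** 2, 1e-12, None)
    y = np.where(np.abs(x) < eps, np.exp(-1.0 / t), 0.0)
    return y / (y.sum() * dx)

rho = mollifier(x, 0.1)
u = np.sign(x)
u_eps = dx * np.convolve(u, rho, mode="same")   # smooth approximation of u

# Away from the jump at 0, the smoothed function agrees with sign(x).
assert abs(u_eps[np.argmin(np.abs(x - 0.5))] - 1.0) < 1e-6
assert abs(u_eps[np.argmin(np.abs(x + 0.5))] + 1.0) < 1e-6
```

As ε → 0, the smoothed function converges to sign(x) in L2(Ω), while remaining infinitely differentiable for each fixed ε.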

In the following, this approach is generalized by Sobolev spaces, and a way to retrieve the notion of point values is given.

4.2. Weak derivatives

As observed above, passing from standard functions to generalized functions involves the loss of the notion of point values. This implies that the standard notion of derivative does not apply. Indeed, let Ω = (a, b), v ∈ L2(Ω): the quotient used to define the derivative at a point x ∈ Ω reads as (v(x + h) − v(x))/h and has no meaning, since the values of v(x) and v(x + h) are undefined. The distribution theory proposed by Schwartz introduces a new concept, based on integration by parts. Indeed, assume that v: Ω → ℝ is regular enough. Then

∫Ω v′φ dx = v(b)φ(b) − v(a)φ(a) − ∫Ω vφ′ dx

and, for a regular function φ such that φ(a) = φ(b) = 0,

∫Ω v′φ dx = − ∫Ω vφ′ dx,

so that v′ is characterized by the linear functional

ℓ(φ) = − ∫Ω vφ′ dx = − (v, φ′)0.

This remark suggests the following definition:

DEFINITION 4.1.– Let Ω ⊂ ℝn be a regular domain and u ∈ L2(Ω). The weak derivative ∂u/∂xi is the linear functional ℓi defined on the test functions by:

ℓi(φ) = − ∫Ω u ∂φ/∂xi dx = − (u, ∂φ/∂xi)0.

□

Weak derivatives are also referred to as distributional derivatives or generalized derivatives. We have:

THEOREM 4.1.– Let ℓi be the weak derivative ∂u/∂xi and assume that it verifies:

∃M ∈ ℝ such that |ℓi(φ)| ≤ M ||φ||0, for any test function φ.

Then, there exists a unique wi ∈ L2(Ω) such that,

ℓi(φ) = (wi, φ)0, for any test function φ.

In this case, we say that wi = ∂u/∂xi.

□

PROOF.– Theorem 3.23 shows that ℓi extends to a linear continuous functional ℓi: L2(Ω) → ℝ. Therefore, the result is a consequence of the Riesz representation theorem (theorem 3.24).

□

COROLLARY 4.1.– Let ψ ∈ C1(Ω) be such that both ψ and ∇ψ are square summable. Let u = [ψ] ∈ L2(Ω). Then ∂u/∂xi = ∂ψ/∂xi, for 1 ≤ i ≤ n, i.e. the weak and classical derivatives coincide.

□

PROOF.– We observe that:

images

Thus, Green’s formula:

images

yields:

images

so that, taking M = ||∂ψ/∂xi||0,

images

and the result follows from theorem 4.1.

□

REMARK 4.1.– Standard derivation rules apply to weak derivatives, under the assumption that each element is defined. For instance, if images, then,

images

Moreover, derivatives of composite functions and changes of variables have the same properties as for classical derivatives, provided each element is defined.

□

EXAMPLE 4.1.– Let Ω = (−1, 1), u(x) = |x|. Let us show that u′(x) = sign(x). Indeed,

images

and we have (recall that φ(−1) = φ(1) = 0).

images

Thus,

images

□
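This computation may be checked symbolically. The Python/SymPy sketch below evaluates both sides of the defining identity −(u, φ′)0 = (sign, φ)0; the test function φ(x) = (1 − x²)²(x + 2), vanishing at ±1, is an arbitrary choice made here for illustration:

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = (1 - x**2)**2 * (x + 2)              # smooth test function, phi(-1) = phi(1) = 0
u = sp.Piecewise((-x, x < 0), (x, True))   # |x|
s = sp.Piecewise((-1, x < 0), (1, True))   # sign(x)

lhs = -sp.integrate(u * sp.diff(phi, x), (x, -1, 1))   # -(u, phi')_0
rhs = sp.integrate(s * phi, (x, -1, 1))                # (sign, phi)_0

# Both sides agree (here both equal 1/3), as the weak derivative predicts.
assert sp.simplify(lhs - rhs) == 0
assert lhs == sp.Rational(1, 3)
```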

EXAMPLE 4.2.– Let Ω = (−1, 1), u(x) = sign(x). Let us show that u′ = 2δ0, with δ0(φ) = φ(0) (Dirac’s delta – see Chapter 6). Indeed,

images

and we have (recall that φ(−1) = φ(1) = 0).

images

In this case, there is no w ∈ L2(Ω) such that ℓ(v) = (w, v)0, ∀v ∈ L2(Ω).

□
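The same symbolic check applies here (a Python/SymPy sketch; φ is again an arbitrary test function with φ(−1) = φ(1) = 0): the action −(u, φ′)0 of the weak derivative reduces to 2φ(0).

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = (1 - x**2)**2 * (x + 2)             # test function: phi(-1) = phi(1) = 0, phi(0) = 2
s = sp.Piecewise((-1, x < 0), (1, True))  # u(x) = sign(x)

# Action of the weak derivative u' on phi: -(u, phi')_0
lhs = -sp.integrate(s * sp.diff(phi, x), (x, -1, 1))
assert lhs == 2 * phi.subs(x, 0)          # equals 2*phi(0), i.e. u' = 2*delta_0
```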

4.2.1. Second-order weak derivatives

We have, for Ω = (a, b) and a test function φ (thus, φ(a) = φ(b) = φ′(a) = φ′(b) = 0):

∫Ω v″φ dx = − ∫Ω v′φ′ dx = ∫Ω vφ″ dx.

Then, we may define:

DEFINITION 4.2.– Let Ω ⊂ ℝn be a regular domain and u ∈ L2(Ω). The weak derivative ∂2u/∂xi∂xj is the linear functional ℓij defined on the test functions by:

ℓij(φ) = ∫Ω u ∂2φ/∂xi∂xj dx = (u, ∂2φ/∂xi∂xj)0.

□

Analogous to theorem 4.1, we have:

THEOREM 4.2.– Let ℓij be the weak derivative ∂2u/∂xi∂xj and assume that it verifies:

∃M ∈ ℝ such that |ℓij(φ)| ≤ M ||φ||0, for any test function φ.

Then, there exists a unique wij ∈ L2(Ω) such that:

ℓij(φ) = (wij, φ)0, for any test function φ.

□

In this case, we say that wij = ∂2u/∂xi∂xj.

The proof is analogous to that given in theorem 4.1.

EXAMPLE 4.3.– Let Ω = (−1, 1), u(x) = |x|. Let us show that u″ = 2δ0, with δ0(φ) = φ(0) (Dirac’s delta, see Chapter 6). Indeed,

images

and we have (recall that φ′(−1) = φ′(1) = 0):

images

□

4.2.2. Gradient, divergence, Laplacian

Let Ω ⊂ ℝn be a regular domain. Let u ∈ L2(Ω). The gradient of u, denoted by ∇u, is defined as the linear functional given by:

∇u(φ) = − ∫Ω u div(φ) dx = − (u, div(φ))0, for any vector-valued test function φ.

The Laplacian of u, denoted by Δu, is defined as the linear functional given by:

Δu(φ) = ∫Ω u Δφ dx = (u, Δφ)0.

Let u = (u1, …, un) ∈ [L2(Ω)]n. The divergence of u, denoted by div(u), is defined as the linear functional given by:

div(u)(φ) = − ∫Ω u.∇φ dx = − (u, ∇φ)0.

REMARK 4.2.– Classical results such as Green’s formulas or Stokes’ formula remain valid when using weak derivatives, under the assumption that each term has a meaning. For instance,

images

under the condition that each term (including products) is defined.

□

EXAMPLE 4.4.– Let images, u(x) = ln r, images. Let us show that Δu = 2πδ0. Let images, images. Notice that images, so that the weak and classical derivatives coincide on Ωε. Then, Green’s formula shows that:

images

Since images on Ω,

images

Using polar coordinates (r, θ) and taking into account that images, images ds = rdθ and that r = ε on Ωε, we have:

images

Since ε ln ε → 0 and

images

we have:

images

Thus,

images

□
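This identity can also be verified symbolically. In the Python/SymPy sketch below, the radial test function φ(r) = (1 − r²)², supported in the unit disk, is an arbitrary choice; for a radial φ, the Laplacian is (rφ′)′/r and the area element is 2πr dr:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
phi = (1 - r**2)**2        # radial test function: phi = 0 for r >= 1, phi(0) = 1

# (ln r, Laplacian phi)_0 = 2*pi * integral from 0 to 1 of ln(r) * (r*phi')' dr
val = 2 * sp.pi * sp.integrate(sp.log(r) * sp.diff(r * sp.diff(phi, r), r), (r, 0, 1))

# The result is 2*pi*phi(0), as predicted by Laplacian(ln r) = 2*pi*delta_0.
assert sp.simplify(val - 2 * sp.pi * phi.subs(r, 0)) == 0
```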

EXAMPLE 4.5.– Let images, images, images. Analogously to the preceding situation, we have images. In this case, images, images and, using spherical coordinates (r, θ, ψ), we have images ds = r2 sinψdθdψ and that r = ε on Ωε, so that

images

Thus

images

and

images

□

4.2.3. Higher-order weak derivatives

We have, for Ω = (a, b) and a test function φ on Ω,

images

Thus, we may define:

DEFINITION 4.3.– Let Ω ⊂ ℝn be a regular domain and v ∈ L2(Ω). Let α = (α1, …, αn) be a multi-index. The weak derivative ∂αv is the linear functional ℓα defined on the test functions by:

ℓα(φ) = (−1)|α| ∫Ω v ∂αφ dx = (−1)|α| (v, ∂αφ)0.

□

We have:

THEOREM 4.3.– Let ℓα be the weak derivative ∂αv and assume that it verifies:

∃M ∈ ℝ such that |ℓα(φ)| ≤ M ||φ||0, for any test function φ.

Then, there exists a unique wα ∈ L2(Ω) such that:

ℓα(φ) = (wα, φ)0, for any test function φ.

In this case, we say that wα = ∂αv.

□

The proof is analogous to that given in theorem 4.1.

4.2.4. Matlab® determination of weak derivatives

Approximations of classical derivatives may be obtained by using the method derive defined in the class basis introduced in section 2.2.6. The evaluation of weak derivatives requires the use of scalar products, analogous to those defined in section 3.2.5. We present below a Matlab implementation using this last class. Recall that du = u′ verifies:

(du, φ)0 = u(b)φ(b) − u(a)φ(a) − (u, φ′)0, ∀φ regular enough.

Let us consider a basis {φi : i ∈ ℕ} ⊂ S and

images

We have:

images

Thus, the vector DU = (du1, … , dun) is the solution of a linear system A.DU = B, where Aij = (φi, φj)0 and Bi = u(b)φi(b) − u(a)φi(a) − (u, φi′)0.
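The book's implementation (Program 4.1 below) is in Matlab; as a cross-check, the same linear system A·DU = B, with Aij = (φi, φj)0 and Bi = u(b)φi(b) − u(a)φi(a) − (u, φi′)0 (the boundary terms come from integration by parts, since the basis functions need not vanish at a and b), can be sketched in Python for the data of example 4.6:

```python
import numpy as np
from scipy.integrate import quad

a, b = -1.0, 1.0
u = lambda x: x**4
n = 5
basis = [lambda x, k=k: x**k for k in range(n)]   # phi_i(x) = x**(i-1), i = 1..n
dbasis = [lambda x, k=k: k * x**(k - 1) if k > 0 else 0.0 for k in range(n)]

# Galerkin system A @ DU = B for the weak derivative du = u'.
A = np.array([[quad(lambda x: basis[i](x) * basis[j](x), a, b)[0]
               for j in range(n)] for i in range(n)])
B = np.array([u(b) * basis[i](b) - u(a) * basis[i](a)
              - quad(lambda x: u(x) * dbasis[i](x), a, b)[0] for i in range(n)])
DU = np.linalg.solve(A, B)

# u' = 4x^3 lies in the span of the basis, so its coefficients are recovered.
assert np.allclose(DU, [0.0, 0.0, 0.0, 4.0, 0.0], atol=1e-8)
```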

Let us assume that the family {φi} is defined by a cell array basis such that basis{i} defines φi according to the standard defined in section 3.2.6: basis{i} is a structure having properties dim, dimx, component, where component is a cell array such that each element is a structure having as properties value and grad. u is assumed to be analogously defined. In this case, the weak derivative is evaluated as follows (Program 4.1):

image
image

Program 4.1. A class for weak derivatives of functions of one variable

EXAMPLE 4.6.– Let us consider the family φi(x) = xi−1 and u(x) = x4. The weak derivative is u′(x) = 4x3. We consider a = −1, b = 1. We generate the function by using the commands:


xlim.lower.x = a;
xlim.upper.x = b;
xlim.dim = 1;
u.dim = 1;
u.dimx = 1;
u.component = cell(u.dim,1);
u.component{1}.value = @(x) x.^4;
ua = u.component{1}.value(a);
ub = u.component{1}.value(b);

Then, the commands


sp_space = @(u,v) scalar_product.sp0(u,v,xlim,'subprogram');
c = weak_derivative.coeffs(u,basis,sp_space,a,b,ua,ub,'subprogram');
du = weak_derivative.funcwd(c,basis);

generate the weak derivative by using the method “subprogram”. We may also generate a table containing the values of u as follows:


x = a:0.01:b;
p.x = x;
p.dim = 1;
U.dim = 1;
U.dimx = 1;
U.component = cell(U.dim,1);
U.component{1}.value = spam.partition(u.component{1}.value,p);

The commands


sp_space = @(u,v) scalar_product.sp0(u,v,p,'table');
ctab = weak_derivative.coeffs(U,tb,sp_space,a,b,ua,ub,'table');
dutab = weak_derivative.funcwd(ctab,basis);

generate the weak derivative by using the method “table”. The results are shown in Figure 4.1.

□

EXAMPLE 4.7.– Let us consider the family images and u(x) = |x|. The weak derivative is u′(x) = sign(x). We consider a = −1, b = 1. The results are exhibited in Figure 4.2.

□

image

Figure 4.1. Evaluation of the weak derivative of x4 using subprograms and tables

image

Figure 4.2. Evaluation of the weak derivative of |x| using subprograms and tables

4.3. Sobolev spaces

As previously indicated, Serguei Lvovitch Sobolev was the Russian mathematician who introduced functional spaces suited to variational methods, giving them a complete mathematical framework. In his works, Sobolev formulated a theory of variational solutions and introduced Hilbert spaces involving derivatives [SOB 64], such as, for instance,

images
images

and

images

We may also consider Γ ⊂ Ω such that meas(Γ) ≠ 0, i.e. Γ has a strictly positive length if its dimension is 1 and a strictly positive surface if its dimension is 2:

images

Sobolev spaces are generated by completion of particular spaces [SOB 38]. For instance, by considering

V = {v : Ω → ℝ : v and ∇v are continuous on Ω and (v, v)1 < ∞}

with the scalar product

(u, v)1 = ∫Ω (uv + ∇u.∇v) dx,

the completion method generates H1(Ω). Analogously, the completion

images

generates images. Taking Γ = ∂Ω generates images.

The generic Sobolev space is denoted by:

Hm(Ω) = {v : ∂αv ∈ L2(Ω) for |α| ≤ m}

and has a scalar product:

(u, v)m = Σ|α|≤m (∂αu, ∂αv)0.

It is obtained by completion of a convenient space of continuous functions. We have H0(Ω) = L2(Ω).

Sobolev spaces are often invoked when solving partial differential equations. Note that they are formed by generalized functions.

Among the main properties of elements of Sobolev spaces are inequalities connecting an element and its weak derivatives. These inequalities are often referred to as Sobolev imbedding inequalities. For instance, we have Morrey’s inequality (see [EVA 10]):

THEOREM 4.4.– There exists C ∈ ℝ such that

images

□

and Poincaré’s inequality (see [EVA 10]).

THEOREM 4.5.– Let Ω ⊂ ℝn be open, bounded and not empty. There exists C ∈ ℝ such that:

images

□
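As an illustration of Poincaré's inequality on Ω = (0, 1), the Python/SymPy sketch below checks the known fact that, for functions vanishing at both endpoints, the optimal constant is 1/π, attained by sin(πx) (the choice of test function here is this extremal case):

```python
import sympy as sp

x = sp.symbols('x', real=True)
v = sp.sin(sp.pi * x)          # vanishes at both endpoints of (0, 1)

norm_v  = sp.sqrt(sp.integrate(v**2, (x, 0, 1)))               # ||v||_0
norm_dv = sp.sqrt(sp.integrate(sp.diff(v, x)**2, (x, 0, 1)))   # ||v'||_0

# ||v||_0 / ||v'||_0 = 1/pi: the optimal Poincare constant on (0, 1).
assert sp.simplify(norm_v / norm_dv - 1 / sp.pi) == 0
```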

Sobolev spaces are separable (see, for instance, [LEB 96]).

4.3.1. Point values

The point value may have a meaning for elements of Sobolev spaces other than H0(Ω). For instance, let us consider Ω = (a, b) and u ∈ H1(Ω). We have:

images

Let λ ∈ ℝ and

images

We have:

images

so that,

images

This equality may be used to determine u(b). Let us define:

images

and ℓb: ℝ → ℝ given by:

images

Tb is linear, so that ℓb is also linear. We have:

images

Moreover, there exists C ∈ ℝ such that,

images

Then, ℓb: ℝ → ℝ is linear and continuous. From the Riesz theorem, there exists β ∈ ℝ such that ℓb(λ) = βλ. We define u(b) = β.

The map H1(Ω) ∋ u ↦ β ∈ ℝ is the trace of u at b, denoted by γu.

It generalizes to higher dimensions by considering Ω ⊂ ℝn, u ∈ H1(Ω), Γ ⊂ Ω – in this case, we use Green’s formula:

images

We may consider, for instance, ƒ ∈ L2(Γ) and

images

In this case,

images

and this equality may be used in order to define u on Γ, in an analogous way. Here,

images

Again, TΓ is linear and

images

so that the existence of C ∈ ℝ such that:

images

yields that ℓΓ: L2(Γ) → ℝ is linear and continuous. In this case, from the Riesz theorem, there exists g ∈ L2(Γ) such that,

images

We define γu = g on Γ.

4.4. Variational equations involving elements of a functional space

Variational equations were introduced in Chapter 3 when considering orthogonal projections (equations [3.5], [3.7], [3.8]) and in Chapter 2.

The standard form of a variational equation is:

u ∈ H: a(u, v) = ℓ(v), ∀v ∈ H, [4.1]

where H ⊂ V is a linear subspace, ℓ: V → ℝ is linear continuous and a is such that v ↦ a(u, v) is linear. The standard formulation includes situations where the variational equation is a nonlinear one. For orthogonal projections, only linear or affine situations have to be considered:

  1) In the case of the orthogonal projection Px of x ∈ V onto a vector subspace S, u = Px ∈ S, a(u, v) is the scalar product (u, v), ℓ(v) = (x, v) and H = S.
  2) In the case of the orthogonal projection Px of x ∈ V onto an affine subspace S = S0 + {x0} (S0 a vector subspace, x0 ∈ V), u = Px − x0, a(u, v) is the scalar product (u, v), ℓ(v) = (x − x0, v) and H = S0.

Approximations of the solution of the variational equation [4.1] are generated by considering a total family on S, denoted by F = {φi}i∈ℕ, and looking for an approximated solution having the form:

images

where the coefficients U = (u1, … , uk)t ∈ ℝk have to be determined. More generally, we may consider a sequence of finite families images such that F = ∪k∈ℕ Fk is a total family on S – this is the situation when finite element approximations are used. In both situations, S is approximated by:

images

and the variational equation becomes:

images

Since both v ↦ a(u, v) and v ↦ ℓ(v) are linear, we have

images

Let us introduce G: ℝk → ℝk given by:

images

Then, U is the solution of the algebraic system:

images

i.e.

images

If

images

with a1 bilinear and a0 linear, we have:

images

and

images

where A is the k × k matrix given by:

images

while B is the vector of ℝk given by:

images

In this case, U is the solution of the linear system:

images
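For the bilinear case, the whole chain – basis, matrix A, right-hand side B, linear solve – can be sketched in a few lines of Python (an illustration, not the book's Matlab class). The model problem chosen here is (u′, v′)0 = (1, v)0 on (0, 1) with u(0) = u(1) = 0, whose exact solution is u(x) = x(1 − x)/2, discretized with the basis φk(x) = sin(kπx):

```python
import numpy as np
from scipy.integrate import quad

ks = np.arange(1, 21)
phi  = lambda x, k: np.sin(k * np.pi * x)
dphi = lambda x, k: k * np.pi * np.cos(k * np.pi * x)

# Galerkin system: A_ij = (phi_i', phi_j')_0, B_i = (1, phi_i)_0.
A = np.array([[quad(lambda x: dphi(x, i) * dphi(x, j), 0, 1)[0] for j in ks] for i in ks])
B = np.array([quad(lambda x: phi(x, i), 0, 1)[0] for i in ks])
U = np.linalg.solve(A, B)

u_h = lambda x: sum(c * phi(x, k) for c, k in zip(U, ks))
assert abs(U[0] - 4 / np.pi**3) < 1e-8   # first coefficient: 4/pi^3
assert abs(u_h(0.5) - 0.125) < 1e-4      # exact value u(1/2) = 1/8
```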

4.5. Reducing multiple indexes to a single one

The main applications of variational equations in engineering concern Sobolev spaces such as, for instance, V = H0(Ω) =images

For images, with d > 1, these spaces become Cartesian products of spaces images, such as, for instance, images, images.

In these situations, the simplest way to generate a total multidimensional family for a space V is to consider products of total families on images. For instance, let images be such that ∪m∈ℕ Fm is a total family on images. Then we may consider:

images

Then, images. In order to apply the methods previously presented, it becomes necessary to transform the multi-index i = (i1, i2, … , id) into a single index. Such a transformation is standard in the framework of finite elements. For instance, we may use the transformations:

images

For a two-dimensional (2D) region Ω ⊂ img2, these transformations lead to:

images

The number of unknowns to be determined is images. In an analogous way, the coefficient uj corresponds to a multidimensionally indexed coefficient images such that

images

For a three-dimensional (3D) region Ω ⊂ img3, the transformation leads to:

images

The number of unknowns to be determined is images. In an analogous way, the coefficient uj corresponds to a multidimensionally indexed coefficient images such that:

images
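These transformations can be sketched as follows (Python; the book's exact formulas are not reproduced here, so the helpers below use a hypothetical row-major numbering with n basis functions per direction):

```python
def ind2single(multi, n):
    # Map a multi-index (i1, ..., id), 1 <= im <= n, to a single index 1 <= j <= n**d.
    j = 0
    for im in multi:
        j = j * n + (im - 1)
    return j + 1

def single2ind(j, n, d):
    # Inverse map: recover the multi-index from the single index.
    j -= 1
    multi = []
    for _ in range(d):
        multi.append(j % n + 1)
        j //= n
    return tuple(reversed(multi))

# 2D case with n = 4 functions per direction: 16 unknowns in total.
assert ind2single((1, 1), 4) == 1
assert ind2single((4, 4), 4) == 16
assert single2ind(6, 4, 2) == (2, 2)
```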

4.6. Existence and uniqueness of the solution of a variational equation

The reader will find in the literature a large number of results concerning the existence and uniqueness of solutions of variational equations, among them the result known as the Lax–Milgram theorem [LAX 54], due to Peter David Lax and Arthur Norton Milgram.

We give below one of these results, which contains the essential assumptions and also applies to nonlinear situations. Let us consider the variational equation:

images

We have:

THEOREM 4.6.– Assume that:

  • i) H is a Hilbert space or a closed vector subspace of a Hilbert space;
  • ii) v ↦ a(u, v) is linear, ∀u ∈ H;
  • iii) ℓ: H → ℝ is linear continuous;
  • iv) ∃M ∈ ℝ such that:
    images
  • v) ∃α > 0 such that:
    images

Then, there exists a unique u ∈ H such that:

images

□

PROOF.– From the assumptions above and Riesz’s theorem, we have:

images

and the variational equation reads as A(u) = f or, equivalently,

images

This equation may be reformulated as:

images

i.e.

images

where PH is the orthogonal projection onto H. We have:

images

Thus

images

so that

images

and

images

Thus,

images

Moreover,

images

and we have:

images

so that,

images

and

images

Taking

images

we have,

images

so that F is a contraction and the result follows from Banach’s fixed point theorem (theorem 3.11).

□
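The contraction iteration underlying this proof is easy to demonstrate numerically. The Python sketch below (an illustration, not the book's code) applies u ← u − ρ(A(u) − f) to a strongly monotone, Lipschitz map A on ℝ²; here A′ lies in [1, 3] componentwise, so α = 1, M = 3, and the choice ρ = α/M² makes the iteration a contraction:

```python
import numpy as np

A = lambda u: 2.0 * u + np.sin(u)   # monotonicity constant alpha = 1, Lipschitz constant M = 3
f = np.array([1.0, -2.0])
rho = 1.0 / 9.0                     # rho = alpha / M**2

u = np.zeros(2)
for _ in range(500):
    u = u - rho * (A(u) - f)        # fixed-point iteration u <- F(u)

assert np.allclose(A(u), f, atol=1e-8)   # the limit solves A(u) = f
```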

4.7. Linear variational equations in separable spaces

Let us consider the situation where a(·, ·) is bilinear, i.e.

images

In this case, conditions (iv)–(v) in theorem 4.6 are equivalent to:

images

This situation corresponds to the classical result [LAX 54].

When the space H is separable, we may consider a Hilbert basis F = {φn : n ∈ ℕ*} ⊂ H and consider the sequence {un : n ∈ ℕ*} defined as:

images

We observe that un is determined by solving a linear system:

images

where

images

Then, un is uniquely determined and is given by:

images

Moreover,

images

so that,

images

and images is bounded. As a consequence, there is a subsequence images such that images (weakly). Thus, Riesz’s theorem shows that:

images

Indeed, there exists w ∈ H such that ℓ(v) = (w, v), ∀v ∈ H and, as a consequence, images. Moreover, we have:

images

so that images (strongly). Thus, ∀v ∈ H:

images

Let us denote by Pnv the orthogonal projection of v ∈ H onto Hn.

Then, ∀v ∈ H:

images

From theorem 3.21 (iii), we obtain:

images

The uniqueness of u shows that images. Thus, theorem 3.5 shows that un → u.

4.8. Parametric variational equations

In some situations, we are interested in the solution of variational equations depending upon a parameter. For instance, let us consider the situation where a and/or ℓ depends on a parameter θ:

images

In this situation, the solution u = uθ depends upon θ and we may consider:

images

i.e. the coefficients of the expansion depend upon θ. In this case, the approach presented in section 4.4 leads to parametric equations which may be solved by the methods introduced in section 2.2 (other methods of approximation may be found in [SOU 15]). Indeed, using the same notation as in section 4.4, we have

images

so that images is the solution of the algebraic system:

images

which may be solved as in section 2.2: assuming that the functional space where uj = uj(θ) is chosen is a separable Hilbert space, we may consider a convenient Hilbert basis images and the expansion:

images

The coefficients may be determined as in section 2.2. This approach may be reinterpreted as follows: we consider the expansion:

images

Finite dimensional approximations are obtained by truncation. For instance,

images

Let us collect all the unknowns images in a vector U of kn elements: we may use the map ind2(i, j) = (i − 1)n + j, so that, for m = ind2(i, j), images. Let images. Then, we look for a vector U such that:

images

Then, U is the solution of the algebraic system:

images

4.9. A Matlab® class for variational equations

The numerical solution of variational equations is performed analogously to the determination of orthogonal projections or weak derivatives. Let us assume that a subprogram avareq evaluates the value of a(u, v) and a subprogram ell evaluates ℓ(v). Both v ↦ a(u, v) and v ↦ ℓ(v) are linear.

In the general situation, u ↦ a(u, v) is nonlinear and we must solve a system of nonlinear algebraic equations G(U) = 0. Assume that a subprogram eqsolver solves these equations – for instance, we may look for the minimum of ||G(U)|| or call the Matlab® program fsolve, if the Optimization Toolbox is available. If u ↦ a(u, v) is linear, G(U) = AU − B and the equations correspond to a linear system, so that we may use linear solvers, such as the anti-slash operator of Matlab®.

Under these assumptions, we may use the class below:

image
image

Program 4.2. A class for one-dimensional variational equations

EXAMPLE 4.8.– Let us consider a = 0, b = 1 and the variational equation:

images

The variational equation is nonlinear and its solution is u(x) = tan(x). We have:

images

a and ℓ may be evaluated as follows:


sp = @(u,v) scalar_product.sp0(u,v,xlim,'subprogram');
avareq = @(u,v) a1(u, v,sp,b);
ell = @(v) ell1(v,sp);

where a1 and ell1 are defined in Program 4.3 below.

image

Program 4.3. Definition of a and ℓ in example 4.8

Results obtained using the family images and the family images, i > 1, are shown in Figure 4.3.

□

image

Figure 4.3. Solution of the nonlinear variational equation in example 4.8

EXAMPLE 4.9.– Let us consider a = 0, b = 1 and the variational equation:

images

Here

images

In this case, the problem is linear. Since u(0) = 0, we eliminate the constant function equal to one from the basis and consider the families images and images. The results are shown in Figure 4.4.

□

image

Figure 4.4. Solution of the linear variational equation in example 4.9

4.10. Exercises

EXERCISE 4.1.– Let Ω = (0,1) and images.

  • 1) Show that ℓ is linear.
  • 2) Let images:
    • a) Show that there exists images such that images.
    • b) Conclude that there exists one and only one u ∈ V such that images
    • c) Show that u(x) = 1 on Ω.
  • 3) Let V = H1(Ω):
    • a) Show that there exists images such that: ∀v ∈ V, |ℓ(v)| ≤ M||v||1.
    • b) Conclude that there exists one and only one u ∈ V such that images.
    • c) Show that images
  • 4) Let V = H1(Ω), but images.
    • a) Let v ∈ V.
      Verify that images:
      – show that images
      – conclude that images
      – show that images
    • b) Conclude that there exists one and only one u ∈ V such that images
    • c) Show that −u″ = 1 on images.
  • 5) Let images, with the scalar product (·, ·)1,0.
    • a) Let v ∈ V:
      – verify that images
      – conclude that images
      – show that there exists M ∈ ℝ such that: images.
    • b) Conclude that there exists one and only one u ∈ V such that images.
    • c) Show that images.

      □

EXERCISE 4.2.– Let images and ℓ(v) = v(0).

  • 1) Show that ℓ is linear.
  • 2) Let images and images. Show that images. Conclude images verifies images
  • 3) Let V = H1(Ω):
    • a) Verify that:
      images
    • b) Show that:
      images
    • c) Conclude that:
      images

      and show that images.

    • d) Conclude that ℓ: V → ℝ is continuous.
  • 4) Let images, but consider images.
    • a) Verify that images.
    • b) Use the equality images to show that images.
    • c) Conclude that there exists one and only one u ∈ V such that images.

      □

EXERCISE 4.3.– Let Ω = (0,1), a ∈ Ω and images. Let images and images.

  • 1) Verify that (·, ·) is a scalar product on V.
  • 2) Show that: images
  • 3) Show that images.
  • 4) Show that images.
  • 5) Conclude that there exists M ∈ ℝ such that images.
  • 6) Show that there exists one and only one u ∈ V such that images.

    □

EXERCISE 4.4.– Let us consider the variational equation (ƒ ∈ ℝ)

images

1) Let us consider the trigonometric basis images, with φ0 = 1 and, for k > 0:

images

Let images. Determine the coefficients un as functions of ƒ. Draw a graph of the solution for ƒ = −1 and compare it to:

images

2) Let us consider the polynomial basis F = {φn: n ∈ ℕ}, with φn = xn. Let images. Determine the coefficients un as functions of ƒ. Draw a graph of the solution for ƒ = −1 and compare it to u1.

3) Consider the family F = {φn: n ∈ ℕ} of the P1 shape functions

images

Determine the coefficients un as functions of ƒ. Draw a graph of the solution for ƒ = −1 and compare it to u1.
