Chapter 7. Interpolation and Approximation

  • 7.1 Curve Fitting

  • 7.2 Lagrange Interpolation

  • 7.3 Newton Interpolations

  • 7.4 Cubic Spline

  • 7.5 Least-Squares Approximation

  • 7.6 Visual Solution: Code7

  • 7.7 Summary

  • Numerical Exercises

  • Programming Challenges

CURVE FITTING

The body of a car is designed so that it possesses good aerodynamic features. This is important in order for the car to be comfortable, energy-efficient, cost-effective, and attractive. To achieve these objectives, the body surface of the car is made smooth. The usual technique for designing the body of a car involves computer-aided design tools on the computer. The body is constructed by fitting and blending a set of patches from B-spline or Bezier surfaces that approximate a set of points. B-spline and Bezier curves are two-dimensional curves that are widely used in curve and surface fitting.

In general, curve and surface fitting is useful in many applications, notably in the design of body surfaces such as those of cars, aircraft, ships, glasses, pipes, and vases. A patch in the surface is the three-dimensional extension of the B-spline curve, which is obtained from a curve-fitting technique.

Curve fitting is a generic term for constructing a curve from a given set of points. This objective can be achieved in two ways, through interpolation or approximation. Interpolation refers to a curve that passes through all the given points, whereas approximation is the case when the curve does not pass through one or more of the given points. The curve obtained from interpolation or approximation is one that best represents all points.


Figure 7.1. Interpolation (top) and approximation (bottom).

Figure 7.1 shows two different curves that can be produced from a set of points (xi, yi) for i = 0, 1,..., 5. The curve at the top is generated through interpolation as it passes through all points. The bottom curve is an approximation as it misses several points.

In this chapter, we will discuss several common interpolation and approximation methods. We will concentrate on the two-dimensional aspect of these methods that provides a strong foundation for three-dimensional or higher problems. The topics include the Lagrange, Newton, and cubic spline methods in interpolation, and the least-squares method in approximation. In discussing the interpolation and approximation methods, the interpolating points are given as (xi, yi), for i = 0, 1,...,n. There are n + 1 points given. The exact values at xi are yi = f (xi), whereas their interpolated values are denoted as P (xi).

LAGRANGE INTERPOLATION

An interpolation on two points, (x0, y0) and (x1, y1), results in a linear equation or a straight line. The standard form of a linear equation is given by

Equation 7.1. 

$$P_1(x) = mx + c,$$

where m is the gradient of the line and c is the y-intercept. In the above equation,

Equation 7.2. 

$$m = \frac{y_1 - y_0}{x_1 - x_0}, \qquad c = y_0 - m x_0,$$

which results in

Equation 7.3. 

$$P_1(x) = y_0 + \frac{y_1 - y_0}{x_1 - x_0}(x - x_0).$$

The French mathematician Joseph Louis Lagrange proposed to rewrite the linear equation so that the two interpolated points, (x0, y0) and (x1, y1), are directly represented. With this in mind, the linear equation is rewritten as

$$P_1(x) = a_0(x - x_1) + a_1(x - x_0),$$

where a0 and a1 are constants. The points x0 and x1 in the factors of the above equation are called the centers. Applying the equation at (x0, y0), we obtain y0 = a0(x0 - x1) + a1(x0 - x0), or

$$a_0 = \frac{y_0}{x_0 - x_1}.$$

Similarly, applying the equation at (x1, y1) gives a1 = y1/(x1 - x0), so that the linear Lagrange polynomial becomes

$$P_1(x) = y_0\frac{x - x_1}{x_0 - x_1} + y_1\frac{x - x_0}{x_1 - x_0}.$$

The quadratic form of the Lagrange polynomial interpolates three points, (x0, y0), (x1, y1), and (x2, y2). The polynomial has the form of

$$P_2(x) = a_0(x - x_1)(x - x_2) + a_1(x - x_0)(x - x_2) + a_2(x - x_0)(x - x_1),$$

with centers at x0, x1, and x2. At (x0, y0),

$$y_0 = a_0(x_0 - x_1)(x_0 - x_2),$$

or

$$a_0 = \frac{y_0}{(x_0 - x_1)(x_0 - x_2)}.$$

Similarly, applying the equation at (x1, y1) and (x2, y2) yields

$$a_1 = \frac{y_1}{(x_1 - x_0)(x_1 - x_2)}, \qquad a_2 = \frac{y_2}{(x_2 - x_0)(x_2 - x_1)}.$$

This produces the quadratic Lagrange polynomial, given by

$$P_2(x) = y_0\frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} + y_1\frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} + y_2\frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}.$$

Definition 7.1. The Lagrange operator Li(x) for (xi, yi), where i = 0, 1,...,n, is defined as

Equation 7.4. 

$$L_i(x) = \prod_{\substack{k=0 \\ k \ne i}}^{n} \frac{x - x_k}{x_i - x_k}.$$

In general, the Lagrange polynomial of degree n is a polynomial that is produced from an interpolation over a set of points, (xi, yi) for i = 0, 1,...,n, as follows:

Equation 7.5. 

$$P_n(x) = \sum_{i=0}^{n} y_i L_i(x).$$

There are n factors in both the numerator and the denominator of each Lagrange operator in Equation (7.5). The inequality k ≠ i prevents a zero value in the denominator, which would cause a fatal error in the division. It is obvious that n = 1 produces a linear curve or a straight line, whereas n = 2 produces a quadratic curve or a parabola.

Example 7.1. Find a polynomial P (x) that interpolates the points {(−1, 2), (0, 3), (2, −1), (5, 1)} using the Lagrange method. Hence, find P (2.5).

Solution. There are four given points, and this will produce a polynomial of degree n = 3, given as

$$P_3(x) = 2\,\frac{x(x - 2)(x - 5)}{(-1)(-3)(-6)} + 3\,\frac{(x + 1)(x - 2)(x - 5)}{(1)(-2)(-5)} + (-1)\,\frac{(x + 1)x(x - 5)}{(3)(2)(-3)} + (1)\,\frac{(x + 1)x(x - 2)}{(6)(5)(3)}.$$

Expanding and simplifying, the polynomial is

$$P_3(x) = 0.255556x^3 - 1.255556x^2 - 0.511111x + 3.$$

Therefore, P(2.5) = P3(2.5) = −2.131944.
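As a quick standalone check of Equation (7.5), the following minimal sketch evaluates the Lagrange polynomial at a single point for the data of Example 7.1. It is not part of the Code7 program discussed later in this chapter; the function name lagrangeAt and the use of std::vector are choices made here for illustration only.

#include <cstdio>
#include <vector>

// Evaluate the Lagrange polynomial of Equation (7.5) at a single value of x.
double lagrangeAt(const std::vector<double>& xs, const std::vector<double>& ys, double x)
{
    double P = 0.0;
    for (std::size_t i = 0; i < xs.size(); i++)
    {
        double L = 1.0;                                // Lagrange operator L_i(x)
        for (std::size_t k = 0; k < xs.size(); k++)
            if (k != i)
                L *= (x - xs[k]) / (xs[i] - xs[k]);
        P += ys[i] * L;
    }
    return P;
}

int main()
{
    std::vector<double> xs = { -1, 0, 2, 5 };
    std::vector<double> ys = {  2, 3, -1, 1 };
    std::printf("P(2.5) = %f\n", lagrangeAt(xs, ys, 2.5));   // about -2.131944
    return 0;
}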

NEWTON INTERPOLATIONS

The Lagrange method has a drawback: the amount of calculation depends very much on the number of interpolating points. A large number of points requires very tedious calculations of the Lagrange operators, as each of these functions has the same degree as the interpolating polynomial.

A slightly simpler approach than the Lagrange method is the Newton method, which produces the interpolating polynomial in the form of a Newton polynomial. A Newton polynomial has the following general form:

Equation 7.6. 

$$P_n(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + \dots + a_n(x - x_0)(x - x_1)\cdots(x - x_{n-1}).$$

In the above equation, ai for i = 0, 1,...,n are constants whose values are determined by applying the equation at the given interpolated points.

There are several different methods for evaluating the Newton polynomials. They include the divided-difference, forward-difference, backward-difference, and central-difference methods. We discuss each of these methods in this chapter.

Divided-Difference Method

The divided-difference method determines the coefficients ai for i = 0, 1,...,n in Equation (7.6) using the divided-difference constants, which are defined as follows:

Definition 7.2. The divided-difference constant dk,i is defined as the kth divided difference of the function y = f(x) at xi, where

$$d_{k,i} = \frac{d_{k-1,i+1} - d_{k-1,i}}{x_{i+k} - x_i}.$$

The initial values are d0,i = yi for i = 0, 1,...,n, and the recurrence applies for k = 1, 2,...,n.

The general form of the linear equation in Equation (7.1), which interpolates (x0, y0) and (x1, y1), can also be expressed in terms of the divided-difference constants,

$$P_1(x) = d_{0,0} + d_{1,0}(x - x_0),$$

where x0 is the center and d0,0 and d1,0 are special constants called the zeroth and first divided differences at x0, respectively. Applying the linear equation at the two points, we obtain

$$y_0 = d_{0,0}, \qquad y_1 = d_{0,0} + d_{1,0}(x_1 - x_0).$$

This gives

$$d_{0,0} = y_0, \qquad d_{1,0} = \frac{y_1 - y_0}{x_1 - x_0}.$$

At the same time, the quadratic form of the Newton polynomial, which interpolates the points (x0, y0), (x1, y1), and (x2, y2), can be written as

$$P_2(x) = d_{0,0} + d_{1,0}(x - x_0) + d_{2,0}(x - x_0)(x - x_1).$$

In the above equation, x0 and x1 are the centers. Applying the quadratic equation to the three points, we obtain

$$d_{0,0} = y_0, \qquad d_{1,0} = \frac{y_1 - y_0}{x_1 - x_0}, \qquad d_{2,0} = \frac{1}{x_2 - x_0}\left(\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_1 - y_0}{x_1 - x_0}\right).$$

In general, the divided-difference method for interpolating (xi, yi) for i = 0, 1,...,n produces a Newton polynomial of degree n, given by

Equation 7.7. 

$$P_n(x) = d_{0,0} + \sum_{k=1}^{n} d_{k,0}(x - x_0)(x - x_1)\cdots(x - x_{k-1}).$$

Algorithm 7.2 summarizes the divided-difference approach. An example using this algorithm is illustrated in Example 7.2.

Example 7.2. Find the polynomial from the interpolating points in Example 7.1 using Newton's divided-difference method.

Solution. From the set {(−1, 2), (0, 3), (2, −1), (5, 1)}, we obtain the divided-difference table

i    xi    d0,i = yi    d1,i         d2,i         d3,i
0    −1    2             1.000000    −1.000000    0.255556
1     0    3            −2.000000     0.533333
2     2    −1             0.666667
3     5    1

It follows that

$$P_3(x) = 2 + (x + 1) - (x + 1)x + 0.255556(x + 1)x(x - 2).$$

Therefore,

$$P_3(x) = 0.255556x^3 - 1.255556x^2 - 0.511111x + 3,$$

and we obtain P(2.5) = P3(2.5) = −2.131944, in agreement with Example 7.1.
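The divided-difference table and the evaluation of Equation (7.7) can be sketched as a small standalone program, shown below. It is independent of the Code7 program; the function name newtonDividedDifference is chosen here for illustration only.

#include <cstdio>
#include <vector>

// Build the divided-difference constants of Definition 7.2 and evaluate the
// Newton polynomial of Equation (7.7) at a single value of x.
double newtonDividedDifference(const std::vector<double>& xs,
                               const std::vector<double>& ys, double x)
{
    int n = (int)xs.size();
    std::vector<std::vector<double>> d(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; i++)
        d[0][i] = ys[i];                                // d_{0,i} = y_i
    for (int k = 1; k < n; k++)
        for (int i = 0; i < n - k; i++)
            d[k][i] = (d[k-1][i+1] - d[k-1][i]) / (xs[i+k] - xs[i]);

    double P = d[0][0], product = 1.0;
    for (int k = 1; k < n; k++)
    {
        product *= (x - xs[k-1]);                       // (x-x0)(x-x1)...(x-x_{k-1})
        P += d[k][0] * product;
    }
    return P;
}

int main()
{
    std::vector<double> xs = { -1, 0, 2, 5 };
    std::vector<double> ys = {  2, 3, -1, 1 };
    std::printf("P(2.5) = %f\n", newtonDividedDifference(xs, ys, 2.5));   // about -2.131944
    return 0;
}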

Forward-Difference Method

Both the Lagrange and the Newton divided-difference methods can be applied whether the x subintervals are uniform or nonuniform. In the special case where the x subintervals are uniform, it may not be necessary to apply these two methods, as other methods may prove easier. In this case, the divided-difference method with uniform x subintervals reduces to a method called the forward-difference method. This method involves an operator called the forward-difference operator.

Definition 7.3. The forward-difference operator Δk,i is defined as the kth forward difference at xi, or Δk,i = Δk−1,i+1 − Δk−1,i. The initial values are given by Δ0,i = yi for i = 0, 1, ..., n.

In deriving the Newton polynomial using the forward-difference method, let h be the width of the x subintervals. Uniform subintervals means that all subintervals in x have the same width h; in other words, h = xi+1 − xi for i = 0, 1, ..., n − 1.

The forward-difference formula is derived from the divided-difference equation. Consider the cubic form of the divided-difference polynomial from Equation (7.6), given as

Equation 7.27. 

$$P_3(x) = y_0 + d_{1,0}(x - x_0) + d_{2,0}(x - x_0)(x - x_1) + d_{3,0}(x - x_0)(x - x_1)(x - x_2).$$

With uniform subintervals of width h and the substitution r = (x − x0)/h, the divided differences reduce to forward differences, and this is simplified into

Equation 7.28. 

$$P_3(x) = y_0 + r\,\Delta_{1,0} + \frac{r(r - 1)}{2!}\,\Delta_{2,0} + \frac{r(r - 1)(r - 2)}{3!}\,\Delta_{3,0}.$$

It can be proven that the general form of the forward-difference method for interpolating the points (xi, yi) for i = 0, 1, ..., n extends directly from the cubic case above. The solution is given by

Equation 7.8. 

$$P_n(x) = y_0 + \sum_{k=1}^{n} \frac{r(r - 1)(r - 2)\cdots(r - k + 1)}{k!}\,\Delta_{k,0}, \qquad r = \frac{x - x_0}{h}.$$

Algorithm 7.3 summarizes the forward-difference approach. This is followed by an illustration using Example 7.3.

Example 7.3. Find the Newton polynomial P(x) from the points given by {(−2, 2), (0, 3), (2, −1), (4, 1)} using the forward-difference method. Hence, find P (2.5).

Solution. From the set {(−2, 2), (0, 3), (2, −1), (4, 1)}, we obtain

i    xi    Δ0,i = f(xi)    Δ1,i    Δ2,i    Δ3,i
0    −2     2               1      −5      11
1     0     3              −4       6
2     2    −1               2
3     4     1

From the above table, we get the forward-difference values,

$$\Delta_{1,0} = 1, \qquad \Delta_{2,0} = -5, \qquad \Delta_{3,0} = 11.$$

With h = 2 and r = (x − x0)/h = (x + 2)/2, we obtain

$$P_3(x) = 2 + r + \frac{r(r - 1)}{2!}(-5) + \frac{r(r - 1)(r - 2)}{3!}(11).$$

It follows that, with r = 2.25, P(2.5) = P3(2.5) = −1.492188.
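For equally spaced points, the forward-difference table and Equation (7.8) can be evaluated with the following standalone sketch. It is not part of Code7, and the function name newtonForward is chosen here only for illustration; the code assumes the given x values are equally spaced.

#include <cstdio>
#include <vector>

// Build the forward differences of Definition 7.3 and evaluate Equation (7.8)
// at a single x, assuming equally spaced x values of width h.
double newtonForward(const std::vector<double>& xs, const std::vector<double>& ys, double x)
{
    int n = (int)xs.size();
    double h = xs[1] - xs[0];
    std::vector<std::vector<double>> delta(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; i++)
        delta[0][i] = ys[i];                             // Delta_{0,i} = y_i
    for (int k = 1; k < n; k++)
        for (int i = 0; i < n - k; i++)
            delta[k][i] = delta[k-1][i+1] - delta[k-1][i];

    double r = (x - xs[0]) / h;
    double P = delta[0][0], term = 1.0, factorial = 1.0;
    for (int k = 1; k < n; k++)
    {
        term *= (r - (k - 1));                           // r(r-1)...(r-k+1)
        factorial *= k;                                  // k!
        P += delta[k][0] * term / factorial;
    }
    return P;
}

int main()
{
    std::vector<double> xs = { -2, 0, 2, 4 };
    std::vector<double> ys = {  2, 3, -1, 1 };
    std::printf("P(2.5) = %f\n", newtonForward(xs, ys, 2.5));   // about -1.492188
    return 0;
}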

Backward-Difference Method

The operator Δk,i is based on forward differences. It is also possible to do the opposite, that is, to take backward differences.

Definition 7.4. The backward-difference operator ∇k,i is defined as the kth backward difference at xi, or ∇k,i = ∇k−1,i − ∇k−1,i−1. The initial values are ∇0,i = yi for i = 0, 1,...,n.

The backward-difference method is also derived from the Newton divided-difference method, and it likewise requires all x subintervals to have the same uniform width h. Working from the last point (xn, yn) toward the first and writing r = (x − xn)/h, the Newton polynomial becomes

Equation 7.9. 

$$P_n(x) = y_n + \sum_{k=1}^{n} \frac{r(r + 1)(r + 2)\cdots(r + k - 1)}{k!}\,\nabla_{k,n}, \qquad r = \frac{x - x_n}{h},$$

where ∇k,i is the backward-difference operator of Definition 7.4.

Algorithm 7.4 summarizes the steps in the backward-difference method for generating the Newton polynomial. The algorithm is illustrated through Example 7.4.

Example 7.4. Find the Newton polynomial P(x) from the interpolating points given by {(−2, 2), (0, 3), (2, −1), (4, 1)} using the backward-difference method. Hence, find P(2.5) using the backward-difference method.

Solution. From the set {(−2, 2), (0, 3), (2, −1), (4, 1)}, we obtain

i    xi    ∇0,i = f(xi)    ∇1,i    ∇2,i    ∇3,i
0    −2     2
1     0     3               1
2     2    −1              −4      −5
3     4     1               2       6      11

From the above table, we get the backward-difference values,

$$\nabla_{1,3} = 2, \qquad \nabla_{2,3} = 6, \qquad \nabla_{3,3} = 11.$$

With h = 2 and r = (x − x3)/h = (x − 4)/2, this produces

$$P_3(x) = 1 + 2r + \frac{r(r + 1)}{2!}(6) + \frac{r(r + 1)(r + 2)}{3!}(11).$$

Therefore, with r = −0.75, P(2.5) = P3(2.5) = −1.492188, which agrees with the forward-difference result of Example 7.3.

Stirling's Method

The forward-difference and backward-difference methods may be biased toward the forward and backward directions, respectively, in their construction of Newton polynomials. As a result, the accuracy and precision of the results may differ to some extent. A more practical approach is to consider both the forward and the backward factors within a single formulation. Stirling's method is one such method; it considers both factors to produce a more reliable result.

Stirling's method is based on uniform x subintervals and works best when interpolating points that lie close to the middle of the interval. In interpolating a point x in x0 < x < xn, the method starts by locating the subinterval xi < x < xi+1, where xi and xi+1 are two given successive points. This determines the index k = i, where xk is the lower-bound value of the subinterval, or xk = xi. Stirling's solution is expressed in terms of the variable r as

Equation 7.10. 

where

$$r = \frac{x - x_k}{h},$$

Equation 7.11a. 

Equation 7.11b. 

for i = 0, 1,...,n and j = 1,...,n. In Equations (7.11a) and (7.11b), δij and μij are the odd- and even-order constants of Stirling's method, respectively; δij computes the difference between the entries to its left and right, whereas μij is the average of those entries.

Stirling's method is implemented according to the steps in Algorithm 7.5. The algorithm is illustrated through Example 7.5.

Example 7.5. Find P(3.7) from the following points using Stirling's method:

i             0      1      2      3      4      5
xi            3.0    3.2    3.4    3.6    3.8    4.0
yi = f(xi)    2.5    2.9    2.4    2.0    2.1    2.7

Solution. The width of the subintervals is h = 0.2. Since 3.6 < 3.7 < 3.8, we have xk = x3 and k = 3. Therefore, yk = y3 = 2.0, and the difference table is

i    xi     δi0 = f(xi)    δi1        δi2        δi3        δi4        δi5
0    3.0    2.5
                            0.400
1    3.2    2.9                       −0.900
                           −0.500                 1.000
2    3.4    2.4                        0.100                −0.600
                           −0.400                 0.400                 0.200
3    3.6    2.0                        0.500                −0.400
                            0.100                 0.000
4    3.8    2.1                        0.500
                            0.600
5    4.0    2.7

From Equation (7.10), we obtain

Equation 7.40. 


CUBIC SPLINE

A spline is a single curve that is formed from a set of piecewise continuous functions sk(x) for k = 1, ..., m as a result of interpolation over the points (xk, yk) for k = 0, 1, ..., m. A spline made from four pieces of functions, for example, has five interpolating points and appears as a single piece of smooth curve. This is realized because the spline is continuous at all joints. Not only that, the spline is also continuous in its derivatives at these points, which contributes to the smoothness.

A spline of degree n is constructed from piecewise polynomials of the same degree. The simplest spline is the linear spline, which consists of straight lines connecting the points successively. A quadratic spline consists of quadratic polynomials connecting the points in such a way that the first derivatives at the interpolated points exist. As the name suggests, a cubic spline connects the points through cubic polynomials where, besides continuity, the curve has the main property of being differentiable in its first and second derivatives at each connecting point between any two pieces of functions.


Figure 7.2. A cubic spline interpolated from four points.

Figure 7.2 shows a cubic spline interpolated over four points (xi, yi), for i = 0, 1, 2, 3, whose pieces are denoted as sk(x) for k = 1, 2, 3. Each piece of the spline is a continuous cubic function in its subinterval. The spline appears very smooth at each interior node because its first and second derivatives exist there.

A cubic spline is a spline of degree three in the interval x0 ≤ x ≤ xm, made up of a set of piecewise polynomials sk(x). The general form of a cubic spline is expressed as

Equation 7.12. 

$$s_k(x) = a_k x^3 + b_k x^2 + c_k x + d_k, \qquad x_{k-1} \le x \le x_k.$$

In the above equation, ak, bk, ck, and dk are constants for k = 1, 2, ..., m that make up the cubic polynomials. The spline interpolates m + 1 points, (xi, yi) for i = 0, 1, ..., m.

A cubic spline has the following properties that satisfy the requirements of continuity and differentiability at the interpolating nodes:

Property 1. The spline is continuous in x0 ≤ x ≤ xm, or s1(x0) = y0 and sk(xk) = yk for k = 1, 2, ..., m. This property states that the spline passes through all m + 1 given points.

Property 2. The spline values at each interior node must be equal, or sk(xk) = sk+1(xk) for k = 1, 2, ..., m − 1. This property states that the value of the spline from the left of the interior node must be the same as its value from the right.

Property 3. The first derivative is continuous at each interior knot, or s'k(xk) = s'k+1(xk) for k = 1, 2, ..., m − 1. This property is derived from fundamental calculus, where a derivative at a point exists if its value from the left equals that from the right.

Property 4. The second derivative is continuous at each interior knot, or s"k(xk) = s"k+1(xk) for k = 1, 2, ..., m − 1. The same property from calculus applies: the second derivative at a point exists if its value from the left equals that from the right.

Property 5. The end knots are free boundaries, which means the second derivatives at the end knots are zero, or s"1(x0) = s"m(xm) = 0.

Property 5 above, the condition s"1(x0) = s"m(xm) = 0, is referred to as the free-boundary condition at the end nodes. A spline with this condition is called a natural cubic spline. Alternatively, another type of spline, the clamped cubic spline, is produced if the free-boundary conditions are replaced by prescribed first derivatives at the end nodes, s'1(x0) = f'(x0) and s'm(xm) = f'(xm).

The first derivative of Equation (7.12) is a quadratic function, whereas its second derivative is linear. Let wk = s"(xk) for 0 ≤ k ≤ m. From the condition s"(x0) = s"(xm) = 0, we have w0 = wm = 0. On the subinterval xk−1 ≤ x ≤ xk, the second derivative can be written as

Equation 7.42. 

$$s_k''(x) = w_{k-1}\,\frac{x_k - x}{h_k} + w_k\,\frac{x - x_{k-1}}{h_k},$$

where

$$h_k = x_k - x_{k-1}.$$

Integrating the above expression twice gives an alternative form of Equation (7.12),

Equation 7.13. 

$$s_k(x) = A_k(x_k - x)^3 + B_k(x - x_{k-1})^3 + C_k(x_k - x) + D_k(x - x_{k-1}),$$

for k = 1, 2, ..., m. The constants Ak, Bk, Ck, and Dk are found to be

Equation 7.14a. 

$$A_k = \frac{w_{k-1}}{6h_k},$$

Equation 7.14b. 

$$B_k = \frac{w_k}{6h_k},$$

Equation 7.14c. 

$$C_k = \frac{y_{k-1}}{h_k} - \frac{w_{k-1}h_k}{6},$$

Equation 7.14d. 

$$D_k = \frac{y_k}{h_k} - \frac{w_k h_k}{6}.$$

The values of wk can be found from the continuity condition s'k(xk) = s'k+1(xk) at the interior nodes. From Equation (7.13), we obtain the relationship given by

Equation 7.15. 

$$h_k w_{k-1} + 2(h_k + h_{k+1})w_k + h_{k+1}w_{k+1} = 6\left(\frac{y_{k+1} - y_k}{h_{k+1}} - \frac{y_k - y_{k-1}}{h_k}\right),$$

for k = 1, 2, ..., m − 1. This produces a tridiagonal system of linear equations, given by

Equation 7.16. 

$$e_k w_{k-1} + f_k w_k + g_k w_{k+1} = r_k, \qquad k = 1, 2, \dots, m - 1, \qquad w_0 = w_m = 0,$$

whose three-band diagonal elements are

Equation 7.17a. 

$$e_k = x_k - x_{k-1},$$

Equation 7.17b. 

$$f_k = 2(x_{k+1} - x_{k-1}),$$

Equation 7.17c. 

$$g_k = x_{k+1} - x_k,$$

Equation 7.17d. 

$$r_k = 6\left(\frac{y_{k+1} - y_k}{x_{k+1} - x_k} - \frac{y_k - y_{k-1}}{x_k - x_{k-1}}\right).$$

With wk determined from Equation (7.16), we obtain the values of Ak, Bk, Ck, and Dk using Equations (7.14a), (7.14b), (7.14c), and (7.14d), respectively.

The Thomas algorithm is the most practical method for solving this tridiagonal system of linear equations, as it uses a small number of variables and, therefore, consumes a small amount of memory. Example 7.6 shows an example of cubic spline interpolation where the generated system of linear equations is solved using the Thomas algorithm.
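A compact standalone sketch of the Thomas algorithm applied to a system of the form in Equation (7.16) is given below. It is separate from the CSpline() implementation shown later in this chapter; the function name thomas and the 1-indexed vectors are choices made here, and the sample system is the one that arises in Example 7.6 below.

#include <cstdio>
#include <vector>

// Solve e[k]*w[k-1] + f[k]*w[k] + g[k]*w[k+1] = r[k] for k = 1..m-1,
// with w[0] = w[m] = 0, using the Thomas algorithm.
std::vector<double> thomas(std::vector<double> e, std::vector<double> f,
                           std::vector<double> g, std::vector<double> r)
{
    int last = (int)f.size() - 1;              // interior unknowns are indices 1..last
    for (int k = 2; k <= last; k++)            // forward elimination
    {
        double factor = e[k] / f[k-1];
        f[k] -= factor * g[k-1];
        r[k] -= factor * r[k-1];
    }
    std::vector<double> w(last + 1, 0.0);      // back substitution
    w[last] = r[last] / f[last];
    for (int k = last - 1; k >= 1; k--)
        w[k] = (r[k] - g[k] * w[k+1]) / f[k];
    return w;
}

int main()
{
    // 6w1 + 2w2 = -18,  2w1 + 10w2 + 3w3 = 16,  3w2 + 8w3 = 20 (index 0 unused)
    std::vector<double> e = { 0, 0, 2, 3 };
    std::vector<double> f = { 0, 6, 10, 8 };
    std::vector<double> g = { 0, 2, 3, 0 };
    std::vector<double> r = { 0, -18, 16, 20 };
    std::vector<double> w = thomas(e, f, g, r);
    for (int k = 1; k <= 3; k++)
        std::printf("w%d = %f\n", k, w[k]);    // about -3.588832, 1.766497, 1.837563
    return 0;
}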

Example 7.6. Find a cubic spline that interpolates (xk, yk) = {(−1, 2), (0, 3), (2, −1), (5, 1), (6, 5)}.

Solution. In this problem, there are five points, and m = 4, where w0 = w4 = 0. From Algorithm 7.6, we get a system of three linear equations in w1, w2, and w3, given by

$$e_k w_{k-1} + f_k w_k + g_k w_{k+1} = r_k, \qquad k = 1, 2, 3.$$

The values of ek, fk, gk, and rk for k = 1 to k = m − 1 are computed using Equations (7.17a), (7.17b), (7.17c), and (7.17d). The results are displayed in the following table. (The entries e1 and g3 are not needed because they multiply w0 = 0 and w4 = 0.)

k    xk      yk      ek          fk           gk          rk
0    −1.0    2.0
1     0.0    3.0                 6.000000     2.000000    −18.000000
2     2.0   −1.0    2.000000    10.000000     3.000000     16.000000
3     5.0    1.0    3.000000     8.000000                  20.000000
4     6.0    5.0

This produces the following system of linear equations:

Equation 7.56. 

$$6w_1 + 2w_2 = -18, \qquad 2w_1 + 10w_2 + 3w_3 = 16, \qquad 3w_2 + 8w_3 = 20.$$

The above system is solved to produce w1 = −3.588832, w2 = 1.766497, and w3 = 1.837563. We obtain Ak, Bk, Ck, and Dk using Equations (7.14a), (7.14b), (7.14c), and (7.14d), as shown in the following table:

k    xk      yk      Ak           Bk           Ck           Dk
0    −1.0    2.0
1     0.0    3.0     0.000000    −0.598139     2.000000     6.588832
2     2.0   −1.0    −0.299069     0.147208     1.799069    −0.647208
3     5.0    1.0     0.098139     0.102087    −0.431472     0.267706
4     6.0    5.0     0.306261     0.000000     0.693739     5.000000

LEAST-SQUARES APPROXIMATION

The least-squares method is an approximation method for a set of points based on the sum of the squares of the errors. The method is widely applied in many areas, such as statistical analysis involving multiple regression. Multiple regression involves approximation over several variables based on straight lines or linear equations. Its use of low-degree polynomials makes it a useful tool for forecasting, experimental design, and other forms of statistical modeling.

In the least-squares method, an error at a point is defined as the difference between the true value and the approximated value. The method has an advantage over the Lagrange and Newton methods in that the degree of the approximating polynomial is independent of the number of points. This allows a low-degree polynomial to fit any finite number of points.

The least-squares method generates a low-degree polynomial for approximating the given points by minimizing the sum of the squares of the errors. The solution to the problem is obtained by solving a system of linear equations that is generated from the minimization.

Approximation using the least-squares method can be achieved in either continuous or discrete form. The difference between these two forms rests on the use of an integral in the former and a summation in the latter. The continuous least-squares method is appropriate in applications requiring continuous variables and in analog-based applications. On the other hand, the discrete least-squares method is good at handling applications that have finite data. We limit our discussion to the discrete least-squares method in this chapter.

The discrete least-squares form of the problem is based on m given points (xi, yi) for i = 0, 1, ..., m − 1. The curve to be fitted is a low-degree polynomial P(x) that best represents all points. The most common function used in the least-squares method is the linear function P(x) = c0 + c1x, which is good enough for many applications. Occasionally, some applications also require quadratic or cubic polynomials.


Figure 7.3. Approximation using a polynomial in the least-squares method.

In the least-squares method, the error between the given value yi and the approximated value pi = P(xi) at the point x = xi is given by

Equation 7.18. 

$$e_i = y_i - P(x_i).$$

The sum of the squares of the errors ei at the points x = xi for i = 0, 1, ..., m − 1 is expressed as an objective function E, as follows:

Equation 7.19. 

$$E = \sum_{i=0}^{m-1} e_i^2 = \sum_{i=0}^{m-1} \bigl(y_i - P(x_i)\bigr)^2.$$

Figure 7.3 shows a case of m = 6 points with the given points (xi, yi) drawn as white squares and the approximated points (xi, pi) as dark squares. At each point xi, the error ei = yi − pi is computed. The objective function in Equation (7.19) is obtained by adding the squares of all these errors.

The curves to be fitted in least-squares approximation are normally low-degree polynomials, such as

P1(x) = c0 + c1x, for a linear function,

P2(x) = c0 + c1x + c2x2, for a quadratic polynomial,

P3(x) = c0 + c1x + c2x2 + c3x3, for a cubic polynomial.

In linear approximation, a straight-line equation is used to approximate the points. The objective function becomes

Equation 7.20. 

$$E = \sum_{i=0}^{m-1} (y_i - c_0 - c_1 x_i)^2.$$

Our objective here is to find the values of c0 and c1 that minimize the sum of the squares of the errors. The minimization requires setting

$$\frac{\partial E}{\partial c_0} = 0 \qquad \text{and} \qquad \frac{\partial E}{\partial c_1} = 0.$$

Setting the first partial derivative to zero gives the first linear equation,

$$\frac{\partial E}{\partial c_0} = -2\sum_{i=0}^{m-1}(y_i - c_0 - c_1 x_i) = 0, \qquad \text{or} \qquad c_0\,m + c_1\sum_{i=0}^{m-1} x_i = \sum_{i=0}^{m-1} y_i.$$

The second linear equation is obtained by setting the second partial derivative to zero,

$$\frac{\partial E}{\partial c_1} = -2\sum_{i=0}^{m-1} x_i(y_i - c_0 - c_1 x_i) = 0, \qquad \text{or} \qquad c_0\sum_{i=0}^{m-1} x_i + c_1\sum_{i=0}^{m-1} x_i^2 = \sum_{i=0}^{m-1} x_i y_i.$$

The two equations can be written in matrix form as follows:

Equation 7.21. 

$$\begin{bmatrix} m & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \end{bmatrix}.$$
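The 2 × 2 system in Equation (7.21) is small enough to be solved directly. The following minimal standalone sketch does this for the linear fit; it is not part of Code7, and the function name linearFit is chosen here for illustration only.

#include <cstdio>
#include <vector>

// Solve the 2x2 normal equations of Equation (7.21) for P1(x) = c0 + c1*x.
void linearFit(const std::vector<double>& xs, const std::vector<double>& ys,
               double& c0, double& c1)
{
    double m = (double)xs.size();
    double Sx = 0, Sxx = 0, Sy = 0, Sxy = 0;
    for (std::size_t i = 0; i < xs.size(); i++)
    {
        Sx  += xs[i];                        // sum of x
        Sxx += xs[i] * xs[i];                // sum of x^2
        Sy  += ys[i];                        // sum of y
        Sxy += xs[i] * ys[i];                // sum of x*y
    }
    double det = m * Sxx - Sx * Sx;          // determinant of the 2x2 matrix
    c1 = (m * Sxy - Sx * Sy) / det;
    c0 = (Sy - c1 * Sx) / m;
}

int main()
{
    std::vector<double> xs = { -1, 0, 2, 5 };
    std::vector<double> ys = {  2, 3, -1, 1 };
    double c0, c1;
    linearFit(xs, ys, c0, c1);
    std::printf("P1(x) = %f + %f x\n", c0, c1);   // about 1.714286 - 0.309524 x
    return 0;
}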

A least-squares approximation using the quadratic function P2(x) = c0 + c1x + c2x2 produces a 3 × 3 system of linear equations. The objective function is

Equation 7.22. 

$$E = \sum_{i=0}^{m-1} (y_i - c_0 - c_1 x_i - c_2 x_i^2)^2.$$

The approximation is obtained by minimizing the sum of the squares of the errors. Setting ∂E/∂c0 = 0 gives the first equation,

$$c_0\,m + c_1\sum_{i=0}^{m-1} x_i + c_2\sum_{i=0}^{m-1} x_i^2 = \sum_{i=0}^{m-1} y_i.$$

The second equation is generated in the same manner, from ∂E/∂c1 = 0:

$$c_0\sum_{i=0}^{m-1} x_i + c_1\sum_{i=0}^{m-1} x_i^2 + c_2\sum_{i=0}^{m-1} x_i^3 = \sum_{i=0}^{m-1} x_i y_i.$$

We also obtain the third equation from similar steps, with ∂E/∂c2 = 0:

$$c_0\sum_{i=0}^{m-1} x_i^2 + c_1\sum_{i=0}^{m-1} x_i^3 + c_2\sum_{i=0}^{m-1} x_i^4 = \sum_{i=0}^{m-1} x_i^2 y_i.$$

The three equations are formulated in matrix form, as follows:

Equation 7.23. 

$$\begin{bmatrix} m & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{bmatrix}.$$

Equations (7.21) and (7.23) can be generalized to approximation by a polynomial of degree n. The least-squares method produces the following system of (n + 1) × (n + 1) linear equations:

Equation 7.24. 

$$\sum_{j=0}^{n} c_j \sum_{i=0}^{m-1} x_i^{\,j+k} = \sum_{i=0}^{m-1} y_i x_i^{\,k}, \qquad k = 0, 1, \dots, n.$$

Equation (7.24) is a generalization for fitting a polynomial of degree n to a set of m points using the least-squares approximation method. Letting

$$S_k = \sum_{i=0}^{m-1} x_i^{\,k} \qquad \text{and} \qquad v_k = \sum_{i=0}^{m-1} y_i x_i^{\,k},$$

the system can be written as

Equation 7.25. 

$$\sum_{j=0}^{n} S_{j+k}\, c_j = v_k, \qquad k = 0, 1, \dots, n.$$

As a word of caution, since the equation involves a power of zero, a value of x = 0 produces 0^0, which should be treated as 1.

Example 7.7. Find a least-squares polynomial of degree 2 that approximates the points (xk, yk) = {(−1, 2), (0, 3), (2, −1), (5, 1)}.

Solution. In this problem, the number of points is m = 4, whereas the polynomial degree is n = 2. The quadratic polynomial is P2(x) = c0 + c1x + c2x2, and the objective here is to find the values of c0, c1, and c2. From Equation (7.24), a 3 × 3 system of equations is produced, as follows:

Equation 7.71. 

$$\begin{bmatrix} S_0 & S_1 & S_2 \\ S_1 & S_2 & S_3 \\ S_2 & S_3 & S_4 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} v_0 \\ v_1 \\ v_2 \end{bmatrix},$$

which becomes

Equation 7.72. 

$$\begin{bmatrix} 4 & 6 & 30 \\ 6 & 30 & 132 \\ 30 & 132 & 642 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 5 \\ 1 \\ 23 \end{bmatrix}.$$

Solving the above system of linear equations, we obtain c0 = 1.431818, c1 = −1.219697, and c2 = 0.219697. Therefore, the approximating polynomial is P2(x) = 1.431818 − 1.219697x + 0.219697x2.
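The general normal equations can also be assembled and solved in a short standalone sketch, shown below. It is independent of the LSquare() function presented in the next section; the function name polyFit and the use of plain Gaussian elimination without pivoting are choices made here for illustration, and for the data of Example 7.7 with n = 2 it reproduces the quadratic fit above.

#include <cmath>
#include <cstdio>
#include <vector>

// Build and solve the (n+1)x(n+1) system of Equation (7.24), using
// S_k = sum of x^k and v_k = sum of y*x^k as in Equation (7.25).
std::vector<double> polyFit(const std::vector<double>& xs,
                            const std::vector<double>& ys, int n)
{
    std::vector<double> S(2 * n + 1, 0.0), v(n + 1, 0.0);
    for (std::size_t i = 0; i < xs.size(); i++)
        for (int k = 0; k <= 2 * n; k++)
        {
            double xk = (k == 0) ? 1.0 : std::pow(xs[i], k);   // treat 0^0 as 1
            S[k] += xk;
            if (k <= n) v[k] += ys[i] * xk;
        }
    // Augmented matrix a[j][k] = S[j+k] with right-hand side v[j].
    std::vector<std::vector<double>> a(n + 1, std::vector<double>(n + 2, 0.0));
    for (int j = 0; j <= n; j++)
    {
        for (int k = 0; k <= n; k++) a[j][k] = S[j + k];
        a[j][n + 1] = v[j];
    }
    // Gaussian elimination followed by back substitution.
    for (int k = 0; k <= n; k++)
        for (int j = k + 1; j <= n; j++)
        {
            double factor = a[j][k] / a[k][k];
            for (int col = k; col <= n + 1; col++) a[j][col] -= factor * a[k][col];
        }
    std::vector<double> c(n + 1, 0.0);
    for (int j = n; j >= 0; j--)
    {
        double sum = a[j][n + 1];
        for (int k = j + 1; k <= n; k++) sum -= a[j][k] * c[k];
        c[j] = sum / a[j][j];
    }
    return c;
}

int main()
{
    std::vector<double> xs = { -1, 0, 2, 5 };
    std::vector<double> ys = {  2, 3, -1, 1 };
    std::vector<double> c = polyFit(xs, ys, 2);
    std::printf("P2(x) = %f + %f x + %f x^2\n", c[0], c[1], c[2]);
    // about 1.431818 - 1.219697 x + 0.219697 x^2
    return 0;
}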

VISUAL SOLUTION: CODE7

Our discussion of interpolation and approximation methods would not be complete without looking at a visual interface for these problems. The project is called Code7, and it displays curves generated from points clicked on the displayed window. Curves can be chosen from the Lagrange, Newton's divided-difference, cubic spline, and least-squares methods.

Figure 7.4 shows a sample output from Code7 displaying the Lagrange polynomial that interpolates five points. The main window in the output is split into three regions. The first region consists of the menu items for the four methods. The second is the input region for the points, which also displays the curve for the problem. The third is a table of values of x and their interpolated or approximated polynomial values P(x).


Figure 7.4. Output from Code7.

Input for the problem is made by left-clicking the mouse in the input region in the order from left to right. A maximum of 20 points can be clicked in as the input points. A curve corresponding to the selected method is generated by left-clicking an item in the menu once input has been completed.

Figure 7.5 is a schematic drawing illustrating the computational steps in Code7. The execution progress in Code7 is monitored through fStatus, with fStatus=0 indicating the pre-curve-drawing stage and fStatus=1 indicating that the curve has been generated. The program starts with the initialization of variables and objects in the constructor, CCode7(). The initial display consists of the menu items and the input region. Input is performed by left-clicking anywhere in the curve region in the order from left to right. A click anywhere in the input region is recorded as an input point, and a small rectangle is drawn at the point. The event is handled by OnLButtonDown(). With at least two points drawn, a click on a menu item activates OnLButtonDown() again, and a call to the selected method is made. This changes the fStatus value from 0 to 1, and a curve corresponding to the selected method is drawn. The solution is displayed in the list view table and plotted as a graph in the input region.

Code7 consists of the files Code7.cpp and Code7.h. The application is based on a single class called CCode7, which is inherited from CFrameWnd. The main variables and objects are organized into four structures: PT, MENU, CURVE, and OUTPUT. Basically, PT has the same set of objects as before. A new addition is the array ipt, which represents the plotted points from the mouse clicks. The array has a size of nPt+1, where nPt is the total number of plotted points. Therefore, ipt is not the same as pt; the latter is an array that represents all the points that make up a curve. PT is given as follows:


Figure 7.5. Schematic drawing of the computational steps in Code7.

typedef struct
{
    double x, y;
}PT;
PT *pt, *ipt, max, min, left, right;

OUTPUT represents the output items that are accessed through output:

typedef struct
{
      CPoint hm, end;
      CRect rc;
}OUTPUT;
OUTPUT output;

MENU and CURVE are structures that represent the menu and curve items. The structures are accessed through menu and curve, respectively. The two structures are the same as the ones discussed in Chapter 6. Hence, their contents and components will not be discussed any more here.

Table 7.1. Other variables and objects in CCode7

Variable/Object    Type           Description
maxPt              int (macro)    Maximum number of plotted points allowed.
nMenuItems         int (macro)    Total number of menu items.
m                  int (macro)    Total number of subintervals of x for drawing the curve.
fStatus            bool           Flag whose values are fStatus=0 and fStatus=1, indicating incomplete and complete inputs, respectively.
fMenu              int            Flag for the menu whose values indicate the selected method: fMenu=1 for Lagrange, fMenu=2 for Newton's divided-difference, fMenu=3 for cubic spline, fMenu=4 for linear least-squares, and fMenu=5 for cubic least-squares.
m1, c1, m2, c2     double         Conversion variables from the real coordinates to Windows coordinates.
h                  double         Width of the subintervals of x for drawing the curve.
nPt                int            Actual number of plotted points from the mouse clicks.
idc                int            Id for the control resources.
btn                CButton        Push button object called Compute.
table              CListCtrl      List view table for displaying the results from iterations.

Table 7.1 lists other main variables and objects in CCode7. The maximum number of points allowed to be plotted is 20, and this number is represented by a macro called maxPt. The actual number of plotted points is the integer nPt, whose initial value is 0. This value is incremented by one each time a point is clicked in the input region. There are five items in the menu, and they are represented by nMenuItems. The selected item in the menu is identified through fMenu, where fMenu=1 is the Lagrange method, fMenu=2 is the Newton's divided-difference method, fMenu=3 is the cubic spline method, fMenu=4 is the linear least-squares method, and fMenu=5 is the cubic least-squares method. Conversion variables from the real coordinates to Windows coordinates are represented by m1, c1, m2, and c2. These variables are declared global as they are called by more than one function in this project.

Table 7.2 lists the functions in CCode7. Each interpolation and approximation method is represented by a function: Lagrange() for the Lagrange method, DDifference() for the Newton's divided-difference method, CSpline() for the cubic spline method, and LSquare() for the linear and cubic least-squares methods.

Mouse's Left-Click Event

In Code7, the left button click is an event that represents two separate jobs: selecting an item in the menu and plotting points in the input region. The two jobs are recognized inside OnLButtonDown() through menu[k].rc.PtInRect(px) and curve.rc.PtInRect(px), respectively. The first option causes the program to call a method for interpolating or approximating the points and to produce the desired output. The second option invalidates the main window by drawing a small rectangle for each plotted point. The code fragments for this function are given as

Table 7.2. Member functions in CCode7

Function            Description
CCode7()            Constructor.
~CCode7()           Destructor.
Lagrange()          Interpolates the plotted points using the Lagrange method.
DDifference()       Interpolates the plotted points using the Newton's divided-difference method.
CSpline()           Interpolates the plotted points using the cubic spline method.
LSquare()           Approximates the plotted points using the linear and cubic least-squares methods.
DrawCurve()         Draws the curve y = f(x) in the given interval.
ShowTable()         Creates a list view table to display the plotted points.
OnLButtonDown()     Responds to ON_WM_LBUTTONDOWN, which allows points to be clicked in the input region and a menu item to be selected.
OnPaint()           Displays and updates the output in the main window.

void CCode7::OnLButtonDown(UINT nFlags, CPoint px)
{
     CRect rc;
     for (int k=1;k<=nMenuItems;k++)
           if (menu[k].rc.PtInRect(px))
               if (nPt>1)
               {
                    fMenu=k;
                    table.DestroyWindow();
                     switch(fMenu)
                    {
                         case 1:
                           Lagrange(); break;
                         case 2:
                             DDifference(); break;
                         case 3:
                             CSpline(); break;
                         case 4:
                         case 5:
                           LSquare(); break;
                   }
                    fStatus=1;
                   ShowTable();
                   InvalidateRect(curve.rc);
               }
   if (curve.rc.PtInRect(px))
  {
         ipt[nPt].x=(px.x-c1)/m1;
         ipt[nPt].y=(px.y-c2)/m2;
         rc=CRect(px.x, px.y, px.x+5, px.y+5);
         InvalidateRect(rc);
         nPt++;
  }
}

Main Window Update

The main display is constantly updated in OnPaint() whenever a point is clicked or the menu is activated. A point click in the input area causes a small rectangle to be drawn, whereas a click on the menu activates the selected method for solving the problem and displays the curve in the main window. The code fragments are

void CCode7::OnPaint()
{
     int i, k;
     CPaintDC dc(this);
     CString str;
     CPoint px;
     CRect rc;
     dc.SetBkColor(RGB(150, 150, 150));
     dc.SetTextColor(RGB(255, 255, 255));
     dc.SelectObject(Arial80);
     for (i=1;i<=nMenuItems;i++)
     {
         dc.FillSolidRect(&menu[i].rc, RGB(150, 150, 150));
         dc.TextOut(menu[i].hm.x+5, menu[i].hm.y+5, menu[i].item);
     }
     dc.SetBkColor(RGB(255, 255, 255));
     dc.SetTextColor(RGB(100, 100, 100));
     dc.Rectangle(curve.rc);

     // Draw & label the x, y axes
     CPen pGray(PS_SOLID, 1, RGB(100, 100, 100));
     dc.SelectObject(pGray);
     px=CPoint(m1*0+c1, m2*min.y+c2); dc.MoveTo(px);
     px=CPoint(m1*0+c1, m2*max.y+c2); dc.LineTo(px);
     px=CPoint(m1*left.x+c1, m2*0+c2); dc.MoveTo(px);
     px=CPoint(m1*right.x+c1, m2*0+c2); dc.LineTo(px);
     dc.SelectObject(Arial80);
     str.Format("%.0lf", left.x); dc.TextOut(px.x, px.y, str);
     str.Format("%.0lf", right.x); dc.TextOut(px.x-10, px.y, str);
     if (fMenu==0)
     {
           px.x=m1*ipt[nPt-1].x+c1;
           px.y=m2*ipt[nPt-1].y+c2;
           rc=CRect(px.x, px.y, px.x+5, px.y+5);
           dc.Rectangle(rc);
     }
     if (fStatus)
     {
          for (i=0;i<=nPt-1;i++)
          {
                px.x=m1*ipt[i].x+c1;
                px.y=m2*ipt[i].y+c2;
                dc.Rectangle(px.x, px.y, px.x+5, px.y+5);
          }
          DrawCurve();
     }
}

Lagrange's Solution

Lagrange's method is handled by Lagrange(). The function performs the computation using Algorithm 7.1. The code segment is given as follows:

void CCode7:: Lagrange()
{
     int i, j, k;
     double L,  h=(ipt[nPt-1].x-ipt[0].x)/m;
     pt[0].x=ipt[0].x;
     pt[0].y=ipt[0].y;
     for (i=0;i<=m;i++)
     {
          pt[i].y=0;
          for (j=0;j<=nPt-1;j++)
          {
                L=1;
                for (k=0;k<=nPt-1;k++)
                      if (j!=k)
                             L*= (pt[i].x-ipt[k].x)/
                                 (ipt[j].x-ipt[k].x);
                 pt[i].y += ipt[j].y*L;
            }
            if (i<m)
                    pt[i+1].x=pt[i].x+h;
      }
}

The Lagrange operator in this function is represented by L, and calculations are made based on the product in Equation (7.4). Interpolation is obtained through the following code segment:

for (i=0;i<=m;i++)
{
     pt[i].y=0;
     for (j=0;j<=nPt-1;j++)
     {
          L=1;
          for (k=0;k<=nPt-1;k++)
               if (j!=k)
                     L *= (pt[i].x-ipt[k].x)/
                          (ipt[j].x-ipt[k].x);
          pt[i].y += ipt[j].y*L;
     }
}

Newton's Divided-Difference Solution

Newton's divided-difference method is slightly simpler to execute than the Lagrange method as it does not require the operator L (x) to be determined from the total number of points. The code fragments for this method, as outlined in Algorithm 7.2, are written in DDifference() as

void CCode7::DDifference()
{
     int i, j, k;
     double **d, product;
     double h=(ipt[nPt-1].x-ipt[0].x)/(double)m;
     d=new double *[maxPt+1];
     for (i=0;i<=maxPt;i++)              // allocate rows 0..maxPt only
          d[i]=new double [maxPt+1];
     pt[0].x=ipt[0].x;
     for (i=0;i<=nPt-1;i++)
          d[0][i]=ipt[i].y;
     for (k=1;k<=nPt-1;k++)
          for (i=0;i<=nPt-1-k;i++)
               d[k][i]=(d[k-1][i+1]-d[k-1][i])/
                       (ipt[k+i].x-ipt[i].x);
     for (i=0;i<=m;i++)
     {
          pt[i].y=d[0][0];
          for (j=1;j<=nPt-1;j++)
          {
               product=1;
               for (k=0;k<=j-1;k++)
                    product *= (pt[i].x-ipt[k].x);
               pt[i].y += d[j][0]*product;
     }
     if (i<m)
          pt[i+1].x=pt[i].x+h;
   }
}

The solution in the divided-difference method is provided by Equation (7.7), and the code for this equation is given by

for (i=0;i<=m;i++)
{
     pt[i].y=d[0][0];
     for (j=1;j<=nPt-1;j++)
     {
           product=1;
           for (k=0;k<=j-1;k++)
                 product *= (pt[i].x-ipt[k].x);
           pt[i].y += d[j][0]*product;
     }
     if (i<m)
           pt[i+1].x=pt[i].x+h;
}

Cubic Spline Solution

The cubic spline solution is provided through CSpline(). The function has been developed based on Algorithm 7.6, and the following code segment lists the full contents of this function:

void CCode7::CSpline()
{
   int i, j, k;
   double *e, *f, *g, *r, *w;
   double h;
   e=new double [maxPt+1];
   f=new double [maxPt+1];
   g=new double [maxPt+1];
   r=new double [maxPt+1];
   w=new double [maxPt+1];

   // form the tridiagonal system
   e[0]=0; e[1]=0;
   f[0]=0; g[0]=0; r[0]=0;
   g[nPt-2]=0;
   for (k=1;k<=nPt-2;k++)
   {
        e[k+1]=ipt[k+1].x-ipt[k].x;
        f[k]=2*(ipt[k+1].x-ipt[k-1].x);
        g[k]=ipt[k+1].x-ipt[k].x;
        r[k]=6*((ipt[k+1].y-ipt[k].y)/(ipt[k+1].x-ipt[k].x)
             +(ipt[k-1].y-ipt[k].y)/(ipt[k].x-ipt[k-1].x));
   }

  // Thomas algorithm to solve the system
  w[0]=0; w[nPt-1]=0;
  for (k=2;k<=nPt-2;k++)
  {
       e[k]/=f[k-1];
       f[k]-=e[k]*g[k-1];
  }
  for (k=2;k<=nPt-2;k++)
        r[k]-=e[k]*r[k-1];
  w[nPt-2]=r[nPt-2]/f[nPt-2];
  for (k=nPt-3;k>=1;k--)
        w[k]=(r[k]-g[k]*w[k+1])/f[k];
  for (k=1;k<=nPt-1;k++)
  {
       A[k]=w[k-1]/(6*(ipt[k].x-ipt[k-1].x));
       B[k]=w[k]/(6*(ipt[k].x-ipt[k-1].x));
       C[k]=ipt[k-1].y/(ipt[k].x-ipt[k-1].x)
            -w[k-1]/6*(ipt[k].x-ipt[k-1].x);
       D[k]=ipt[k].y/(ipt[k].x-ipt[k-1].x)
            -w[k]/6*(ipt[k].x-ipt[k-1].x);
   }
   h=(ipt[nPt-1].x-ipt[0].x)/m;
   pt[0].x=ipt[0].x;
   pt[0].y=ipt[0].y;
   for (i=0;i<=m;i++)
   {
        for (k=1;k<=nPt-1;k++)
             if (pt[i].x>=ipt[k-1].x && pt[i].x<=ipt[k].x)
                   pt[i].y=A[k]*pow(ipt[k].x-pt[i].x, 3)
                           +B[k]*pow(pt[i].x-ipt[k-1].x, 3)
                           +C[k] * (ipt[k].x-pt[i].x)
                           +D[k] * (pt[i].x-ipt[k-1].x);
         if (i<m)
             pt[i+1].x=pt[i].x+h;
   }
   ShowTable();
   delete [] e; delete [] f; delete [] g; delete [] r; delete [] w;
}

The main objective in CSpline() is to solve for the constants Ak, Bk, Ck, and Dk in Equation (7.13), for k = 1, 2, ..., m. The solution is provided by Equations (7.14), which in turn require the evaluation of wk for k = 1, 2, ..., m − 1. In finding the values of wk, two main steps are involved. The first step is the formation of the tridiagonal system of linear equations, as given by Equation (7.16). The second step is the solution of this linear system using any suitable method.

The tridiagonal system of linear equations is formed based on Equations (7.17a), (7.17b), (7.17c), and (7.17d), as follows:

// form the tridiagonal system
e[0]=0; e[1]=0;
f[0]=0; g[0]=0; r[0]=0;
g[nPt-2]=0;
for (k=1;k<=nPt-2;k++)
{
    e[k+1]=ipt[k+1].x-ipt[k].x;
    f[k]=2* (ipt[k+1].x-ipt[k-1].x);
    g[k]=ipt[k+1].x-ipt[k].x;
    r[k]=6*((ipt[k+1].y-ipt[k].y)/(ipt[k+1].x-ipt[k].x)
        +(ipt[k-1].y-ipt[k].y)/(ipt[k].x-ipt[k-1].x));
}

The tridiagonal system of linear equations is solved using the Thomas algorithm, as discussed in Chapter 5. The code fragments are given by

// Thomas algorithm to solve the system
w[0]=0; w[nPt-1]=0;
for (k=2;k<=nPt-2;k++)
{
   e[k]/=f[k-1];
   f[k]-=e[k]*g[k-1];
}
for (k=2;k<=nPt-2;k++)
      r[k]-=e[k]*r[k-1];
w[nPt-2]=r[nPt-2]/f[nPt-2];
for (k=nPt-3;k>=1;k--)
      w[k]=(r[k]-g[k]*w[k+1])/f[k];

The computed values of w k are applied to determine the values of the constants A k, B k, C k, and D k in Equations (7.14), as follows:

for (k=1;k<=nPt-1;k++)
{
     A[k]=w[k-1]/(6*(ipt[k].x-ipt[k-1].x));
     B[k]=w[k]/(6*(ipt[k].x-ipt[k-1].x));
     C[k]=ipt[k-1].y/(ipt[k].x-ipt[k-1].x)
          -w[k-1]/6*(ipt[k].x-ipt[k-1].x);
     D[k]=ipt[k].y/(ipt[k].x-ipt[k-1].x)
          -w[k]/6*(ipt[k].x-ipt[k-1].x);
}

The rest of the code in CSpline() generates (xi, y i) for i = 0, 1, ..., m for plotting the cubic spline. The code is given by

for (i=0;i<=m;i++)
{
    for (k=1;k<=nPt-1;k++)
         if (pt[i].x>=ipt[k-1].x && pt[i].x<=ipt[k].x)
              pt[i].y=A[k]*pow(ipt[k].x-pt[i].x, 3)
                      +B[k]*pow(pt[i].x-ipt[k-1].x, 3)
                      +C[k]*(ipt[k].x-pt[i].x)+D[k]*(pt[i].x
                      -ipt[k-1].x);
    if (i<m)
         pt[i+1].x=pt[i].x+h;
}

Least-Squares Solution

In the least-squares approximation, linear and cubic polynomials are illustrated in items 4 and 5 in the menu. A single function called LSquare() represents the solution to both problems, as the only difference between them is the size of the systems of linear equations generated, two and four, respectively. The solution is provided by Algorithm 7.7, and the code fragments are given by

void CCode7::LSquare()
{
     int i, j, k, sa, sv;
     double **a, *b, *S, *v, sum;
     double h=(ipt[nPt-1].x-ipt[0].x)/(double)m;
     pt[0].x=ipt[0].x;
     pt[0].y=ipt[0].y;
     max.y=pt[0].y; min.y=pt[0].y;
     switch (fMenu)
    {
         case 4: sa=2; sv=1; break;
         case 5: sa=6; sv=3; break;
    }
    a=new double *[sa+1];
    S=new double [sa+1];
    b=new double [sv+1];
    v=new double [sv+1];
    for (i=1;i<=sa;i++)                  // rows 1..sa are used; avoid writing past a[sa]
         a[i]=new double [sa+1];
    for (k=0;k<=sa;k++)
    {
        S[k]=0;
        if (k<=sv)
               v[k]=0;
    }
    for (k=0;k<=sa;k++)
          for (i=0;i<=nPt-1;i++)
          {
               S[k] += ((ipt[i].x==0&&k==0)?1:pow(ipt[i].x, k));
               if (k<=sv)
                     v[k] += ipt[i].y*((ipt[i].
                             x==0&&k==0)?1:pow(ipt[i].x, k));
          }

    //Form the SLE
    for (i=1;i<=sv+1;i++)
    {
          for (j=1;j<=i;j++)
          {
               a[i][j]=S[i+j-2];
               a[j][i]=a[i][j];
          }
          b[i]=v[i-1];
     }
// Solve the SLE
    double m1, Sum;
    for (k=1;k<=sv;k++)
         for (i=k+1;i<=sv+1;i++)
         {
              m1=a[i][k]/a[k][k];
              for (j=1;j<=sv+1;j++)
                    a[i][j] -= m1*a[k][j];
              b[i] -= m1*b[k];
          }
     for (i=sv+1;i>=1;i--)
     {
           Sum=0;
           c[i-1]=0;
           for (j=i;j<=sv+1;j++)
                Sum += a[i][j]*c[j-1];
           c[i-1]=(b[i]-Sum)/a[i][i];
      }
      for (i=0;i<=m;i++)
      {
           if (fMenu==4)
                pt[i].y=c[0]+c[1] *pt[i].x;
           if (fMenu==5)
                pt[i].y=c[0]+c[1]*pt[i].x+c[2]*pow(pt[i].x, 2)
                       +c[3]*pow(pt[i].x, 3);
           if (i<m)
                pt[i+1].x=pt[i].x+h;
      }
}

In the above code, linear polynomial approximation is recognized through the assignment fMenu=4, whereas cubic approximation is recognized through fMenu=5. Two major steps are involved in the least-squares method. The first step is the formation of the system of linear equations, and the second is the solution of this system using any suitable method. Linear approximation is represented by y = c0 + c1x. There are two unknowns to be determined from this equation, and this leads to the formation of a 2 × 2 system of linear equations, given by

Equation 7.73. 

$$\begin{bmatrix} S_0 & S_1 \\ S_1 & S_2 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \end{bmatrix} = \begin{bmatrix} v_0 \\ v_1 \end{bmatrix}.$$

Similarly, a cubic polynomial approximation, y = c0 + c1x + c2x2 + c3x3, requires the formation of a 4 × 4 system of linear equations, given as

Equation 7.74. 

$$\begin{bmatrix} S_0 & S_1 & S_2 & S_3 \\ S_1 & S_2 & S_3 & S_4 \\ S_2 & S_3 & S_4 & S_5 \\ S_3 & S_4 & S_5 & S_6 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \end{bmatrix}.$$

In LSquare(), the size of the matrix is determined by sa, which receives its assignment from fMenu. From this value, the values of Sk and vk can be determined, and this leads to the formation of the system of linear equations. The code for the above equations is given by

for (k=0;k<=sa;k++)
{
     S[k]=0;
     if (k<=sv)
          v[k]=0;
}
for (k=0;k<=sa;k++)
     for (i=0;i<=nPt-1;i++)
     {
          S[k] += ((ipt[i].x==0&&k==0)?1:pow(ipt[i].x, k));
          if (k<=sv)
               v[k] += ipt[i].y*((ipt[i]
                       .x==0&&k==0)?1:pow(ipt[i].x, k));
     }

//Form the SLE
for (i=1;i<=sv+1;i++)
{
     for (j=1;j<=i;j++)
     {
            a[i][j]=S[i+j-2];
            a[j][i]=a[i][j];
     }
     b[i]=v[i-1];
}

The system of linear equations is then solved using the Gaussian elimination method to produce the values of c i, as shown in the following code segment:

// Solve the SLE
double m1, Sum;
for (k=1;k<=sv;k++)
    for (i=k+1;i<=sv+1;i++)
    {
         m1=a[i][k]/a[k][k];
       for (j=1;j<=sv+1;j++)
              a[i][j] -= m1*a[k][j];
       b[i] -= m1*b[k];
    }
for (i=sv+1;i>=1;i--)
{
     Sum=0;
     c[i-1]=0;
     for (j=i;j<=sv+1;j++)
          Sum += a[i][j]*c[j-1];
     c[i-1]=(b[i]-Sum)/a[i][i];
}

The final section of LSquare() generates the points (xi, yi) for i = 0, 1, ..., m for drawing the corresponding curve.

for (i=0;i<=m;i++)
{
     if (fMenu==4)
          pt[i].y=c[0]+c[1]*pt[i].x;
     if (fMenu==5)
          pt[i].y=c[0]+c[1]*pt[i].x+c[2]*pow(pt[i].x, 2)
                      +c[3]*pow(pt[i].x, 3);
     if (i<m)
          pt[i+1].x=pt[i].x+h;
}

SUMMARY

Interpolation and approximation concepts are widely used in science and engineering applications such as data analysis, curve and surface fitting, and forecasting. The output from interpolation and approximation is often expressed as a polynomial that becomes the generalization of the problem.

In this chapter, interpolation and approximation have been illustrated as polynomials and displayed as curves. The techniques discussed are the Lagrange, Newton's divided-difference, cubic spline, and least-squares methods. Each of these methods has its strengths and weaknesses in interpolating or approximating the points, depending on the objectives. For example, in producing low-degree polynomials, it is wise to deploy the least-squares or spline fitting methods.

Interpolation and approximation techniques are widely used in computer graphics and computer-aided design (CAD). In computer graphics, interpolation and approximation contribute to producing graphs that show the relationship between the given points. In computer-aided design, various bodies and structures are constructed based on interpolation and approximation techniques. In either case, interpolation and approximation are regarded as important tools for modeling and simulation.

NUMERICAL EXERCISES

  1. Apply the Lagrange, Newton's divided-difference, and cubic spline methods to interpolate the following sets of data:

    1. {(0, 3), (1, 5), (4, − 1)}.

    2. {(0, 3), (1, 5), (4, − 1), (5, 0)}.

    3. {(0, 3), (1, 5), (2, − 1), (3, 0)}.

  2. The chapter discusses interpolation methods using polynomials only. It is also possible to use other functions, including nonpolynomials, in interpolation. Find the values of a and b in f(x) = a sin x + b cos x, which interpolates the points (0, 0.5) and (π/2, −0.3).

  3. Do some research to study the properties of a spline of degree two, or quadratic spline. Hence, find the quadratic spline that interpolates the points {(0, 3), (1, 5), (4, −1), (5, 0)}.

  4. Use Stirling's method to approximate the value of P(1.5) from the data set given by {(0, 3), (1, 5), (2, −1), (3, 0)}.

PROGRAMMING CHALLENGES

  1. Run Code7 on several sets of plotted points and compare the curves produced by the Lagrange, Newton's divided-difference, cubic spline, and least-squares methods on the same input.

  2. Code7 produces curves based on points clicked in the rectangular region provided in the window. This approach provides an easy mode of input for the user but, at the same time, presents a difficulty: it is not possible to click exact coordinate values using the mouse. To overcome this difficulty, it is wise to provide another interface where the points can be entered using edit boxes. Improve on Code7 by adding this mode of input to the interface.

  3. In Code7, the points for plotting the curves must be plotted in order from left to right only. Breaking this rule produces undesirable results in the generated curve. Improve on the code to allow the points to be plotted in any order.

  4. A quadratic spline, instead of a cubic spline, may prove to be a better choice in interpolating points for producing a low-degree polynomial. Find some suitable references for studying the properties of this method, and produce a program for generating its curve.

  5. Newton's forward-difference and backward-difference methods and Stirling's method are based on equal-width subintervals. Devise a program for these methods using mouse clicks as input. The input should start with a single interval whose left and right end points are marked through mouse clicks. The program then divides the interval into m equal-width subintervals automatically, where the value of m is supplied by the user.
