$$\mu_4 = \mu_4' - 4\mu_3'\mu_1' + 6\mu_2'(\mu_1')^2 - 3(\mu_1')^4 = 26 - 4(5.5)(1) + 6(2.5)(1)^2 - 3(1)^4 = 26 - 22 + 15 - 3 = 16$$

Hence $\mu_1 = 0$, $\mu_2 = 1.5$, $\mu_3 = 0$, $\mu_4 = 16$.

4.17.2 Moments About Origin

Definition 4.25

The rth moment about the origin of a random variable X, denoted by $V_r$, is the expected value of $X^r$; symbolically,

$$V_r = E[X^r] = \sum_{i=1}^{n} x_i^r p_i$$

For r=0, 1, 2, … when X is discrete, and

$$V_r = E[X^r] = \int_{-\infty}^{\infty} x^r f(x)\,dx$$

when X is continuous.

Relation between $\mu_r$ and $V_r$: If $\mu$ is the arithmetic mean of the distribution of X, then

$$V_r = \mu_r + {}^{r}C_1\,\mu_{r-1}\,\mu + {}^{r}C_2\,\mu_{r-2}\,\mu^2 + \cdots + \mu^r$$

Putting r = 1, 2, 3, … we get

$$V_1 = \mu,\qquad V_2 = \mu_2 + \mu^2,\qquad V_3 = \mu_3 + 3\mu_2\mu + \mu^3$$

Remark

From the definition of the moment about an arbitrary number A, we have

$$\mu_r' = E[(X-A)^r] = \sum_{i=1}^{n} (x_i - A)^r p_i,\qquad r = 0, 1, 2, \ldots$$

Taking A=0, we obtain

$$\mu_r' = E[X^r] = \sum_{i=1}^{n} x_i^r p_i$$

i.e.,

$$\mu_r' = V_r$$

4.17.3 Skewness and Kurtosis

A measure of skewness is defined by

$$S_k = \frac{E[(X-\mu)^3]}{\sigma^3} = \frac{\mu_3}{\mu_2^{3/2}}$$

If we define

$$\beta_1 = \frac{\mu_3^2}{\mu_2^3}$$

then the moment coefficient of skewness $= \dfrac{\mu_3}{\mu_2^{3/2}} = \pm\sqrt{\beta_1}$,

where the sign of $\sqrt{\beta_1}$ is taken to be that of $\mu_3$.

The moment coefficient of skewness is also denoted by $\gamma_1$. Thus

$$\gamma_1 = \frac{\mu_3}{\mu_2^{3/2}} = \pm\sqrt{\beta_1}$$

If the measure of skewness is positive, we say that the distribution is positively skewed or right tailed; if the measure of skewness is negative, we say that the distribution is negatively skewed or left tailed.

If the arithmetic mean of the distribution is greater than the mode, the distribution is positively skewed (i.e., right tailed), and if the arithmetic mean is less than the mode, the distribution is negatively skewed (i.e., left tailed). In a symmetrical distribution the quartiles are equidistant from the median.

A measure of kurtosis tells us the extent to which a distribution is more peaked or more flat topped than the normal curve. If the curve of a frequency distribution is more peaked than the normal curve, it is said to be “Leptokurtic.” The curve of a frequency distribution is called “Platykurtic” if it is more flat topped than the normal curve. If the curve of a frequency distribution is neither flat nor sharply peaked, the distribution is called “Mesokurtic.” The measure of kurtosis, denoted by β2, is defined as

$$\beta_2 = \frac{\mu_4}{\mu_2^2}$$

where μ2 and μ4 are, respectively, the second and fourth central moments about the mean. Kurtosis is also known as the convexity (or bulginess) of a curve.

If β2<3, the distribution is said to be “Platykurtic”

If β2=3, the distribution is said to be “Mesokurtic”

If β2>3, the distribution is said to be “Leptokurtic.”

The kurtosis of a distribution is also measured by considering the value of β2 − 3, which is denoted by γ2.

Thus

$$\gamma_2 = \beta_2 - 3$$

If γ2 = 0, the distribution is “Mesokurtic”;

If γ2 < 0, the distribution is “Platykurtic”; and

If γ2 > 0, the distribution is “Leptokurtic”.

Example 4.34: In a certain distribution the first four moments about x = 5 are 2, 20, 40, and 50. Calculate β1 and β2, and state whether the distribution is Leptokurtic or Platykurtic.

Solution: We have A=5

$$\mu_1' = 2,\qquad \mu_2' = 20,\qquad \mu_3' = 40,\qquad \mu_4' = 50$$

Hence we get $\mu_1 = 0$ (always), and

$$\mu_2 = \mu_2' - (\mu_1')^2 = 20 - (2)^2 = 16$$

$$\mu_3 = \mu_3' - 3\mu_2'\mu_1' + 2(\mu_1')^3 = 40 - 3(2)(20) + 2(2)^3 = 40 - 120 + 16 = -64$$

$$\mu_4 = \mu_4' - 4\mu_3'\mu_1' + 6\mu_2'(\mu_1')^2 - 3(\mu_1')^4 = 50 - 4(40)(2) + 6(20)(2^2) - 3(2)^4 = 50 - 320 + 480 - 48 = 162$$

$$\beta_1 = \frac{\mu_3^2}{\mu_2^3} = \frac{(-64)^2}{16^3} = 1,\qquad \beta_2 = \frac{\mu_4}{\mu_2^2} = \frac{162}{16^2} = 0.63$$

Since β2<3, the curve of the distribution is Platykurtic.

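These conversions are purely mechanical, so they are easy to check by machine. Below is a minimal Python sketch (the helper name `central_moments` is our own, not from the text) that reproduces the numbers of Example 4.34:

```python
# Central moments from the first four moments about an arbitrary point,
# using mu2 = m2 - m1^2, mu3 = m3 - 3*m2*m1 + 2*m1^3,
# mu4 = m4 - 4*m3*m1 + 6*m2*m1^2 - 3*m1^4.

def central_moments(m1, m2, m3, m4):
    """Return (mu2, mu3, mu4) given moments m1..m4 about an arbitrary point."""
    mu2 = m2 - m1**2
    mu3 = m3 - 3*m2*m1 + 2*m1**3
    mu4 = m4 - 4*m3*m1 + 6*m2*m1**2 - 3*m1**4
    return mu2, mu3, mu4

# Example 4.34: moments about x = 5 are 2, 20, 40, 50
mu2, mu3, mu4 = central_moments(2, 20, 40, 50)
beta1 = mu3**2 / mu2**3   # measure of skewness
beta2 = mu4 / mu2**2      # measure of kurtosis

print(mu2, mu3, mu4)           # 16 -64 162
print(beta1, round(beta2, 2))  # 1.0 0.63  -> beta2 < 3, platykurtic
```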
Example 4.35: The density of a random variable X is given by f(x)=kx(2−x), 0≤x ≤2. Find

a. k

b. Mean

c. Variance

d. β1 and β2

e. rth moment.

Solution:

a. Since $\int_{-\infty}^{\infty} f(x)\,dx = 1$, i.e., $\int_0^2 f(x)\,dx = 1$, we have $\int_0^2 kx(2-x)\,dx = 1$,
or $k\left[x^2 - \dfrac{x^3}{3}\right]_0^2 = 1$,
or $k\left[4 - \dfrac{8}{3}\right] = 1$,
or $\dfrac{4k}{3} = 1$, i.e., $k = \dfrac{3}{4}$.

b. Mean $= E(X) = \int_{-\infty}^{\infty} x f(x)\,dx = \dfrac{3}{4}\int_0^2 (2x^2 - x^3)\,dx = \dfrac{3}{4}\left[\dfrac{2x^3}{3} - \dfrac{x^4}{4}\right]_0^2 = \dfrac{3}{4}\left[\dfrac{16}{3} - \dfrac{16}{4}\right] = \dfrac{3}{4}\cdot\dfrac{4}{3} = 1$

c. Using the rth moment about the origin found in part (e), $\mu_r' = \dfrac{3\cdot 2^{r+1}}{(r+2)(r+3)}$, we have

$$\mu_2' = \frac{3\cdot 2^3}{4\cdot 5} = \frac{6}{5},\qquad \mu_3' = \frac{3\cdot 2^4}{5\cdot 6} = \frac{8}{5},\qquad \mu_4' = \frac{3\cdot 2^5}{6\cdot 7} = \frac{16}{7}$$

d. Hence

$$\mu_1' = 1,\qquad \mu_2 = \mu_2' - (\mu_1')^2 = \frac{6}{5} - 1^2 = \frac{1}{5}$$

$$\mu_3 = \mu_3' - 3\mu_2'\mu_1' + 2(\mu_1')^3 = \frac{8}{5} - \frac{18}{5} + 2 = 0$$

$$\mu_4 = \mu_4' - 4\mu_3'\mu_1' + 6\mu_2'(\mu_1')^2 - 3(\mu_1')^4 = \frac{16}{7} - \frac{32}{5} + \frac{36}{5} - 3 = \frac{3}{35}$$

e. $\mu_r' = \displaystyle\int_0^2 x^r f(x)\,dx = \frac{3}{4}\int_0^2 x^r\,x(2-x)\,dx = \frac{3}{4}\int_0^2 (2x^{r+1} - x^{r+2})\,dx = \frac{3}{4}\left[\frac{2x^{r+2}}{r+2} - \frac{x^{r+3}}{r+3}\right]_0^2 = \frac{3}{4}\left[\frac{2^{r+3}}{r+2} - \frac{2^{r+3}}{r+3}\right] = \frac{3}{4}\cdot\frac{2^{r+3}}{(r+2)(r+3)} = \frac{3\cdot 2^{r+1}}{(r+2)(r+3)}$
∴ $E(X) = \mu_1' = \dfrac{3\cdot 2^2}{3\cdot 4} = 1$, i.e., $\mu = 1$.
Therefore

$$\beta_1 = \frac{\mu_3^2}{\mu_2^3} = 0$$


and

$$\beta_2 = \frac{\mu_4}{\mu_2^2} = \frac{3/35}{(1/5)^2} = \frac{15}{7}$$

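Example 4.35 can also be verified symbolically. The following sketch uses sympy (our choice of tool, not the text's) with exact rational arithmetic:

```python
# Symbolic verification of Example 4.35 with sympy.
import sympy as sp

x, k, r = sp.symbols('x k r', positive=True)

# (a) normalization gives k = 3/4
k_val = sp.solve(sp.integrate(k*x*(2 - x), (x, 0, 2)) - 1, k)[0]
f = k_val * x * (2 - x)

m = lambda n: sp.integrate(x**n * f, (x, 0, 2))  # nth moment about the origin

mean = m(1)
mu2 = m(2) - mean**2
mu3 = m(3) - 3*m(2)*mean + 2*mean**3
mu4 = m(4) - 4*m(3)*mean + 6*m(2)*mean**2 - 3*mean**4

print(k_val, mean, mu2, mu3, mu4)     # 3/4 1 1/5 0 3/35
print(mu3**2 / mu2**3, mu4 / mu2**2)  # beta1 = 0, beta2 = 15/7

# (e) general rth moment; should simplify to 3*2**(r+1)/((r+2)*(r+3))
print(sp.simplify(sp.integrate(x**r * f, (x, 0, 2))))
```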
Exercise 4.5

1. The first four central moments of a distribution are 0, 2.5, 0.7, and 18.75. Test the skewness and kurtosis of the distribution.

Ans: Positively skewed: β1=0.031, Mesokurtic (β2=3)

2. Find the first four moments about the mean for the series 4, 5, 6, 1, 4.

Ans: μ1=0, μ2=2.8, μ3=−3.6, and μ4=19.6

3. The first four moments of a distribution, about the value 35 are −1.8, 240, −1020, and 14,400. Find the values for μ1, μ2, μ3, and μ4?

Ans: μ1=0, μ2=236.76, μ3=264.34, and μ4=11690.11

4. The first four moments of a distribution about x=4 of the variable are −1.5, 17, −30, and 108. Find the moments for μ1, μ2, μ3, and μ4 about mean?

Ans: 0, 14.75, 39.75, and 142.31.

5. The first three central moments of a distribution are 0, 2.5, and 0.7. Find the moment coefficient of skewness.

Ans: 0.177

6. The first four moments of a distribution about the value 5 of the variable are 2, 20, 40, and 50. Calculate the moment of skewness.

Ans: −1

7. The first four central moments of a distribution are 0, 2.5, 0.7, and 18.75. Test the kurtosis of the distribution.

Ans: Mesokurtic (β2=3).

8. The first three central moments of a distribution are 0, 15, and −31. Find the moment coefficient of skewness.

Ans: −0.53

9. Calculate the first five moments about the mean for the series 4, 7, 10, 13, 16, 19, 22.

Ans: 0, 36, 0, 2268, 0

10. The first four moments of a distribution about x=4 are −1.5, 17, −30, and 108. Find the moments about the mean?

Ans: 0, 14.75, 39.75, 142.31

11. X is a random variable whose density function is

$$f(x) = A e^{-x},\quad 0 < x < \infty;\qquad f(x) = 0,\ \text{otherwise}$$


Find the value of

a. A

b. Mean of X

c. Variance of X

d. Third moment about the mean

e. Kurtosis

f. rth moment about origin.

Ans: (a) A=1; (b) 1; (c) 1; (d) μ3=2; (e) β2=9 (Leptokurtic); (f) r!

12. The first three moments of a distribution about the value 2 are 1, 16, −40. Find the mean and variance?

Ans: Mean=3, Variance=15

13. The first three moments of a distribution about the value 3 are 2, 10, −30. Show that the moments about x=0 are 5, 31, 141. Find the mean and variance.

Ans: μ=5, σ2=6

14. Given the pdf $f(x) = 1 - x^2$, $0 < x < 1$; $f(x) = 0$, elsewhere.
Obtain

a. kth moment about the origin

b. First three moments about the mean

c. The mean and variance

Ans: (a) $\dfrac{2}{(k+1)(k+3)}$; (b) $\mu_1 = 0$; (c) $\dfrac{1}{4}$, $\dfrac{17}{240}$

15. Show that for the exponential distribution $dP = y_0 e^{-x/\sigma}\,dx$, $0 \le x < \infty$, $\sigma > 0$, $y_0$ being a constant, the mean and the SD are both equal to σ, and that the interquartile range is $\sigma \log_e 3$. Also find $\mu_r'$ and show that $\beta_1 = 4$ and $\beta_2 = 9$.
Hint:

$$y_0\int_0^\infty e^{-x/\sigma}\,dx = 1 \;\Rightarrow\; y_0 = \frac{1}{\sigma},\qquad \mu_r' = \frac{1}{\sigma}\int_0^\infty x^r e^{-x/\sigma}\,dx = r!\,\sigma^r$$


We get

$$\mu_1' = \sigma,\qquad \mu_2' = 2\sigma^2,\qquad \mu_3' = 6\sigma^3,\qquad \mu_4' = 24\sigma^4$$

so that the central moments are $\mu_2 = \sigma^2$, $\mu_3 = 2\sigma^3$, and $\mu_4 = 9\sigma^4$.


Hence

$$\beta_1 = \frac{\mu_3^2}{\mu_2^3} = 4,\qquad \beta_2 = \frac{\mu_4}{\mu_2^2} = 9$$

$$\frac{1}{\sigma}\int_0^{Q_1} e^{-x/\sigma}\,dx = \frac{1}{4}\ \text{gives}\ e^{-Q_1/\sigma} = \frac{3}{4};\qquad \frac{1}{\sigma}\int_0^{Q_3} e^{-x/\sigma}\,dx = \frac{3}{4}\ \text{gives}\ e^{-Q_3/\sigma} = \frac{1}{4}$$


Hence $e^{(Q_3 - Q_1)/\sigma} = 3$, or $Q_3 - Q_1 = \sigma \log_e 3$.

16. For the triangular distribution $dp = \dfrac{1}{a}\left[1 - \dfrac{|x-b|}{a}\right]dx$, $|x - b| < a$, show that the mean is $b$ and the variance is $\dfrac{a^2}{6}$.

17. For the rectangular distribution dp = dx, 1 ≤ x ≤ 2, show that AM > GM > HM.

18. For the triangular distribution with density function f(x) = 1 − |1 − x|, 0 < x < 2, show that the mean is 1 and the variance is 1/6.

4.18 Moment Generating Function

Definition 4.26

The moment generating function (mgf) of a random variable X, denoted by MX(t), is defined by

$$M_X(t) = E[e^{tX}] = \sum_x e^{tx}\,P(X = x)$$

when X is a discrete random variable, and

$$M_X(t) = E[e^{tX}] = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx\qquad (t \text{ is an independent variable})$$

when X is a continuous random variable.

The mgf of X is also denoted by M(t). The moments of X may all exist even when the mgf does not.

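To make the definition concrete, here is a short sympy sketch (a hypothetical example of ours, a fair six-sided die) that builds $M_X(t) = \sum_x e^{tx}P(X=x)$ term by term:

```python
# mgf of a discrete random variable, computed straight from the definition.
# Hypothetical example: X is the outcome of a fair six-sided die.
import sympy as sp

t = sp.symbols('t')
M = sum(sp.exp(t*x) * sp.Rational(1, 6) for x in range(1, 7))

print(M.subs(t, 0))              # 1    (M(0) = E[e^0] = 1 always)
print(sp.diff(M, t).subs(t, 0))  # 7/2  (M'(0) = E[X])
```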
4.19 Properties of Moment Generating Function

Theorem 8

If a, c are constants and X is a random variable, then

1. $M_{cX}(t) = M_X(ct)$

2. $M_{\frac{X-a}{c}}(t) = e^{-at/c}\,M_X\!\left(\dfrac{t}{c}\right)$

Proof

1. By definition, we have

$$M_{cX}(t) = E[e^{t(cX)}] = E[e^{(ct)X}] = M_X(ct)$$

2. $M_{\frac{X-a}{c}}(t) = E\!\left[e^{t\left(\frac{X-a}{c}\right)}\right] = E\!\left[e^{\frac{t}{c}X}\,e^{-\frac{at}{c}}\right] = e^{-\frac{at}{c}}\,E\!\left[e^{\left(\frac{t}{c}\right)X}\right] = e^{-\frac{at}{c}}\,M_X\!\left(\dfrac{t}{c}\right)$

Theorem 9

If MX[t] and MY[t] are the mgf of the independent random variables X and Y, then

$$M_{X+Y}(t) = M_X(t)\,M_Y(t)$$

i.e., the mgf of the sum of two independent random variables is equal to the product of their respective mgf.

Proof

$$M_{X+Y}(t) = E[e^{t(X+Y)}] = E[e^{tX+tY}] = E[e^{tX}\,e^{tY}] = E[e^{tX}]\,E[e^{tY}] = M_X(t)\,M_Y(t)$$

(the factorization of the expectation uses the independence of X and Y)

Hence proved.

Theorem 10

If X is a random variable, then

$$M_X(t) = 1 + \mu_1' t + \mu_2'\frac{t^2}{2!} + \mu_3'\frac{t^3}{3!} + \cdots + \mu_r'\frac{t^r}{r!} + \cdots$$

where

$\mu_r'$ (r = 1, 2, 3, …) are the moments about the origin.

Proof

$$M_X(t) = E[e^{tX}] = E\left[1 + tX + \frac{t^2X^2}{2!} + \frac{t^3X^3}{3!} + \cdots + \frac{t^rX^r}{r!} + \cdots\right] = 1 + tE(X) + \frac{t^2}{2!}E(X^2) + \frac{t^3}{3!}E(X^3) + \cdots + \frac{t^r}{r!}E(X^r) + \cdots \tag{4.11}$$

From the definition, we have

$$E[(X - A)^r] = \mu_r'$$

where A is an arbitrary number.

Putting A = 0, we get $E[X^r] = \mu_r'$, i.e.,

$$E(X) = \mu_1',\qquad E(X^2) = \mu_2',\qquad E(X^3) = \mu_3',\ \ldots$$

Substituting in Eq. (4.11), we get

$$M_X(t) = 1 + \mu_1' t + \mu_2'\frac{t^2}{2!} + \mu_3'\frac{t^3}{3!} + \cdots + \mu_r'\frac{t^r}{r!} + \cdots$$

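Equivalently, Theorem 10 says that $\mu_r'$ is the rth derivative of $M_X(t)$ at $t = 0$. A quick sympy sketch of that reading, using the Bernoulli mgf $M(t) = q + pe^t$ purely as an illustration:

```python
# Raw moments as derivatives of the mgf at t = 0 (Theorem 10).
import sympy as sp

t, p = sp.symbols('t p')
M = (1 - p) + p*sp.exp(t)   # mgf of a Bernoulli(p) variable

for r in (1, 2, 3):
    mu_r = sp.diff(M, t, r).subs(t, 0)
    print(r, mu_r)          # every raw moment of Bernoulli(p) equals p
```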
4.19.1 Solved Examples

Example 4.36: The pdf of the random variable X has the following probability law:

$$P(x) = \frac{1}{2\theta}\,e^{-|x-\theta|/\theta},\qquad -\infty < x < \infty$$

Find the mgf of X. Hence find E(X) and Var[X]?

Solution: The moment generating function (mgf) of X is

$$M_X(t) = E[e^{tX}] = \int_{-\infty}^{\infty} e^{tx}\,\frac{1}{2\theta}\,e^{-|x-\theta|/\theta}\,dx = \frac{1}{2\theta}\int_{-\infty}^{\theta} e^{tx}\,e^{(x-\theta)/\theta}\,dx + \frac{1}{2\theta}\int_{\theta}^{\infty} e^{tx}\,e^{-(x-\theta)/\theta}\,dx$$

$$= \frac{e^{-1}}{2\theta}\int_{-\infty}^{\theta} e^{\left(t+\frac{1}{\theta}\right)x}\,dx + \frac{e}{2\theta}\int_{\theta}^{\infty} e^{\left(t-\frac{1}{\theta}\right)x}\,dx = \frac{e^{t\theta}}{2(1+t\theta)} + \frac{e^{t\theta}}{2(1-t\theta)} = \frac{e^{t\theta}}{2}\left[\frac{1}{1+t\theta} + \frac{1}{1-t\theta}\right] = \frac{e^{t\theta}}{1-t^2\theta^2}\qquad (|t\theta| < 1)$$

$$= e^{t\theta}\left[1 - t^2\theta^2\right]^{-1} = \left(1 + t\theta + \frac{t^2\theta^2}{2!} + \cdots\right)\left(1 + t^2\theta^2 + \cdots\right) = 1 + t\theta + 3\theta^2\,\frac{t^2}{2!} + \cdots \tag{4.12}$$

$\mu_1'$ = coefficient of $t$ in Eq. (4.12) $= \theta$

$\mu_2'$ = coefficient of $\dfrac{t^2}{2!}$ in Eq. (4.12) $= 3\theta^2$

Hence

$$\mu_2 = \mu_2' - (\mu_1')^2 = 3\theta^2 - \theta^2 = 2\theta^2$$

$$E(X) = \text{mean} = \theta,\qquad \operatorname{Var}[X] = 2\theta^2$$

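The density in Example 4.36 is a Laplace density centered at θ with scale θ, so the conclusion (mean θ, variance 2θ²) can be sanity-checked by simulation; a sketch assuming θ = 2:

```python
# Monte Carlo check of Example 4.36: E(X) = theta, Var(X) = 2*theta^2.
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0
x = rng.laplace(loc=theta, scale=theta, size=1_000_000)

print(x.mean())  # ~2.0 (theta)
print(x.var())   # ~8.0 (2 * theta^2)
```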
Example 4.37: Find the mgf of a random variable X having the density function

$$f(x) = \frac{x}{2},\quad 0 \le x \le 2;\qquad f(x) = 0,\ \text{otherwise}$$

and use it to find the first four moments about the origin.

Solution: We have

$$M_X(t) = E[e^{tX}] = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx = \int_0^2 e^{tx}\,\frac{x}{2}\,dx = \frac{1}{2}\left[\frac{x\,e^{tx}}{t} - \frac{e^{tx}}{t^2}\right]_0^2 = \frac{1}{2}\left[\frac{2e^{2t}}{t} - \frac{e^{2t} - 1}{t^2}\right] = \frac{1}{2t^2}\left[1 + 2t\,e^{2t} - e^{2t}\right]$$

Consider

$$M_X(t) = \frac{1}{2t^2} + \frac{1}{t}e^{2t} - \frac{1}{2t^2}e^{2t} = \frac{1}{2t^2} + \left[\frac{1}{t} + 2 + 2t + \frac{4t^2}{3} + \frac{2t^3}{3} + \frac{4t^4}{15} + \cdots\right] - \left[\frac{1}{2t^2} + \frac{1}{t} + 1 + \frac{2t}{3} + \frac{t^2}{3} + \frac{2t^3}{15} + \frac{2t^4}{45} + \cdots\right]$$

$$= 1 + \frac{4}{3}t + t^2 + \frac{8}{15}t^3 + \frac{2}{9}t^4 + \cdots$$

Therefore

$$M_X(t) = 1 + \frac{4}{3}t + 2\,\frac{t^2}{2!} + \frac{16}{5}\,\frac{t^3}{3!} + \frac{16}{3}\,\frac{t^4}{4!} + \cdots$$

is the required mgf.

Since $M_X(t) = 1 + \mu_1' t + \mu_2'\dfrac{t^2}{2!} + \mu_3'\dfrac{t^3}{3!} + \mu_4'\dfrac{t^4}{4!} + \cdots$,

Comparing, we get $\mu_1' = \dfrac{4}{3}$, $\mu_2' = 2$, $\mu_3' = \dfrac{16}{5}$, $\mu_4' = \dfrac{16}{3}$.

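The series expansion above is tedious by hand but immediate with a CAS; a sympy sketch that recovers the same four moments from the closed form of $M_X(t)$:

```python
# Series check of Example 4.37: M(t) = (1 + 2t*e^(2t) - e^(2t)) / (2t^2).
import sympy as sp

t = sp.symbols('t')
M = (1 + 2*t*sp.exp(2*t) - sp.exp(2*t)) / (2*t**2)

s = sp.series(M, t, 0, 5).removeO()
print(sp.expand(s))  # 1 + 4*t/3 + t**2 + 8*t**3/15 + 2*t**4/9

# mu_r' = r! * (coefficient of t^r)
for r in (1, 2, 3, 4):
    print(r, sp.factorial(r) * s.coeff(t, r))  # 4/3, 2, 16/5, 16/3
```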
Exercise 4.6

1. Define moment generating function (mgf).

2. Find the mgf of a random variable X having the density function

$$f(x) = \frac{x}{2},\quad 0 \le x \le 2;\qquad f(x) = 0,\ \text{otherwise}$$

and find the first four moments about the origin.

Ans: $\dfrac{4}{3}$, 2, $\dfrac{16}{5}$, $\dfrac{16}{3}$

3. A random variable X assumes the values $\frac{1}{2}$ and $-\frac{1}{2}$ with probability $\frac{1}{2}$ each. Find the mgf and the first four moments about the origin.

Ans: $M_X(t) = 1 + \dfrac{t^2}{2!\,2^2} + \dfrac{t^4}{4!\,2^4} + \cdots$; $\mu_1' = 0$, $\mu_2' = \dfrac{1}{4}$, $\mu_3' = 0$, $\mu_4' = \dfrac{1}{16}$

4. A random variable has the density function

$$f(x) = e^{-x},\quad x \ge 0;\qquad f(x) = 0,\ \text{otherwise}$$


Determine the mgf about the origin and also about the mean.

Ans:

$M_X(t) = 1 + t + 2\dfrac{t^2}{2!} + 6\dfrac{t^3}{3!} + \cdots$; about the origin: $\mu_1' = 1$, $\mu_2' = 2$, $\mu_3' = 6$, $\mu_4' = 24$; about the mean: $\mu_1 = 0$, $\mu_2 = 1$, $\mu_3 = 2$, $\mu_4 = 9$

5. A random variable X has the probability distribution $f(x) = \frac{1}{8}\,{}^{3}C_x$ for x = 0, 1, 2, and 3. Find the mgf of this random variable and use it to determine $\mu_1'$ and $\mu_2'$.

Ans: $M_X(t) = \frac{1}{8}(1 + e^t)^3$; $\mu_1' = \frac{3}{2}$, $\mu_2' = 3$

6. If a and b are constants, prove the following:

a. $M_{X+a}(t) = e^{at}\,M_X(t)$

b. $M_{\frac{X+a}{b}}(t) = e^{at/b}\,M_X\!\left(\dfrac{t}{b}\right)$

4.20 Discrete Probability Distributions

In this section we shall study some discrete probability distributions, which are derived using the theory of probability for the outcomes of a conceptual experiment. We discuss the following distributions:

1. Binomial distribution

2. Poisson distribution

3. Geometric distribution

4. Uniform distribution

5. Negative binomial distribution

6. Gamma distribution

7. Weibull distribution

The Binomial, Poisson, and Hypergeometric distributions take integer values as random variables. We begin our study by defining the Bernoulli distribution.

A random experiment with only two possible outcomes, success or failure, is called a Bernoulli trial, and a random variable X which takes only the values 0 or 1 is called a Bernoulli variable. The corresponding distribution is called the Bernoulli distribution.

It was discovered in 1713 by James Bernoulli and is defined as follows.

Let X be a Bernoulli random variable, let the probability of success be denoted by p and the probability of failure by q = 1 − p, and let the pmf be defined by

$$P(X = x) = p^x q^{1-x},\quad x = 0, 1;\qquad P(X = x) = 0,\ \text{otherwise}$$

Then the probability distribution of X is called the Bernoulli distribution; thus we have (Table 4.1):

Table 4.1

Bernoulli’s distribution

X = xi	0	1
P(X = xi)	1 − p	p

The mean of Bernoulli’s distribution is

$$\mu = E(X) = \sum_i x_i p_i = 0\cdot(1-p) + 1\cdot p = p$$

The variance of Bernoulli’s distribution is

$$\sigma^2 = \operatorname{Var}(X) = \sum_i (x_i - \mu)^2 p_i = (0-p)^2(1-p) + (1-p)^2\,p = p(1-p)\,[p + (1-p)] = p(1-p) = pq$$

The standard deviation is $\sigma = \sqrt{pq}$.

If n is the number of Bernoulli trials, then Bernoulli's theorem states that the probability of x successes is ${}^{n}C_x\,p^x q^{n-x}$.

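A one-line simulation makes the Bernoulli mean p and variance pq concrete; a sketch with p = 0.3 (an arbitrary choice of ours):

```python
# Empirical check that a Bernoulli(p) variable has mean p and variance p*q.
import numpy as np

rng = np.random.default_rng(1)
p = 0.3
x = rng.binomial(n=1, p=p, size=1_000_000)  # n = 1 gives Bernoulli trials

print(x.mean())  # ~0.30 (= p)
print(x.var())   # ~0.21 (= p*q = 0.3 * 0.7)
```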
4.20.1 Binomial Distribution

The binomial distribution is a discrete distribution and a commonly used probability distribution. It was developed to represent various discrete phenomena that occur in business, the social sciences, the natural sciences, and medical research.

The binomial distribution is widely used because of its relation to the binomial expansion. The following conditions should be satisfied for the application of the binomial distribution:

1. The experiment consists of n identical trials, where n is finite.

2. There are only two possible outcomes in each trial, i.e., each trial is a Bernoulli’s trial. We denote one outcome by S (for success) and other by F (for failure).

3. The probability of S remains the same from trial to trial. The probability of S (Success) is denoted by p and the probability of failure by q (where p+q=1).

4. All the trials are independent.

5. The binomial random variable x is the number of successes in n trials.

If X denotes the number of successes in n trials under the conditions stated above, then X is said to follow a binomial distribution with parameters n and p.

Definition

(Binomial distribution) A discrete random variable taking the values 0, 1, 2, …, n is said to follow a binomial distribution with parameters n and p if its pmf is given by

$$P(X = x) = P(x) = {}^{n}C_x\,p^x q^{n-x},\quad x = 0, 1, 2, \ldots, n;\quad 0 < p < 1,\ q = 1 - p$$

$$P(X = x) = 0,\ \text{otherwise}$$

If X follows a binomial distribution with parameters n and p, we write X ~ B(n, p) or B(x; n, p).

A binomial random variable is the number of successes x in n repeated trials of a binomial experiment. The probability distribution of a binomial random variable is called a binomial distribution (for n = 1 it reduces to the Bernoulli distribution).

A cumulative binomial probability refers to the probability that the binomial random variable falls within a specified range.

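The pmf and the cumulative binomial probability translate directly into code; a sketch using only the Python standard library (`math.comb`), with n = 10 and p = 0.4 as hypothetical values:

```python
# Binomial pmf P(X = x) and cumulative probability P(X <= x).
from math import comb

def binom_pmf(x, n, p):
    """P(X = x) for X ~ B(n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def binom_cdf(x, n, p):
    """P(X <= x), the cumulative binomial probability."""
    return sum(binom_pmf(k, n, p) for k in range(x + 1))

print(binom_pmf(3, 10, 0.4))  # ~0.215
print(binom_cdf(3, 10, 0.4))  # ~0.382
print(sum(binom_pmf(k, 10, 0.4) for k in range(11)))  # 1.0 (total probability)
```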
Remark

We have

$$\sum_{x=0}^{n} P(X = x) = \sum_{x=0}^{n} {}^{n}C_x\,p^x q^{n-x} = (q + p)^n = 1$$

The probabilities are the terms in the binomial expansion of $(q + p)^n$ (i.e., $(p + q)^n$); hence the name binomial distribution.

The binomial distribution is used to analyze the error in experimental results that estimate the proportion of individuals in a population that satisfy a condition of interest.

4.20.2 Expected Frequencies and Fitting of a Binomial Distribution

If we take a large number N of sets of n (Bernoulli) trials each, then the expected or theoretical frequency of getting x successes is given by

$$N\cdot P(X = x) = N\cdot {}^{n}C_x\,p^x q^{n-x},\quad x = 0, 1, 2, \ldots, n$$

The theoretical frequencies of getting 0 successes, 1 success, …, n successes are respectively the 1st, 2nd, …, (n + 1)th terms in the expansion of $N(q + p)^n$.

The expected or theoretical frequencies of the binomial distribution are shown in Table 4.2.

Table 4.2

Theoretical or expected frequencies

Number of successes	Expected frequency
0	$N q^n$
1	$N\,{}^{n}C_1\,p\,q^{n-1}$
2	$N\,{}^{n}C_2\,p^2 q^{n-2}$
⋮	⋮
x	$N\,{}^{n}C_x\,p^x q^{n-x}$
⋮	⋮
n	$N p^n$

4.20.3 Recurrence Relation

Let X be a binomial variable. Since X follows a binomial distribution, we have

$$P(x) = P(X = x) = {}^{n}C_x\,p^x q^{n-x},\quad x = 0, 1, 2, \ldots, n$$

Replacing x by x+1 we get

$$P(x+1) = P(X = x+1) = {}^{n}C_{x+1}\,p^{x+1} q^{n-x-1},\quad x = 0, 1, 2, \ldots, n-1$$

Dividing we get

$$\frac{P(x+1)}{P(x)} = \frac{{}^{n}C_{x+1}\,p^{x+1} q^{n-x-1}}{{}^{n}C_x\,p^x q^{n-x}} = \frac{x!\,(n-x)!}{(x+1)!\,(n-x-1)!}\cdot\frac{p}{q} = \frac{n-x}{x+1}\cdot\frac{p}{q}$$

or

$$P(x+1) = \frac{n-x}{x+1}\cdot\frac{p}{q}\cdot P(x),\quad x = 0, 1, 2, \ldots, n-1$$

Hence the required recurrence relation is

$$P(x+1) = \frac{n-x}{x+1}\cdot\frac{p}{q}\cdot P(x),\quad x = 0, 1, 2, \ldots, n-1$$

Using the above recurrence relation, we can write

$$P(1) = \frac{n}{1}\cdot\frac{p}{q}\cdot P(0),\qquad P(2) = \frac{n-1}{2}\cdot\frac{p}{q}\cdot P(1),\qquad P(3) = \frac{n-2}{3}\cdot\frac{p}{q}\cdot P(2),\ \ldots$$

The recurrence relation is used to find the expected or theoretical frequencies, i.e., for fitting the binomial distribution to a given set of data.

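The recurrence is convenient for fitting because each expected frequency follows from the previous one without recomputing binomial coefficients. A sketch of the fitting loop, with N = 100, n = 5, p = 0.4 as hypothetical values:

```python
# Expected binomial frequencies via the recurrence
# P(x+1) = ((n - x) / (x + 1)) * (p / q) * P(x).

def expected_frequencies(N, n, p):
    q = 1 - p
    P = q**n                 # P(0) = q^n
    freqs = [N * P]
    for x in range(n):       # build P(1), ..., P(n) step by step
        P *= (n - x) / (x + 1) * (p / q)
        freqs.append(N * P)
    return freqs

for x, f in enumerate(expected_frequencies(100, 5, 0.4)):
    print(x, round(f, 2))    # 7.78, 25.92, 34.56, 23.04, 7.68, 1.02
# the frequencies sum to N
```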
4.20.4 Moments, Skewness, and Kurtosis of the Binomial Distribution

Taking an arbitrary origin at 0 successes, we get

$$\mu_1' = \sum_{x=0}^{n} x\,P(x) = \sum_{x=0}^{n} x\,{}^{n}C_x\,p^x q^{n-x} = \sum_{x=1}^{n} x\,\frac{n!}{x!\,(n-x)!}\,p^x q^{n-x} = np\sum_{x=1}^{n} \frac{(n-1)!}{(x-1)!\,(n-x)!}\,p^{x-1} q^{n-x} = np\sum_{x=1}^{n} {}^{n-1}C_{x-1}\,p^{x-1} q^{(n-1)-(x-1)} = np\,(p+q)^{n-1} = np$$

Mean $= \mu_1' = E(X) = np$

$$\mu_2' = E(X^2) = E[X(X-1)] + E(X) = \sum_{x=0}^{n} x(x-1)\,{}^{n}C_x\,p^x q^{n-x} + np = n(n-1)p^2 \sum_{x=2}^{n} {}^{n-2}C_{x-2}\,p^{x-2} q^{(n-2)-(x-2)} + np = n(n-1)p^2\,(p+q)^{n-2} + np = n^2p^2 - np^2 + np$$

Variance $= V[X] = \mu_2' - (\mu_1')^2$

i.e.,

$$\mu_2 = E(X^2) - [E(X)]^2 = n^2p^2 - np^2 + np - (np)^2 = np - np^2 = np(1-p) = npq$$

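The results mean = np and variance = npq are easy to confirm numerically by summing over the pmf; a brief sketch with n = 12, p = 0.35 (arbitrary values of ours):

```python
# Numerical check of the binomial moments: mean = n*p, variance = n*p*q.
from math import comb

n, p = 12, 0.35
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

mean = sum(x * P for x, P in enumerate(pmf))
var = sum(x**2 * P for x, P in enumerate(pmf)) - mean**2

print(round(mean, 4), n * p)           # 4.2  4.2
print(round(var, 4), n * p * (1 - p))  # 2.73 2.73
```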