EFFICIENCY


An unbiased estimator is defined to be efficient if the variance of its sampling distribution is smaller than the variance of the sampling distribution of any other unbiased estimator of the same parameter. In other words, suppose that there are two unbiased estimators T1 and T2 of the same parameter θ. Then the estimator T1 is said to be more efficient than T2 if Var(T1) < Var(T2). In the diagram below, since Var(T1) < Var(T2), T1 is more efficient than T2:

[Figure: sampling distributions of T1 and T2, both centred at θ, with the distribution of T1 narrower than that of T2.]
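As an illustration of this criterion, here is a minimal simulation sketch (the population parameters, sample size and replication count are arbitrary choices) comparing two unbiased estimators of the mean µ of a normal population, namely the sample mean and the sample median:

    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 25, 20_000                      # arbitrary sample size / replications
    samples = rng.normal(loc=50, scale=10, size=(reps, n))

    means = samples.mean(axis=1)              # sampling distribution of the mean
    medians = np.median(samples, axis=1)      # sampling distribution of the median

    print(means.var(), medians.var())         # Var(mean) < Var(median)

Both estimators centre on µ = 50, but the variance of the means comes out smaller, so the sample mean is the more efficient of the two; this comparison is taken up again below.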


The relative efficiency of T1 with respect to T2 is measured by the ratio

Var(T2) / Var(T1),

and, if we multiply this expression by 100, we obtain the relative efficiency in percentage form. It thus provides a criterion for comparing different unbiased estimators of a parameter.

For a population that has a normal distribution, both the sample mean and the sample median are unbiased and consistent estimators of µ, but the variance of the sampling distribution of sample means is smaller than the variance of the sampling distribution of sample medians. Hence, the sample mean is more efficient than the sample median as an estimator of µ. The sample mean may therefore be preferred as an estimator.

Next, we consider various methods of point estimation. A point estimator of a parameter can be obtained by several methods. We shall be presenting a brief account of the following three methods:

METHODS OF POINT ESTIMATION

  • The Method of Moments
  • The Method of Least Squares
  • The Method of Maximum Likelihood

THE METHOD OF LEAST SQUARES

The method of Least Squares, which is due to Gauss (1777–1855) and Markov (1856–1922), is based on the theory of linear estimation. It is regarded as one of the important methods of point estimation. An estimator found by minimizing the sum of squared deviations of the sample values from some function that has been hypothesized as a fit for the data is called the least squares estimator. The method of least squares has already been discussed in connection with regression analysis, which was presented in Lecture No. 15.

You will recall that, when fitting a straight line y = a + bx to real data, ‘a’ and ‘b’ were determined by minimizing the sum of squared deviations between the fitted line and the data points. The y-intercept and the slope of the fitted line, i.e. ‘a’ and ‘b’, are the least squares estimates (respectively) of the y-intercept and the slope of the TRUE line that would have been obtained by considering the entire population of data points, and not just a sample.
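As a small sketch of the computation (the data points here are made up purely for illustration), the least squares estimates of ‘a’ and ‘b’ follow directly from the usual formulas b = Sxy/Sxx and a = ȳ − b x̄:

    import numpy as np

    # hypothetical data points (x, y), for illustration only
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

    # slope: sum of cross-deviations over sum of squared x-deviations
    b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    a = y.mean() - b * x.mean()               # intercept

    print(a, b)   # the line a + b*x minimizes sum((y - (a + b*x))**2)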

METHOD OF MAXIMUM LIKELIHOOD

The method of maximum likelihood is regarded as the MOST important method of estimation, and is the most widely used method. This method was introduced in 1922 by Sir Ronald A. Fisher (1890–1962). The mathematical technique of finding Maximum Likelihood Estimators is a bit advanced, and involves the concept of the Likelihood Function.

RATIONALE OF THE METHOD OF MAXIMUM LIKELIHOOD (ML)

“Consider every possible value that the parameter might have, and for each value compute the probability that the given sample would have occurred if that were the true value of the parameter. That value of the parameter for which the probability of the given sample is greatest is chosen as the estimate.” An estimate obtained by this method is called the maximum likelihood estimate (MLE). It should be noted that the method of maximum likelihood is applicable to both discrete and continuous random variables.
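In symbols: if x₁, x₂, …, xₙ is a random sample from a distribution with probability (or density) function f(x; θ), the likelihood function is

L(θ) = f(x₁; θ) f(x₂; θ) … f(xₙ; θ),

and the maximum likelihood estimate θ̂ is the value of θ that maximizes L(θ) or, equivalently, its logarithm ln L(θ), which is usually easier to differentiate.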

EXAMPLES OF MLEs IN THE CASE OF DISCRETE DISTRIBUTIONS

EXAMPLE 1

For the Poisson distribution given by

P(X = x) = e^(−µ) µ^x / x!,  x = 0, 1, 2, …,

the MLE of µ is X̄ (the sample mean).
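As a brief sketch of how this result is obtained: for a sample x₁, x₂, …, xₙ, the log-likelihood is

ln L(µ) = −nµ + (Σxᵢ) ln µ − Σ ln(xᵢ!),

and setting d ln L/dµ = −n + (Σxᵢ)/µ = 0 yields µ̂ = Σxᵢ/n = X̄.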

EXAMPLE 2

For the geometric distribution given by

P(X = x) = p q^(x−1),  x = 1, 2, 3, …,

the MLE of p is 1/X̄. Hence, the MLE of p is equal to the reciprocal of the sample mean.

EXAMPLE 3

For the Bernoulli distribution given by

P(X = x) = p^x q^(1−x),  x = 0, 1,

the MLE of p is X̄ (the sample mean).

EXAMPLES OF MLEs IN THE CASE OF CONTINUOUS DISTRIBUTIONS

EXAMPLE 1

For the exponential distribution given by

f(x) = θ e^(−θx),  x > 0, θ > 0,

the MLE of θ is 1/X̄ (the reciprocal of the sample mean).
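The maximization can also be seen numerically. The following minimal sketch (the simulated data, seed and grid are arbitrary) evaluates the exponential log-likelihood, n ln θ − θ Σxᵢ, over a grid of θ values and checks that the maximum falls at roughly 1/X̄:

    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.exponential(scale=2.0, size=200)   # true theta = 1/scale = 0.5

    thetas = np.linspace(0.01, 2.0, 2000)
    loglik = len(x) * np.log(thetas) - thetas * x.sum()

    print(thetas[np.argmax(loglik)])           # grid maximizer ...
    print(1 / x.mean())                        # ... close to the analytic MLE 1/X-bar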

EXAMPLE 2

For the normal distribution with parameters µ and σ², the joint ML estimators of µ and σ² are the sample mean X̄ and the sample variance S² (which is not an unbiased estimator of σ²). As indicated many times earlier, the normal distribution is encountered frequently in practice, and, in this regard, it is both interesting and important to note that, in the case of this frequently encountered distribution, the simplest formulae (i.e. the sample mean and the sample variance) fulfill the criteria of the relatively advanced method of maximum likelihood estimation! The last example among the five presented above (the one on the normal distribution) points to another important fact, namely: maximum likelihood estimators are consistent and efficient but not necessarily unbiased. (As we know, S² is not an unbiased estimator of σ².)
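The bias of S² is easy to check by simulation, as in the following minimal sketch (the population variance, sample size and replication count are arbitrary): with divisor n, the average of S² over many samples comes out near ((n − 1)/n) σ² rather than σ² itself.

    import numpy as np

    rng = np.random.default_rng(11)
    sigma2, n, reps = 9.0, 5, 100_000
    samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

    s2 = samples.var(axis=1, ddof=0)   # ML variance S^2, divisor n
    print(s2.mean())                   # close to (n-1)/n * sigma2 = 7.2
    print((n - 1) / n * sigma2)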

EXAMPLE

It is well-known that human weight is an approximately normally distributed variable. Suppose that we are interested in estimating the mean and the variance of the weights of adult males in one particular province of a country. A random sample of 15 adult males from this particular population yields the following weights (in pounds):

131.5 136.9 133.8 130.1 133.9
135.2 129.6 134.4 130.5 134.2
131.6 136.7 135.8 134.5 132.7

Find the maximum likelihood estimates for θ₁ = µ and θ₂ = σ².

SOLUTION

The above data are those of a random sample of size 15 from N(µ, σ²). It has been mathematically proved that the joint maximum likelihood estimators of µ and σ² are X̄ and S². We compute these quantities for this particular sample, and obtain X̄ = 133.43 and S² = 5.10. These are the maximum likelihood estimates of the mean and variance of the population of weights in this particular example.

Having discussed the concept of point estimation in some detail, we now begin the discussion of the concept of interval estimation. As stated earlier, whenever a single quantity computed from the sample acts as an estimate of a population parameter, we call that quantity a point estimate, e.g. the sample mean X̄ is a point estimate of the population mean µ. The limitation of point estimation is that we have no way of ascertaining how close our point estimate is to the true value (the parameter). For example, we know that X̄ is an unbiased estimator of µ, i.e. if we had taken all possible samples of a particular size from the population and calculated the mean of each sample, then the mean of the sample means would have been equal to the population mean µ; but in an actual survey we will be selecting only one sample from the population and will calculate its mean X̄. We will have no way of ascertaining how close this particular X̄ is to µ.

Whereas a point estimate is a single value that acts as an estimate of the population parameter, interval estimation is a procedure of estimating the unknown parameter by specifying a range of values within which the parameter is expected to lie. A confidence interval is an interval computed from the sample observations x₁, x₂, …, xₙ, together with a statement of how confident we are that the interval does contain the population parameter.
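The sample-to-sample variability that motivates interval estimation is easy to visualize by simulation. In this minimal sketch (the population values µ = 133.4 and σ = 2.3 are hypothetical, chosen only to resemble the weights example), each sample of size 15 yields a different X̄, and nothing in a single sample tells us how far its X̄ sits from µ:

    import numpy as np

    rng = np.random.default_rng(3)
    mu, sigma, n = 133.4, 2.3, 15      # hypothetical population values
    for _ in range(5):
        sample = rng.normal(mu, sigma, size=n)
        print(sample.mean())           # a different X-bar every time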

We develop the concept of interval estimation with the help of the example of the Ministry of Transport test to which all cars, irrespective of age, have to be submitted.

EXAMPLE

Let us examine the case of an annual Ministry of Transport test to which all cars, irrespective of age, have to be submitted. The test looks for faulty brakes, steering, lights and suspension, and it is discovered after the first year that approximately the same number of cars have 0, 1, 2, 3, or 4 faults. You will recall that, when we drew all possible samples of size 2 from this uniformly distributed population, the sampling distribution of X̄ was triangular:

[Figure: Sampling distribution of X̄ for n = 2, a triangular distribution, with P(x̄) on the vertical axis (ticks from 1/25 to 5/25) and x̄ running from 0 to 4 on the horizontal axis.]

But when we considered what happened to the shape of the sampling distribution as the sample size was increased, we found that it was somewhat like a normal distribution:

[Figure: Sampling distribution of X̄ for n = 3, roughly bell-shaped, with P(x̄) on the vertical axis (ticks from 4/125 to 20/125).]
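The change of shape from triangular at n = 2 toward a bell shape as n grows can be reproduced by enumerating every possible sample from the uniform faults population {0, 1, 2, 3, 4}, as in this minimal sketch:

    import numpy as np
    from itertools import product

    population = [0, 1, 2, 3, 4]       # 0 to 4 faults, equally likely
    for n in (2, 3):
        # all possible samples of size n, drawn with replacement
        means = [sum(s) / n for s in product(population, repeat=n)]
        values, counts = np.unique(means, return_counts=True)
        print(n, dict(zip(values, counts / len(means))))

For n = 2 the probabilities rise to 5/25 at x̄ = 2 and fall away symmetrically, reproducing the triangular shape above.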
