EECS 501__________________________PROBLEM SET #7__________________________Fall 2001

**ASSIGNED:** October 26, 2001. READ: Stark and Woods pp. 269-303 on estimation (skip 303-312).

**DUE DATE:** November 2, 2001. THIS WEEK: Estimation problems. Last homework before Exam #2.

- A wheel of fortune is known to be calibrated from 0 to some *unknown* number A.
The wheel is spun n times, yielding n independent experimental outcomes
x_{1},x_{2},...,x_{n}. We estimate A using the estimator Â=MAX[x_{1},x_{2},...,x_{n}].
- Is this estimator unbiased? Is it asymptotically unbiased?
- Is this estimator weakly consistent? HINT: What is the pdf of this estimator?
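Though the problem asks for analysis, a quick Monte Carlo sketch can illustrate how the max-based estimate behaves. The values A = 10, the choices of n, and the trial count below are made up, not part of the problem:

```python
import random

def max_estimate(A, n, rng):
    """One experiment: spin a wheel calibrated 0..A a total of n times,
    then estimate A by the largest outcome."""
    return max(rng.uniform(0, A) for _ in range(n))

rng = random.Random(0)
A, trials = 10.0, 20000
avg = {n: sum(max_estimate(A, n, rng) for _ in range(trials)) / trials
       for n in (5, 20, 100)}
print(avg)   # every average sits below A = 10, but less so as n grows
```

Compare the simulated averages with the expected value you derive from the estimator's pdf.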

- A RV x has exponential pdf f_{x|L}(X|L)=Le^{-LX} for X > 0; f_{x|L}(X|L)=0 for X < 0.
Compute the maximum likelihood estimate of L given 5 independent experimental values
x_{1},x_{2},x_{3},x_{4},x_{5} of the random variable x.
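Once you have a closed-form answer, one way to check it is to maximize the log-likelihood numerically. A minimal Python sketch with made-up sample values (the ternary search is valid because this log-likelihood is concave in L):

```python
import math

# Five made-up experimental values x_1..x_5 (not specified by the problem).
xs = [0.2, 1.1, 0.5, 0.9, 0.3]

def loglik(L):
    """Log-likelihood of rate L for i.i.d. samples of f(X) = L*exp(-L*X)."""
    return sum(math.log(L) - L * x for x in xs)

# Ternary search for the maximizer of a concave function on [lo, hi].
lo, hi = 1e-6, 100.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if loglik(m1) < loglik(m2):
        lo = m1
    else:
        hi = m2
L_hat = (lo + hi) / 2
print(L_hat)   # compare against your closed-form MLE evaluated at these xs
```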

- A RV r has exponential pdf f_{r|L}(R|L)=Le^{-LR} for R > 0; f_{r|L}(R|L)=0 for R < 0.
Now L is *itself* a RV with exponential pdf f_{L}(L)=(1/T)e^{-L/T} for L > 0;
f_{L}(L)=0 for L < 0.

- Compute the *maximum likelihood* estimate (MLE) of L from an observation R of r.
- Compute the *maximum a posteriori* estimate (MAP) of L from R, assuming T is known.
Explain why the answer to (b) approaches the answer to (a) as T becomes arbitrarily large.
Also explain what happens, and why, as T goes to 0.

- Compute the *least-squares* estimate (LS) of L from R, assuming T is known.

Compare your three different estimators for L. What does each estimator assume?
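These estimators can also be checked numerically. The Python sketch below uses made-up values (R = 0.5 and a few T's, none from the problem) and grid-searches the unnormalized posterior f(R|L)f_{L}(L) for its mode (MAP) and mean (LS), so you can compare against your closed forms and watch the limiting behavior in T:

```python
import math

def unnorm_posterior(L, R, T):
    """f(R|L) * f_L(L) = L*exp(-L*R) * (1/T)*exp(-L/T), for L > 0."""
    return L * math.exp(-L * R) * (1.0 / T) * math.exp(-L / T)

def map_and_ls(R, T, L_max=50.0, steps=100000):
    """Grid approximations to the posterior mode (MAP) and posterior mean (LS)."""
    h = L_max / steps
    Ls = [h * (i + 0.5) for i in range(steps)]
    ws = [unnorm_posterior(L, R, T) for L in Ls]
    map_est = Ls[max(range(steps), key=ws.__getitem__)]
    ls_est = sum(L * w for L, w in zip(Ls, ws)) / sum(ws)
    return map_est, ls_est

R = 0.5
for T in (0.1, 1.0, 100.0):
    print(T, map_and_ls(R, T))
# As T grows, the MAP estimate climbs toward the MLE of part (a);
# as T shrinks toward 0, the prior drags the estimates toward 0.
```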

- Joe is taking a true-or-false test with 100 questions on it. Joe knows nothing
about the material, so he answers each question by flipping an *unfair* coin
(independent flips), answering "true" if the coin comes up "heads." Unknown to Joe,
the answer to *all* of the test questions is "true." (A practical, real-world
application of the material covered in EECS 501!) Joe's professor, while grading
Joe's test, sighs and tries to estimate p=Pr[heads] for the unfair coin Joe used
on the test.

- Given Joe's answers to *each* of the 100 questions, what is the maximum
likelihood estimate of p?
- After Joe has gotten his test back, he tells the professor that the *a priori*
distribution of p is *uniform* between 0 and 1. Now the only *a posteriori*
information available is Joe's score (out of 100). Compute the *linear*
least-squares estimate of p. HINT: Use iterated expectation.
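Both parts can be spot-checked by simulation. In the Python sketch below, the coin bias 0.3 and the Monte Carlo sample size are made up; part (b) is checked by estimating the standard linear-LS coefficients a = Cov(p, score)/Var(score) and b = E[p] - a E[score] from simulated (p, score) pairs:

```python
import random

rng = random.Random(0)

# Part (a): every correct answer is "true", so Joe's score equals his number
# of heads, and the likelihood of his answers depends only on that count.
score = sum(1 for _ in range(100) if rng.random() < 0.3)   # made-up p = 0.3

# Part (b): draw p uniform on (0,1), draw a Binomial(100, p) score, and
# estimate the coefficients of the linear LS estimator a*score + b:
#   a = Cov(p, score) / Var(score),  b = E[p] - a*E[score].
N = 20000
ps = [rng.random() for _ in range(N)]
scores = [sum(1 for _ in range(100) if rng.random() < p) for p in ps]
Ep = sum(ps) / N
Es = sum(scores) / N
cov = sum((p - s_mean) * 0 for p, s_mean in ()) or \
      sum((p - Ep) * (s - Es) for p, s in zip(ps, scores)) / N
var = sum((s - Es) ** 2 for s in scores) / N
a = cov / var
b = Ep - a * Es
print(a, b)   # compare a*score + b against your closed-form answer
```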

"President Coolidge, I bet I can get you to say 3 words." "You lose."