EECS 501__________________________PROBLEM SET #7__________________________Fall 2001

ASSIGNED: October   26, 2001. READ: Stark and Woods pp. 269-303 on estimation (skip 303-312).
DUE DATE: November 2, 2001. THIS WEEK: Estimation problems. Last homework before Exam #2.
  1. A wheel of fortune is known to be calibrated from 0 to some unknown number A.
    The wheel is spun n times, yielding n independent experimental outcomes x1, x2, ..., xn.
      We estimate A using the estimator Â = MAX[x1, x2, ..., xn].
    1. Is this estimator unbiased? Is it asymptotically unbiased?
    2. Is this estimator weakly consistent? HINT: what is the pdf for this estimator? (A simulation sketch follows this problem.)
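
      A quick Monte Carlo check (not required for your write-up) can be run against your answers
      to Problem 1. The Python sketch below assumes numpy is available; the calibration A = 10,
      the seed, and the sample sizes are arbitrary illustration choices.

        import numpy as np

        # Sketch: empirical bias of A_hat = MAX[x1,...,xn] when the wheel is
        # calibrated uniformly on [0, A].  A and the sample sizes are arbitrary.
        rng = np.random.default_rng(0)
        A = 10.0
        trials = 100_000

        for n in (1, 5, 20, 100):
            spins = rng.uniform(0.0, A, size=(trials, n))
            A_hat = spins.max(axis=1)        # the estimator MAX[x1,...,xn]
            bias = A_hat.mean() - A          # empirical E[A_hat] - A
            print(f"n = {n:3d}:  mean(A_hat) = {A_hat.mean():.4f}  bias = {bias:+.4f}")

      Watching how the empirical bias behaves as n grows should line up with your answers to
      parts (a) and (b).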

  2. A RV x has exponential pdf f_{x|L}(X|L) = L e^{-LX} for X > 0; f_{x|L}(X|L) = 0 for X < 0.
    Compute the maximum likelihood estimate of L
    given 5 independent experimental values x1, x2, x3, x4, x5 of the random variable x. (A numerical check follows.)
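
      If you want to sanity-check your closed-form answer to Problem 2, the Python sketch below
      (numpy assumed) maximizes the exponential log-likelihood over a grid of candidate values of L.
      The five observation values are made up for illustration.

        import numpy as np

        # Sketch: grid search for the L that maximizes the exponential
        # log-likelihood sum_i log( L * exp(-L * x_i) ).
        x = np.array([0.8, 2.3, 1.1, 0.4, 1.7])     # made-up observations

        L_grid = np.linspace(0.01, 5.0, 5000)
        log_lik = len(x) * np.log(L_grid) - L_grid * x.sum()
        L_ml = L_grid[np.argmax(log_lik)]
        print("grid-search MLE of L:", L_ml)
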
  3. A RV r has exponential pdf f_{r|L}(R|L) = L e^{-LR} for R > 0; f_{r|L}(R|L) = 0 for R < 0.
      Now L is itself a RV with exponential pdf f_L(L) = (1/T) e^{-L/T} for L > 0; f_L(L) = 0 for L < 0.
    1. Compute the maximum likelihood estimate (MLE) of L from an observation R of r.
    2. Compute the maximum a posteriori estimate (MAP) of L from R, assuming T is known.
      Explain why the answer to (b) approaches the answer to (a) as T becomes arbitrarily large.
      Also explain what happens, and why, when T goes to 0.
    3. Compute the least-squares estimate (LS) of L from R, assuming T is known.
      Compare your three different estimators for L. What does each estimator assume? (A numerical sketch follows this problem.)
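
      The MAP and LS answers to Problem 3 can also be checked numerically. The Python sketch
      below (numpy assumed; the values of R and T and the grid limits are arbitrary choices)
      evaluates the unnormalized posterior f_{r|L}(R|L) f_L(L) on a grid, then reads off the
      posterior mode (the MAP estimate) and the posterior mean (the LS estimate).

        import numpy as np

        # Sketch: numerical MAP and least-squares (posterior-mean) estimates of L
        # from a single observation R, with prior f_L(L) = (1/T) exp(-L/T).
        R, T = 2.0, 3.0                       # arbitrary illustration values

        L = np.linspace(1e-4, 20.0, 200_000)  # grid wide enough to hold the posterior mass
        dL = L[1] - L[0]
        post = L * np.exp(-L * R) * (1.0 / T) * np.exp(-L / T)   # unnormalized posterior
        post /= post.sum() * dL                                  # normalize numerically

        L_map = L[np.argmax(post)]            # posterior mode  (MAP estimate)
        L_ls = (L * post).sum() * dL          # posterior mean  (LS estimate)
        print(f"MAP estimate: {L_map:.4f}   LS estimate: {L_ls:.4f}")

      Re-running the sketch with much larger and much smaller values of T is one way to check
      your explanations in part (b).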

  4. Joe is taking a true-or-false test with 100 questions on it. Joe knows nothing about the material,
    so he answers each question by flipping an unfair coin (flips are independent), answering "true"
    if the coin comes up "heads." Unknown to Joe, the answer to every question on the test is "true."
    (A practical, real-world application of the material covered in EECS 501!)

      Joe's professor, while grading Joe's test, sighs and
      tries to estimate p=Pr[heads] for the unfair coin Joe used on the test.
    1. Given Joe's answers to each of the 100 questions,
      what is the maximum likelihood estimate of p? (A simulation sketch follows this problem.)
    2. After Joe has gotten his test back, he tells the professor that
      the a priori distribution of p is uniform between 0 and 1.
      Now the only a posteriori information available is Joe's score (out of 100).
      Compute the linear least-squares estimate of p. HINT: iterated expectation.
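
      A short simulation makes part (a) of Problem 4 concrete. The Python sketch below (numpy
      assumed; the true p = 0.3 is an arbitrary choice) generates Joe's 100 coin-flip answers.
      Since every correct answer is "true," Joe's score equals the number of heads he flipped,
      and the professor sees only those answers.

        import numpy as np

        # Sketch: simulate Joe's 100 coin-flip answers with an unfair coin.
        rng = np.random.default_rng(1)
        p_true = 0.3                            # arbitrary "true" bias of Joe's coin
        answers = rng.random(100) < p_true      # True where the coin came up heads
        score = answers.sum()                   # every correct answer is "true", so score = # heads

        print("Joe's score out of 100:", score)
        print("fraction of 'true' answers:", score / 100)   # compare with your MLE of p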

    "President Coolidge, I bet I can get you to say 3 words." "You lose."