Department of Statistics
STAT 210A: Introduction to Mathematical Statistics
Problem Set 3
Fall 2014

Problem 3.1
Let X1,…,Xn be i.i.d. absolutely continuous random variables with common density fθ, θ ∈ R, given by

fθ(x) = φ(x)/Φ(θ) for x < θ, and fθ(x) = 0 for x ≥ θ,

where φ and Φ denote the standard normal density and CDF.
(This is the density for the standard normal distribution truncated above at θ.)
(a) Derive a formula for the UMVU estimator of g(θ). (Assume that g is differentiable and behaves reasonably as θ → ±∞.)
(b) If n = 3 and the observed data are −2.3, −1.2, and 0, what is the estimate of θ²?
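As a quick sanity check on the truncated density, the stdlib-only sketch below verifies numerically that φ(x)/Φ(θ) restricted to x < θ integrates to one (φ and Φ are the standard normal pdf and CDF, consistent with the parenthetical description; θ = 0.5 is an arbitrary test value):

```python
import math

def phi(x):
    # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    # standard normal CDF, via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def truncated_density(x, theta):
    # N(0,1) truncated above at theta: phi(x)/Phi(theta) for x < theta, else 0
    return phi(x) / Phi(theta) if x < theta else 0.0

theta = 0.5
# Riemann-sum integral of the density over a wide grid
dx = 1e-4
total = sum(truncated_density(-10 + i * dx, theta) * dx
            for i in range(int(20 / dx)))
print(total)  # close to 1.0
```

Dividing φ by Φ(θ) is exactly what restores total mass one after the upper tail is cut off, which is the normalization the UMVU calculation in (a) has to track.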
Problem 3.2
Consider a scale family with density (1/θ) f(x/θ), θ > 0, where f is some fixed density function.
(a) Show that the amount of information that a single observation X contains about θ is given by

I(θ) = (1/θ²) ∫ [1 + y f′(y)/f(y)]² f(y) dy.
(b) Show that the information X contains about ξ = log θ does not depend on θ.
(c) For the Cauchy distribution C(0, θ), show that I(θ) = 1/(2θ²).
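The claim in (c) can be sanity-checked numerically. The sketch below assumes the scale-family information formula I(θ) = θ^{−2} ∫ [1 + y f′(y)/f(y)]² f(y) dy from part (a), evaluates that integral for the standard Cauchy f, and compares against 1/(2θ²):

```python
import math

def f(y):
    # standard Cauchy density: f(y) = 1 / (pi (1 + y^2))
    return 1.0 / (math.pi * (1.0 + y * y))

def integrand(y):
    # [1 + y f'(y)/f(y)]^2 f(y), with f'(y)/f(y) = -2y / (1 + y^2)
    ratio = -2.0 * y / (1.0 + y * y)
    return (1.0 + y * ratio) ** 2 * f(y)

# Integrate over the real line via the substitution y = tan(u),
# which maps (-pi/2, pi/2) onto R and tames the heavy Cauchy tails.
n = 200_000
du = math.pi / n
unit_info = 0.0
for i in range(n):
    u = -math.pi / 2 + (i + 0.5) * du          # midpoint rule
    y = math.tan(u)
    unit_info += integrand(y) / math.cos(u) ** 2 * du

theta = 2.0
info = unit_info / theta ** 2                  # scale-family formula
print(info, 1 / (2 * theta ** 2))              # the two should agree
```

The integral over y alone comes out to 1/2 regardless of θ, which is also the content of part (b): in the ξ = log θ parametrization the θ^{−2} factor cancels.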
Problem 3.3
(Poisson birth process) This example illustrates some differences that can arise with dependent data, as opposed to i.i.d. sampling models. Consider a random sequence Y0, Y1,…,Yn such that Y0 ∼ Poi(θ), and Yj given the past (Y0,…,Y_{j−1}) is also Poisson with mean θY_{j−1}. The maximum likelihood estimate (MLE) of θ maximizes the log likelihood ℓ(θ) = log p(Y0,…,Yn; θ) of the data.
(a) Show that the MLE of θ based on (Y0,…,Yn) is given by

θ̂ = (Y0 + Y1 + … + Yn) / (1 + Y0 + Y1 + … + Y_{n−1}).
(b) Show that the information in (Y0,…,Yn) about θ is given by I(θ) = θ^{−2}(θ + θ² + … + θ^{n+1}). What happens to this information when θ < 1? Intuitively, what is happening in this model?
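The closed-form MLE in (a) can be checked against a direct evaluation of the log likelihood on a small made-up realization of the process (the data below are hypothetical, chosen only for illustration):

```python
import math

# A fixed, plausible realization Y0, ..., Y5 of the birth process
Y = [2, 3, 5, 6, 9, 14]

def loglik(theta, Y):
    # log p(Y0,...,Yn; theta), dropping terms that do not involve theta:
    # Y0 ~ Poi(theta) and Yj | past ~ Poi(theta * Y_{j-1})
    ll, prev = 0.0, 1            # prev = 1 makes the Y0 term come out right
    for y in Y:
        ll += y * math.log(theta) - theta * prev
        prev = y
    return ll

# closed-form MLE: (Y0 + ... + Yn) / (1 + Y0 + ... + Y_{n-1})
theta_hat = sum(Y) / (1 + sum(Y[:-1]))
print(theta_hat)                 # 1.5 for this sequence

# numeric check: the closed form beats nearby values of theta
for eps in (1e-3, 1e-2, 1e-1):
    assert loglik(theta_hat, Y) > loglik(theta_hat + eps, Y)
    assert loglik(theta_hat, Y) > loglik(theta_hat - eps, Y)
```

Since ℓ is strictly concave in θ here (its second derivative is −(ΣYj)/θ² < 0), beating both neighbors at several scales is strong evidence the stationary point is the global maximizer.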
Problem 3.4
Suppose that the vector X = (X1,…,Xn) has i.i.d. components with common density
p(x; θ) = exp(θ − x), for x ≥ θ.
Let δ(·) be any unbiased estimator of θ based on X.
(a) Using the Cauchy–Schwarz inequality, first show that for every Δ > 0,

varθ(δ(X)) ≥ Δ² / (e^{nΔ} − 1).
(b) Hence conclude that

varθ(δ(X)) ≥ a∗/n²,

where a∗ = sup_{t>0} t²/(e^t − 1); the supremum is attained at the t solving 2(1 − e^{−t}) = t.
(c) The information inequality under i.i.d. sampling predicts scaling of the form varθ(δ(X)) = O(1/n). Explain why the result of (b) differs from this scaling.
(d) Prove that the estimator δ(X) = min_i X_i − 1/n is unbiased, and has variance 1/n².
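A Monte Carlo sketch of (b) and (d), assuming the estimator δ(X) = min_i X_i − 1/n (the natural unbiased choice here, since min_i X_i is distributed as θ + Exp(n)): it estimates the mean and variance of δ by simulation, and evaluates the constant a∗ by crude grid search.

```python
import math
import random

random.seed(1)
theta, n, trials = 2.0, 5, 200_000

# Each X_i = theta + Exp(1), so min_i X_i ~ theta + Exp(n);
# delta(X) = min_i X_i - 1/n should then be unbiased with variance 1/n^2.
est = []
for _ in range(trials):
    m = min(theta + random.expovariate(1.0) for _ in range(n))
    est.append(m - 1.0 / n)

mean = sum(est) / trials
var = sum((e - mean) ** 2 for e in est) / trials
print(mean, var)                 # roughly theta = 2.0 and 1/n^2 = 0.04

# the constant in (b): a* = sup_t t^2 / (e^t - 1), by grid search on (0, 20)
a_star = max(t * t / (math.exp(t) - 1.0)
             for t in (i / 1000.0 for i in range(1, 20_000)))
print(a_star)                    # about 0.65
```

Since a∗ ≈ 0.65 < 1, the bound a∗/n² from (b) is consistent with, but not achieved by, the actual variance 1/n² in (d).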
Problem 3.5
Let X1,…,Xn be i.i.d. samples from the Poi(λ) distribution truncated on the left at 0 (i.e., a Poisson variate Y ∼ Poi(λ) conditioned on Y ≥ 1). Show that the information inequality bound for any unbiased estimator δ of λ is

varλ(δ(X)) ≥ λ(1 − e^{−λ})² / ( n [1 − (1 + λ)e^{−λ}] ).
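The single-observation Fisher information behind such a bound can be cross-checked numerically. The sketch below assumes the zero-truncated pmf p(x; λ) = e^{−λ} λ^x / (x! (1 − e^{−λ})) for x ≥ 1, which gives the score x/λ − 1/(1 − e^{−λ}); it sums E[score²] directly and compares with the closed form [1 − (1 + λ)e^{−λ}] / (λ(1 − e^{−λ})²):

```python
import math

lam = 1.7                                 # arbitrary test value
q = 1.0 - math.exp(-lam)                  # P(Y >= 1) for Y ~ Poi(lam)

# I(lam) = E[(X/lam - 1/q)^2] under the zero-truncated Poisson;
# build the pmf recursively to avoid computing huge factorials
info, px = 0.0, math.exp(-lam) * lam / q  # px = p(1)
for x in range(1, 100):                   # tail beyond 100 is negligible here
    info += (x / lam - 1.0 / q) ** 2 * px
    px *= lam / (x + 1)                   # p(x+1) from p(x)

closed = (1.0 - (1.0 + lam) * math.exp(-lam)) / (lam * q * q)
print(info, closed)                       # the two should agree
```

The reciprocal of n times this information is exactly the information-inequality lower bound on the variance of any unbiased estimator of λ.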
