Department of Statistics
STAT 210A: Introduction to Mathematical Statistics
Problem Set 5
Fall 2014
Problem 5.1
Consider a Bayesian model in which the prior distribution for $\theta$ is uniform on $(0,1)$ and, given $\theta$, the observations $X_1, \dots, X_n$ are i.i.d. Bernoulli with success probability $\theta$. Find
$$P(X_{n+1} = 1 \mid X_1, \dots, X_n).$$
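For intuition only (not part of the assignment), here is a minimal Monte Carlo sketch of this setup: it simulates the joint model and conditions on a hypothetical observed sequence by rejection to estimate the predictive probability. The observed data `x_obs`, the simulation size, and the seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x_obs = np.array([1, 0, 1, 1])       # hypothetical observed X_1,...,X_n (n = 4)
n = len(x_obs)

n_sims = 1_000_000
theta = rng.uniform(0.0, 1.0, size=n_sims)                  # theta ~ Uniform(0,1)
x = rng.binomial(1, theta[:, None], size=(n_sims, n + 1))   # X_1,...,X_{n+1} | theta i.i.d. Bernoulli(theta)

keep = (x[:, :n] == x_obs).all(axis=1)       # retain simulations that reproduce x_obs
print("P(X_{n+1} = 1 | data), Monte Carlo estimate:", x[keep, n].mean())
```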
Problem 5.2
Let $X_1, X_2, \dots, X_n$ be sampled conditionally independently from $N(\mu, \sigma^2)$, where $\mu$ and $\sigma^2$ are considered random. In class we presented a conjugate prior for $\mu$ when $\sigma^2$ is treated as fixed and a conjugate prior for $\sigma^2$ when $\mu$ is treated as fixed. Show that the prior obtained by assuming $\mu$ and $\sigma^2$ are independent and taking the product of the priors presented in class is not conjugate for the joint parameter $(\mu, \sigma^2)$. Provide a conjugate prior. Discuss a practical data analysis situation in which the conjugate prior seems appropriate and a data analysis situation in which the non-conjugate product prior seems appropriate.
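As a point of reference for the generative setup (assumptions labeled), the sketch below draws $(\mu, \sigma^2)$ from an independent product prior and then samples the data; it assumes the single-parameter priors from class are the usual Normal prior on $\mu$ and Inverse-Gamma prior on $\sigma^2$, with arbitrary illustrative hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(1)
m0, s0 = 0.0, 10.0      # hypothetical Normal(m0, s0^2) prior on mu
a0, b0 = 2.0, 2.0       # hypothetical Inverse-Gamma(a0, b0) prior on sigma^2
n = 20

mu = rng.normal(m0, s0)                        # mu ~ N(m0, s0^2)
sigma2 = 1.0 / rng.gamma(a0, 1.0 / b0)         # sigma^2 ~ InvGamma(a0, b0)
x = rng.normal(mu, np.sqrt(sigma2), size=n)    # X_i | mu, sigma^2 i.i.d. N(mu, sigma^2)
print(mu, sigma2, x.mean(), x.var(ddof=1))
```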
Problem 5.3
Find a transform of $\theta$, $\eta = h(\theta)$, such that the Fisher information $I(\eta)$ is constant (and therefore the Jeffreys prior is constant) for each of the following (a numerical sanity check for a candidate transform is sketched after the list):
• the binomial distribution, $\mathrm{Bin}(n, \theta)$;
• the gamma distribution, $\mathrm{Ga}(a, \theta)$, with $a = 1, 2, 3$; and
• the Maxwell distribution, $\mathrm{Max}(\theta)$: $p(x \mid \theta) \propto \theta^{3/2} x^2 e^{-\theta x^2/2}$, $x \ge 0$, $\theta > 0$.
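As referenced above, here is a numerical sanity check (not a solution) for a candidate transform: by the reparametrization identity $I(\eta) = I(\theta)/h'(\theta)^2$ for $\eta = h(\theta)$, a correct choice of $h$ should make the quantity below roughly constant over a grid of $\theta$ values. The placeholder `h`, the grid, and the binomial case with its standard Fisher information $I(\theta) = n/(\theta(1-\theta))$ are illustrative assumptions.

```python
import numpy as np

def info_in_eta(h, fisher_theta, thetas, eps=1e-6):
    """Fisher information in eta = h(theta): I(eta) = I(theta) / h'(theta)^2."""
    hprime = (h(thetas + eps) - h(thetas - eps)) / (2 * eps)   # central difference
    return fisher_theta(thetas) / hprime**2

n = 10
thetas = np.linspace(0.05, 0.95, 10)
h = lambda t: t                              # placeholder candidate; substitute your answer
binom_info = lambda t: n / (t * (1.0 - t))   # I(theta) for Bin(n, theta)
print(info_in_eta(h, binom_info, thetas))    # (approximately) equal entries indicate success
```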
Problem 5.4
In a linear regression model the $n$-vector of responses $y$ has distribution $(y \mid \beta) \sim N(X\beta, I_n)$, with mean response vector $\mu = E(y \mid \beta) = X\beta$ and identity variance matrix, where $X$ is the $n \times p$ design matrix of rank $p$ and $\beta$ is the $p$-vector of regression coefficients. Suppose that the prior for $\beta$ is $\beta \sim N(0, g^{-1}(X'X)^{-1})$ for some number $g > 0$. (A small simulation sketch of this setup appears after the questions below.)
1. What is the posterior distribution of $(\beta \mid y)$?
2. Show that the posterior mean $E(\beta \mid y)$ can be expressed as a function of $\hat\beta$, the usual MLE of $\beta$.
3. What is the posterior mean $E(\mu \mid y)$?
4. What is the posterior variance matrix of $\mu$?
5. Consider the special case of an orthogonal design, so that $X'X = I_p$. Denote by $\mu_i$ the $i$th element of $\mu$. Under the posterior $p(\mu \mid y)$, are $\mu_j$ and $\mu_k$ independent for $j \ne k$?
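As noted in the problem statement, here is a small simulation sketch of this setup only (it does not answer the questions): $\beta$ is drawn from the stated prior, $y$ from $N(X\beta, I_n)$, and the usual least-squares MLE $\hat\beta$ is computed for comparison. The design matrix, the dimensions $n$ and $p$, and $g$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, g = 50, 3, 1.0
X = rng.normal(size=(n, p))                    # hypothetical full-rank design matrix

XtX_inv = np.linalg.inv(X.T @ X)
beta = rng.multivariate_normal(np.zeros(p), XtX_inv / g)   # beta ~ N(0, g^{-1} (X'X)^{-1})
y = X @ beta + rng.normal(size=n)              # y | beta ~ N(X beta, I_n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # usual MLE (least squares)
print("beta    :", beta)
print("beta_hat:", beta_hat)
```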
Problem 5.5
Consider a Bayesian model in which, given $\theta$, the observations $X_1, \dots, X_n$ are i.i.d. Bernoulli with success probability $\theta$. (A small simulation illustration of this setup appears after the questions below.)
1. Let $(\pi(1), \dots, \pi(n))$ be a permutation of $(1, \dots, n)$. Show that
$$(X_{\pi(1)}, \dots, X_{\pi(n)}) \quad\text{and}\quad (X_1, \dots, X_n)$$
have the same distribution. When this holds, the variables are said to be exchangeable.
2. Show that $\mathrm{Cov}(X_i, X_j) \ge 0$. When will this covariance be zero?
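As mentioned in the problem setup, here is a small Monte Carlo illustration (not a proof): it assumes, purely for illustration, a Uniform(0,1) prior on $\theta$ (the problem leaves the prior unspecified) and estimates the covariance between two observations across repeated draws of $(\theta, X_1, \dots, X_n)$.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims, n = 200_000, 5

theta = rng.uniform(0.0, 1.0, size=n_sims)               # illustrative prior: theta ~ Uniform(0,1)
x = rng.binomial(1, theta[:, None], size=(n_sims, n))    # X_1,...,X_n | theta i.i.d. Bernoulli(theta)

cov_hat = np.cov(x[:, 0], x[:, 1])[0, 1]                 # sample covariance of X_1 and X_2
print("estimated Cov(X_1, X_2):", cov_hat)               # positive under this illustrative prior
```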