MA 590 Homework 8 Solved
Dane Johnson
Exercise 1
Part a
Prove that $E[(X - \bar{x})(Y - \bar{y})] = E[XY] - E[X]E[Y]$.
\begin{align}
\operatorname{Cov}(X,Y) &\equiv E[(X - \bar{x})(Y - \bar{y})] \tag{1} \\
&= E[XY - \bar{y}X - \bar{x}Y + \bar{x}\bar{y}] \tag{2} \\
&= E[XY] + E[-\bar{y}X] + E[-\bar{x}Y] + E[\bar{x}\bar{y}] \tag{3} \\
&= E[XY] - \bar{y}E[X] - \bar{x}E[Y] + \bar{x}\bar{y} \tag{4} \\
&= E[XY] - \bar{y}\bar{x} - \bar{x}\bar{y} + \bar{x}\bar{y} \tag{5} \\
&= E[XY] - \bar{x}\bar{y} \tag{6} \\
&= E[XY] - E[X]E[Y]. \tag{7}
\end{align}
(1) Definition of $\operatorname{Cov}(X,Y)$. (2) Distributive property. (3) Linearity of expectation. (4) Linearity of expectation. (5) Definition of $\bar{x}, \bar{y}$. (6) Simplification. (7) Definition of $\bar{x}, \bar{y}$.
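As an optional numerical sanity check (not part of the original solution), the identity can be verified by Monte Carlo in Python; the sample size and the particular distributions below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Arbitrary correlated pair: Y depends on X, so Cov(X, Y) != 0.
x = rng.normal(2.0, 1.5, n)
y = 0.5 * x + rng.normal(0.0, 1.0, n)

# Left side: E[(X - x_bar)(Y - y_bar)]
lhs = np.mean((x - x.mean()) * (y - y.mean()))
# Right side: E[XY] - E[X] E[Y]
rhs = np.mean(x * y) - x.mean() * y.mean()

print(lhs, rhs)  # the two estimates agree up to floating-point/sampling error
```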
Part b
Prove that if X and Y are independent random variables, then X and Y are uncorrelated (that is, if the correlation between X and Y is denoted by $r$, then $r = 0$).
Suppose X and Y are independent. Then $E[XY] = E[X]E[Y]$. Using part a, this means that $\operatorname{Cov}(X,Y) = E[XY] - E[X]E[Y] = E[X]E[Y] - E[X]E[Y] = 0$. Therefore, $r \equiv \operatorname{Cov}(X,Y)/(\sigma_X \sigma_Y) = 0/(\sigma_X \sigma_Y) = 0$, showing that X and Y are uncorrelated.
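A minimal simulation sketch (not part of the original solution) illustrating the claim, with two independent draws from arbitrarily chosen distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Independent draws, so the sample correlation should be near 0.
x = rng.normal(0.0, 2.0, n)
y = rng.exponential(3.0, n)

cov = np.mean(x * y) - x.mean() * y.mean()
r = cov / (x.std() * y.std())
print(r)  # close to 0, up to Monte Carlo error of order 1/sqrt(n)
```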
Part c
Prove that for $s \in \mathbb{R}$ and $X$ a random variable, $\operatorname{Var}(sX) = s^2\operatorname{Var}(X)$.
For a random variable $Z$, $\operatorname{Var}(Z) \equiv E[(Z - \bar{z})^2]$, where $\bar{z} = E[Z]$. In the case of $sX$ we have $E[sX] = sE[X] = s\bar{x}$ by linearity, and:
\begin{align}
\operatorname{Var}(sX) &= E[(sX - s\bar{x})^2] \tag{8} \\
&= E[s^2(X - \bar{x})^2] \tag{9} \\
&= s^2 E[(X - \bar{x})^2] \tag{10} \\
&= s^2\operatorname{Var}(X). \tag{11}
\end{align}
(8) Definition of variance. (9) Factoring. (10) Linearity of expectation. (11) Definition of variance.
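A quick numerical check of this scaling property (again, not part of the original solution; the distribution and the scalar $s$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.gamma(shape=2.0, scale=1.0, size=1_000_000)
s = -3.7  # any real scalar

print(np.var(s * x))     # Var(sX)
print(s**2 * np.var(x))  # s^2 Var(X); matches up to floating-point error
```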
Part d
Prove that $\operatorname{Var}(X + Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X,Y)$. First note that for a random variable $Z$:
\begin{align*}
\operatorname{Var}(Z) \equiv E[(Z - \bar{z})^2] &= E[Z^2 - 2\bar{z}Z + (\bar{z})^2] \\
&= E[Z^2] - 2(\bar{z})^2 + (\bar{z})^2 = E[Z^2] - (E[Z])^2.
\end{align*}
Let $X, Y$ be random variables. The variance of the random variable $X + Y$ is:
\begin{align*}
\operatorname{Var}(X + Y) &= E[(X + Y)^2] - (E[X + Y])^2 \\
&= E[X^2] + 2E[XY] + E[Y^2] - (E[X])^2 - 2E[X]E[Y] - (E[Y])^2 \\
&= E[X^2] - (E[X])^2 + E[Y^2] - (E[Y])^2 + 2\bigl(E[XY] - E[X]E[Y]\bigr) \\
&= \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\bigl(E[XY] - E[X]E[Y]\bigr) = \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X,Y).
\end{align*}
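As before, a small Monte Carlo sketch (not part of the original solution) confirming the decomposition on a deliberately dependent pair:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Dependent pair so that the 2*Cov(X, Y) term is clearly nonzero.
x = rng.normal(0.0, 1.0, n)
y = x + rng.normal(0.0, 0.5, n)

lhs = np.var(x + y)
cov = np.mean(x * y) - x.mean() * y.mean()
rhs = np.var(x) + np.var(y) + 2 * cov
print(lhs, rhs)  # agree up to sampling error
```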
Exercise 2
Consider the random variable $A = X e_1 + Y e_2$, where $X, Y \sim N(0, \sigma^2)$. If we define $R = \|A\|_2$, then since, no matter what the dimension of $A$, only the first two components of $A$ are nonzero (by our definition of $A$), $R = \sqrt{X^2 + Y^2}$. Since $X$ and $Y$ are continuous random variables, it follows that $R$ is also a continuous random variable. Then the cumulative distribution function of $R$, which we denote $F_R(t)$, is given by
$$F_R(t) = P(R \le t) = \iint_{x^2 + y^2 \le t^2} f_{X,Y}(x, y)\, dx\, dy,$$
where $f_{X,Y}$ is the joint probability density function of $X$ and $Y$. If we assume that $X$ and $Y$ are independent (and so, by the above, iid) random variables, then $f_{X,Y}(x,y) = f_X(x) f_Y(y)$. The pdf of a normal random variable $Z$ with mean zero and standard deviation $\sigma$ is $f_Z(z) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-z^2/(2\sigma^2)\right)$. Then,
$$F_R(t) = \iint_{x^2 + y^2 \le t^2} \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right) dx\, dy.$$
Converting the double integral above to polar coordinates we have
$$F_R(t) = \int_0^{2\pi}\!\!\int_0^{t} \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{r^2}{2\sigma^2}\right) r\, dr\, d\theta = 1 - \exp\!\left(-\frac{t^2}{2\sigma^2}\right), \qquad t \ge 0.$$
We note the restriction $t \ge 0$ since $R = \sqrt{X^2 + Y^2} \ge 0$. The probability density function of $R$, $f_R(t)$, is the derivative of $F_R(t)$. Therefore, the probability density function of the Rayleigh distribution is
$$f_R(t) = \frac{d}{dt}F_R(t) = \frac{t}{\sigma^2} \exp\!\left(-\frac{t^2}{2\sigma^2}\right), \qquad t \ge 0.$$
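An optional simulation sketch (not part of the original solution) that compares the empirical CDF of $R = \sqrt{X^2 + Y^2}$ with the derived $F_R(t)$; the value of $\sigma$ and the evaluation points are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 2.0
n = 1_000_000

# R = sqrt(X^2 + Y^2) with X, Y iid N(0, sigma^2).
x = rng.normal(0.0, sigma, n)
y = rng.normal(0.0, sigma, n)
r = np.hypot(x, y)

# Compare the empirical CDF with the derived F_R(t) = 1 - exp(-t^2 / (2 sigma^2)).
for t in (0.5, 1.0, 2.0, 4.0):
    empirical = np.mean(r <= t)
    derived = 1.0 - np.exp(-t**2 / (2.0 * sigma**2))
    print(t, empirical, derived)  # the two estimates should agree closely
```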
