
Transformation of R.V. and Multivariate Gaussian
Your Name(?)
Contents
1 Introduction
2 Transformation of Random Variable
3 Multivariate Gaussian Distribution
3.1 Definition
3.2 A is diagonal
3.3 A is a non-singular square matrix

1 Introduction
In this article, we study the following topics in statistics:
• Transformation of random variables
• Multivariate Gaussian random variable
2 Transformation of Random Variable
Given a continuous r.v. X with PDF P_X(x) and a function g(·) defined on the range of X, we want to find the PDF associated with the r.v. Y = g(X).
For simplicity, let us first assume g(·) is monotonically increasing.
Then by probability mass conservation,

P_Y(y) dy = P_X(x) dx

Using y = g(x), we get the below relation upon simplification:

P_Y(y) = P_X(g^{-1}(y)) · d g^{-1}(y)/dy

To handle a monotonically decreasing g(·) as well, the derivative is wrapped in an absolute value:

P_Y(y) = P_X(g^{-1}(y)) · |d g^{-1}(y)/dy|    (1)

since the derivative of g^{-1} is positive for g(·) monotonically increasing and negative for g(·) monotonically decreasing.
For more information, refer to [1].
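As a quick numerical sanity check of equation (1), here is a minimal sketch (the distribution and the choice of g are illustrative assumptions, not from the text): push X ~ Exp(1) through g(x) = x², which is monotonically increasing on the range of X, and compare the empirical density of Y = g(X) against the PDF predicted by (1).

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ Exp(1); g(x) = x^2 is monotonically increasing on the range of X (x >= 0)
x = rng.exponential(scale=1.0, size=1_000_000)
y = x**2

# Equation (1): P_Y(y) = P_X(g^{-1}(y)) * |d g^{-1}(y)/dy|
# with g^{-1}(y) = sqrt(y) and d g^{-1}(y)/dy = 1 / (2 sqrt(y))
def p_y(y):
    return np.exp(-np.sqrt(y)) / (2.0 * np.sqrt(y))

# Empirical density on a window, normalized by the total sample count
hist, edges = np.histogram(y, bins=200, range=(0.5, 10.0))
centers = 0.5 * (edges[:-1] + edges[1:])
dens = hist / (y.size * np.diff(edges))
print(np.max(np.abs(dens - p_y(centers))))  # ~1e-2, i.e. Monte Carlo noise
```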

Figure 1: (a) Example 1; (b) Example 2.
3 Multivariate Gaussian Distribution
3.1 Definition
Let X be a vector of random variables of dimension D.
The r.v. X has a joint PDF that is a multivariate Gaussian distribution if ∃ finitely many i.i.d. standard Gaussian
r.v. W_1, W_2, …, W_N with N ≥ D such that

X = AW + µ

for some D × N matrix A and D-dimensional vector µ.
Refer to fig. 1a and fig. 1b for visual examples. This has many applications in machine learning; see [2] and [3].
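A minimal sampling sketch of this definition (the particular A and µ below are illustrative assumptions): draw i.i.d. standard Gaussians W and map them through X = AW + µ. The sample mean then approaches µ and the sample covariance approaches A·Aᵀ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative choices: D = 2, N = 3, a D x N matrix A, and a D-vector mu
D, N = 2, 3
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 2.0]])
mu = np.array([1.0, -1.0])

# i.i.d. standard Gaussian W_1, ..., W_N, one column per sample
W = rng.standard_normal((N, 100_000))
X = A @ W + mu[:, None]          # X = A W + mu

print(X.mean(axis=1))            # ~ mu = [1, -1]
print(np.cov(X))                 # ~ A @ A.T
```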
3.2 A is diagonal
In this case the X_i are independent: X_i = A_ii · W_i + µ_i depends only on W_i, so the standard deviation of the distribution of X_i is |A_ii|.
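A quick check of this with illustrative numbers (recall that uncorrelated jointly Gaussian components are independent):

```python
import numpy as np

rng = np.random.default_rng(2)

# Diagonal A: each X_i = A_ii * W_i + mu_i depends only on its own W_i
A = np.diag([2.0, -0.5])
mu = np.zeros(2)
X = A @ rng.standard_normal((2, 100_000)) + mu[:, None]

print(X.std(axis=1))   # ~ [2.0, 0.5] = |A_ii|
print(np.corrcoef(X))  # off-diagonals ~ 0; for Gaussians this means independence
```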
3.3 A is a non-singular square matrix
Let’s take µ = 0 for simplicity.
Similar to the univariate case, where the scaling was determined by |d g^{-1}(y)/dy|, the scaling in the multivariate case is determined by the determinant of the matrix of derivatives, the Jacobian matrix.
Also, W = A^{-1}X, which is a linear transformation of the vector X. A^{-1} maps a hypercube to a parallelepiped. If the vectors describing the hypercube are along the cardinal axes, then the parallelepiped is described by vectors which are the columns of A^{-1}.
Claim: The volume of the parallelepiped described by the column vectors of the matrix A^{-1} is given by |det(A^{-1})|.
Proof: Adding a scaled column of a matrix M to another column does not change the determinant.
Therefore, by the Gram-Schmidt orthogonalization process, the columns of A^{-1} can be made orthogonal to each other without changing the determinant. Multiplying by an orthogonal matrix then rotates the orthogonal vectors to align them with the cardinal axes, and this operation changes the determinant by at most a sign. The resulting matrix is a diagonal square matrix, and the volume of the parallelepiped described by its column vectors is the product of the absolute values of the diagonal elements, i.e. |det(A^{-1})|.
From the above result, an infinitesimal volume δ^D after the transformation becomes δ^D · |det(A^{-1})|.
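The claim is easy to test numerically. Here is a sketch with an illustrative non-singular A: estimate the volume of the parallelepiped {A^{-1}u : u ∈ [0,1]^D} by Monte Carlo and compare it with |det(A^{-1})|, using the fact that a point x lies in the parallelepiped iff Ax ∈ [0,1]^D.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative non-singular A; det(A) = 2.5, so |det(A^{-1})| = 0.4
A = np.array([[2.0, 1.0],
              [0.5, 1.5]])
A_inv = np.linalg.inv(A)

# Bounding box of the parallelepiped {A^{-1} u : u in [0,1]^2}
corners = A_inv @ np.array([[0.0, 1.0, 0.0, 1.0],
                            [0.0, 0.0, 1.0, 1.0]])
lo, hi = corners.min(axis=1), corners.max(axis=1)

# Monte Carlo: sample the box uniformly, keep points with A x in [0,1]^2
pts = lo[:, None] + (hi - lo)[:, None] * rng.random((2, 1_000_000))
u = A @ pts
inside = np.all((u >= 0.0) & (u <= 1.0), axis=0)
print(inside.mean() * np.prod(hi - lo))  # ~ 0.4
print(abs(np.linalg.det(A_inv)))         # 0.4 exactly
```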

Putting things together, the density of X (with µ = 0) is

P_X(x) = P_W(A^{-1}x) · |det(A^{-1})| = (2π)^{-D/2} · exp(−½ xᵀ(AAᵀ)^{-1}x) · |det(A^{-1})|

Let C = A · Aᵀ. Then |det(A)| = √det(C), and the above expression can be rewritten as

P_X(x) = 1 / ((2π)^{D/2} · √det(C)) · exp(−½ xᵀ C^{-1} x)    (2)
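As a sanity check of equation (2), here is a small sketch comparing a direct evaluation of the formula against scipy.stats.multivariate_normal (the matrix A is the same illustrative choice as above):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative A; the covariance of X = A W is C = A A^T
A = np.array([[2.0, 1.0],
              [0.5, 1.5]])
C = A @ A.T
D = C.shape[0]

def pdf_eq2(x):
    """Equation (2): multivariate Gaussian density with mu = 0."""
    norm = (2.0 * np.pi) ** (D / 2) * np.sqrt(np.linalg.det(C))
    return np.exp(-0.5 * x @ np.linalg.inv(C) @ x) / norm

x = np.array([0.3, -1.2])
print(pdf_eq2(x))
print(multivariate_normal(mean=np.zeros(D), cov=C).pdf(x))  # same value
```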
Sample values of a bivariate normal distribution:

x    y    f(x, y)
0    0    1.6
…    …    0.096
…    …    0.02
References
[1] https://bookdown.org/pkaldunn/DistTheory/Transformations.html.
[3] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.