Department of Electrical Engineering and Computer Sciences
EECS 126: Probability and Random Processes
Discussion 10
1. Entropy Warmup Entropy seems like a very weird concept. We’ll walk through a few examples together to build up your intuition. Let’s say that we have a random variable X that can take on values from {lecture, midterm, pop quiz}. (Don’t worry, we don’t actually have pop quizzes in this class.) Each day you go to class, you observe a random value of X, which is determined according to the distribution PX. If P(lecture) = 0.85, P(midterm) = 0.1, and P(pop quiz) = 0.05, we’d mainly see lectures, occasionally have a midterm, and have a pop quiz very rarely. We can describe how “interesting” it is to see a particular X = x with the notion of the “surprise,” which is the function S(x) := log2(1/PX(x)). This function is large for low-probability events and small for high-probability events.
(a) For the probabilities above, calculate S(lecture), S(midterm), and S(pop quiz).
(b) If P(lecture) = 15/16 and P(midterm) = P(pop quiz) = 1/32, calculate the surprises again. Given that 1/P(midterm) = 32 and 1/P(pop quiz) = 32, do the relative magnitudes of the values in (a) and (b) make sense intuitively?
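As a quick numerical check of part (a), here is a short Python sketch (the helper name `surprise` is ours, not part of the worksheet) that evaluates S(x) = log2(1/PX(x)) for the original distribution:

```python
import math

def surprise(p: float) -> float:
    """Surprise S(x) = log2(1/P(x)): large for rare events, small for common ones."""
    return math.log2(1 / p)

# Part (a): the original distribution from the problem statement.
for name, p in [("lecture", 0.85), ("midterm", 0.1), ("pop quiz", 0.05)]:
    print(f"S({name}) = {surprise(p):.3f} bits")
```

Note that S(pop quiz) = log2(20) ≈ 4.32 bits is the largest of the three, matching the intuition that the rarest event is the most surprising.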
(c) The entropy is the expected surprise. Formally,

H(X) := E[S(X)] = ∑x PX(x) log2(1/PX(x)).

We will follow the convention that, if for a particular x, PX(x) = 0, then PX(x) log2(1/PX(x)) = 0. Calculate the entropy for the original probability values (0.85, 0.1, 0.05), the entropy of the uniform distribution (1/3, 1/3, 1/3), and the entropy of the deterministic RV with distribution (1, 0, 0).
(d) Do the entropies in part (c) make sense to you?
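A small Python sketch for part (c) (the helper name `entropy` is ours) that implements the 0 · log2(1/0) = 0 convention by skipping zero-probability terms:

```python
import math

def entropy(dist):
    """H(X) = sum over x of P(x) * log2(1/P(x)), with 0 * log2(1/0) taken as 0."""
    return sum(p * math.log2(1 / p) for p in dist if p > 0)

print(entropy([0.85, 0.1, 0.05]))   # skewed distribution: ~0.748 bits
print(entropy([1/3, 1/3, 1/3]))     # uniform: log2(3) ~ 1.585 bits
print(entropy([1, 0, 0]))           # deterministic: 0 bits
```

The uniform distribution gives the largest entropy of the three, and the deterministic one gives zero, which is a useful benchmark for part (d).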
2. Mutual Information and Noisy Typewriter The mutual information of X and Y is defined as
I(X;Y ) := H(X) − H(X | Y )
Here, H(X | Y) denotes the conditional entropy of X given Y, which is defined as:

H(X | Y) := ∑y PY(y) H(X | Y = y) = ∑x,y PX,Y(x, y) log2(1/PX|Y(x | y))
The interpretation of conditional entropy is the average amount of uncertainty remaining in the random variable X after observing Y . The interpretation of mutual information is therefore the amount of information about X gained by observing Y .
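These definitions can be sanity-checked numerically on a small example. The sketch below uses a made-up 2×2 joint pmf (the numbers are illustrative, not from the worksheet) and computes the mutual information two equivalent ways:

```python
import math

def H(probs):
    """Entropy in bits of a list of probabilities (zero terms contribute 0)."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# A hypothetical joint pmf P(X=i, Y=j) over X in {0,1}, Y in {0,1}.
joint = [[0.4, 0.1],
         [0.2, 0.3]]

px = [sum(row) for row in joint]            # marginal of X
py = [sum(col) for col in zip(*joint)]      # marginal of Y
Hxy = H([p for row in joint for p in row])  # joint entropy H(X, Y)

# Conditional entropies via the chain rule H(X,Y) = H(X) + H(Y|X) = H(Y) + H(X|Y).
H_y_given_x = Hxy - H(px)
H_x_given_y = Hxy - H(py)

# Mutual information via the definition, and via H(X) + H(Y) - H(X,Y).
I1 = H(px) - H_x_given_y
I2 = H(px) + H(py) - Hxy
print(I1, I2)  # the two values agree
```

Since this joint pmf is not a product of its marginals, X and Y are dependent and the computed mutual information comes out strictly positive.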
(a) Show that H(X,Y ) = H(Y ) + H(X | Y ) = H(X) + H(Y | X). This is often called the Chain Rule. Interpret this rule.
(b) Show that I(X;Y) = H(X) + H(Y) − H(X,Y). Note that this shows that I(X;Y) = I(Y;X), i.e., mutual information is symmetric.
(c) Consider the noisy typewriter.

Each symbol gets sent to one of the adjacent symbols with probability 1/2. Let X be the input to the noisy typewriter, and let Y be the output (X is a random variable that takes values in the English alphabet). What is the distribution of X that maximizes I(X;Y )?
Note
It turns out that I(X;Y ) ≥ 0 with equality if and only if X and Y are independent. The mutual information is an important quantity for channel coding.
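For part (c), it can help to compute I(X;Y) numerically for candidate input distributions. The sketch below assumes one reading of the channel, namely that an input letter x (with the 26 letters arranged on a circle) is received as x − 1 or x + 1 mod 26, each with probability 1/2; if the intended channel differs, the transition step should be adjusted accordingly:

```python
import math

N = 26  # letters arranged on a circle

def mutual_info(px):
    """I(X;Y) = H(Y) - H(Y|X) for the assumed channel y = x +/- 1 (mod N), each w.p. 1/2."""
    py = [0.0] * N
    for x, p in enumerate(px):
        py[(x - 1) % N] += p / 2
        py[(x + 1) % N] += p / 2
    Hy = sum(p * math.log2(1 / p) for p in py if p > 0)
    return Hy - 1.0  # H(Y|X) = 1 bit: each input has exactly two equally likely outputs

print(mutual_info([1 / N] * N))  # uniform input over all 26 letters
print(mutual_info([1 / 13 if x % 2 == 0 else 0 for x in range(N)]))  # every other letter
```

Comparing candidate distributions this way (H(Y|X) is fixed at 1 bit, so maximizing I(X;Y) reduces to maximizing H(Y)) is a useful guide toward the answer.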
3. Huffman Questions
Consider a set of n objects. Let Xi = 1 or 0 accordingly as the i-th object is good or defective. Let X1,X2,…,Xn be independent with P(Xi = 1) = pi ; and p1 > p2 > ··· > pn > 1/2. We are asked to determine the set of all defective objects. Any yes-no question you can think of is admissible.
(a) Propose an algorithm based on Huffman coding in order to identify all defective objects.
(b) Suppose the worst case scenario happens and we have to ask the maximum number of questions. What (in words) is the last question we should ask? And what two sets are we distinguishing with this question?
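Part (a) amounts to building a Huffman code over the 2^n joint outcomes (X1, …, Xn) and asking, at each internal node of the tree, a yes/no question of the form "is the true outcome in this subtree?". The sketch below is a hypothetical small instance (n = 3, with made-up pi values; the helper name is ours) that computes each outcome's codeword length, i.e., how many questions it takes to identify it:

```python
import heapq
from itertools import product

def huffman_lengths(pmf):
    """Given {outcome: prob}, return {outcome: codeword length} via Huffman's algorithm.

    Each merge adds one bit to every outcome in the merged group; a codeword
    bit corresponds to one yes/no question in the questioning strategy."""
    heap = [(p, i, [o]) for i, (o, p) in enumerate(pmf.items())]
    heapq.heapify(heap)
    depth = {o: 0 for o in pmf}
    counter = len(heap)  # unique tiebreaker so groups are never compared
    while len(heap) > 1:
        p1, _, g1 = heapq.heappop(heap)
        p2, _, g2 = heapq.heappop(heap)
        for o in g1 + g2:
            depth[o] += 1
        heapq.heappush(heap, (p1 + p2, counter, g1 + g2))
        counter += 1
    return depth

# Hypothetical instance: n = 3 objects with P(good) = p1 > p2 > p3 > 1/2.
ps = [0.9, 0.8, 0.7]
pmf = {}
for bits in product([1, 0], repeat=3):  # 1 = good, 0 = defective
    prob = 1.0
    for b, p in zip(bits, ps):
        prob *= p if b == 1 else (1 - p)
    pmf[bits] = prob

depths = huffman_lengths(pmf)
print(depths[(1, 1, 1)])  # the all-good outcome gets the shortest codeword
print(depths[(0, 0, 0)])  # the all-defective outcome gets the longest codeword
```

Looking at which outcome ends up deepest in the tree, and which two groups its final merge separates, is a concrete way to build intuition for part (b).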
