

AI61003 – Indian Institute of Technology Kharagpur Solved

ANSWER ALL THE QUESTIONS
1. Download the MNIST dataset (both train and test sets) and vectorize each data point. Select the samples from classes 1 and 7 and assign them the labels +1 and −1, respectively. Create an 80-20 train-test split. Using Least Squares, train your linear model on the training set and report the classification accuracy and the confusion matrix on the test set.
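A minimal sketch of the least-squares classification step, run on synthetic two-class data in place of the MNIST 1-vs-7 digits (data download is omitted); the helper name lstsq_binary_classifier and the demo data are illustrative only:

```python
import numpy as np

def lstsq_binary_classifier(X_train, y_train, X_test, y_test):
    """Least-squares linear classifier for labels in {+1, -1}."""
    # Append a bias column of ones and solve the least-squares problem.
    A = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
    w, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    A_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
    pred = np.sign(A_test @ w)
    acc = np.mean(pred == y_test)
    # Confusion matrix: rows = true +1/-1, columns = predicted +1/-1.
    cm = np.array([[np.sum((y_test == t) & (pred == p))
                    for p in (+1, -1)] for t in (+1, -1)])
    return acc, cm

# Synthetic separable data standing in for the vectorized digits.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 0.3, size=(100, 5)),
               rng.normal(-1.0, 0.3, size=(100, 5))])
y = np.concatenate([np.ones(100), -np.ones(100)])
idx = rng.permutation(200)
tr, te = idx[:160], idx[160:]          # 80-20 split
acc, cm = lstsq_binary_classifier(X[tr], y[tr], X[te], y[te])
print(acc)
print(cm)
```

The same function applies unchanged once the real flattened MNIST vectors are substituted for the synthetic arrays.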
2. Download the MNIST dataset (both train and test sets) and vectorize each data point. Select the samples from class i in the train set and assign them the label +1, where i ∈ C = {0,1,2,3,4,5,6,7,8,9}. All samples that belong to the classes C \ {i} are assigned the label −1. To achieve a balanced dataset, randomly sample as many data points from the negative class as there are in the positive class. Using Least Squares, train 10 separate linear regression models, one for each class in C. For each sample in the test set, get a prediction from all 10 linear models and assign it the class for which it receives the highest score. Finally, report the accuracy and confusion matrix.
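The one-vs-rest scheme above can be sketched as follows, again on small synthetic multiclass data in place of the 10 MNIST classes; the function name and demo data are assumptions, not part of the assignment:

```python
import numpy as np

def one_vs_rest_lstsq(X_train, y_train, X_test, n_classes, rng):
    """One least-squares model per class; predict the class whose
    model gives the highest score."""
    n, d = X_train.shape
    W = np.zeros((n_classes, d + 1))
    A_all = np.hstack([X_train, np.ones((n, 1))])
    for c in range(n_classes):
        pos = np.flatnonzero(y_train == c)
        neg = np.flatnonzero(y_train != c)
        # Balance: sample as many negatives as there are positives.
        neg = rng.choice(neg, size=len(pos), replace=False)
        idx = np.concatenate([pos, neg])
        b = np.where(y_train[idx] == c, 1.0, -1.0)
        W[c], *_ = np.linalg.lstsq(A_all[idx], b, rcond=None)
    scores = np.hstack([X_test, np.ones((len(X_test), 1))]) @ W.T
    return scores.argmax(axis=1)

# Three well-separated clusters standing in for the digit classes.
rng = np.random.default_rng(1)
centers = np.array([[0, 3], [3, 0], [-3, -3]])
X = np.vstack([rng.normal(c, 0.4, size=(60, 2)) for c in centers])
y = np.repeat(np.arange(3), 60)
pred = one_vs_rest_lstsq(X, y, X, n_classes=3, rng=rng)
acc = np.mean(pred == y)
print(acc)
```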
3. Generate two vectors, p ∈ R100 and q ∈ R100, sampled from a uniform random distribution over the range [−10,10]. Generate the target variable, b, in the following manner:

bi = +1 if pi qi > 1
bi = −1 otherwise

Here, bi, pi and qi denote the i-th entries of the corresponding vectors b, p and q, respectively. Now, generate the following basis functions,

f0(pi,qi) = 1   f1(pi,qi) = pi   f2(pi,qi) = qi   f3(pi,qi) = pi^2   f4(pi,qi) = qi^2   f5(pi,qi) = pi qi

Use the basis functions to define A ∈ R100×6 where Aij = fj(pi,qi). Here i is the row index while j is the column index. Use least squares to predict the target variable b by learning the coefficient vector x ∈ R6. Report x.
4. Generate two vectors, p ∈ R100 and q ∈ R100, sampled from a uniform random distribution over the range [−1,1]. The i-th element of the target variable, b, is defined as,

bi = pi qi + pi^2 + qi^2

Now, generate the following basis functions,

f0(pi,qi) = 1   f1(pi,qi) = pi   f2(pi,qi) = qi   f3(pi,qi) = pi^2   f4(pi,qi) = qi^2   f5(pi,qi) = pi qi

Use the basis functions to define A ∈ R100×6 where Aij = fj(pi,qi). Here i is the row index while j is the column index. Use least squares to predict the target variable b by learning the coefficient vector x ∈ R6. Report x as well as the Mean Squared Error (MSE).
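A sketch of the basis-function construction and the least-squares solve (assuming, as the basis list suggests, that f2 evaluates qi; the list as printed repeats pi twice):

```python
import numpy as np

rng = np.random.default_rng(2)
p = rng.uniform(-1, 1, 100)
q = rng.uniform(-1, 1, 100)
b = p * q + p**2 + q**2                  # target from the problem statement

# Columns of A are the six basis functions evaluated at (p_i, q_i).
A = np.column_stack([np.ones(100), p, q, p**2, q**2, p * q])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
mse = np.mean((A @ x - b) ** 2)
print(x)
print(mse)
```

Because b is itself a linear combination of the basis functions, the recovered x should be (up to rounding) (0, 0, 0, 1, 1, 1) with essentially zero MSE.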
5. Generate a vector p ∈ R100 from a uniform random distribution over the range [0,1]. The i-th element of the target variable, b, is defined as,

bi = 7pi − 3pi^2
where pi is the i-th element of vector p. For n ∈ N, we define basis functions
{f1,f2,…,fn}.
For j = 1,2,…,n, the basis function fj(t) is defined as follows.

• On the interval [(j−1)/n, j/n], the graph of the function fj(t) is a triangle with its vertices at ((j−1)/n, 0), (j/n, 0) and (δ,1), where δ is the mid-point of (j−1)/n and j/n.
(a) For n = 10 and n = 50, use the least squares method to predict the target variable b by learning the coefficient vector x ∈ Rn by arranging the problem in the standard form Ax = b. Report x as well as the Mean Squared Error (MSE).
(b) For n = 150 and n = 200, use the multiobjective least squares method to predict the target variable b by learning the coefficient vector x ∈ Rn by arranging the problem in the standard form Ax = b. The two objectives J1 and J2 are J1 = ∥Ax − b∥^2 and J2 = ∥x∥^2. Minimize J1 + λJ2 for 5 different randomly chosen values of λ in (0,0.2). Report x as well as the Mean Squared Error (MSE).
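Parts (a) and (b) can be sketched as below. Two assumptions are made explicit: the basis is read as triangular "hat" bumps on [(j−1)/n, j/n] that vanish at the interval endpoints and peak at 1 at the midpoint, and the second objective is taken to be the regularizer J2 = ∥x∥^2 (needed because n > 100 makes Ax = b underdetermined):

```python
import numpy as np

def hat_basis(t, n):
    """A[i, j] = f_{j+1}(t_i): a triangular bump supported on
    [j/n, (j+1)/n], zero at the endpoints, 1 at the midpoint
    (an assumed reading of the problem's basis)."""
    A = np.zeros((len(t), n))
    for j in range(n):
        lo, hi = j / n, (j + 1) / n
        mid, half = (lo + hi) / 2, (hi - lo) / 2
        A[:, j] = np.clip(1 - np.abs(t - mid) / half, 0.0, 1.0)
    return A

rng = np.random.default_rng(3)
p = rng.uniform(0, 1, 100)
b = 7 * p - 3 * p**2

# (a) plain least squares for n = 10 and n = 50
mse_a = {}
for n in (10, 50):
    A = hat_basis(p, n)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    mse_a[n] = np.mean((A @ x - b) ** 2)

# (b) n > 100: minimize J1 + lam * J2 via the regularized normal equations
mses_b = []
for n in (150, 200):
    A = hat_basis(p, n)
    for lam in rng.uniform(0, 0.2, size=5):
        x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
        mses_b.append(np.mean((A @ x - b) ** 2))
```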
6. Download the auto-regressive-data.csv file. The target variable is the Consumption column. Create lag features for each sample with the number of lag variables set to 8. Doing so generates input-output pairs (xi, yi) where xji = yi−j, i.e. the j-th lag feature of the i-th sample is the consumption value j steps earlier, and the target yi is the next value in the series. Train a linear regression model on this dataset to predict the next day’s consumption value.
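The lag-feature construction and the regression fit can be sketched as follows; since the CSV is not reproduced here, a synthetic AR(1) series stands in for the Consumption column:

```python
import numpy as np

def make_lag_features(series, n_lags=8):
    """Row i of X holds the n_lags values preceding target t[i]."""
    X = np.column_stack([series[j:len(series) - n_lags + j]
                         for j in range(n_lags)])
    t = series[n_lags:]
    return X, t

# Synthetic autoregressive series in place of the Consumption column.
rng = np.random.default_rng(4)
y = np.zeros(300)
for i in range(1, 300):
    y[i] = 0.8 * y[i - 1] + rng.normal(0, 0.1)

X, t = make_lag_features(y, n_lags=8)
A = np.hstack([X, np.ones((len(X), 1))])   # add a bias column
w, *_ = np.linalg.lstsq(A, t, rcond=None)
mse = np.mean((A @ w - t) ** 2)
print(mse)
```

For the real data, load the Consumption column into `series` and the rest of the pipeline is unchanged.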
7. Take a single sample from the MNIST dataset. Flatten the image into vector x ∈ R784. Create the following Gaussian blurring kernel,

Now generate A ∈ R784×784, the Toeplitz matrix of K. Finally, create the blurred image y = Ax. We now have to de-blur y. This can be done by minimizing the following cost function with respect to x̂,

∥Ax̂ − y∥^2 + λ∥Dh x̂∥^2 + λ∥Dv x̂∥^2 (1)

where x̂ ∈ R784 is the flattened, estimated, de-blurred image, while λ is a hyperparameter that needs to be set according to your preference (try starting with λ = 0.007). Here, Dh ∈ R(27·28)×(28·28) and Dv ∈ R(27·28)×(28·28) where,
     [ −I   I   0  ···   0 ]
     [  0  −I   I  ···   0 ]
Dh = [  ⋮        ⋱        ⋮ ]
     [  0  ···   0  −I   I ]

     [ D   0  ···  0 ]
     [ 0   D  ···  0 ]
Dv = [ ⋮       ⋱    ⋮ ]
     [ 0  ···   0  D ]

     [ −1   1   0  ···   0 ]
     [  0  −1   1  ···   0 ]
D  = [  ⋮        ⋱        ⋮ ]
     [  0  ···   0  −1   1 ]

and I ∈ R28×28 is the identity matrix while D ∈ R27×28.
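The two difference operators follow directly from Kronecker products, and minimizing (1) reduces to its normal equations. The kernel K is not reproduced in this copy of the problem, so the sketch below substitutes a hypothetical symmetric tridiagonal Toeplitz blur for A and a random vector for the flattened digit; only the construction of Dh, Dv, D and the solve are meant literally:

```python
import numpy as np

n = 28
D1 = np.zeros((n - 1, n))            # the matrix D from the problem
np.fill_diagonal(D1, -1.0)
np.fill_diagonal(D1[:, 1:], 1.0)

I = np.eye(n)
Dh = np.kron(D1, I)                  # bidiagonal blocks of -I, I
Dv = np.kron(I, D1)                  # block-diagonal copies of D

m = n * n                            # 784
# Hypothetical stand-in for the Toeplitz blur of the (unshown) kernel K.
A = np.zeros((m, m))
np.fill_diagonal(A, 0.5)
np.fill_diagonal(A[:, 1:], 0.25)
np.fill_diagonal(A[1:, :], 0.25)

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, m)             # stand-in for a flattened digit
y = A @ x                            # blurred image

lam = 0.007
# Setting the gradient of (1) to zero gives the normal equations:
# (A^T A + lam Dh^T Dh + lam Dv^T Dv) x_hat = A^T y
M = A.T @ A + lam * (Dh.T @ Dh) + lam * (Dv.T @ Dv)
x_hat = np.linalg.solve(M, A.T @ y)
```

With the real A built from K, the same solve recovers the de-blurred image; the λ terms trade fidelity to y against horizontal and vertical smoothness.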
8. Generate two vectors, p ∈ R100 and q ∈ R100 sampled from a uniform random distribution with the range [−1,1]. The i-th element of the target variable, b, is defined as,
bi = pi qi + pi^2 + qi^2 (2)
Now, generate the following basis functions,
f0(pi,qi) = 1   f1(pi,qi) = pi   f2(pi,qi) = qi   f3(pi,qi) = pi^2   f4(pi,qi) = qi^2   f5(pi,qi) = pi qi (3)
Use the basis functions to define A ∈ R100×6 where Aij = fj(pi,qi). Here i is the row index while j is the column index. Implement least squares using the gradient descent algorithm to predict the target variable b by learning the coefficient vector x ∈ R6. Report x as well as the Mean Squared Error (MSE).
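A sketch of the gradient-descent version (taking f2 = qi, as in problem 4); the step size and iteration count are illustrative choices, not part of the problem:

```python
import numpy as np

rng = np.random.default_rng(6)
p = rng.uniform(-1, 1, 100)
q = rng.uniform(-1, 1, 100)
b = p * q + p**2 + q**2

# Basis functions as columns of A.
A = np.column_stack([np.ones(100), p, q, p**2, q**2, p * q])

# Gradient descent on J(x) = (1/m) * ||Ax - b||^2.
x = np.zeros(6)
lr = 0.1
for _ in range(20000):
    grad = (2.0 / len(b)) * (A.T @ (A @ x - b))
    x -= lr * grad

mse = np.mean((A @ x - b) ** 2)
print(x, mse)
```

The iterates converge to the same solution the closed-form least-squares solve gives in problem 4, here approximately (0, 0, 0, 1, 1, 1).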
9. Piece-wise polynomial least squares problem. Using the given data, fit a piece-wise polynomial function on the interval [0,3] with the following constraints. Fit f1 on the interval [0,1], f2 on the interval [1,2] and f3 on the interval [2,3] such that,
(a) Degree of f1 is 2
(b) Degree of f2 is 3
(c) Degree of f3 is 2
(d) f1(1) = f2(1)
(e) f2(2) = f3(2)
(f) f2′(2) = f3′(2)
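One way to handle the equality constraints is to solve the KKT system of the constrained least-squares problem. The data for this problem is not included in this copy, so a noisy sine curve stands in for it; the coefficient layout and constraint rows are spelled out in comments:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0, 3, 120))
y = np.sin(t) + 0.05 * rng.normal(size=t.size)   # hypothetical "given data"

# Coefficient layout: f1 = a0..a2 (deg 2), f2 = b0..b3 (deg 3),
# f3 = c0..c2 (deg 2) -> 10 unknowns total.
def row(ti):
    r = np.zeros(10)
    if ti <= 1:
        r[0:3] = [1, ti, ti**2]
    elif ti <= 2:
        r[3:7] = [1, ti, ti**2, ti**3]
    else:
        r[7:10] = [1, ti, ti**2]
    return r

A = np.vstack([row(ti) for ti in t])

# Equality constraints C x = 0:
C = np.zeros((3, 10))
C[0, 0:3] = [1, 1, 1];    C[0, 3:7] = [-1, -1, -1, -1]   # f1(1) = f2(1)
C[1, 3:7] = [1, 2, 4, 8]; C[1, 7:10] = [-1, -2, -4]      # f2(2) = f3(2)
C[2, 3:7] = [0, 1, 4, 12]; C[2, 7:10] = [0, -1, -4]      # f2'(2) = f3'(2)

# KKT system for: minimize ||Ax - y||^2 subject to Cx = 0.
K = np.block([[2 * A.T @ A, C.T], [C, np.zeros((3, 3))]])
rhs = np.concatenate([2 * A.T @ y, np.zeros(3)])
sol = np.linalg.solve(K, rhs)
x = sol[:10]                     # fitted polynomial coefficients
```

Swapping in the actual data only changes `t` and `y`; the design matrix, constraints, and solve are unchanged.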
****************** THE END ******************
