ECE253 – Homework 3
ECE 253
Digital Image Processing
Make sure you follow these instructions carefully during submission:
• All problems are to be solved using Python unless mentioned otherwise.
• You should avoid using loops in your Python code unless you are explicitly permitted to do so.
• Submit your homework electronically by following the two steps listed below –
2. Upload a zip file with all of your scripts and files on Gradescope. Name this file: ECE 253 hw3 lastname studentid.zip. This should include all files necessary to run your code out of the box.
Problem 1. Canny Edge Detection (15 points)
In this problem, you are required to write a function that performs Canny Edge Detection. The function has the following specifications:
• It takes in two inputs: a grayscale image, and a threshold te.
• It returns the edge image.
• You are allowed the use of loops.
A brief description of the algorithm is given below. Make sure your function reproduces each step as given.
1. Smoothing: It is inevitable that all images taken from a camera will contain some amount of noise. To prevent noise from being mistaken for edges, noise must be reduced. Therefore the image is first smoothed by applying a Gaussian filter. A Gaussian kernel with standard deviation σ = 1.4 (shown below) is to be used.
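The kernel image did not survive in this copy of the handout. The following sketch assumes the 5×5 kernel for σ = 1.4 that is commonly used in Canny edge detection write-ups; verify it against the kernel shown in the original handout before using it.

```python
import numpy as np
from scipy.ndimage import convolve

# Commonly used 5x5 Gaussian kernel for sigma = 1.4 (an assumption --
# the kernel figure in the handout did not survive extraction).
# Its entries sum to 159, so the 1/159 factor normalizes it.
GAUSSIAN_5X5 = (1.0 / 159) * np.array([
    [2,  4,  5,  4, 2],
    [4,  9, 12,  9, 4],
    [5, 12, 15, 12, 5],
    [4,  9, 12,  9, 4],
    [2,  4,  5,  4, 2],
], dtype=np.float64)

def smooth(img):
    """Smooth a grayscale image with the Gaussian kernel above."""
    return convolve(img.astype(np.float64), GAUSSIAN_5X5, mode='nearest')
```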

2. Finding Gradients: The next step is to find the horizontal and vertical gradients of the smoothed image using the Sobel operators. The gradient images in the x- and y-directions, Gx and Gy, are found by applying the kernels kx and ky given below:

   kx = [ -1  0  1        ky = [  1   2   1
          -2  0  2                0   0   0
          -1  0  1 ]             -1  -2  -1 ]

The corresponding gradient magnitude image is computed using

   |G| = sqrt(Gx^2 + Gy^2),

and the edge direction image is calculated as

   θ = arctan(Gy / Gx).
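The gradient step can be sketched as below, using the standard Sobel kernels; `np.arctan2` is used in place of arctan(Gy/Gx) so that pixels with Gx = 0 do not divide by zero.

```python
import numpy as np
from scipy.ndimage import convolve

# Standard Sobel kernels for horizontal (x) and vertical (y) gradients.
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=np.float64)
KY = np.array([[ 1,  2,  1],
               [ 0,  0,  0],
               [-1, -2, -1]], dtype=np.float64)

def gradients(img):
    """Return the gradient magnitude |G| and direction theta (radians)."""
    gx = convolve(img.astype(np.float64), KX, mode='nearest')
    gy = convolve(img.astype(np.float64), KY, mode='nearest')
    mag = np.sqrt(gx ** 2 + gy ** 2)
    theta = np.arctan2(gy, gx)   # arctan2 handles Gx = 0 gracefully
    return mag, theta
```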
3. Non-maximum Suppression (NMS): The purpose of this step is to convert the thick edges in the gradient magnitude image to “sharp” edges. This is done by preserving all local maxima in the gradient image and deleting everything else. This is carried out by performing the following steps for each pixel in the gradient image:
• Round the gradient direction θ to the nearest 45◦, corresponding to the use of an 8-connected neighbourhood.
• Compare the edge strength of the current pixel with the edge strength of the pixels in the positive and negative gradient directions, i.e. if the gradient direction is north (θ = 90◦), then compare with the pixels to the north and south.
• If the edge strength of the current pixel is the largest, preserve the value of the edge strength. If not, suppress (remove) the value.
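The NMS steps above can be sketched as follows (loops are explicitly permitted for this problem). Directions are folded into [0◦, 180◦) since a gradient and its opposite select the same neighbour pair:

```python
import numpy as np

def non_max_suppression(mag, theta):
    """Keep a pixel only if it is a local maximum along its (rounded)
    gradient direction; everything else is suppressed to zero."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    # Fold directions into [0, 180) and round to the nearest 45 degrees.
    angle = np.rad2deg(theta) % 180
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:      # ~0 deg: east/west neighbours
                p, q = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                   # ~45 deg: diagonal neighbours
                p, q = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                  # ~90 deg: north/south neighbours
                p, q = mag[i - 1, j], mag[i + 1, j]
            else:                            # ~135 deg: other diagonal
                p, q = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= p and mag[i, j] >= q:
                out[i, j] = mag[i, j]
    return out
```

Thresholding the NMS output with te (keep pixels where `out >= te`) then yields the final edge image.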
Evaluate your Canny edge detection function on geisel.jpg for a suitable value of te that retains the structural edges and removes the noisy ones.
Things to turn in:
• The original gradient magnitude image, the image after NMS, and the final edge image after thresholding.
• The value for te that you used to produce the final edge image.
• Code for the function.
Problem 2. Butterworth Notch Reject Filtering in Frequency Domain (15 points)
This problem will follow Figure 4.64 in section 4.10.2 Notch Filters of Gonzalez & Woods 3rd Edition.
(i) Read in the image Car.tif, pad the image to 512×512 (using zero padding on all four sides of the image), and display the 2D-FFT log magnitude (after moving the DC component to the center with fftshift).
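This padding-and-display step could be sketched as below (the centered-padding split is one reasonable reading of "on all four sides"; displaying with matplotlib and a colorbar is assumed tooling):

```python
import numpy as np

def padded_log_spectrum(img, size=512):
    """Zero-pad a grayscale image to size x size (centered, so zeros land
    on all four sides) and return the centered log-magnitude spectrum."""
    h, w = img.shape
    top, left = (size - h) // 2, (size - w) // 2
    padded = np.zeros((size, size), dtype=np.float64)
    padded[top:top + h, left:left + w] = img
    F = np.fft.fftshift(np.fft.fft2(padded))   # move DC to the center
    return padded, np.log(1 + np.abs(F))       # log(1 + |F|) avoids log(0)
```

Display the returned spectrum with `plt.imshow(spec, cmap='gray'); plt.colorbar()` (the deliverables require colorbars next to all images).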
You should see symmetric “impulse-like” bursts which we suspect is the cause of the Moire Pattern (the dot pattern from the newspaper image). We would like to filter out this pattern in the frequency domain using a Butterworth Notch Reject Filter given by:
   H_NR(u,v) = Π_{k=1}^{K} [ 1 / (1 + (D0 / Dk(u,v))^(2n)) ] [ 1 / (1 + (D0 / D−k(u,v))^(2n)) ]   (1)

where

   Dk(u,v) = sqrt((u − uk)^2 + (v − vk)^2)   (2)
   D−k(u,v) = sqrt((u + uk)^2 + (v + vk)^2)   (3)

Here, we have slightly modified the definition from the textbook in that we removed M/2 and N/2 from the equations, so that the center of the DFT image is (0, 0) rather than (M/2, N/2).
Python:

import numpy as np
x_axis = np.linspace(-256, 255, 512)
y_axis = np.linspace(-256, 255, 512)
[u, v] = np.meshgrid(x_axis, y_axis)
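Building on that grid, the notch reject filter could be constructed as below. This is a sketch, not the required solution: the notch centers `(u_k, v_k)`, `D0`, and `n` are placeholders you must choose by inspecting the bursts in the log-magnitude spectrum, and the small epsilon guarding division by zero at a notch center is an implementation choice.

```python
import numpy as np

def butterworth_notch_reject(shape, centers, D0, n):
    """Butterworth notch reject filter H_NR(u, v) on a centered grid.
    `centers` lists the burst locations (u_k, v_k), measured from the
    center of the shifted DFT; each notch is automatically paired with
    its symmetric partner at (-u_k, -v_k)."""
    M, N = shape
    x = np.linspace(-N // 2, N // 2 - 1, N)
    y = np.linspace(-M // 2, M // 2 - 1, M)
    u, v = np.meshgrid(x, y)
    H = np.ones(shape, dtype=np.float64)
    for uk, vk in centers:
        Dk = np.sqrt((u - uk) ** 2 + (v - vk) ** 2)
        Dmk = np.sqrt((u + uk) ** 2 + (v + vk) ** 2)
        # Guard against D = 0 exactly at a notch center (epsilon is an
        # implementation choice; H still drops to ~0 there).
        H *= 1.0 / (1.0 + (D0 / np.maximum(Dk, 1e-8)) ** (2 * n))
        H *= 1.0 / (1.0 + (D0 / np.maximum(Dmk, 1e-8)) ** (2 * n))
    return H
```

To apply it: multiply H with the fftshift-ed DFT of the padded image, undo the shift with `np.fft.ifftshift`, inverse-transform with `np.fft.ifft2`, take the real part, and crop away the padding.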
(ii) Repeat for Street.png, except with K = 2, and remove the bursts along the u = 0 axis and the v = 0 axis.
Things to turn in:
• All images should have colorbars next to them
• All DFT magnitude images should have the DC frequencies in the center of the image
• 4 images from 2(i): the unpadded original image, the corresponding 2D DFT log-magnitude, the Butterworth Notch Reject Filter in the frequency domain HNR(u,v), and the final filtered image
• 10 parameters for 2(i): n, D0, u1, v1, …, u4, v4
• 4 images from 2(ii): the unpadded original image, the corresponding 2D DFT log-magnitude, the Butterworth Notch Reject Filter in the frequency domain HNR(u,v), and the final filtered image
• 6 parameters for 2(ii): n, D0, u1, v1, u2, v2
• Code for 2(i), 2(ii)
Problem 3. PyTorch tutorial and questions (5 points)
After seeing some awe-inspiring machine learning results, do you want to try some yourself? Let’s start with some basic practice with a machine learning framework: PyTorch. Please follow the CIFAR-10 classifier tutorial (cifar10 tutorial) and answer the following questions.
(i) Login to the server for CPU/GPU resources. (0 point)
• https://datahub.ucsd.edu/ (Jupyterhub)
• Select an environment (with or without GPU; you do not need a GPU for this homework).
• Launch environment.
(ii) How many images and batches are used to train the network?
(iii) Do we normalize the images? What do we do in the example?
(iv) The losses are dropping! Can you plot out the training loss?
(v) Now the network is done training. Can you check some successful cases and some failure cases (show some images classified by the network)?
(vi) Can you visualize the output of the 1st layer of CNN using one image from the training set?
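For (vi), one way to get the first layer's output is to run a single image through that layer directly. The sketch below assumes the tutorial's network, whose first layer is `nn.Conv2d(3, 6, 5)`; substitute your trained model's actual `conv1` module:

```python
import torch
import torch.nn as nn

# Stand-in for the tutorial network's first layer, nn.Conv2d(3, 6, 5);
# in your homework, use net.conv1 from your trained model instead.
conv1 = nn.Conv2d(3, 6, 5)

def first_layer_maps(img):
    """Return the 6 feature maps conv1 produces for one CIFAR-10 image
    (3x32x32): add a batch dim, run conv1, then drop the batch dim."""
    with torch.no_grad():
        return conv1(img.unsqueeze(0)).squeeze(0)   # shape: (6, 28, 28)
```

Each of the 6 maps can then be shown with `plt.imshow`. For (iv), the same idea applies to the loss: append the running loss to a Python list inside the training loop and plot that list after training.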
References:
• deep learning 60min blitz
• pytorch with examples
