SCHOOL OF ENGINEERING
DEPARTMENT OF ELECTRICAL & COMPUTER ENGINEERING
Neuro-Fuzzy Computing
Problem Set 3: GROUP (2-PERSON) ASSIGNMENTS
SECTION 1: Hebbian and Competitive learning
Problem-01
Use the Hebb rule to determine the weight matrix for a perceptron network to recognize the patterns shown in the figure below.
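As a hedged illustration (not the assigned solution), the supervised Hebb rule W = T Pᵀ can be sketched in NumPy; the prototype patterns p1, p2 below are hypothetical stand-ins, since the figure's patterns are not reproduced here:

```python
import numpy as np

# Hypothetical bipolar prototype patterns (stand-ins for the figure's patterns).
p1 = np.array([1, -1, 1, -1])
p2 = np.array([1, 1, -1, -1])
P = np.column_stack([p1, p2])

# For a recognition (autoassociative) task the targets equal the prototypes.
T = P

# Supervised Hebb rule: W = T P^T, i.e. the sum of outer products t_q p_q^T.
W = T @ P.T

# Recall through a symmetric hard-limit (hardlims) transfer function.
recall = np.sign(W @ p1)
```

Because the stand-in prototypes are orthogonal, recall reproduces each stored pattern exactly.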
Problem-02
Consider the input vectors and initial weights shown in the figure below.
1. Draw the diagram of a competitive network that could classify the data above so that each of the three clusters of vectors would have its own class.
2. Train the network graphically (using the initial weights shown) by presenting the labeled vectors in the following order: p1, p2, p3, p4. [Recall that the competitive transfer function chooses the neuron with the lowest index to win if more than one neuron has the same net input.]
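The graphical training can also be checked numerically. Below is a minimal sketch of competitive (Kohonen) learning; the input vectors and initial weight rows are hypothetical stand-ins, since the figure is not reproduced here:

```python
import numpy as np

# Hypothetical unit-length inputs and initial weight rows (stand-ins for
# the figure's p1..p4 and initial weights); one weight row per neuron.
P = [np.array([0.0, 1.0]), np.array([1.0, 0.0]),
     np.array([0.7071, 0.7071]), np.array([-0.7071, 0.7071])]
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
alpha = 0.5

for p in P:
    n = W @ p                      # net inputs (inner products)
    i = int(np.argmax(n))          # compet: np.argmax breaks ties toward the lowest index
    W[i] += alpha * (p - W[i])     # Kohonen rule: move the winner toward p
    W[i] /= np.linalg.norm(W[i])   # renormalize so inner products stay comparable
```

Note that `np.argmax` returns the first maximal entry, which matches the lowest-index tie-breaking rule in the problem statement.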
SECTION 2: Recurrent neural networks
Problem-03
Before stating the problem, let’s first introduce some notation that will allow us to efficiently represent dynamic networks:
Using this notation, consider the following network:
Define the network architecture by showing U, X, I_m, DI_{m,l}, DL_{m,l}, L^f_m, L^b_m, E^U_{LW}(x) and E^X_{LW}(u). Also select a simulation order and indicate the dimension of each matrix.
Problem-04
Find the computational complexity for the BPTT algorithm applied to the sample network in figure below as a function of the number of neurons in Layer 1 (S1), the number of delays in the tapped delay line (D) and the length of the training sequence (Q).
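One way to approach such a count is to tally multiply-accumulate operations per time step. The sketch below hypothetically assumes a single recurrent layer with S1 neurons, a scalar input (R = 1), and a D-tap tapped delay line on the layer's feedback, since the figure is not reproduced here; the dominant forward term then scales as O(Q · D · (S1)²):

```python
# Rough MAC count for simulating a recurrent layer whose output feeds back
# through a D-tap tapped delay line (assumed network shape; not the figure).
def forward_macs(S1, D, Q, R=1):
    # Each time step: R*S1 multiplies through the input weight, plus
    # D*S1*S1 multiplies for the D delayed outputs through the layer weight.
    return Q * (R * S1 + D * S1 * S1)

# BPTT adds a backward sweep of comparable cost, so the forward count
# already shows how the complexity scales in S1, D, and Q.
print(forward_macs(S1=3, D=2, Q=10))  # -> 10 * (3 + 2*9) = 210
```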
SECTION 3: Convolutional neural networks
Problem-05
Consider a part of a large CNN illustrated in the figure below, which depicts a 3×3 input layer, a 2×2 kernel, and the resulting 2×2 activation map. Let the performance function be denoted as L. Write down the equation for ∂L/∂W_{ij} for each i, j as a function of ∂L/∂h_{ij} (which has been computed in the earlier stage of backpropagation).
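For concreteness, the relationship ∂L/∂W_{ij} = Σ_{m,n} (∂L/∂h_{mn}) · x_{m+i, n+j} (the weight gradient is the valid cross-correlation of the input with the upstream gradient) can be checked numerically; the input and upstream gradient below are arbitrary illustrative values:

```python
import numpy as np

# 3x3 input, 2x2 kernel, valid cross-correlation -> 2x2 activation map h,
# with h[m, n] = sum_{i, j} W[i, j] * x[m + i, n + j].
x = np.arange(9.0).reshape(3, 3)
dL_dh = np.array([[1.0, 2.0], [3.0, 4.0]])  # assumed upstream gradient

# dL/dW[i, j] = sum_{m, n} dL/dh[m, n] * x[m + i, n + j]
dL_dW = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        dL_dW[i, j] = np.sum(dL_dh * x[i:i+2, j:j+2])
```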
Problem-06
Consider a convolutional neural network that is used to classify images into two classes. The structure of the network is as follows:
• INPUT: 100×100 grayscale images.
• LAYER 1: Convolutional layer with 100 5×5 convolutional filters.
• LAYER 2: Convolutional layer with 100 5×5 convolutional filters.
• LAYER 3: A max pooling layer that down-samples Layer 2 by a factor of 4 (from 100×100 to 50×50)
• LAYER 4: Dense layer with 100 units
• LAYER 5: Dense layer with 100 units
• LAYER 6: Single output unit
How many weights does this network have?
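The counting can be sketched as follows, under assumptions the problem statement leaves open: 'same' padding in both convolutional layers (so the maps stay 100×100), each Layer 2 filter spanning all 100 maps of Layer 1, the pooled 50×50×100 volume flattened into Layer 4, and biases excluded:

```python
# Weight count sketch (assumptions stated in the text above; biases excluded).
conv1 = 100 * (5 * 5 * 1)          # 100 filters over the 1 input channel
conv2 = 100 * (5 * 5 * 100)        # 100 filters over 100 input channels
pool = 0                           # max pooling has no weights
dense4 = (50 * 50 * 100) * 100     # flattened pooled volume into 100 units
dense5 = 100 * 100
out6 = 100 * 1
total = conv1 + conv2 + pool + dense4 + dense5 + out6
print(total)
```

Under these assumptions the dense layer after flattening dominates the total by two orders of magnitude.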
Problem-07
We address a convolutional neural network (CNN) with one-dimensional input. While two-dimensional CNNs can be used, for example, for grayscale images, one-dimensional CNNs can be used for time series such as temperature or humidity readings. Concepts in the 1D case are equivalent to those in 2D networks. We interpret the data in our network as three-dimensional arrays, where a row denotes a feature map, a column denotes a single dimension of the observation, and the depth of the array represents different observations. As we will only work with a single input vector, the depth will always be one.
Let the following CNN be given:
• Input I: Matrix of size 1×12×1. We therefore have an input with twelve dimensions consisting of a single feature map.
• First convolutional layer with filters F10 = (-1,0,1) and F11 = (1,0,-1) that generates two output feature maps from a single input feature map. Use valid mode for convolutions.
• Max-pooling layer with stride 2 and filter size 2. Note that max-pooling pools each feature map separately.
• Convolutional layer with convolutional kernel F20 = ((-1,0,1),(1,0,-1)) of size 2×3×1.
• Fully connected layer that maps all inputs to two outputs. The first output is calculated as the negative sum of all its inputs, and the second output is calculated as the positive sum of all its inputs.
• Sigmoidal activation function
Calculate the response of the CNN for the two inputs (0,0,0,0,1,1,1,1,0,0,0,0) and
(0,0,0,0,1,1,1,1,0,0,0,0).
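A hedged forward-pass sketch of the described network, assuming "convolution" means valid-mode cross-correlation and that the sigmoid acts only on the two final outputs:

```python
import numpy as np

def valid_corr(x, k):
    # 1-D valid cross-correlation (one common reading of "convolution" here).
    return np.array([np.dot(x[i:i+len(k)], k) for i in range(len(x) - len(k) + 1)])

def forward(x):
    # First conv layer: two 3-tap filters -> two feature maps of length 10.
    m1 = valid_corr(x, np.array([-1, 0, 1]))   # F10
    m2 = valid_corr(x, np.array([1, 0, -1]))   # F11
    # Max pooling, filter size 2, stride 2, per feature map -> length 5.
    p1 = np.maximum(m1[0::2], m1[1::2])
    p2 = np.maximum(m2[0::2], m2[1::2])
    # Second conv layer: 2x3 kernel F20, rows applied per map and summed -> length 3.
    z = valid_corr(p1, np.array([-1, 0, 1])) + valid_corr(p2, np.array([1, 0, -1]))
    # Fully connected layer (negative sum, positive sum), then sigmoid.
    s = z.sum()
    out = np.array([-s, s])
    return 1.0 / (1.0 + np.exp(-out))

x = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
print(forward(x))
```

Because the two fully connected outputs are negatives of each other, the sigmoid outputs always sum to one.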
SECTION 4: Fuzzy logic
Problem-08
Consider the following reference set: {A, B, C, D, E, F, G}, and the fuzzy subsets
A= {(A|0), (B|0.3), (C|0.7), (D|1), (E|0), (F|0.2), (G|0.6)},
B= {(A|0.3), (B|1), (C|0.5), (D|0.8), (E|1), (F|0.5), (G|0.6)},
C= {(A|1), (B|0.5), (C|0.5), (D|0.2), (E|0), (F|0.2), (G|0.9)}.
Calculate the following:
1. A ∪ B
2. A ∩ B
3. A ∩ Bᶜ
4. (A ∩ Bᶜ) ∪ C
5. (A ∪ B)ᶜ ∩ Cᶜ
6. (A ∪ Aᶜ) ∩ A
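The standard (max/min/1−μ) operations these items rely on can be sketched directly from the membership grades given above:

```python
# Standard fuzzy operations over the reference set {A..G}, using the
# membership grades of the fuzzy subsets A and B from the problem statement.
U = ["A", "B", "C", "D", "E", "F", "G"]
mA = dict(zip(U, [0, 0.3, 0.7, 1, 0, 0.2, 0.6]))
mB = dict(zip(U, [0.3, 1, 0.5, 0.8, 1, 0.5, 0.6]))

union        = {u: max(mA[u], mB[u]) for u in U}  # A union B
intersection = {u: min(mA[u], mB[u]) for u in U}  # A intersection B
complementB  = {u: 1 - mB[u] for u in U}          # complement of B
```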
Problem-09
Prove the fuzzy DeMorgan laws:
1. X ∪ Y = (Xᶜ ∩ Yᶜ)ᶜ
2. X ∩ Y = (Xᶜ ∪ Yᶜ)ᶜ
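Before writing the formal proof, the laws can be sanity-checked pointwise with the standard operators (union = max, intersection = min, complement c(x) = 1 − x):

```python
# Pointwise numeric check of the fuzzy DeMorgan laws:
# max(x, y) == 1 - min(1 - x, 1 - y) and min(x, y) == 1 - max(1 - x, 1 - y).
grid = [i / 10 for i in range(11)]
ok = all(
    abs(max(x, y) - (1 - min(1 - x, 1 - y))) < 1e-12 and
    abs(min(x, y) - (1 - max(1 - x, 1 - y))) < 1e-12
    for x in grid for y in grid
)
print(ok)  # -> True
```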
Problem-10
Show that Nα(Nα(x)) = x for the generalized negation operator Nα(x) = (1 − x)/(1 + αx), α > −1, 0 ≤ x ≤ 1.
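If the negation operator is the Sugeno-style N_α(x) = (1 − x)/(1 + αx) (assumed here), the involution property can be sanity-checked numerically before proving it algebraically:

```python
# Involution check for the (assumed) Sugeno-style negation
# N_a(x) = (1 - x) / (1 + a*x).
def neg(x, a):
    return (1 - x) / (1 + a * x)

a = 2.0
xs = [i / 10 for i in range(11)]
assert all(abs(neg(neg(x, a), a) - x) < 1e-12 for x in xs)
```

Algebraically, substituting N_α(x) into itself gives x(1 + α)/(1 + α) = x, which is the proof the problem asks for.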
Practical information:
The submission deadline is strict. An extension (of up to 3 days) may be granted, but only after the instructor gives his approval, and this extension incurs a 10% penalty on the final grade of this Problem Set. Submission is done by email (to dkatsar@e-ce.uth.gr) of the solutions file in pdf format (ideally typeset in LaTeX). The subject of the message must be: CE418-Problem set 03:
AEM1-AEM2
Symbol key:
Does not require the use of a computer and/or the development of code.
Requires the use of the Web to find information or to conduct an experiment.
Requires the development of code in any programming language or in Matlab. The deliverable will contain:
The solution of the exercise
The source code of the implementation



