@mShuaiZhao
2018-01-04
CNN
2017.12
From light to dark, and from dark to light: the same vertical edge filter responds with opposite signs to the two kinds of transition.
Choosing the numbers in the filter
Learnable parameters
Treat the numbers in these filters as learnable parameters, and learn the filter you want through backpropagation.
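A minimal NumPy sketch of the ideas above (the helper name `correlate2d_valid` and the toy image are my own, not the course's code): the hand-designed 3×3 vertical edge filter gives a large positive response to a light-to-dark transition and a negative one to dark-to-light; in a ConvNet these nine numbers would instead be learned by backpropagation.

```python
import numpy as np

def correlate2d_valid(image, kernel):
    """Valid 2D cross-correlation (what deep learning calls 'convolution')."""
    n, f = image.shape[0], kernel.shape[0]
    out = np.zeros((n - f + 1, n - f + 1))
    for i in range(n - f + 1):
        for j in range(n - f + 1):
            out[i, j] = np.sum(image[i:i + f, j:j + f] * kernel)
    return out

# Hand-designed vertical edge filter; in a ConvNet these 9 numbers are learned.
vertical_edge = np.array([[1, 0, -1],
                          [1, 0, -1],
                          [1, 0, -1]])

# 6x6 image: bright (10) on the left, dark (0) on the right.
image = np.array([[10, 10, 10, 0, 0, 0]] * 6)

print(correlate2d_valid(image, vertical_edge))           # middle columns: +30
print(correlate2d_valid(image[:, ::-1], vertical_edge))  # dark-to-light: -30
```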
You do not want your image to shrink.
Pixels in the corners are used much less than pixels in the center, so convolving without padding throws away a lot of information near the edges of the image.
Valid and Same convolution
Valid: no padding, so an n×n image shrinks to (n-f+1)×(n-f+1).
Same: pad so that the image size is unchanged after convolving; with stride 1 this requires p = (f-1)/2, which is why f is usually odd.
summary of convolutions
the size of the output is $\left\lfloor \frac{n+2p-f}{s}+1 \right\rfloor \times \left\lfloor \frac{n+2p-f}{s}+1 \right\rfloor$ for an $n \times n$ input, an $f \times f$ filter, padding $p$, and stride $s$
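A small helper (the function name is my own) that implements the formula above, with the valid, same, and strided cases:

```python
import math

def conv_output_size(n, f, p=0, s=1):
    """Output side length for an n x n input, f x f filter, padding p, stride s."""
    return math.floor((n + 2 * p - f) / s) + 1

print(conv_output_size(6, 3))                # valid: 6 - 3 + 1 = 4
print(conv_output_size(6, 3, p=(3 - 1) // 2))  # same: p = (f-1)/2 keeps size 6
print(conv_output_size(7, 3, p=1, s=2))      # strided: floor((7+2-3)/2)+1 = 4
```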
convolution in the math textbook: before convolving, the filter is rotated by 180°. What deep learning calls convolution is technically cross-correlation, since we skip the flip.
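A quick check of the flip, assuming SciPy is available: textbook convolution equals cross-correlation with the filter rotated by 180°.

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

image = np.random.rand(6, 6)
kernel = np.random.rand(3, 3)

math_conv = convolve2d(image, kernel, mode='valid')              # flips the filter
dl_conv = correlate2d(image, np.rot90(kernel, 2), mode='valid')  # manual 180° flip

print(np.allclose(math_conv, dl_conv))  # True: DL "convolution" just skips the flip
```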
Combining multiple filters detects different features
e.g. vertical edges and horizontal edges
"depth" can also refer to the depth of the neural network, so we use "channels" for this dimension in the next videos.
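A shape-bookkeeping sketch (the function and loop structure are my own, not the course's code): convolving an n×n×n_c input with n_c' filters of size f×f×n_c stacks the results into n_c' output channels.

```python
import numpy as np

def conv_multi(image, filters):
    """image: (n, n, n_c); filters: (f, f, n_c, n_c_out) -> (n-f+1, n-f+1, n_c_out)."""
    n = image.shape[0]
    f, _, _, n_c_out = filters.shape
    out = np.zeros((n - f + 1, n - f + 1, n_c_out))
    for k in range(n_c_out):            # one output channel per filter
        for i in range(n - f + 1):
            for j in range(n - f + 1):
                out[i, j, k] = np.sum(image[i:i + f, j:j + f, :] * filters[:, :, :, k])
    return out

# e.g. a vertical-edge and a horizontal-edge filter -> 2 output channels
x = np.random.rand(6, 6, 3)
w = np.random.rand(3, 3, 3, 2)
print(conv_multi(x, w).shape)  # (4, 4, 2)
```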
Types of layers in a convolutional network
convolution
pooling
fully connected
ConvNets often also use pooling layers to reduce the size of the representation, to speed up computation, and to make some of the detected features a bit more robust.
So, what the max operation does is really to say: if this feature is detected anywhere in this region, keep a high number; but if the feature is not detected, say it doesn't exist in the upper right-hand quadrant, then the max of all those numbers is still quite small. That is the intuition behind max pooling. But I have to admit, the main reason people use max pooling is that it has been found in a lot of experiments to work well, and the intuition above, despite being often cited, may not be the real underlying reason max pooling works well in ConvNets.
no parameters to learn, only the hyperparameters f (filter size) and s (stride)
average pooling
max pooling is more common than average pooling
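A minimal sketch of both pooling types (the helper name is my own), illustrating that there is nothing to learn, only the window size and stride:

```python
import numpy as np

def pool2d(x, f=2, s=2, mode='max'):
    """Pool an (n, n) feature map with an f x f window and stride s."""
    m = (x.shape[0] - f) // s + 1
    out = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            window = x[i * s:i * s + f, j * s:j * s + f]
            out[i, j] = window.max() if mode == 'max' else window.mean()
    return out

x = np.array([[1, 3, 2, 1],
              [2, 9, 1, 1],
              [1, 3, 2, 3],
              [5, 6, 1, 2]])
print(pool2d(x, mode='max'))      # [[9. 2.] [6. 3.]]
print(pool2d(x, mode='average'))  # [[3.75 1.25] [3.75 2.  ]]
```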
Why convolutions need so few parameters:
parameter sharing: a feature detector useful in one part of the image is probably useful in another part
sparsity of connections: each output value depends only on a small number of inputs
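A back-of-the-envelope count of the effect; the 32×32×3 → 28×28×6 layer sizes are an assumption based on the course's "why convolutions" example:

```python
# 32x32x3 input -> 28x28x6 output
n_in = 32 * 32 * 3    # 3072 input values
n_out = 28 * 28 * 6   # 4704 output values

fc_params = n_in * n_out + n_out   # fully connected: weights + biases
conv_params = (5 * 5 * 3 + 1) * 6  # six 5x5x3 filters, each with a bias

print(fc_params, conv_params)  # 14455392 vs 456
```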