Convolutional Neural Network Models
## Convolutional Neural Network (LeNet)

Model structure: a convolutional layer block followed by a fully connected layer block.

- Convolutional layer block: 2 convolutional layers, each paired with a max pooling layer. Since LeNet is an early CNN, each convolutional layer's output is passed through a sigmoid activation; nowadays ReLU is used more often.
- Fully connected layer block: its input is two-dimensional (mini-batch size × features). When the output of the convolutional layer block is passed to the fully connected layer block, each sample in the mini-batch is flattened.
- As the network deepens, LeNet's feature maps gradually shrink in height and width while the number of channels increases.

A code sketch of this structure follows the AlexNet notes below.

## Deep Convolutional Neural Network (AlexNet)

Model structure: 5 convolutional layers + 2 fully connected hidden layers + 1 fully connected output layer.

- Convolutional layers: the first two use 11×11 and 5×5 kernels respectively; the remaining three use 3×3 kernels. The first, second, and fifth convolutional layers are each followed by a 3×3 max pooling layer with stride 2.
- Fully connected layers: the two fully connected layers with 4096 outputs each carry nearly 1 GB of model parameters.
- Activation function: AlexNet uses the ReLU activation function. Compared with sigmoid, ReLU…
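Below is a minimal sketch of the LeNet layout described above. PyTorch is an assumption (the post shows no code), as are the 1×28×28 input shape, the 6/16 channel counts, and the 10-class output, which follow a common Fashion-MNIST-style setup.

```python
import torch
from torch import nn

# Minimal LeNet sketch (assumed 1x28x28 inputs, 10 output classes).
lenet = nn.Sequential(
    # Convolutional layer block: 2 conv layers, each with a sigmoid
    # activation and a 2x2 max pooling layer.
    nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.Sigmoid(),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Conv2d(6, 16, kernel_size=5), nn.Sigmoid(),
    nn.MaxPool2d(kernel_size=2, stride=2),
    # Fully connected layer block: flatten each sample in the mini-batch,
    # then apply three dense layers.
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Sigmoid(),
    nn.Linear(120, 84), nn.Sigmoid(),
    nn.Linear(84, 10),
)

x = torch.randn(1, 1, 28, 28)   # one sample in a mini-batch
print(lenet(x).shape)           # torch.Size([1, 10])
```

Passing a dummy input through the block shows the pattern noted above: the feature maps shrink from 28×28 to 5×5 while the channels grow from 1 to 16 before flattening.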
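A corresponding sketch of the AlexNet layout is below. The kernel sizes and pooling placement follow the notes above; the single-channel 224×224 input and 10-class output are assumptions (the original AlexNet takes 3-channel ImageNet crops and outputs 1000 classes).

```python
import torch
from torch import nn

# Minimal AlexNet sketch (assumed 1x224x224 inputs, 10 output classes).
alexnet = nn.Sequential(
    # Conv layers: 11x11, then 5x5, then three 3x3 kernels; the 1st, 2nd,
    # and 5th conv layers are each followed by 3x3 max pooling, stride 2.
    nn.Conv2d(1, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    # Fully connected part: two 4096-output hidden layers (the bulk of the
    # parameters), then the output layer.
    nn.Flatten(),
    nn.Linear(256 * 5 * 5, 4096), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(4096, 10),
)

x = torch.randn(1, 1, 224, 224)
print(alexnet(x).shape)         # torch.Size([1, 10])
```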