ConvNets for CIFAR10 with Keras

Let us repeat the LeNet CNN model construction and training with the CIFAR10 data in Keras. We keep the architecture the same as in the previous examples so that the concepts remain easy to explain. In Keras, the dropout layer is added as follows:

model.add(Dropout(0.2))

The complete code in Keras for the CIFAR10 CNN model is provided in the notebook ch-09b_CNN_CIFAR10_TF_and_Keras.
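
From the parameter counts and output shapes in the summary that follows, the convolutional layers use 4 x 4 kernels with 'same' padding, 2 x 2 max-pooling, and a 1,024-unit dense layer before the 10-class output. The sketch below is a reconstruction consistent with that summary; the activations and exact argument names are assumptions, the dropout rate of 0.2 is taken from the snippet above and applied to all three dropout layers, and the notebook mentioned above contains the authoritative code.

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
# Conv block 1: 32 filters, 4x4 kernel, 'same' padding -> (32, 32, 32), 1,568 params
model.add(Conv2D(filters=32, kernel_size=(4, 4), padding='same',
                 activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
# Conv block 2: 64 filters, 4x4 kernel, 'same' padding -> (16, 16, 64), 32,832 params
model.add(Conv2D(filters=64, kernel_size=(4, 4), padding='same',
                 activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
# Classifier head: flatten 8*8*64 = 4096 features -> 1024 units -> 10 classes
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
model.summary()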

On running the model, we get the following model description:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 32, 32, 32)        1568      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 16, 16, 32)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 16, 16, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 16, 16, 64)        32832     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 8, 8, 64)          0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 8, 8, 64)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 4096)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 1024)              4195328   
_________________________________________________________________
dropout_3 (Dropout)          (None, 1024)              0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                10250     
=================================================================
Total params: 4,239,978
Trainable params: 4,239,978
Non-trainable params: 0
_________________________________________________________________
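
The model is then compiled, trained, and evaluated in the usual Keras fashion. The following is a minimal sketch under stated assumptions: the loss function, optimizer, and batch size are not shown in this section and are placeholders here, while the epoch count of 10 matches the training log below; the notebook contains the exact settings.

from keras.datasets import cifar10
from keras.utils import to_categorical
from keras.optimizers import RMSprop

# Load and normalize CIFAR10, and one-hot encode the 10 class labels
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# The optimizer and batch size below are assumptions for this sketch
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=100, epochs=10)

score = model.evaluate(x_test, y_test)
print('Test loss:', score[0])
print('Test accuracy:', score[1])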

We get the following training and evaluation results:

Epoch 1/10
50000/50000 [====================] - 191s - loss: 1.5847 - acc: 0.4364   
Epoch 2/10
50000/50000 [====================] - 202s - loss: 1.1491 - acc: 0.5973   
Epoch 3/10
50000/50000 [====================] - 223s - loss: 0.9838 - acc: 0.6582   
Epoch 4/10
50000/50000 [====================] - 223s - loss: 0.8612 - acc: 0.7009   
Epoch 5/10
50000/50000 [====================] - 224s - loss: 0.7564 - acc: 0.7394   
Epoch 6/10
50000/50000 [====================] - 217s - loss: 0.6690 - acc: 0.7710   
Epoch 7/10
50000/50000 [====================] - 222s - loss: 0.5925 - acc: 0.7945   
Epoch 8/10
50000/50000 [====================] - 221s - loss: 0.5263 - acc: 0.8191   
Epoch 9/10
50000/50000 [====================] - 237s - loss: 0.4692 - acc: 0.8387   
Epoch 10/10
50000/50000 [====================] - 230s - loss: 0.4320 - acc: 0.8528   
Test loss: 0.849927025414
Test accuracy: 0.7414

Once again, we leave it as a challenge for the reader to explore and try different variations of the LeNet architecture and hyperparameters to achieve better accuracy.
