Keras-based MLP for MNIST Classification

Now let's build the same MLP network with Keras, the high-level library for TensorFlow. We keep all the parameters the same as in the TensorFlow example in this chapter; for example, the activation function for the hidden layers remains the ReLU function.

  1. Import the required modules from Keras:
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
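These imports target the standalone keras package. With more recent TensorFlow releases the same classes are also available under tensorflow.keras (where the SGD optimizer takes learning_rate rather than lr); an equivalent set of imports, assuming TensorFlow 2.x is installed, would be:

# tf.keras equivalents of the imports above (TensorFlow 2.x)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD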
  2. Define the hyperparameters (we assume that the dataset has already been loaded into the X_train, Y_train, X_test, and Y_test variables; see the loading sketch after this step):
num_layers = 2               # number of hidden layers
num_neurons = []             # neurons per hidden layer
for i in range(num_layers):
   num_neurons.append(256)
num_inputs = 784             # 28 x 28 pixels per MNIST image, as in the TensorFlow example
num_outputs = 10             # one class per digit, 0 to 9
learning_rate = 0.01
n_epochs = 50
batch_size = 100
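The recipe assumes the data is already loaded and preprocessed, as in the TensorFlow example earlier in the chapter (the training logs below show 55,000 samples, which corresponds to a loader that holds out a validation split). A minimal, self-contained sketch using the keras.datasets loader instead (which keeps all 60,000 training images) would look like this:

from keras.datasets import mnist
from keras.utils import to_categorical

# load MNIST, flatten the 28 x 28 images to 784-element vectors, scale to [0, 1]
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 784).astype('float32') / 255.0
X_test = X_test.reshape(-1, 784).astype('float32') / 255.0

# one-hot encode the labels for use with categorical_crossentropy
Y_train = to_categorical(Y_train, num_outputs)
Y_test = to_categorical(Y_test, num_outputs)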
  3. Create a sequential model:
model = Sequential()
  4. Add the first hidden layer. Only for the first hidden layer do we have to specify the shape of the input tensor:
model.add(Dense(units=num_neurons[0], activation='relu', 
    input_shape=(num_inputs,)))
  5. Add the second hidden layer:
model.add(Dense(units=num_neurons[1], activation='relu'))
  6. Add the output layer with the softmax activation function:
model.add(Dense(units=num_outputs, activation='softmax'))
  7. Print the model details:
model.summary()

We get the following output:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 256)               200960    
_________________________________________________________________
dense_2 (Dense)              (None, 256)               65792     
_________________________________________________________________
dense_3 (Dense)              (None, 10)                2570      
=================================================================
Total params: 269,322
Trainable params: 269,322
Non-trainable params: 0
_________________________________________________________________
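The parameter counts reported by summary() can be verified by hand: a Dense layer has inputs × units weights plus one bias per unit. A quick check:

# dense_1: 784 inputs feeding 256 units
print(784 * 256 + 256)    # 200960
# dense_2: 256 inputs feeding 256 units
print(256 * 256 + 256)    # 65792
# dense_3 (output): 256 inputs feeding 10 units
print(256 * 10 + 10)      # 2570, for a total of 269,322 trainable parameters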
  8. Compile the model with the SGD optimizer:
model.compile(loss='categorical_crossentropy',
   optimizer=SGD(lr=learning_rate),
   metrics=['accuracy'])
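Note that categorical_crossentropy expects one-hot encoded labels, as produced in the loading sketch above. If Y_train and Y_test were left as integer class indices instead, the sparse variant of the loss would be used; this alternative is shown only as a sketch, not the setup used here:

# alternative compilation when labels are integers 0-9 rather than one-hot vectors
model.compile(loss='sparse_categorical_crossentropy',
   optimizer=SGD(lr=learning_rate),
   metrics=['accuracy'])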
  9. Train the model:
model.fit(X_train, Y_train,
   batch_size=batch_size,
   epochs=n_epochs)

While training the model, we can observe the loss and accuracy for each training epoch:

Epoch 1/50
55000/55000 [========================] - 4s - loss: 1.1055 - acc: 0.7413     
Epoch 2/50
55000/55000 [========================] - 3s - loss: 0.4396 - acc: 0.8833     
Epoch 3/50
55000/55000 [========================] - 3s - loss: 0.3523 - acc: 0.9010     
Epoch 4/50
55000/55000 [========================] - 3s - loss: 0.3129 - acc: 0.9112     
Epoch 5/50
55000/55000 [========================] - 3s - loss: 0.2871 - acc: 0.9181     

--- Epoch 6 to 45 output removed for brevity ---     

Epoch 46/50
55000/55000 [========================] - 4s - loss: 0.0689 - acc: 0.9814     
Epoch 47/50
55000/55000 [========================] - 4s - loss: 0.0672 - acc: 0.9819     
Epoch 48/50
55000/55000 [========================] - 4s - loss: 0.0658 - acc: 0.9822     
Epoch 49/50
55000/55000 [========================] - 4s - loss: 0.0643 - acc: 0.9829     
Epoch 50/50
55000/55000 [========================] - 4s - loss: 0.0627 - acc: 0.9829
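The fit() call above reports only the training loss and accuracy. If we also want to track performance on held-out data after every epoch, Keras accepts it through the validation_data argument; a minimal sketch reusing the test set:

# also report validation loss and accuracy after each epoch
model.fit(X_train, Y_train,
   batch_size=batch_size,
   epochs=n_epochs,
   validation_data=(X_test, Y_test))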
  10. Evaluate the model and print the loss and accuracy:
score = model.evaluate(X_test, Y_test)
print('\n Test loss:', score[0])
print('Test accuracy:', score[1])

We get the following output:

Test loss: 0.089410082236
Test accuracy: 0.9727
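Beyond the aggregate score, model.predict() returns the per-class softmax probabilities, from which predicted digits can be read off with argmax; a small sketch for spot-checking a few test images:

import numpy as np

# predicted versus true digits for the first five test images
probs = model.predict(X_test[:5])
print(np.argmax(probs, axis=1))       # predicted classes
print(np.argmax(Y_test[:5], axis=1))  # true classes, decoded from one-hot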

The complete code for the MLP for MNIST classification with Keras is provided in the notebook ch-05_MLP.
