02. Saving the Model

Program description

Date: November 16, 2016

Description: This program builds a neural network with two hidden layers and, at the end, saves the trained model to an HDF5 (.h5) file.

Dataset: MNIST

1. Load the Keras modules

from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, Adam, RMSprop
from keras.utils import np_utils
Using TensorFlow backend.

2. Initialize variables

batch_size = 128
nb_classes = 10
nb_epoch = 20

3. Prepare the data

# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()

X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
60000 train samples
10000 test samples
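
The reshaping and scaling above can be checked on synthetic data, without downloading MNIST; the sketch below uses a fake batch of 6 images with the same per-image layout as the real dataset:

```python
import numpy as np

# Fake "images": 6 samples of 28x28 uint8 pixels, mimicking MNIST's layout.
fake = np.random.randint(0, 256, size=(6, 28, 28)).astype('uint8')

# Flatten each 28x28 image into a 784-dimensional row vector,
# then scale pixel values from [0, 255] into [0, 1].
flat = fake.reshape(6, 784).astype('float32')
flat /= 255

print(flat.shape)  # (6, 784)
```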

Convert the class labels

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
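
np_utils.to_categorical turns a vector of integer labels into a one-hot matrix. The same transform can be written in plain NumPy to see exactly what it produces (to_one_hot below is an illustrative helper, not part of Keras):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """One-hot encode an integer label vector, like np_utils.to_categorical."""
    out = np.zeros((len(labels), num_classes), dtype='float32')
    out[np.arange(len(labels)), labels] = 1.0
    return out

y = np.array([3, 0, 9])
Y = to_one_hot(y, 10)
print(Y.shape)        # (3, 10)
print(Y[0].argmax())  # 3
```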

4. Build the model

Build the network with Sequential()

model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Activation('softmax'))

Print the model summary

model.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
dense_1 (Dense)                  (None, 512)           401920      dense_input_1[0][0]              
____________________________________________________________________________________________________
activation_1 (Activation)        (None, 512)           0           dense_1[0][0]                    
____________________________________________________________________________________________________
dropout_1 (Dropout)              (None, 512)           0           activation_1[0][0]               
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 512)           262656      dropout_1[0][0]                  
____________________________________________________________________________________________________
activation_2 (Activation)        (None, 512)           0           dense_2[0][0]                    
____________________________________________________________________________________________________
dropout_2 (Dropout)              (None, 512)           0           activation_2[0][0]               
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 10)            5130        dropout_2[0][0]                  
____________________________________________________________________________________________________
activation_3 (Activation)        (None, 10)            0           dense_3[0][0]                    
====================================================================================================
Total params: 669706
____________________________________________________________________________________________________
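
The parameter counts in the summary can be verified by hand: a Dense layer with n_in inputs and n_out units holds an n_in x n_out weight matrix plus one bias per output unit:

```python
def dense_params(n_in, n_out):
    # weights (n_in * n_out) plus one bias per output unit
    return n_in * n_out + n_out

p1 = dense_params(784, 512)  # 401920, matches dense_1
p2 = dense_params(512, 512)  # 262656, matches dense_2
p3 = dense_params(512, 10)   # 5130, matches dense_3
print(p1 + p2 + p3)          # 669706, the reported total
```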

5. Train and evaluate

Compile the model

model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

Train the model

history = model.fit(X_train, Y_train,
                    batch_size=batch_size, nb_epoch=nb_epoch,
                    verbose=1, validation_data=(X_test, Y_test))
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 4s - loss: 0.2451 - acc: 0.9239 - val_loss: 0.1210 - val_acc: 0.9626
Epoch 2/20
60000/60000 [==============================] - 3s - loss: 0.1032 - acc: 0.9683 - val_loss: 0.0780 - val_acc: 0.9763
Epoch 3/20
60000/60000 [==============================] - 3s - loss: 0.0755 - acc: 0.9769 - val_loss: 0.0796 - val_acc: 0.9757
Epoch 4/20
60000/60000 [==============================] - 3s - loss: 0.0612 - acc: 0.9816 - val_loss: 0.0768 - val_acc: 0.9784
Epoch 5/20
60000/60000 [==============================] - 3s - loss: 0.0510 - acc: 0.9848 - val_loss: 0.0845 - val_acc: 0.9795
Epoch 6/20
60000/60000 [==============================] - 3s - loss: 0.0445 - acc: 0.9865 - val_loss: 0.0759 - val_acc: 0.9806
Epoch 7/20
60000/60000 [==============================] - 3s - loss: 0.0402 - acc: 0.9884 - val_loss: 0.0800 - val_acc: 0.9816
Epoch 8/20
60000/60000 [==============================] - 3s - loss: 0.0351 - acc: 0.9900 - val_loss: 0.0916 - val_acc: 0.9821
Epoch 9/20
60000/60000 [==============================] - 3s - loss: 0.0314 - acc: 0.9905 - val_loss: 0.0930 - val_acc: 0.9807
Epoch 10/20
60000/60000 [==============================] - 3s - loss: 0.0285 - acc: 0.9920 - val_loss: 0.0916 - val_acc: 0.9835
Epoch 11/20
60000/60000 [==============================] - 3s - loss: 0.0279 - acc: 0.9918 - val_loss: 0.0853 - val_acc: 0.9820
Epoch 12/20
60000/60000 [==============================] - 3s - loss: 0.0238 - acc: 0.9930 - val_loss: 0.0997 - val_acc: 0.9811
Epoch 13/20
60000/60000 [==============================] - 3s - loss: 0.0242 - acc: 0.9938 - val_loss: 0.1083 - val_acc: 0.9796
Epoch 14/20
60000/60000 [==============================] - 3s - loss: 0.0229 - acc: 0.9934 - val_loss: 0.1037 - val_acc: 0.9832
Epoch 15/20
60000/60000 [==============================] - 3s - loss: 0.0202 - acc: 0.9944 - val_loss: 0.1019 - val_acc: 0.9831
Epoch 16/20
60000/60000 [==============================] - 3s - loss: 0.0216 - acc: 0.9939 - val_loss: 0.1071 - val_acc: 0.9810
Epoch 17/20
60000/60000 [==============================] - 3s - loss: 0.0205 - acc: 0.9947 - val_loss: 0.1085 - val_acc: 0.9835
Epoch 18/20
60000/60000 [==============================] - 3s - loss: 0.0182 - acc: 0.9950 - val_loss: 0.1245 - val_acc: 0.9807
Epoch 19/20
60000/60000 [==============================] - 3s - loss: 0.0172 - acc: 0.9952 - val_loss: 0.1264 - val_acc: 0.9837
Epoch 20/20
60000/60000 [==============================] - 3s - loss: 0.0187 - acc: 0.9954 - val_loss: 0.1132 - val_acc: 0.9831

Evaluate the model

score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
Test score: 0.113199677604
Test accuracy: 0.9831

6. Save the model

model.save('mnist-mpl.h5')

At this point there is a model file named "mnist-mpl.h5" in your working directory.
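
To use a saved file later, keras.models.load_model restores the architecture, weights, and compile settings from the .h5 file in one call. A minimal round-trip sketch, using a tiny stand-in model and the hypothetical filename 'tiny.h5' so it runs without the MNIST training above:

```python
import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense

# Tiny stand-in model (2 inputs -> 3 classes); 'tiny.h5' is a hypothetical filename.
model = Sequential()
model.add(Dense(3, input_shape=(2,), activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd')
model.save('tiny.h5')

# load_model restores architecture, weights, and compile settings in one call.
restored = load_model('tiny.h5')

# The reloaded model should predict exactly what the original does.
x = np.random.rand(4, 2).astype('float32')
same = np.allclose(model.predict(x, verbose=0), restored.predict(x, verbose=0))
print(same)
```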
