13. Using Keras together with TensorFlow

Program description

Name: Keras as a simplified interface to TensorFlow

Date: November 17, 2016

Description: This program uses Keras to define the model and its interfaces conveniently, and then uses TensorFlow for training, so that the two work together.

Dataset: MNIST

Original article


1. Load the Keras modules

from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, Adam, RMSprop
from keras.utils import np_utils
from keras.objectives import categorical_crossentropy
Using TensorFlow backend.

Load the Keras backend and TensorFlow

import tensorflow as tf
from keras import backend as K
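
In the original article, a TensorFlow session is created explicitly and registered with Keras, so that Keras layers and native TensorFlow operations share one graph and session. A minimal sketch of that step (assuming a TF 1.x-style session API and that this Keras version exposes K.set_session):

# Create a TensorFlow session and hand it to Keras, so that any Keras
# layers built afterwards live in this session's graph.
sess = tf.Session()
K.set_session(sess)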

2. Initialize variables

batch_size = 128 
nb_classes = 10
nb_epoch = 20

3. Prepare the data

# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()

X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
60000 train samples
10000 test samples
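
A quick sanity check that the reshaping and scaling worked as intended (an optional, illustrative snippet):

# Each image is now a flat vector of 784 floats in [0, 1].
print(X_train.shape)                  # (60000, 784)
print(X_train.min(), X_train.max())   # 0.0 1.0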

Convert class labels to one-hot matrices

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
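
To see what the conversion does, compare the integer labels with the argmax of the corresponding one-hot rows (an illustrative check; the exact label values depend on the dataset ordering):

print(y_train[:3])                  # original integer labels
print(Y_train[:3].argmax(axis=1))   # recovered from the one-hot rows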

4. Build the model

Use Sequential()

model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Activation('softmax'))

Print the model summary

model.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
dense_1 (Dense)                  (None, 512)           401920      dense_input_1[0][0]              
____________________________________________________________________________________________________
activation_1 (Activation)        (None, 512)           0           dense_1[0][0]                    
____________________________________________________________________________________________________
dropout_1 (Dropout)              (None, 512)           0           activation_1[0][0]               
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 512)           262656      dropout_1[0][0]                  
____________________________________________________________________________________________________
activation_2 (Activation)        (None, 512)           0           dense_2[0][0]                    
____________________________________________________________________________________________________
dropout_2 (Dropout)              (None, 512)           0           activation_2[0][0]               
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 10)            5130        dropout_2[0][0]                  
____________________________________________________________________________________________________
activation_3 (Activation)        (None, 10)            0           dense_3[0][0]                    
====================================================================================================
Total params: 669706
____________________________________________________________________________________________________
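
The tf, K, and categorical_crossentropy imports above are not needed for model.fit below; they are what the original article uses to drive the same kind of layer stack directly from TensorFlow. A minimal sketch of that alternative training path, reusing the sess registered earlier (names such as img, labels, and train_step are illustrative; Dropout is omitted here so that K.learning_phase() does not need to be fed):

# Placeholders for a batch of flattened images and their one-hot labels.
img = tf.placeholder(tf.float32, shape=(None, 784))
labels = tf.placeholder(tf.float32, shape=(None, 10))

# Keras layers can be called directly on TensorFlow tensors.
x = Dense(512, activation='relu')(img)
x = Dense(512, activation='relu')(x)
preds = Dense(10, activation='softmax')(x)

# Keras loss on TensorFlow tensors, minimized by a TensorFlow optimizer.
loss = tf.reduce_mean(categorical_crossentropy(labels, preds))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

sess.run(tf.global_variables_initializer())
for i in range(100):
    start = i * batch_size
    batch_x = X_train[start:start + batch_size]
    batch_y = Y_train[start:start + batch_size]
    sess.run(train_step, feed_dict={img: batch_x, labels: batch_y})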

5. Training and evaluation

model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

Train the model

history = model.fit(X_train, Y_train,
                    batch_size=batch_size, nb_epoch=nb_epoch,
                    verbose=1, validation_data=(X_test, Y_test))
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 4s - loss: 0.2448 - acc: 0.9239 - val_loss: 0.1220 - val_acc: 0.9623
Epoch 2/20
60000/60000 [==============================] - 4s - loss: 0.1026 - acc: 0.9689 - val_loss: 0.0788 - val_acc: 0.9749
Epoch 3/20
60000/60000 [==============================] - 5s - loss: 0.0752 - acc: 0.9770 - val_loss: 0.0734 - val_acc: 0.9779
Epoch 4/20
60000/60000 [==============================] - 5s - loss: 0.0609 - acc: 0.9817 - val_loss: 0.0777 - val_acc: 0.9780
Epoch 5/20
60000/60000 [==============================] - 5s - loss: 0.0515 - acc: 0.9847 - val_loss: 0.0888 - val_acc: 0.9782
Epoch 6/20
60000/60000 [==============================] - 5s - loss: 0.0451 - acc: 0.9864 - val_loss: 0.0799 - val_acc: 0.9803
Epoch 7/20
60000/60000 [==============================] - 5s - loss: 0.0398 - acc: 0.9878 - val_loss: 0.0814 - val_acc: 0.9809
Epoch 8/20
60000/60000 [==============================] - 5s - loss: 0.0362 - acc: 0.9896 - val_loss: 0.0765 - val_acc: 0.9830
Epoch 9/20
60000/60000 [==============================] - 5s - loss: 0.0325 - acc: 0.9905 - val_loss: 0.0917 - val_acc: 0.9802
Epoch 10/20
60000/60000 [==============================] - 5s - loss: 0.0279 - acc: 0.9921 - val_loss: 0.0808 - val_acc: 0.9844
Epoch 11/20
60000/60000 [==============================] - 5s - loss: 0.0272 - acc: 0.9925 - val_loss: 0.0991 - val_acc: 0.9811
Epoch 12/20
60000/60000 [==============================] - 5s - loss: 0.0248 - acc: 0.9930 - val_loss: 0.0864 - val_acc: 0.9839
Epoch 13/20
60000/60000 [==============================] - 5s - loss: 0.0240 - acc: 0.9935 - val_loss: 0.1061 - val_acc: 0.9809
Epoch 14/20
60000/60000 [==============================] - 5s - loss: 0.0240 - acc: 0.9931 - val_loss: 0.1010 - val_acc: 0.9843
Epoch 15/20
60000/60000 [==============================] - 5s - loss: 0.0200 - acc: 0.9946 - val_loss: 0.1102 - val_acc: 0.9803
Epoch 16/20
60000/60000 [==============================] - 5s - loss: 0.0207 - acc: 0.9942 - val_loss: 0.1020 - val_acc: 0.9833
Epoch 17/20
60000/60000 [==============================] - 5s - loss: 0.0196 - acc: 0.9946 - val_loss: 0.1205 - val_acc: 0.9812
Epoch 18/20
60000/60000 [==============================] - 4s - loss: 0.0208 - acc: 0.9950 - val_loss: 0.1081 - val_acc: 0.9829
Epoch 19/20
60000/60000 [==============================] - 3s - loss: 0.0199 - acc: 0.9951 - val_loss: 0.1113 - val_acc: 0.9835
Epoch 20/20
60000/60000 [==============================] - 4s - loss: 0.0186 - acc: 0.9953 - val_loss: 0.1168 - val_acc: 0.9849
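
The History object returned by fit records the per-epoch metrics, which can be plotted to inspect how training and validation accuracy evolve (a sketch assuming matplotlib is installed):

import matplotlib.pyplot as plt

plt.plot(history.history['acc'], label='train acc')
plt.plot(history.history['val_acc'], label='val acc')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()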

Evaluate the model

score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
Test score: 0.11684127673
Test accuracy: 0.9849
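
To spot-check individual predictions, the trained model can be applied to a few test images (an illustrative usage example):

predicted = model.predict(X_test[:5]).argmax(axis=-1)
print('predicted:', predicted)
print('actual:   ', y_test[:5])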
