Skip-gram model with Keras
The flow of the embedding model in Keras remains the same as in TensorFlow:
- Create the network architecture in a Keras functional or sequential model
- Feed true and fake pairs of target and context words to the network
- Look up the word vectors for the target and context words
- Take the dot product of the word vectors to get a similarity score
- Pass the similarity score through a sigmoid layer to get the output as a true or fake label for the pair
Now let's implement these steps with the Keras functional API:
- Import the required libraries:
import numpy as np
import tensorflow as tf
import keras
from keras.models import Model
from keras.layers import Input, Dense, Reshape, Dot
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
Reset the graph to clear any side effects from previous runs in the Jupyter Notebook:
# reset the jupyter buffers
tf.reset_default_graph()
keras.backend.clear_session()
- Create a validation set that we will use to print the similar words our model has found at the end of training:
valid_size = 8
# pick eight random word IDs from the first 80 IDs in the vocabulary
x_valid = np.random.choice(valid_size * 10, valid_size, replace=False)
print('valid: ', x_valid)
- Define the hyperparameters:
batch_size = 1024
embedding_size = 512
# kept from the TensorFlow example; skipgrams() below draws its own
# negative samples (one per true pair by default)
n_negative_samples = 64
ptb.skip_window = 2
- Create a sampling table of size equal to the vocabulary length, using the make_sampling_table() function from keras.preprocessing.sequence. This table holds rank-based sampling probabilities (assuming a Zipf-like word frequency distribution), so that very frequent words are sampled less often. Next, generate the pairs of context and target words, along with labels indicating whether they are true or fake pairs, using the skipgrams() function from keras.preprocessing.sequence:
sample_table = sequence.make_sampling_table(ptb.vocab_len)
pairs, labels = sequence.skipgrams(ptb.part['train'],
                                   ptb.vocab_len,
                                   window_size=ptb.skip_window,
                                   sampling_table=sample_table)
- Let's print some of the fake and true pairs generated with the following code:
print('The skip-gram pairs : target,context')
for i in range(5 * ptb.skip_window):
    print(['{} {}'.format(id, ptb.id2word[id])
           for id in pairs[i]], ':', labels[i])
The pairs look as follows (a label of 1 marks a true pair; 0 marks a fake pair whose context word was sampled at random):
The skip-gram pairs : target,context
['547 trying', '5 to'] : 1
['4845 bargain', '2 <eos>'] : 1
['1705 election', '198 during'] : 1
['4704 flows', '8117 gun'] : 0
['13 is', '37 company'] : 1
['625 above', '132 three'] : 1
['5768 pessimistic', '1934 immediate'] : 0
['637 china', '2 <eos>'] : 1
['258 five', '1345 pence'] : 1
['1956 chrysler', '8928 exercises'] : 0
- Split the target and context words out of the pairs generated above so they can be fed to the model, and convert them into two-dimensional arrays:
x, y = zip(*pairs)
x = np.array(x, dtype=np.int32)
x = dsu.to2d(x, unit_axis=1)
y = np.array(y, dtype=np.int32)
y = dsu.to2d(y, unit_axis=1)
labels = np.array(labels, dtype=np.int32)
labels = dsu.to2d(labels, unit_axis=1)
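Here, dsu refers to the datasetslib utility module used throughout the book. If you don't have it at hand, a minimal NumPy stand-in for to2d() (its behavior is an assumption, inferred from how it is used here) could look like this:

import numpy as np

def to2d(a, unit_axis=1):
    # hypothetical stand-in for dsu.to2d: add a unit axis to a 1-D array,
    # producing a column vector (n, 1) or a row vector (1, n)
    a = np.asarray(a)
    return a.reshape(-1, 1) if unit_axis == 1 else a.reshape(1, -1)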
- Define the architecture of the network. As we discussed, the target and context words must be fed into the network, and their vectors need to be looked up from an embedding layer. Hence, first we define the input, embedding, and reshape layers for the target and context words respectively:
# build the target word model
target_in = Input(shape=(1,), name='target_in')
target = Embedding(ptb.vocab_len, embedding_size, input_length=1,
                   name='target_em')(target_in)
target = Reshape((embedding_size, 1), name='target_re')(target)

# build the context word model
context_in = Input((1,), name='context_in')
context = Embedding(ptb.vocab_len, embedding_size, input_length=1,
                    name='context_em')(context_in)
context = Reshape((embedding_size, 1), name='context_re')(context)
- Next, take the dot product of these two branches and feed it into a sigmoid layer to produce the output label. Dot with axes=1 contracts the 512-dimensional embedding axis of the two (batch, 512, 1) tensors, yielding a single similarity score per pair, which the Reshape then flattens to shape (batch, 1):
# take the dot product of the two branches to score similarity,
# then add a sigmoid layer to classify the pair as true or fake
output = Dot(axes=1, name='output_dot')([target, context])
output = Reshape((1,), name='output_re')(output)
output = Dense(1, activation='sigmoid', name='output_sig')(output)
- Build the functional model from the input and output layers we just created. Since each pair is labeled as 1 (true) or 0 (fake), this is a binary classification task, so we compile with the binary cross-entropy loss:
# create the functional model for finding word vectors
model = Model(inputs=[target_in, context_in], outputs=output)
model.compile(loss='binary_crossentropy', optimizer='adam')
- Additionally, build a model that predicts the similarity of a given input target word to all the words. Because it is built from the same target and context tensors, it shares the embedding weights that are learned when model is trained:
# reuse the two branches and take a normalized dot product
# to compute cosine similarity
similarity = Dot(axes=0, normalize=True,
                 name='sim_dot')([target, context])
similarity_model = Model(inputs=[target_in, context_in],
                         outputs=similarity)
Let's print the model summary with model.summary():
__________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==========================================================================
target_in (InputLayer) (None, 1) 0
__________________________________________________________________________
context_in (InputLayer) (None, 1) 0
__________________________________________________________________________
target_em (Embedding) (None, 1, 512) 5120000 target_in[0][0]
__________________________________________________________________________
context_em (Embedding) (None, 1, 512) 5120000 context_in[0][0]
__________________________________________________________________________
target_re (Reshape) (None, 512, 1) 0 target_em[0][0]
__________________________________________________________________________
context_re (Reshape) (None, 512, 1) 0 context_em[0][0]
__________________________________________________________________________
output_dot (Dot) (None, 1, 1) 0 target_re[0][0]
context_re[0][0]
__________________________________________________________________________
output_re (Reshape) (None, 1) 0 output_dot[0][0]
__________________________________________________________________________
output_sig (Dense) (None, 1) 2 output_re[0][0]
==========================================================================
Total params: 10,240,002
Trainable params: 10,240,002
Non-trainable params: 0
__________________________________________________________________________
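The parameter count is easy to verify: each of the two embedding layers holds vocab_len × embedding_size = 10,000 × 512 = 5,120,000 weights (the PTB vocabulary size times the embedding size), and the output_sig dense layer adds one weight plus one bias, for 10,240,002 parameters in total.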
- Next, train the model. We train it for only 5 epochs, but you should try more epochs, at least 1,000 or 10,000. Remember that this will take several hours, since this is not the most optimized code. You are welcome to optimize it further using tips and tricks from this book and other sources.
n_epochs = 5
batch_size = 1024
model.fit([x, y], labels, batch_size=batch_size, epochs=n_epochs)
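Since the longer runs suggested above can take hours, it may be worth saving the weights between epochs. A minimal sketch using Keras's standard ModelCheckpoint callback (the file name is an arbitrary choice, not from the original code):

from keras.callbacks import ModelCheckpoint

# save the weights after every epoch; the file name is arbitrary
checkpoint = ModelCheckpoint('skipgram_weights.h5', save_weights_only=True)
model.fit([x, y], labels, batch_size=batch_size, epochs=n_epochs,
          callbacks=[checkpoint])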
Let's print the similarity of words based on the word vectors discovered by this model:
# print the closest words to the validation set at the end of training
top_k = 5
y_val = np.arange(ptb.vocab_len, dtype=np.int32)
y_val = dsu.to2d(y_val, unit_axis=1)
for i in range(valid_size):
    x_val = np.full(shape=(ptb.vocab_len, 1), fill_value=x_valid[i],
                    dtype=np.int32)
    similarity_scores = similarity_model.predict([x_val, y_val])
    similarity_scores = similarity_scores.flatten()
    similar_words = (-similarity_scores).argsort()[1:top_k + 1]
    similar_str = 'Similar to {0:}:'.format(ptb.id2word[x_valid[i]])
    for k in range(top_k):
        similar_str = '{0:} {1:},'.format(similar_str,
                                          ptb.id2word[similar_words[k]])
    print(similar_str)
We get the following output:
Similar to we: rake, kia, sim, ssangyong, memotec,
Similar to been: nahb, sim, rake, punts, rubens,
Similar to also: photography, snack-food, rubens, nahb, ssangyong,
Similar to of: isi, rake, memotec, kia, mlx,
Similar to last: rubens, punts, memotec, sim, photography,
Similar to u.s.: mlx, memotec, punts, rubens, kia,
Similar to an: memotec, isi, ssangyong, rake, sim,
Similar to trading: rake, rubens, swapo, mlx, nahb,
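These neighbors look essentially random, as you would expect after only 5 epochs. As an alternative way to inspect the trained vectors, you can also pull the embedding matrix out of the model and compute cosine similarities directly in NumPy; a minimal sketch (the closest() helper is hypothetical, not part of the book's code):

import numpy as np

# weights of the target embedding layer: shape (vocab_len, embedding_size)
emb = model.get_layer('target_em').get_weights()[0]
# L2-normalize the rows so that dot products become cosine similarities
emb_norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)

def closest(word_id, k=5):
    # hypothetical helper: IDs of the k words most similar to word_id
    scores = emb_norm @ emb_norm[word_id]
    return (-scores).argsort()[1:k + 1]  # [1:] skips the word itself

print([ptb.id2word[i] for i in closest(x_valid[0])])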
So far, we have seen how to create word vectors or embeddings with TensorFlow and its high-level library Keras. Now let's look at how to use TensorFlow and Keras to learn models and apply them to predictions in some NLP-related tasks.