
MNIST Handwritten Digit Classification Based on TensorFlow

2020-06-17 11:06  qq_40579095  Python

This article walks through MNIST handwritten digit classification based on TensorFlow. The example code is explained in detail and should be a useful reference for interested readers.

This post shares the complete implementation of an MNIST handwritten digit classifier built with TensorFlow, for your reference.

The code is as follows (note that it uses the TensorFlow 1.x API: tf.placeholder, tf.contrib, and the tensorflow.examples.tutorials MNIST loader were all removed in TensorFlow 2.x):

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib.tensorboard.plugins import projector
import time

IMAGE_PIXELS = 28
hidden_unit = 100
output_nums = 10
learning_rate = 0.001
train_steps = 50000
batch_size = 500
test_data_size = 10000
# Log directory (change this to your own path)
logdir = 'D:/Develop_Software/Anaconda3/WorkDirectory/summary/mnist'
# Load the MNIST data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# Global training step counter
global_step = tf.Variable(0, name='global_step', trainable=False)

with tf.name_scope('input'):
    # Input data
    with tf.name_scope('x'):
        x = tf.placeholder(
            dtype=tf.float32, shape=(None, IMAGE_PIXELS * IMAGE_PIXELS))
    # Collect image summary data for x
    with tf.name_scope('x_summary'):
        shaped_image_batch = tf.reshape(
            tensor=x,
            shape=(-1, IMAGE_PIXELS, IMAGE_PIXELS, 1),
            name='shaped_image_batch')
        tf.summary.image(name='image_summary',
                         tensor=shaped_image_batch,
                         max_outputs=10)
    with tf.name_scope('y_'):
        y_ = tf.placeholder(dtype=tf.float32, shape=(None, 10))

with tf.name_scope('hidden_layer'):
    with tf.name_scope('hidden_arg'):
        # Hidden-layer parameters
        with tf.name_scope('hid_w'):
            hid_w = tf.Variable(
                tf.truncated_normal(shape=(IMAGE_PIXELS * IMAGE_PIXELS, hidden_unit)),
                name='hidden_w')
            # Summary op that records statistics of the hidden-layer weights
            tf.summary.histogram(name='weights', values=hid_w)
        with tf.name_scope('hid_b'):
            hid_b = tf.Variable(tf.zeros(shape=(1, hidden_unit), dtype=tf.float32),
                                name='hidden_b')
    # Hidden-layer output
    with tf.name_scope('relu'):
        hid_out = tf.nn.relu(tf.matmul(x, hid_w) + hid_b)

with tf.name_scope('softmax_layer'):
    with tf.name_scope('softmax_arg'):
        # Softmax-layer parameters
        with tf.name_scope('sm_w'):
            sm_w = tf.Variable(
                tf.truncated_normal(shape=(hidden_unit, output_nums)),
                name='softmax_w')
            # Summary op that records statistics of the softmax-layer weights
            tf.summary.histogram(name='weights', values=sm_w)
        with tf.name_scope('sm_b'):
            sm_b = tf.Variable(tf.zeros(shape=(1, output_nums), dtype=tf.float32),
                               name='softmax_b')
    # Softmax-layer output
    with tf.name_scope('softmax'):
        y = tf.nn.softmax(tf.matmul(hid_out, sm_w) + sm_b)
    # Clip the predicted probabilities (which lie in [0, 1]) away from 0 and 1
    # to avoid a meaningless log(0) in the cross-entropy below
    y_clip = tf.clip_by_value(y, 1.0e-10, 1 - 1.0e-5)

with tf.name_scope('cross_entropy'):
    # Cross-entropy cost function
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y_clip) + (1 - y_) * tf.log(1 - y_clip))
    # Summary op for the cross-entropy
    tf.summary.scalar(name='cross_entropy', tensor=cross_entropy)

with tf.name_scope('train'):
    # No synchronous training mechanism is used; train with the Adam optimizer
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    # Single-step training op
    train_op = optimizer.minimize(cross_entropy, global_step=global_step)

# Load the test data
test_image = mnist.test.images
test_label = mnist.test.labels
test_feed = {x: test_image, y_: test_label}

with tf.name_scope('accuracy'):
    prediction = tf.equal(tf.argmax(input=y, axis=1),
                          tf.argmax(input=y_, axis=1))
    accuracy = tf.reduce_mean(
        input_tensor=tf.cast(x=prediction, dtype=tf.float32))

# Create the embedding variable
embedding_var = tf.Variable(test_image, trainable=False, name='embedding')
saver = tf.train.Saver({'embedding': embedding_var})

# Create the metadata file and write the labels of the MNIST test images into it
def CreateMedaDataFile():
    with open(logdir + '/metadata.tsv', 'w') as f:
        label = np.nonzero(test_label)[1]
        for i in range(test_data_size):
            f.write('%d\n' % label[i])

# Create the projector configuration
def CreateProjectorConfig():
    config = projector.ProjectorConfig()
    embeddings = config.embeddings.add()
    embeddings.tensor_name = 'embedding:0'
    embeddings.metadata_path = logdir + '/metadata.tsv'
    projector.visualize_embeddings(writer, config)

# Merge all summary ops
merged = tf.summary.merge_all()
# Session configuration
sess_config = tf.ConfigProto(
    allow_soft_placement=True,
    log_device_placement=False)
# Create the session
with tf.Session(config=sess_config) as sess:
    # Create a FileWriter instance
    writer = tf.summary.FileWriter(logdir=logdir, graph=sess.graph)
    # Initialize global variables
    sess.run(tf.global_variables_initializer())
    time_begin = time.time()
    print('Training begin time: %f' % time_begin)
    while True:
        # Load a batch of training data
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        train_feed = {x: batch_x, y_: batch_y}
        loss, _, summary = sess.run([cross_entropy, train_op, merged], feed_dict=train_feed)
        step = global_step.eval()
        # Every 100 steps, log progress
        if step % 100 == 0:
            now = time.time()
            print('%f: global_step = %d, loss = %f' % (now, step, loss))
            # Write the summary data to the event file
            writer.add_summary(summary=summary, global_step=step)
        # Stop once the total number of training steps has been reached
        if step >= train_steps:
            break
    time_end = time.time()
    print('Training end time: %f' % time_end)
    print('Training time: %f' % (time_end - time_begin))
    # Evaluate the model's accuracy on the test set
    test_accuracy = sess.run(accuracy, feed_dict=test_feed)
    print('accuracy: %f' % test_accuracy)

    saver.save(sess=sess, save_path=logdir + '/embedding_var.ckpt')
    CreateMedaDataFile()
    CreateProjectorConfig()
    # Close the FileWriter
    writer.close()
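
Two brief follow-ups that are not part of the original post. First, once training has finished, the logged scalar, histogram, and image summaries, together with the embedding projector, can be inspected by pointing TensorBoard at the log directory (tensorboard --logdir=D:/Develop_Software/Anaconda3/WorkDirectory/summary/mnist). Second, below is a minimal usage sketch, assuming it is placed inside the same with tf.Session(...) block right after the accuracy evaluation, showing how a single test image could be classified with the trained graph:

# Usage sketch (not in the original article): classify one test image.
# Assumes it runs inside the training session above, after test_accuracy is printed,
# so that sess, x, y, test_image, and test_label are still in scope.
sample = test_image[0:1]  # one flattened 28x28 image, shape (1, 784)
predicted_digit = sess.run(tf.argmax(y, axis=1),  # index of the largest softmax probability
                           feed_dict={x: sample})
print('predicted digit: %d, true digit: %d'
      % (predicted_digit[0], np.argmax(test_label[0])))

In a longer script it would be cleaner to define the argmax op once while building the graph (next to the accuracy op) instead of creating it at prediction time, but for a one-off check inside the session the sketch above is sufficient.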

That is all for this article. I hope it helps with your studies, and I hope you will continue to support 服務(wù)器之家.

Original article: https://blog.csdn.net/qq_40579095/article/details/88804019
