This post shows how to continue training (fine-tune) an already-trained model in TensorFlow.
Training code
Task: with x = 3.0 and y = 100.0 and the formula x×W + b = y, find the optimal W and b.
```python
# -*- coding: utf-8 -*-
import tensorflow as tf

# Declare the placeholders x and y
x = tf.placeholder("float", shape=[None, 1])
y = tf.placeholder("float", [None, 1])

# Declare the variables
W = tf.Variable(tf.zeros([1, 1]), name='w')
b = tf.Variable(tf.zeros([1]), name='b')

# The operation
result = tf.matmul(x, W) + b

# Loss function
lost = tf.reduce_sum(tf.pow((result - y), 2))

# Optimizer
train_step = tf.train.GradientDescentOptimizer(0.0007).minimize(lost)

with tf.Session() as sess:
    # Initialize the variables
    sess.run(tf.global_variables_initializer())
    saver = tf.train.Saver(max_to_keep=3)

    # Fixed values for x and y
    x_s = [[3.0]]
    y_s = [[100.0]]

    step = 0
    while True:
        step += 1
        feed = {x: x_s, y: y_s}
        # Run one optimization step
        sess.run(train_step, feed_dict=feed)
        if step % 1000 == 0:
            print('step: {0}, loss: {1}'.format(step, sess.run(lost, feed_dict=feed)))
        # Stop once converged or once past 5000 steps, so that re-train-5000 is the last saved checkpoint
        if sess.run(lost, feed_dict=feed) < 1e-10 or step > 5e3:
            print('')
            # print('final loss is: {}'.format(sess.run(lost, feed_dict=feed)))
            print('final result of {0} = {1} (target: 100.0)'.format('x×W+b', 3.0 * sess.run(W) + sess.run(b)))
            print('')
            print("W saved in the model: %f" % sess.run(W))
            print("b saved in the model: %f" % sess.run(b))
            break
        saver.save(sess, "./save_model/re-train", global_step=step)  # save the model
```
After training completes, the checkpoint files are written to ./save_model/: a checkpoint index file plus, for each saved step, re-train-<step>.meta, re-train-<step>.index, and re-train-<step>.data-00000-of-00001.
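To check what actually landed in the checkpoint without restoring anything, you can list its variables; a minimal sketch using tf.train.latest_checkpoint and tf.train.list_variables:

```python
import tensorflow as tf

# Point at the newest checkpoint written by the training script above.
ckpt = tf.train.latest_checkpoint('./save_model')

# List every variable stored in the checkpoint together with its shape.
for name, shape in tf.train.list_variables(ckpt):
    print(name, shape)  # expected: w [1, 1] and b [1]
```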
Training output:
```
step: 1000, loss: 4.89526428282e-08
step: 2000, loss: 4.89526428282e-08
step: 3000, loss: 4.89526428282e-08
step: 4000, loss: 4.89526428282e-08
step: 5000, loss: 4.89526428282e-08

final result of x×W+b = [[99.99978]] (target: 100.0)

W saved in the model: 29.999931
b saved in the model: 9.999982
```
The W stored in the model is 29.999931 and b is 9.999982.
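One detail worth a line of algebra: the equation 3W + b = 100 has infinitely many solutions, so why does gradient descent single out W = 30, b = 10? Writing e = 3W + b − 100, the loss and its gradients are

$$L = e^2, \qquad \frac{\partial L}{\partial W} = 6e, \qquad \frac{\partial L}{\partial b} = 2e,$$

so each update moves W exactly three times as far as b. Starting from W = b = 0, the ratio W = 3b holds throughout training, and at convergence 3W + b = 10b = 100 gives b = 10 and W = 30, matching the saved values above.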
The following code restores the training state from the saved model and continues training.
Task: with x = 3.0 and y = 200.0 and the formula x×W + b = y, restore W and b from the previously trained model and continue training to find their new optimal values.
```python
# -*- coding: utf-8 -*-
import tensorflow as tf

# Declare the placeholders x and y
x = tf.placeholder("float", shape=[None, 1])
y = tf.placeholder("float", [None, 1])

with tf.Session() as sess:
    # Initialize the variables
    sess.run(tf.global_variables_initializer())
    # saver = tf.train.Saver(max_to_keep=3)
    saver = tf.train.import_meta_graph(r'./save_model/re-train-5000.meta')  # load the graph structure
    saver.restore(sess, tf.train.latest_checkpoint(r'./save_model'))        # restore the weights

    # Recover the variables from the saved model
    graph = tf.get_default_graph()
    W = graph.get_tensor_by_name("w:0")
    b = graph.get_tensor_by_name("b:0")
    print("W restored from the saved model: %f" % sess.run("w:0"))
    print("b restored from the saved model: %f" % sess.run("b:0"))

    # The operation
    result = tf.matmul(x, W) + b
    # Loss function
    lost = tf.reduce_sum(tf.pow((result - y), 2))
    # Optimizer
    train_step = tf.train.GradientDescentOptimizer(0.0007).minimize(lost)

    # Fixed values for x and y
    x_s = [[3.0]]
    y_s = [[200.0]]

    step = 0
    while True:
        step += 1
        feed = {x: x_s, y: y_s}
        # Run one optimization step
        sess.run(train_step, feed_dict=feed)
        if step % 1000 == 0:
            print('step: {0}, loss: {1}'.format(step, sess.run(lost, feed_dict=feed)))
        # Stop once converged or once past 5000 steps
        if sess.run(lost, feed_dict=feed) < 1e-10 or step > 5e3:
            print('')
            # print('final loss is: {}'.format(sess.run(lost, feed_dict=feed)))
            print('final result of {0} = {1} (target: 200.0)'.format('x×W+b', 3.0 * sess.run(W) + sess.run(b)))
            print("W saved in the model: %f" % sess.run(W))
            print("b saved in the model: %f" % sess.run(b))
            break
        # save the model; note the path is ./save_mode, a directory separate from the restored ./save_model
        saver.save(sess, "./save_mode/re-train", global_step=step)
```
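Two details in this script are easy to trip over: the tf.global_variables_initializer() call runs while the default graph is still empty, so it initializes nothing and the values of w and b come entirely from saver.restore; and GradientDescentOptimizer creates no slot variables, so the freshly built train_step needs no extra initialization (an optimizer with state, such as Adam or Momentum, would).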
Training output:
```
W restored from the saved model: 29.999931
b restored from the saved model: 9.999982
step: 1000, loss: 1.95810571313e-07
step: 2000, loss: 1.95810571313e-07
step: 3000, loss: 1.95810571313e-07
step: 4000, loss: 1.95810571313e-07
step: 5000, loss: 1.95810571313e-07

final result of x×W+b = [[199.99956]] (target: 200.0)
W saved in the model: 59.999866
b saved in the model: 19.999958
```
The restored W is 29.999931 and b is 9.999982, identical to the values the first script saved, which confirms the checkpoint loaded correctly.
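As an aside, import_meta_graph is not the only way to resume. If the script rebuilds exactly the same graph in code, a plain tf.train.Saver matches variables to checkpoint entries by name, with no .meta file involved. A minimal sketch, assuming the variable names 'w' and 'b' match the checkpoint:

```python
# -*- coding: utf-8 -*-
import tensorflow as tf

# Rebuild the same variables; names must match what the checkpoint stores.
W = tf.Variable(tf.zeros([1, 1]), name='w')
b = tf.Variable(tf.zeros([1]), name='b')

saver = tf.train.Saver()  # matches variables to checkpoint entries by name
with tf.Session() as sess:
    # No initializer needed: restore overwrites the variable values.
    saver.restore(sess, tf.train.latest_checkpoint('./save_model'))
    print("W: %f, b: %f" % (sess.run(W), sess.run(b)))
```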
Summary
To train a model from scratch, create a saver with tf.train.Saver and call its save method during or after training to write checkpoints to disk:
```python
saver = tf.train.Saver(max_to_keep=3)
# ...
saver.save(sess, "./save_model/re-train", global_step=step)  # save the model
```
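With max_to_keep=3 the saver retains only the three most recent checkpoints, deleting older ones automatically; global_step=step appends the step number to the filename, which is where names like re-train-5000 come from.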
To continue training on top of an already-trained model (fine-tuning), load the graph structure with tf.train.import_meta_graph, restore the weights with restore, and save with the same save method as before:
```python
saver = tf.train.import_meta_graph(r'./save_model/re-train-10050.meta')  # load the graph structure
saver.restore(sess, tf.train.latest_checkpoint(r'./save_model'))         # restore the weights
saver.save(sess, "./save_mode/re-train", global_step=step)               # save the fine-tuned model
```
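import_meta_graph returns an ordinary tf.train.Saver reconstructed from the graph stored in the .meta file, which is why the same object can first restore and later save; tf.train.latest_checkpoint reads the checkpoint index file in the directory and returns the path of the most recent save.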
Note: in some cases (as here) you also need to pull individual tensors back out of the restored graph:
```python
# Recover the variables from the saved model
graph = tf.get_default_graph()
W = graph.get_tensor_by_name("w:0")
b = graph.get_tensor_by_name("b:0")
```
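If you are not sure what the tensors in an imported graph are called, one quick way (an illustrative snippet, run after import_meta_graph) is to print every operation name and look for the names given at variable creation time:

```python
# Print all operation names in the default graph; the variables created
# as name='w' and name='b' appear here, and their output tensors are
# addressed as "w:0" and "b:0".
for op in tf.get_default_graph().get_operations():
    print(op.name)
```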
Original post: https://blog.csdn.net/dcrmg/article/details/83031488