
TensorFlow Study Notes (1)


Since TensorFlow 2 was released, TensorFlow has become simpler and more convenient to use, but much of the code online is still written with TensorFlow 1. Looking up each unfamiliar function as I ran into it left only a shaky understanding, and things still felt somewhat unclear.

So I took another quick pass through the commonly used TensorFlow 1 APIs, and they now feel clearer than before. I'm writing these notes down to make reading TF1 code easier.

This TensorFlow tutorial is quite good: https://www.w3cschool.cn/tensorflow_python/

As for video courses, I personally find the Morvan (莫凡) series easy to follow.

Key features of TensorFlow:

  • Uses a graph to represent the computation.
  • Executes the graph in a context called a Session.
  • Uses tensors to represent data.
  • Maintains state through Variables.
  • Uses feed and fetch to supply values to, or retrieve data from, arbitrary operations.
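
As a minimal sketch of my own (not from the original post, all names here are mine) showing feed and fetch together: several ops are fetched in a single run() call while a placeholder is fed through feed_dict.

import tensorflow as tf

a = tf.placeholder(tf.float32)   # value supplied (fed) at run time
b = tf.constant(3.0)
add_op = tf.add(a, b)
mul_op = tf.multiply(a, b)

with tf.Session() as sess:
    # fetch the outputs of several ops in one run() call
    add_res, mul_res = sess.run([add_op, mul_op], feed_dict={a: 2.0})
    print(add_res, mul_res)   # 5.0 6.0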

Basic TensorFlow usage: creating a graph and running it in a session


  
import tensorflow as tf

# Create constant ops: m1 is a 1x2 matrix, m2 is a 2x1 matrix
m1 = tf.constant([[2, 2]])
m2 = tf.constant([[3],
                  [3]])
product = tf.matmul(m1, m2)
print(product)   # wrong! prints the Tensor object, not the result

# Launch the graph in a session and call sess.run() with 'product'
# as the argument to execute the matmul op.
# Method 1: explicit session
sess = tf.Session()
result = sess.run(product)
# As noted above, 'product' is the output of the matmul op; passing it
# to run() says we want to fetch that op's output.
print(result)    # 'result' is a numpy ndarray
sess.close()

# Method 2: session as a context manager
with tf.Session() as sess:
    result_ = sess.run(product)
    print(result_)

Using the CPU and GPU

If the machine has more than one available GPU, all GPUs other than the first are excluded from computation by default. To make TensorFlow use them, you must explicitly assign ops to them. A with tf.device(...) statement assigns operations to a specific CPU or GPU:


  
with tf.Session() as sess:
    with tf.device("/gpu:1"):
        matrix1 = tf.constant([[3., 3.]])
        matrix2 = tf.constant([[2.], [2.]])
        product = tf.matmul(matrix1, matrix2)
        ...

Devices are identified by strings. Currently supported devices include:

  • "/cpu:0": the machine's CPU.
  • "/gpu:0": the machine's first GPU, if any.
  • "/gpu:1": the machine's second GPU, and so on.

Variables and feed


  
import tensorflow as tf

input1 = tf.placeholder(dtype=tf.float32)
input2 = tf.placeholder(dtype=tf.float32)
output = tf.multiply(input1, input2)
with tf.Session() as sess:
    result = sess.run([output], feed_dict={input1: [7.], input2: [2.]})
    print(result)

import tensorflow as tf

# create a variable
var = tf.Variable(0)   # our first variable in the "global_variable" set
add_operation = tf.add(var, 1)
update_operation = tf.assign(var, add_operation)
with tf.Session() as sess:
    # once you define variables, you must initialize them like this
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(update_operation)
        print(sess.run(var))
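
As an aside (a sketch of my own, not from the original post), the add-then-assign pair above can be collapsed into a single op with tf.assign_add:

import tensorflow as tf

var = tf.Variable(0)
update_operation = tf.assign_add(var, 1)   # equivalent to assign(var, add(var, 1))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        print(sess.run(update_operation))  # prints 1, 2, 3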

Activation functions


  
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# fake data
x = np.linspace(-5, 5, 200)   # x data, shape (200,)

# the following are popular activation functions
y_relu = tf.nn.relu(x)
y_sigmoid = tf.nn.sigmoid(x)
y_tanh = tf.nn.tanh(x)
y_softplus = tf.nn.softplus(x)
# y_softmax = tf.nn.softmax(x)  # softmax is a special activation function: it outputs probabilities

sess = tf.Session()
y_relu, y_sigmoid, y_tanh, y_softplus = sess.run([y_relu, y_sigmoid, y_tanh, y_softplus])

# visualize these activation functions
plt.figure(1, figsize=(8, 6))

plt.subplot(221)
plt.plot(x, y_relu, c='red', label='relu')
plt.ylim((-1, 5))
plt.legend(loc='best')

plt.subplot(222)
plt.plot(x, y_sigmoid, c='red', label='sigmoid')
plt.ylim((-0.2, 1.2))
plt.legend(loc='best')

plt.subplot(223)
plt.plot(x, y_tanh, c='red', label='tanh')
plt.ylim((-1.2, 1.2))
plt.legend(loc='best')

plt.subplot(224)
plt.plot(x, y_softplus, c='red', label='softplus')
plt.ylim((-0.2, 6))
plt.legend(loc='best')

plt.show()

Building a simple neural network


  
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

tf.set_random_seed(1)
np.random.seed(1)

# fake data
x = np.linspace(-1, 1, 100)[:, np.newaxis]       # shape (100, 1)
noise = np.random.normal(0, 0.1, size=x.shape)
y = np.power(x, 2) + noise                       # shape (100, 1) + some noise

# plot data
plt.scatter(x, y)
plt.show()

tf_x = tf.placeholder(tf.float32, x.shape)   # input x
tf_y = tf.placeholder(tf.float32, y.shape)   # input y

# neural network layers
l1 = tf.layers.dense(tf_x, 10, tf.nn.relu)   # hidden layer
output = tf.layers.dense(l1, 1)              # output layer

loss = tf.losses.mean_squared_error(tf_y, output)   # compute cost
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5)
train_op = optimizer.minimize(loss)

sess = tf.Session()                          # control training and others
sess.run(tf.global_variables_initializer())  # initialize vars in graph

plt.ion()   # turn on interactive plotting
for step in range(100):
    # train and get the net's output
    _, l, pred = sess.run([train_op, loss, output], {tf_x: x, tf_y: y})
    if step % 5 == 0:
        # plot and show the learning process
        plt.cla()   # clear the current axes
        plt.scatter(x, y)
        plt.plot(x, pred, 'r-', lw=5)
        plt.text(0.5, 0, 'Loss=%.4f' % l, fontdict={'size': 20, 'color': 'red'})
        plt.pause(0.1)
plt.ioff()  # turn off interactive mode so the final figure stays open
plt.show()

Optimizers


  
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

tf.set_random_seed(1)
np.random.seed(1)

LR = 0.01
BATCH_SIZE = 32

# fake data
x = np.linspace(-1, 1, 100)[:, np.newaxis]       # shape (100, 1)
noise = np.random.normal(0, 0.1, size=x.shape)
y = np.power(x, 2) + noise                       # shape (100, 1) + some noise

# plot dataset
plt.scatter(x, y)
plt.show()

# default network
class Net:
    def __init__(self, opt, **kwargs):
        self.x = tf.placeholder(tf.float32, [None, 1])
        self.y = tf.placeholder(tf.float32, [None, 1])
        l = tf.layers.dense(self.x, 20, tf.nn.relu)
        out = tf.layers.dense(l, 1)
        self.loss = tf.losses.mean_squared_error(self.y, out)
        self.train = opt(LR, **kwargs).minimize(self.loss)

# different nets, one per optimizer
net_SGD = Net(tf.train.GradientDescentOptimizer)
net_Momentum = Net(tf.train.MomentumOptimizer, momentum=0.9)
net_RMSprop = Net(tf.train.RMSPropOptimizer)
net_Adam = Net(tf.train.AdamOptimizer)
nets = [net_SGD, net_Momentum, net_RMSprop, net_Adam]

sess = tf.Session()
sess.run(tf.global_variables_initializer())

losses_his = [[], [], [], []]   # record loss

# training
for step in range(300):                  # for each training step
    index = np.random.randint(0, x.shape[0], BATCH_SIZE)
    b_x = x[index]
    b_y = y[index]
    for net, l_his in zip(nets, losses_his):
        _, l = sess.run([net.train, net.loss], {net.x: b_x, net.y: b_y})
        l_his.append(l)                  # loss recorder

# plot loss history
labels = ['SGD', 'Momentum', 'RMSprop', 'Adam']
for i, l_his in enumerate(losses_his):
    plt.plot(l_his, label=labels[i])
plt.legend(loc='best')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.ylim((0, 0.2))
plt.show()

 

Saving and loading models


  
  1. """
  2. Know more, visit my Python tutorial page: https://morvanzhou.github.io/tutorials/
  3. My Youtube Channel: https://www.youtube.com/user/MorvanZhou
  4. Dependencies:
  5. tensorflow: 1.1.0
  6. matplotlib
  7. numpy
  8. """
  9. import tensorflow as tf
  10. import matplotlib.pyplot as plt
  11. import numpy as np
  12. tf.set_random_seed( 1)
  13. np.random.seed( 1)
  14. # fake data
  15. x = np.linspace( -1, 1, 100)[:, np.newaxis] # shape (100, 1)
  16. noise = np.random.normal( 0, 0.1, size=x.shape)
  17. y = np.power(x, 2) + noise # shape (100, 1) + some noise
  18. def save():
  19. print( 'This is save')
  20. # build neural network
  21. tf_x = tf.placeholder(tf.float32, x.shape) # input x
  22. tf_y = tf.placeholder(tf.float32, y.shape) # input y
  23. l = tf.layers.dense(tf_x, 10, tf.nn.relu) # hidden layer
  24. o = tf.layers.dense(l, 1) # output layer
  25. loss = tf.losses.mean_squared_error(tf_y, o) # compute cost
  26. train_op = tf.train.GradientDescentOptimizer(learning_rate= 0.5).minimize(loss)
  27. sess = tf.Session()
  28. sess.run(tf.global_variables_initializer()) # initialize var in graph
  29. saver = tf.train.Saver() # define a saver for saving and restoring
  30. for step in range( 100): # train
  31. sess.run(train_op, {tf_x: x, tf_y: y})
  32. saver.save(sess, './params', write_meta_graph= False) # meta_graph is not recommended
  33. # plotting
  34. pred, l = sess.run([o, loss], {tf_x: x, tf_y: y})
  35. plt.figure( 1, figsize=( 10, 5))
  36. plt.subplot( 121)
  37. plt.scatter(x, y)
  38. plt.plot(x, pred, 'r-', lw= 5)
  39. plt.text( -1, 1.2, 'Save Loss=%.4f' % l, fontdict={ 'size': 15, 'color': 'red'})
  40. def reload():
  41. print( 'This is reload')
  42. # build entire net again and restore
  43. tf_x = tf.placeholder(tf.float32, x.shape) # input x
  44. tf_y = tf.placeholder(tf.float32, y.shape) # input y
  45. l_ = tf.layers.dense(tf_x, 10, tf.nn.relu) # hidden layer
  46. o_ = tf.layers.dense(l_, 1) # output layer
  47. loss_ = tf.losses.mean_squared_error(tf_y, o_) # compute cost
  48. sess = tf.Session()
  49. # don't need to initialize variables, just restoring trained variables
  50. saver = tf.train.Saver() # define a saver for saving and restoring
  51. saver.restore(sess, './params')
  52. # plotting
  53. pred, l = sess.run([o_, loss_], {tf_x: x, tf_y: y})
  54. plt.subplot( 122)
  55. plt.scatter(x, y)
  56. plt.plot(x, pred, 'r-', lw= 5)
  57. plt.text( -1, 1.2, 'Reload Loss=%.4f' % l, fontdict={ 'size': 15, 'color': 'red'})
  58. plt.show()
  59. save()
  60. # destroy previous net
  61. tf.reset_default_graph()
  62. reload()

TensorBoard visualization

Reference: TensorBoard: Graph Visualization - TensorFlow official documentation, Chinese edition (pythontab.com)


  
  1. Create a writer and write the log file:
     writer = tf.summary.FileWriter('/path/to/logs', tf.get_default_graph())
  2. Close the writer to save the log file:
     writer.close()
  3. Run the visualization command to start the server:
     tensorboard --logdir /path/to/logs
  4. Open the visualization page by pointing a browser at the server's port:
     http://xxx.xxx.xxx.xxx:6006

The TensorBoard workflow (a full example follows this list):

  1. Add record nodes: tf.summary.scalar/image/histogram()
  2. Merge the record nodes: merged = tf.summary.merge_all()
  3. Run the merged node: summary = sess.run(merged) to obtain the summary result
  4. Instantiate a log writer: summary_writer = tf.summary.FileWriter(logdir, graph=sess.graph); passing graph at instantiation writes the current computation graph to the log
  5. Call the writer's summary_writer.add_summary(summary, global_step=i) method to write all the summaries to the log file
  6. Call the writer's summary_writer.close() method to flush to disk; otherwise it only flushes automatically every 120 s


  
  1. """
  2. Know more, visit my Python tutorial page: https://morvanzhou.github.io/tutorials/
  3. My Youtube Channel: https://www.youtube.com/user/MorvanZhou
  4. Dependencies:
  5. tensorflow: 1.1.0
  6. numpy
  7. """
  8. import tensorflow as tf
  9. import numpy as np
  10. tf.set_random_seed( 1)
  11. np.random.seed( 1)
  12. # fake data
  13. x = np.linspace( -1, 1, 100)[:, np.newaxis] # shape (100, 1)
  14. noise = np.random.normal( 0, 0.1, size=x.shape)
  15. y = np.power(x, 2) + noise # shape (100, 1) + some noise
  16. with tf.variable_scope( 'Inputs'): #用tf.variable_scope命名Inputs(名称),x,y属于Inputs层级下的节点
  17. tf_x = tf.placeholder(tf.float32, x.shape, name= 'x')
  18. tf_y = tf.placeholder(tf.float32, y.shape, name= 'y')
  19. with tf.variable_scope( 'Net'):
  20. l1 = tf.layers.dense(tf_x, 10, tf.nn.relu, name= 'hidden_layer')
  21. output = tf.layers.dense(l1, 1, name= 'output_layer')
  22. # add to histogram summary
  23. tf.summary.histogram( 'h_out', l1)
  24. tf.summary.histogram( 'pred', output)
  25. loss = tf.losses.mean_squared_error(tf_y, output, scope= 'loss')
  26. train_op = tf.train.GradientDescentOptimizer(learning_rate= 0.5).minimize(loss)
  27. tf.summary.scalar( 'loss', loss) # add loss to scalar summary
  28. sess = tf.Session()
  29. sess.run(tf.global_variables_initializer())
  30. writer = tf.summary.FileWriter( './log', sess.graph) # write to file
  31. merge_op = tf.summary.merge_all() # operation to merge all summary
  32. for step in range( 100):
  33. # train and net output
  34. _, result = sess.run([train_op, merge_op], {tf_x: x, tf_y: y})
  35. writer.add_summary(result, step)
  36. # Lastly, in your terminal or CMD, type this :
  37. # $ tensorboard --logdir path/to/log
  38. # open you google chrome, type the link shown on your terminal or CMD. (something like this: http://localhost:6006)

Note on node scopes: in TF2, use tf.name_scope('XX') to keep the graph visualization tidy; see the sketch below.
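
A minimal sketch of my own (not from the original post), assuming TensorFlow 2: ops created inside a tf.name_scope block are grouped under one collapsible node in TensorBoard's graph view (to actually export a graph you would additionally trace a tf.function).

import tensorflow as tf   # TensorFlow 2.x assumed

@tf.function
def model(x):
    with tf.name_scope('Net'):        # these ops collapse into a single 'Net' node
        h = tf.nn.relu(x * 3.0)
        return tf.reduce_sum(h)

print(model(tf.constant([1.0, -2.0])))   # relu gives [3., 0.], so the sum is 3.0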

 

Launching TensorBoard

Start TensorBoard with the following command:

tensorboard --logdir=/path/to/log-directory

The logdir argument points to the directory where the SummaryWriter serialized its data. If subdirectories of logdir contain data from other runs, TensorBoard displays all of those runs. Once TensorBoard is running, open localhost:6006 in your browser to view it.
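
For example (a sketch of my own, not from the original post), writing each run into its own subdirectory lets TensorBoard overlay the runs:

import tensorflow as tf

for run in range(2):
    tf.reset_default_graph()
    loss = tf.constant(1.0 / (run + 1))
    tf.summary.scalar('loss', loss)
    merged = tf.summary.merge_all()
    with tf.Session() as sess:
        # one subdirectory per run: ./log/run_0, ./log/run_1, ...
        writer = tf.summary.FileWriter('./log/run_%d' % run, sess.graph)
        writer.add_summary(sess.run(merged), global_step=0)
        writer.close()

# then: tensorboard --logdir=./log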

Gradient descent


  
  1. """
  2. Know more, visit my Python tutorial page: https://morvanzhou.github.io/tutorials/
  3. My Youtube Channel: https://www.youtube.com/user/MorvanZhou
  4. Dependencies:
  5. tensorflow: 1.1.0
  6. matplotlib
  7. numpy
  8. """
  9. import tensorflow as tf
  10. import numpy as np
  11. import matplotlib.pyplot as plt
  12. from mpl_toolkits.mplot3d import Axes3D
  13. LR = 0.1
  14. REAL_PARAMS = [ 1.2, 2.5]
  15. INIT_PARAMS = [[ 5, 4],
  16. [ 5, 1],
  17. [ 2, 4.5]][ 2]
  18. x = np.linspace( -1, 1, 200, dtype=np.float32) # x data
  19. # Test (1): Visualize a simple linear function with two parameters,
  20. # you can change LR to 1 to see the different pattern in gradient descent.
  21. # y_fun = lambda a, b: a * x + b
  22. # tf_y_fun = lambda a, b: a * x + b
  23. # Test (2): Using Tensorflow as a calibrating tool for empirical formula like following.
  24. # y_fun = lambda a, b: a * x**3 + b * x**2
  25. # tf_y_fun = lambda a, b: a * x**3 + b * x**2
  26. # Test (3): Most simplest two parameters and two layers Neural Net, and their local & global minimum,
  27. # you can try different INIT_PARAMS set to visualize the gradient descent.
  28. y_fun = lambda a, b: np.sin(b*np.cos(a*x))
  29. tf_y_fun = lambda a, b: tf.sin(b*tf.cos(a*x))
  30. noise = np.random.randn( 200)/ 10
  31. y = y_fun(*REAL_PARAMS) + noise # target
  32. # tensorflow graph
  33. a, b = [tf.Variable(initial_value=p, dtype=tf.float32) for p in INIT_PARAMS]
  34. pred = tf_y_fun(a, b)
  35. mse = tf.reduce_mean(tf.square(y-pred))
  36. train_op = tf.train.GradientDescentOptimizer(LR).minimize(mse)
  37. a_list, b_list, cost_list = [], [], []
  38. with tf.Session() as sess:
  39. sess.run(tf.global_variables_initializer())
  40. for t in range( 400):
  41. a_, b_, mse_ = sess.run([a, b, mse])
  42. a_list.append(a_); b_list.append(b_); cost_list.append(mse_) # record parameter changes
  43. result, _ = sess.run([pred, train_op]) # training
  44. # visualization codes:
  45. print( 'a=', a_, 'b=', b_)
  46. plt.figure( 1)
  47. plt.scatter(x, y, c= 'b') # plot data
  48. plt.plot(x, result, 'r-', lw= 2) # plot line fitting
  49. # 3D cost figure
  50. fig = plt.figure( 2); ax = Axes3D(fig)
  51. a3D, b3D = np.meshgrid(np.linspace( -2, 7, 30), np.linspace( -2, 7, 30)) # parameter space
  52. cost3D = np.array([np.mean(np.square(y_fun(a_, b_) - y)) for a_, b_ in zip(a3D.flatten(), b3D.flatten())]).reshape(a3D.shape)
  53. ax.plot_surface(a3D, b3D, cost3D, rstride= 1, cstride= 1, cmap=plt.get_cmap( 'rainbow'), alpha= 0.5)
  54. ax.scatter(a_list[ 0], b_list[ 0], zs=cost_list[ 0], s= 300, c= 'r') # initial parameter place
  55. ax.set_xlabel( 'a'); ax.set_ylabel( 'b')
  56. ax.plot(a_list, b_list, zs=cost_list, zdir= 'z', c= 'r', lw= 3) # plot 3D gradient descent
  57. plt.show()

 


Reposted from: https://blog.csdn.net/sereasuesue/article/details/116534048