@devilloser
2017-07-05T01:45:31.000000Z
Word count: 2173
Reads: 798
tensorflow
Before printing a variable's value, the variables must be initialized:
sess.run(tf.initialize_all_variables())
print sess.run(variable)
(tf.initialize_all_variables is deprecated in later versions in favor of tf.global_variables_initializer().)
TensorFlow supports seven kinds of visualization (in TensorBoard):
scalars: accuracy, loss, and weight/bias changes during training
images: images recorded during training
audio: audio recorded during training
graphs: the dataflow graph, plus time and memory consumption
distributions: distributions of data recorded during training
histograms: histograms of data recorded during training
embeddings: projections of embedding (word) vectors
variable_scope: adds a prefix to both variable names and op names
name_scope: adds a prefix to op names only
The point is that when the same encapsulated module is called more than once, variables are reused instead of being created again:
with tf.variable_scope("foo") as scope:
    v = tf.get_variable("v", [1])
with tf.variable_scope("foo", reuse=True):
    v = tf.get_variable("v", [1])
For example:
with tf.variable_scope("foo"):
    with tf.name_scope("bar"):
        v = tf.get_variable("v", [1])
        b = tf.Variable(tf.zeros([1]), name='b')
        x = 1.0 + v
v.name="foo/v:0"
b.name="foo/bar/b:0"
x.op.name="foo/bar/add"
Activation functions:
tf.nn.relu
tf.nn.sigmoid
tf.nn.tanh
tf.nn.elu
tf.nn.bias_add
tf.nn.crelu
tf.nn.relu6
tf.nn.softplus
tf.nn.softsign
tf.nn.dropout
a = tf.constant([[-1.0, 2.0, 3.0, 4.0]])
with tf.Session() as sess:
    b = tf.nn.dropout(a, 0.5, noise_shape=[1, 4])
    print sess.run(b)
    b = tf.nn.dropout(a, 0.5, noise_shape=[1, 1])
    print sess.run(b)
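The math behind several of these activations, and dropout's keep-and-rescale rule, can be sketched in plain NumPy. This is an illustration of what the ops compute, not TensorFlow's implementation:

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 3.0, 8.0])

relu     = np.maximum(x, 0.0)                   # max(x, 0)
relu6    = np.minimum(np.maximum(x, 0.0), 6.0)  # min(max(x, 0), 6)
softplus = np.log1p(np.exp(x))                  # log(1 + e^x)
softsign = x / (1.0 + np.abs(x))                # x / (1 + |x|)

# dropout: keep each element with probability keep_prob and scale the
# survivors by 1/keep_prob, so the expected value is unchanged
rng = np.random.default_rng(0)
keep_prob = 0.5
mask = rng.random(x.shape) < keep_prob
dropped = np.where(mask, x / keep_prob, 0.0)

print(relu)   # negatives clipped to 0
print(relu6)  # additionally capped at 6
```

A noise_shape of [1, 1] in the tf.nn.dropout example above corresponds to broadcasting a single mask value over the whole row, so the entire row is either kept or zeroed together.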
tf.nn.convolution(input,filter,padding,strides=None,dilation_rate=None,name=None,data_format=None)
Computes sums of N-dimensional convolutions.
tf.nn.conv2d(input,filter,strides,padding,use_cudnn_on_gpu=None,data_format=None,name=None)
filter shape: [filter_height, filter_width, in_channels, out_channels]
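The relationship between the filter shape and the output shape can be checked with a naive NumPy implementation (stride 1, VALID padding; a sketch for intuition only, far slower than the real op):

```python
import numpy as np

def conv2d_valid(inp, filt):
    """Naive 2-D convolution, stride 1, VALID padding.
    inp:  [H, W, in_channels]
    filt: [fh, fw, in_channels, out_channels]"""
    H, W, C = inp.shape
    fh, fw, fc, out_c = filt.shape
    assert fc == C
    out = np.zeros((H - fh + 1, W - fw + 1, out_c))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = inp[i:i+fh, j:j+fw, :]  # [fh, fw, C]
            # sum over height, width, and input channels for every output channel
            out[i, j, :] = np.tensordot(patch, filt, axes=([0, 1, 2], [0, 1, 2]))
    return out

x = np.ones((5, 5, 3))
w = np.ones((3, 3, 3, 4))
y = conv2d_valid(x, w)
print(y.shape)     # (3, 3, 4): spatial dims shrink, channels become out_channels
print(y[0, 0, 0])  # 27.0 = 3*3*3 ones summed
```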
tf.nn.depthwise_conv2d(input,filter,strides,padding,rate=None,name=None,data_format=None)
Applies a separate filter independently to each input channel; with a filter of shape [fh, fw, in_channels, channel_multiplier], the output has in_channels * channel_multiplier channels.
tf.nn.separable_conv2d(input,depthwise_filter,pointwise_filter,strides,padding,rate=None,name=None,data_format=None)
depthwise_filter: same role as the filter in depthwise_conv2d
pointwise_filter: same role as the filter in conv2d (a 1x1 convolution that mixes channels)
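The channel arithmetic of the depthwise and separable variants can be illustrated in NumPy (shapes here are hypothetical; stride 1, VALID padding):

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 5
in_c, mult, out_c = 3, 2, 4
x = rng.standard_normal((H, W, in_c))
dw = rng.standard_normal((3, 3, in_c, mult))          # depthwise filter
pw = rng.standard_normal((1, 1, in_c * mult, out_c))  # pointwise filter

# depthwise: each input channel gets its own `mult` spatial filters,
# so the result has in_c * mult channels
oh, ow = H - 3 + 1, W - 3 + 1
dw_out = np.zeros((oh, ow, in_c * mult))
for c in range(in_c):
    for m in range(mult):
        for i in range(oh):
            for j in range(ow):
                dw_out[i, j, c * mult + m] = np.sum(
                    x[i:i+3, j:j+3, c] * dw[:, :, c, m])

# pointwise: a 1x1 convolution mixing in_c * mult channels down to out_c,
# which together with the depthwise step forms a separable convolution
sep_out = dw_out @ pw[0, 0]
print(dw_out.shape, sep_out.shape)  # (3, 3, 6) (3, 3, 4)
```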
tf.nn.atrous_conv2d(value,filters,rate,padding,name=None)
Atrous (dilated) convolution, i.e. convolution with holes.
tf.nn.conv2d_transpose(value,filter,output_shape,strides,padding='SAME',data_format='NHWC',name=None)
Transposed convolution (often called deconvolution).
tf.nn.avg_pool(value,ksize,strides,padding,data_format='NHWC',name=None)
tf.nn.max_pool(value,ksize,strides,padding,data_format='NHWC',name=None)
tf.nn.max_pool_with_argmax(input,ksize,strides,padding,Targmax=None,name=None)
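Both pooling ops reduce spatial windows to a single value; a minimal NumPy sketch on a single 2-D channel:

```python
import numpy as np

def pool2d(x, ksize, stride, mode="max"):
    """Naive 2-D pooling on an [H, W] array, VALID padding."""
    H, W = x.shape
    oh = (H - ksize) // stride + 1
    ow = (W - ksize) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = x[i*stride:i*stride+ksize, j*stride:j*stride+ksize]
            out[i, j] = window.max() if mode == "max" else window.mean()
    return out

x = np.arange(16.0).reshape(4, 4)
print(pool2d(x, 2, 2, "max"))  # [[ 5.  7.] [13. 15.]]
print(pool2d(x, 2, 2, "avg"))  # [[ 2.5  4.5] [10.5 12.5]]
```

max_pool_with_argmax additionally returns the flattened index of each window's maximum, which is useful for unpooling.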
tf.nn.sigmoid_cross_entropy_with_logits(logits,targets,name=None)
tf.nn.softmax(logits,dim=-1,name=None)
tf.nn.log_softmax(logits,dim=-1,name=None)
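What these three ops compute, written out in numerically stable NumPy form (a sketch of the math, not TF's kernels):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def log_softmax(logits):
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def sigmoid_xent(logits, targets):
    # stable form of -targets*log(sigmoid(x)) - (1-targets)*log(1-sigmoid(x))
    return (np.maximum(logits, 0) - logits * targets
            + np.log1p(np.exp(-np.abs(logits))))

logits = np.array([1.0, 2.0, 3.0])
p = softmax(logits)
print(p.sum())  # probabilities sum to 1
```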
Gradient-descent optimization methods:
BGD: batch gradient descent
SGD: stochastic gradient descent
Momentum
Nesterov Momentum
Adagrad
Adadelta
RMSprop
Adam
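The update rules behind plain SGD and Adam can be sketched on a toy 1-D problem, f(x) = x², in NumPy (illustrative only; in TensorFlow these correspond to tf.train.GradientDescentOptimizer and tf.train.AdamOptimizer):

```python
import numpy as np

def grad(x):  # gradient of f(x) = x^2
    return 2.0 * x

# plain SGD: step against the gradient at a fixed learning rate
x = 5.0
for _ in range(100):
    x -= 0.1 * grad(x)

# Adam: bias-corrected first- and second-moment estimates of the gradient
y, m, v = 5.0, 0.0, 0.0
b1, b2, lr, eps = 0.9, 0.999, 0.1, 1e-8
for t in range(1, 501):
    g = grad(y)
    m = b1 * m + (1 - b1) * g          # first moment (mean)
    v = b2 * v + (1 - b2) * g * g      # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    y -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(x, y)  # both close to the minimum at 0
```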