Practical use of next_batch in TensorFlow


This article walks through concrete uses of next_batch in TensorFlow, shared here for reference; the details are as follows:


Several different next_batch implementations are given below. This post only annotates the code snippets, so that they can be looked up later:

def next_batch(self, batch_size, fake_data=False):
  """Return the next `batch_size` examples from this data set."""
  if fake_data:
    fake_image = [1] * 784
    if self.one_hot:
      fake_label = [1] + [0] * 9
    else:
      fake_label = 0
    return [fake_image for _ in xrange(batch_size)], [
        fake_label for _ in xrange(batch_size)
    ]
  start = self._index_in_epoch
  self._index_in_epoch += batch_size
  if self._index_in_epoch > self._num_examples:  # the running index has passed the number of examples, so start a new pass over the data
    # Finished epoch
    self._epochs_completed += 1
    # Shuffle the data
    perm = numpy.arange(self._num_examples)  # arange creates the index array [0, 1, ..., num_examples-1]
    numpy.random.shuffle(perm)  # shuffle the indices in place
    self._images = self._images[perm]
    self._labels = self._labels[perm]
    # Start next epoch
    start = 0
    self._index_in_epoch = batch_size
    assert batch_size <= self._num_examples
  end = self._index_in_epoch
  return self._images[start:end], self._labels[start:end]

This code is taken from mnist.py. Starting at the line `start = self._index_in_epoch`: `_index_in_epoch - 1` is the index of the last image served in the previous batch, so the current batch starts at index `_index_in_epoch` and runs up to `_index_in_epoch + batch_size`. If the updated `_index_in_epoch` is larger than the number of images in the corpus, the remaining images cannot fill a whole batch, and one full pass over the corpus has been completed; the images are therefore shuffled and a new epoch of batches begins.
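
For context, this method belongs to the DataSet class used by the TF 1.x MNIST tutorial, so a typical call looks like the following minimal sketch (assuming a TensorFlow 1.x install that still ships `tensorflow.examples.tutorials.mnist`):

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
for step in range(100):
  batch_xs, batch_ys = mnist.train.next_batch(100)  # 100 images and their labels per call
  # feed batch_xs / batch_ys into the graph via feed_dict here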

import numpy as np


def ptb_iterator(raw_data, batch_size, num_steps):
  """Iterate on the raw PTB data.

  This generates batch_size pointers into the raw PTB data, and allows
  minibatch iteration along these pointers.

  Args:
    raw_data: one of the raw data outputs from ptb_raw_data.
    batch_size: int, the batch size.
    num_steps: int, the number of unrolls.

  Yields:
    Pairs of the batched data, each a matrix of shape [batch_size, num_steps].
    The second element of the tuple is the same data time-shifted to the
    right by one.

  Raises:
    ValueError: if batch_size or num_steps are too high.
  """
  raw_data = np.array(raw_data, dtype=np.int32)

  data_len = len(raw_data)
  batch_len = data_len // batch_size  # number of words placed in each of the batch_size rows
  data = np.zeros([batch_size, batch_len], dtype=np.int32)  # one row of batch_len word ids per batch position
  for i in range(batch_size):  # fill each row with a contiguous slice of the corpus
    data[i] = raw_data[batch_len * i:batch_len * (i + 1)]

  epoch_size = (batch_len - 1) // num_steps  # how many num_steps-wide windows fit into each row
  # epoch_size = ((len(data) // model.batch_size) - 1) // model.num_steps  # // is integer division
  if epoch_size == 0:
    raise ValueError("epoch_size == 0, decrease batch_size or num_steps")

  for i in range(epoch_size):
    x = data[:, i*num_steps:(i+1)*num_steps]
    y = data[:, i*num_steps+1:(i+1)*num_steps+1]
    yield (x, y)
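
As a quick sanity check, here is a small sketch that drives ptb_iterator on toy data (the sizes are arbitrary and only for illustration):

import numpy as np  # needed by ptb_iterator

raw_data = list(range(20))  # pretend these are 20 word ids
for x, y in ptb_iterator(raw_data, batch_size=2, num_steps=3):
  print(x.shape, y.shape)  # both (2, 3); y is x shifted right by one time step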

The third method:

  def next(self, batch_size):
    """ Return a batch of data. When dataset end is reached, start over.
    """
    if self.batch_id == len(self.data):
      self.batch_id = 0
    batch_data = self.data[self.batch_id:min(self.batch_id + batch_size, len(self.data))]
    batch_labels = self.labels[self.batch_id:min(self.batch_id + batch_size, len(self.data))]
    batch_seqlen = self.seqlen[self.batch_id:min(self.batch_id + batch_size, len(self.data))]
    self.batch_id = min(self.batch_id + batch_size, len(self.data))
    return batch_data, batch_labels, batch_seqlen
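
Since this `next` lives on a dataset object, a minimal illustrative container (the class name and toy values below are assumptions, not part of the original snippet) shows how the `batch_id` cursor advances and wraps around:

class SequenceData(object):  # illustrative wrapper only
  def __init__(self, data, labels, seqlen):
    self.data, self.labels, self.seqlen = data, labels, seqlen
    self.batch_id = 0

  # next() exactly as defined above
  def next(self, batch_size):
    if self.batch_id == len(self.data):
      self.batch_id = 0
    batch_data = self.data[self.batch_id:min(self.batch_id + batch_size, len(self.data))]
    batch_labels = self.labels[self.batch_id:min(self.batch_id + batch_size, len(self.data))]
    batch_seqlen = self.seqlen[self.batch_id:min(self.batch_id + batch_size, len(self.data))]
    self.batch_id = min(self.batch_id + batch_size, len(self.data))
    return batch_data, batch_labels, batch_seqlen

trainset = SequenceData(data=[[1], [2], [3], [4], [5]], labels=[0, 1, 0, 1, 0], seqlen=[1] * 5)
print(trainset.next(2))  # first two samples; once the end is reached, the cursor wraps back to 0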

The fourth method:

import numpy as np


def batch_iter(sourceData, batch_size, num_epochs, shuffle=True):
  data = np.array(sourceData)  # keep a numpy copy of sourceData so it can be fancy-indexed
  data_size = len(sourceData)
  num_batches_per_epoch = int(len(sourceData) / batch_size) + 1
  for epoch in range(num_epochs):
    # Shuffle the data at each epoch
    if shuffle:
      shuffle_indices = np.random.permutation(np.arange(data_size))
      shuffled_data = data[shuffle_indices]  # index the numpy copy (indexing a raw Python list here would fail)
    else:
      shuffled_data = data

    for batch_num in range(num_batches_per_epoch):
      start_index = batch_num * batch_size
      end_index = min((batch_num + 1) * batch_size, data_size)

      yield shuffled_data[start_index:end_index]
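
A short usage sketch for batch_iter on toy data (values arbitrary); with shuffle=False the batches come out in order, which makes the slicing easy to verify:

import numpy as np

samples = np.arange(10)  # ten fake samples
for batch in batch_iter(samples, batch_size=4, num_epochs=2, shuffle=False):
  print(batch)  # [0 1 2 3], [4 5 6 7], [8 9], then the same three batches again for epoch 2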

This last method is a generator (note the `yield`); see the documentation on Python iterators and generators for details on how to consume it.

One more thing to note: the first three methods only walk through the corpus once, whereas the last method walks through the whole corpus num_epochs times.
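
If one of the single-pass methods needs to cover several epochs, the usual pattern is simply an outer epoch loop; below is a sketch under the assumption that `mnist` is the dataset object from the first usage example above:

num_epochs = 5
batch_size = 100
batches_per_epoch = mnist.train.num_examples // batch_size
for epoch in range(num_epochs):
  for _ in range(batches_per_epoch):
    batch_xs, batch_ys = mnist.train.next_batch(batch_size)
    # run one training step on (batch_xs, batch_ys) here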

That is all for this article; I hope it is of some help for your study.

