Step by Step: Text Classification with fastText

This article is based on the original post at http://bjbsair.com/2020-03-25/tech-info/6300/

**Preface**

Today's tutorial is based on FAIR's Bag of Tricks for Efficient Text Classification[1], better known as fastText.

Best of all, the paper comes with the fasttext toolkit. The code quality is very high, the paper's results can be reproduced with a single command, and the packaging is by now very polished: there is an official fastText website and GitHub repository, as well as a Python interface that can be installed directly with pip. A model that is this accurate and this fast is a genuine asset in practice.
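
For reference, here is a minimal sketch of using the official Python bindings; the file names and hyper-parameter values are placeholders, and the training data is assumed to be in fastText's one-example-per-line "__label__<label> text" format:

# pip install fasttext
import fasttext

# train.txt / test.txt: one example per line, e.g. "__label__positive this movie is great"
model = fasttext.train_supervised(
    input='train.txt',   # hypothetical path to the training file
    lr=0.5,              # learning rate
    epoch=5,             # number of passes over the data
    wordNgrams=2,        # add bigram features
)

# test() returns (number of samples, precision@1, recall@1)
print(model.test('test.txt'))

# predict the label of a single sentence
print(model.predict('a quick example sentence'))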

To better understand how fastText works, we reproduce it here ourselves. The code below only implements the simplest variant, averaging per-word vectors, and does not use bigram vectors, so this hand-rolled classifier will score lower than Facebook's open-source library.

Paper Overview

We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.

First, a quote from the paper showing how the authors themselves describe fastText's performance.

The model in this paper is extremely simple; anyone familiar with word2vec will recognize that it closely resembles the CBOW architecture.

In this model, the input is a sentence and x_1 through x_N are its words (or n-grams). Each of them maps to a vector; averaging these vectors yields the text representation, which is then used to predict the label. When there are only a few classes this is just a plain softmax; when the number of labels is huge, 「hierarchical softmax」 is needed.
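
To make the forward pass concrete, here is a tiny NumPy sketch of the classifier just described (toy shapes, random weights, purely illustrative):

import numpy as np

vocab_size, embed_dim, num_classes = 10000, 100, 5
E = np.random.randn(vocab_size, embed_dim) * 0.1   # embedding table
W = np.random.randn(embed_dim, num_classes) * 0.1  # output projection
b = np.zeros(num_classes)

token_ids = np.array([3, 42, 7, 999])   # word / n-gram ids of one sentence
text_vec = E[token_ids].mean(axis=0)    # average the token vectors
logits = text_vec @ W + b               # linear classifier
probs = np.exp(logits - logits.max())
probs /= probs.sum()                    # plain softmax over the labels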

The model really is that simple, and there is not much more to say about it. Two tricks from the paper deserve a mention:

  • 「hierarchical softmax」

    When the number of classes is large, a Huffman tree over the labels is built to speed up the softmax layer, the same trick used in word2vec (a toy sketch follows this list).

  • 「N-gram features」

    Using only unigrams throws away word-order information, so N-gram features are added to compensate, and hashing keeps the storage cost of the N-grams bounded (see the hashing sketch after this list).
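
For the hierarchical softmax trick, the key ingredient is a Huffman tree over the labels: frequent labels sit near the root, so scoring a label costs a number of binary decisions proportional to its depth rather than to the total number of classes. A toy sketch of building the Huffman codes (illustrative only, not the actual fastText implementation):

import heapq

def build_huffman_codes(label_freqs):
    # each heap entry: (frequency, tie-breaker id, {label: partial code})
    heap = [(freq, i, {label: ''}) for i, (label, freq) in enumerate(label_freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)
        f2, _, codes2 = heapq.heappop(heap)
        # merging two subtrees prepends one more bit (the root-most decision) to every code
        merged = {lbl: '0' + code for lbl, code in codes1.items()}
        merged.update({lbl: '1' + code for lbl, code in codes2.items()})
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]

print(build_huffman_codes({'sports': 50, 'politics': 30, 'tech': 15, 'arts': 5}))

For the N-gram hashing trick, instead of keeping a vocabulary of every bigram, each bigram is hashed into one of a fixed number of buckets that index an extra embedding table, and collisions are simply tolerated. A toy illustration, with the bucket count and hash function chosen arbitrarily:

NUM_BUCKETS = 2_000_000  # assumed bucket count, not the paper's setting

def bigram_bucket(w1, w2, num_buckets=NUM_BUCKETS):
    # any deterministic hash works; Python's built-in hash is used here only for brevity
    return hash((w1, w2)) % num_buckets

tokens = 'the movie was surprisingly good'.split()
bigram_ids = [bigram_bucket(a, b) for a, b in zip(tokens, tokens[1:])]
# these ids look up bigram vectors, which are averaged together with the
# unigram vectors before the (hierarchical) softmax layer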

Looking at the experimental section, it is remarkable that such a simple model achieves such strong results!

That said, some have pointed out that the datasets used in the paper are not very sensitive to word order, so the reported results are not that surprising.

Code Implementation

After reading this stripped-down version, do go and look at the original source. As in earlier posts in this series, we define a fastTextModel class and then write out the network: the input/output placeholders, the loss, the training step, and so on.

import datetime
import os
import time

import numpy as np
import tensorflow as tf
from tensorflow.contrib import learn

# NOTE: BaseModel, data_process and the FLAGS definitions come from the
# author's accompanying repository and are assumed to be importable here.

class fastTextModel(BaseModel):
    """
    A simple implementation of fasttext for text classification
    """
    def __init__(self, sequence_length, num_classes, vocab_size,
                 embedding_size, learning_rate, decay_steps, decay_rate,
                 l2_reg_lambda, is_training=True,
                 initializer=tf.random_normal_initializer(stddev=0.1)):
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.num_classes = num_classes
        self.sequence_length = sequence_length
        self.learning_rate = learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate
        self.is_training = is_training
        self.l2_reg_lambda = l2_reg_lambda
        self.initializer = initializer
        self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name='input_x')
        self.input_y = tf.placeholder(tf.int32, [None, self.num_classes], name='input_y')
        self.global_step = tf.Variable(0, trainable=False, name='global_step')
        self.instantiate_weight()
        self.logits = self.inference()
        self.loss_val = self.loss()
        self.train_op = self.train()
        self.predictions = tf.argmax(self.logits, axis=1, name='predictions')
        correct_prediction = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'), name='accuracy')
    def instantiate_weight(self):
        with tf.name_scope('weights'):
            self.Embedding = tf.get_variable('Embedding', shape=[self.vocab_size, self.embedding_size],
                                             initializer=self.initializer)
            self.W_projection = tf.get_variable('W_projection', shape=[self.embedding_size, self.num_classes],
                                                initializer=self.initializer)
            self.b_projection = tf.get_variable('b_projection', shape=[self.num_classes])
    def inference(self):
        """
        1. word embedding
        2. average embedding
        3. linear classifier
        :return:
        """
        # embedding layer
        with tf.name_scope('embedding'):
            words_embedding = tf.nn.embedding_lookup(self.Embedding, self.input_x)
            self.average_embedding = tf.reduce_mean(words_embedding, axis=1)
        logits = tf.matmul(self.average_embedding, self.W_projection) + self.b_projection
        return logits
    def loss(self):
        # loss
        with tf.name_scope('loss'):
            # labels need to be float to match the logits dtype
            losses = tf.nn.softmax_cross_entropy_with_logits(labels=tf.cast(self.input_y, tf.float32),
                                                             logits=self.logits)
            data_loss = tf.reduce_mean(losses)
            # L2 penalty over all non-bias trainable variables, scaled once by l2_reg_lambda
            l2_loss = tf.add_n([tf.nn.l2_loss(cand_var) for cand_var in tf.trainable_variables()
                                if 'bias' not in cand_var.name])
            data_loss += l2_loss * self.l2_reg_lambda
            return data_loss
    def train(self):
        with tf.name_scope('train'):
            learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step,
                                                       self.decay_steps, self.decay_rate,
                                                       staircase=True)
            train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,
                                                       learning_rate=learning_rate, optimizer='Adam')
        return train_op  

def preprocess():
    """
    Load and preprocess the data.
    :return:
    """
    print("Loading data...")
    x_text, y = data_process.load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
    # build vocabulary
    max_document_length = max(len(x.split(' ')) for x in x_text)
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    x = np.array(list(vocab_processor.fit_transform(x_text)))
    # shuffle
    np.random.seed(10)
    shuffle_indices = np.random.permutation(np.arange(len(y)))
    x_shuffled = x[shuffle_indices]
    y_shuffled = y[shuffle_indices]
    # split train/test dataset
    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
    del x, y, x_shuffled, y_shuffled
    print('Vocabulary Size: {:d}'.format(len(vocab_processor.vocabulary_)))
    print('Train/Dev split: {:d}/{:d}'.format(len(y_train), len(y_dev)))
    return x_train, y_train, vocab_processor, x_dev, y_dev
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
            # allow TensorFlow to fall back to another device if an op has no kernel on the requested one
            allow_soft_placement=FLAGS.allow_soft_placement,
            # log which device (CPU or GPU) each operation is placed on
            log_device_placement=FLAGS.log_device_placement
        )
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            # initialize the fastText model
            fasttext = fastTextModel(sequence_length=x_train.shape[1],
                      num_classes=y_train.shape[1],
                      vocab_size=len(vocab_processor.vocabulary_),
                      embedding_size=FLAGS.embedding_size,
                      l2_reg_lambda=FLAGS.l2_reg_lambda,
                      is_training=True,
                      learning_rate=FLAGS.learning_rate,
                      decay_steps=FLAGS.decay_steps,
                      decay_rate=FLAGS.decay_rate
                    )
            # output dir for models and summaries
            timestamp = str(time.time())
            out_dir = os.path.abspath(os.path.join(os.path.curdir, 'run', timestamp))
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            print('Writing to {} \n'.format(out_dir))
            # checkpoint dir. checkpointing – saving the parameters of your model to restore them later on.
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, FLAGS.ckpt_dir))
            checkpoint_prefix = os.path.join(checkpoint_dir, 'model')
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
            # Write vocabulary
            vocab_processor.save(os.path.join(out_dir, 'vocab'))
            # Initialize all
            sess.run(tf.global_variables_initializer())
            def train_step(x_batch, y_batch):
                """
                A single training step
                :param x_batch:
                :param y_batch:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                _, step, loss, accuracy = sess.run(
                    [fasttext.train_op, fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            def dev_step(x_batch, y_batch):
                """
                Evaluate the model on the dev set
                :param x_batch:
                :param y_batch:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                step, loss, accuracy = sess.run(
                    [fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("dev results:{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            # generate batches
            batches = data_process.batch_iter(list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # training loop
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, fasttext.global_step)
                if current_step % FLAGS.validate_every == 0:
                    print('\n Evaluation:')
                    dev_step(x_dev, y_dev)
                    print('')
            path = saver.save(sess, checkpoint_prefix, global_step=current_step)
            print('Saved model checkpoint to {} \n'.format(path))
def main(argv=None):
    x_train, y_train, vocab_processor, x_dev, y_dev = preprocess()
    train(x_train, y_train, vocab_processor, x_dev, y_dev)
if __name__ == '__main__':
    tf.app.run()
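
The script above reads all of its hyper-parameters from FLAGS, which are never defined in this excerpt. Below is a minimal sketch of how they would typically be declared with the TF 1.x flags API; every name matches a flag referenced in the code, but all default values and file paths are assumptions:

flags = tf.app.flags
flags.DEFINE_string('positive_data_file', './data/rt-polarity.pos', 'File with positive examples')
flags.DEFINE_string('negative_data_file', './data/rt-polarity.neg', 'File with negative examples')
flags.DEFINE_float('dev_sample_percentage', 0.1, 'Fraction of data held out for validation')
flags.DEFINE_integer('embedding_size', 128, 'Dimensionality of the word embeddings')
flags.DEFINE_float('learning_rate', 1e-3, 'Initial learning rate')
flags.DEFINE_integer('decay_steps', 1000, 'Steps between learning-rate decays')
flags.DEFINE_float('decay_rate', 0.9, 'Learning-rate decay factor')
flags.DEFINE_float('l2_reg_lambda', 1e-4, 'L2 regularization strength')
flags.DEFINE_integer('batch_size', 64, 'Batch size')
flags.DEFINE_integer('num_epochs', 10, 'Number of training epochs')
flags.DEFINE_integer('validate_every', 100, 'Evaluate on the dev set every N steps')
flags.DEFINE_string('ckpt_dir', 'checkpoints', 'Checkpoint directory name')
flags.DEFINE_integer('num_checkpoints', 5, 'Maximum number of checkpoints to keep')
flags.DEFINE_boolean('allow_soft_placement', True, 'Allow soft device placement')
flags.DEFINE_boolean('log_device_placement', False, 'Log device placement')
FLAGS = flags.FLAGS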

References

[1] Bag of Tricks for Efficient Text Classification: https://arxiv.org/abs/1607.01759

The End本文参考原文-http://bjbsair.com/2020-03-25/tech-info/6300/

**写在前面

**



今天的教程是基于FAIR的Bag of Tricks for Efficient Text Classification[1]。也就是我们常说的fastText。

最让人欣喜的这篇论文配套提供了fasttext工具包。这个工具包代码质量非常高,论文结果一键还原,目前已经是包装地非常专业了,这是fastText官网和其github代码库,以及提供了python接口,可以直接通过pip安装。这样准确率高又快的模型绝对是实战利器。

为了更好地理解fasttext原理,我们现在直接复现来一遍,但是代码中仅仅实现了最简单的基于单词的词向量求平均,并未使用b-gram的词向量,所以自己实现的文本分类效果会低于facebook开源的库。

论文概览

?

We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.

首先引用论文中的一段话来看看作者们是怎么评价fasttext模型的表现的。

这篇论文的模型非常之简单,之前了解过word2vec的同学可以发现这跟CBOW的模型框架非常相似。

对应上面这个模型,比如输入是一句话,到就是这句话的单词或者是n-gram。每一个都对应一个向量,然后对这些向量取平均就得到了文本向量,然后用这个平均向量取预测标签。当类别不多的时候,就是最简单的softmax;当标签数量巨大的时候,就要用到「hierarchical softmax」了。

模型真的很简单,也没什么可以说的了。下面提一下论文中的两个tricks:

  • 「hierarchical softmax」

    类别数较多时,通过构建一个霍夫曼编码树来加速softmax layer的计算,和之前word2vec中的trick相同

  • 「N-gram features」

    只用unigram的话会丢掉word order信息,所以通过加入N-gram features进行补充 用hashing来减少N-gram的存储

看了论文的实验部分,如此简单的模型竟然能取得这么好的效果 !

但是也有人指出论文中选取的数据集都是对句子词序不是很敏感的数据集,所以得到文中的试验结果并不奇怪。

代码实现

看完阉割版代码大家记得去看看源码噢~ 跟之前系列的一样,定义一个fastTextModel类,然后写网络框架,输入输出placeholder,损失,训练步骤等。

class fastTextModel(BaseModel):
    """
    A simple implementation of fasttext for text classification
    """
    def __init__(self, sequence_length, num_classes, vocab_size,
                 embedding_size, learning_rate, decay_steps, decay_rate,
                 l2_reg_lambda, is_training=True,
                 initializer=tf.random_normal_initializer(stddev=0.1)):
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.num_classes = num_classes
        self.sequence_length = sequence_length
        self.learning_rate = learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate
        self.is_training = is_training
        self.l2_reg_lambda = l2_reg_lambda
        self.initializer = initializer
        self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name=‘input_x‘)
        self.input_y = tf.placeholder(tf.int32, [None, self.num_classes], name=‘input_y‘)
        self.global_step = tf.Variable(0, trainable=False, name=‘global_step‘)
        self.instantiate_weight()
        self.logits = self.inference()
        self.loss_val = self.loss()
        self.train_op = self.train()
        self.predictions = tf.argmax(self.logits, axis=1, name=‘predictions‘)
        correct_prediction = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, ‘float‘), name=‘accuracy‘)
    def instantiate_weight(self):
        with tf.name_scope(‘weights‘):
            self.Embedding = tf.get_variable(‘Embedding‘, shape=[self.vocab_size, self.embedding_size],
                                             initializer=self.initializer)
            self.W_projection = tf.get_variable(‘W_projection‘, shape=[self.embedding_size, self.num_classes],
                                                initializer=self.initializer)
            self.b_projection = tf.get_variable(‘b_projection‘, shape=[self.num_classes])
    def inference(self):
        """
        1. word embedding
        2. average embedding
        3. linear classifier
        :return:
        """
        # embedding layer
        with tf.name_scope(‘embedding‘):
            words_embedding = tf.nn.embedding_lookup(self.Embedding, self.input_x)
            self.average_embedding = tf.reduce_mean(words_embedding, axis=1)
        logits = tf.matmul(self.average_embedding, self.W_projection) +self.b_projection
        return logits
    def loss(self):
        # loss
        with tf.name_scope(‘loss‘):
            losses = tf.nn.softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.logits)
            data_loss = tf.reduce_mean(losses)
            l2_loss = tf.add_n([tf.nn.l2_loss(cand_var) for cand_var in tf.trainable_variables()
                                if ‘bias‘ not in cand_var.name]) * self.l2_reg_lambda
            data_loss += l2_loss * self.l2_reg_lambda
            return data_loss
    def train(self):
        with tf.name_scope(‘train‘):
            learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step,
                                                       self.decay_steps, self.decay_rate,
                                                       staircase=True)
            train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,
                                                      learning_rate=learning_rate, optimizer=‘Adam‘)
        return train_op  

def prepocess():
    """
    For load and process data
    :return:
    """
    print("Loading data...")
    x_text, y = data_process.load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
    # bulid vocabulary
    max_document_length = max(len(x.split(‘ ‘)) for x in x_text)
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    x = np.array(list(vocab_processor.fit_transform(x_text)))
    # shuffle
    np.random.seed(10)
    shuffle_indices = np.random.permutation(np.arange(len(y)))
    x_shuffled = x[shuffle_indices]
    y_shuffled = y[shuffle_indices]
    # split train/test dataset
    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
    del x, y, x_shuffled, y_shuffled
    print(‘Vocabulary Size: {:d}‘.format(len(vocab_processor.vocabulary_)))
    print(‘Train/Dev split: {:d}/{:d}‘.format(len(y_train), len(y_dev)))
    return x_train, y_train, vocab_processor, x_dev, y_dev
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
            # allows TensorFlow to fall back on a device with a certain operation implemented
            allow_soft_placement= FLAGS.allow_soft_placement,
            # allows TensorFlow log on which devices (CPU or GPU) it places operations
            log_device_placement=FLAGS.log_device_placement
        )
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            # initialize cnn
            fasttext = fastTextModel(sequence_length=x_train.shape[1],
                      num_classes=y_train.shape[1],
                      vocab_size=len(vocab_processor.vocabulary_),
                      embedding_size=FLAGS.embedding_size,
                      l2_reg_lambda=FLAGS.l2_reg_lambda,
                      is_training=True,
                      learning_rate=FLAGS.learning_rate,
                      decay_steps=FLAGS.decay_steps,
                      decay_rate=FLAGS.decay_rate
                    )
            # output dir for models and summaries
            timestamp = str(time.time())
            out_dir = os.path.abspath(os.path.join(os.path.curdir, ‘run‘, timestamp))
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            print(‘Writing to {} \n‘.format(out_dir))
            # checkpoint dir. checkpointing – saving the parameters of your model to restore them later on.
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, FLAGS.ckpt_dir))
            checkpoint_prefix = os.path.join(checkpoint_dir, ‘model‘)
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
            # Write vocabulary
            vocab_processor.save(os.path.join(out_dir, ‘vocab‘))
            # Initialize all
            sess.run(tf.global_variables_initializer())
            def train_step(x_batch, y_batch):
                """
                A single training step
                :param x_batch:
                :param y_batch:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                _, step, loss, accuracy = sess.run(
                    [fasttext.train_op, fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            def dev_step(x_batch, y_batch):
                """
                Evaluate model on a dev set
                Disable dropout
                :param x_batch:
                :param y_batch:
                :param writer:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                step, loss, accuracy = sess.run(
                    [fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("dev results:{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            # generate batches
            batches = data_process.batch_iter(list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # training loop
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, fasttext.global_step)
                if current_step % FLAGS.validate_every == 0:
                    print(‘\n Evaluation:‘)
                    dev_step(x_dev, y_dev)
                    print(‘‘)
            path = saver.save(sess, checkpoint_prefix, global_step=current_step)
            print(‘Save model checkpoint to {} \n‘.format(path))
def main(argv=None):
    x_train, y_train, vocab_processor, x_dev, y_dev = prepocess()
    train(x_train, y_train, vocab_processor, x_dev, y_dev)
if __name__ == ‘__main__‘:
    tf.app.run()

本文参考资料

[1] Bag of Tricks for Efficient Text Classification: https://arxiv.org/abs/1607.01759

The End本文参考原文-http://bjbsair.com/2020-03-25/tech-info/6300/

**写在前面

**



今天的教程是基于FAIR的Bag of Tricks for Efficient Text Classification[1]。也就是我们常说的fastText。

最让人欣喜的这篇论文配套提供了fasttext工具包。这个工具包代码质量非常高,论文结果一键还原,目前已经是包装地非常专业了,这是fastText官网和其github代码库,以及提供了python接口,可以直接通过pip安装。这样准确率高又快的模型绝对是实战利器。

为了更好地理解fasttext原理,我们现在直接复现来一遍,但是代码中仅仅实现了最简单的基于单词的词向量求平均,并未使用b-gram的词向量,所以自己实现的文本分类效果会低于facebook开源的库。

论文概览

?

We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.

首先引用论文中的一段话来看看作者们是怎么评价fasttext模型的表现的。

这篇论文的模型非常之简单,之前了解过word2vec的同学可以发现这跟CBOW的模型框架非常相似。

对应上面这个模型,比如输入是一句话,到就是这句话的单词或者是n-gram。每一个都对应一个向量,然后对这些向量取平均就得到了文本向量,然后用这个平均向量取预测标签。当类别不多的时候,就是最简单的softmax;当标签数量巨大的时候,就要用到「hierarchical softmax」了。

模型真的很简单,也没什么可以说的了。下面提一下论文中的两个tricks:

  • 「hierarchical softmax」

    类别数较多时,通过构建一个霍夫曼编码树来加速softmax layer的计算,和之前word2vec中的trick相同

  • 「N-gram features」

    只用unigram的话会丢掉word order信息,所以通过加入N-gram features进行补充 用hashing来减少N-gram的存储

看了论文的实验部分,如此简单的模型竟然能取得这么好的效果 !

但是也有人指出论文中选取的数据集都是对句子词序不是很敏感的数据集,所以得到文中的试验结果并不奇怪。

代码实现

看完阉割版代码大家记得去看看源码噢~ 跟之前系列的一样,定义一个fastTextModel类,然后写网络框架,输入输出placeholder,损失,训练步骤等。

class fastTextModel(BaseModel):
    """
    A simple implementation of fasttext for text classification
    """
    def __init__(self, sequence_length, num_classes, vocab_size,
                 embedding_size, learning_rate, decay_steps, decay_rate,
                 l2_reg_lambda, is_training=True,
                 initializer=tf.random_normal_initializer(stddev=0.1)):
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.num_classes = num_classes
        self.sequence_length = sequence_length
        self.learning_rate = learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate
        self.is_training = is_training
        self.l2_reg_lambda = l2_reg_lambda
        self.initializer = initializer
        self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name=‘input_x‘)
        self.input_y = tf.placeholder(tf.int32, [None, self.num_classes], name=‘input_y‘)
        self.global_step = tf.Variable(0, trainable=False, name=‘global_step‘)
        self.instantiate_weight()
        self.logits = self.inference()
        self.loss_val = self.loss()
        self.train_op = self.train()
        self.predictions = tf.argmax(self.logits, axis=1, name=‘predictions‘)
        correct_prediction = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, ‘float‘), name=‘accuracy‘)
    def instantiate_weight(self):
        with tf.name_scope(‘weights‘):
            self.Embedding = tf.get_variable(‘Embedding‘, shape=[self.vocab_size, self.embedding_size],
                                             initializer=self.initializer)
            self.W_projection = tf.get_variable(‘W_projection‘, shape=[self.embedding_size, self.num_classes],
                                                initializer=self.initializer)
            self.b_projection = tf.get_variable(‘b_projection‘, shape=[self.num_classes])
    def inference(self):
        """
        1. word embedding
        2. average embedding
        3. linear classifier
        :return:
        """
        # embedding layer
        with tf.name_scope(‘embedding‘):
            words_embedding = tf.nn.embedding_lookup(self.Embedding, self.input_x)
            self.average_embedding = tf.reduce_mean(words_embedding, axis=1)
        logits = tf.matmul(self.average_embedding, self.W_projection) +self.b_projection
        return logits
    def loss(self):
        # loss
        with tf.name_scope(‘loss‘):
            losses = tf.nn.softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.logits)
            data_loss = tf.reduce_mean(losses)
            l2_loss = tf.add_n([tf.nn.l2_loss(cand_var) for cand_var in tf.trainable_variables()
                                if ‘bias‘ not in cand_var.name]) * self.l2_reg_lambda
            data_loss += l2_loss * self.l2_reg_lambda
            return data_loss
    def train(self):
        with tf.name_scope(‘train‘):
            learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step,
                                                       self.decay_steps, self.decay_rate,
                                                       staircase=True)
            train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,
                                                      learning_rate=learning_rate, optimizer=‘Adam‘)
        return train_op  

def prepocess():
    """
    For load and process data
    :return:
    """
    print("Loading data...")
    x_text, y = data_process.load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
    # bulid vocabulary
    max_document_length = max(len(x.split(‘ ‘)) for x in x_text)
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    x = np.array(list(vocab_processor.fit_transform(x_text)))
    # shuffle
    np.random.seed(10)
    shuffle_indices = np.random.permutation(np.arange(len(y)))
    x_shuffled = x[shuffle_indices]
    y_shuffled = y[shuffle_indices]
    # split train/test dataset
    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
    del x, y, x_shuffled, y_shuffled
    print(‘Vocabulary Size: {:d}‘.format(len(vocab_processor.vocabulary_)))
    print(‘Train/Dev split: {:d}/{:d}‘.format(len(y_train), len(y_dev)))
    return x_train, y_train, vocab_processor, x_dev, y_dev
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
            # allows TensorFlow to fall back on a device with a certain operation implemented
            allow_soft_placement= FLAGS.allow_soft_placement,
            # allows TensorFlow log on which devices (CPU or GPU) it places operations
            log_device_placement=FLAGS.log_device_placement
        )
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            # initialize cnn
            fasttext = fastTextModel(sequence_length=x_train.shape[1],
                      num_classes=y_train.shape[1],
                      vocab_size=len(vocab_processor.vocabulary_),
                      embedding_size=FLAGS.embedding_size,
                      l2_reg_lambda=FLAGS.l2_reg_lambda,
                      is_training=True,
                      learning_rate=FLAGS.learning_rate,
                      decay_steps=FLAGS.decay_steps,
                      decay_rate=FLAGS.decay_rate
                    )
            # output dir for models and summaries
            timestamp = str(time.time())
            out_dir = os.path.abspath(os.path.join(os.path.curdir, ‘run‘, timestamp))
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            print(‘Writing to {} \n‘.format(out_dir))
            # checkpoint dir. checkpointing – saving the parameters of your model to restore them later on.
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, FLAGS.ckpt_dir))
            checkpoint_prefix = os.path.join(checkpoint_dir, ‘model‘)
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
            # Write vocabulary
            vocab_processor.save(os.path.join(out_dir, ‘vocab‘))
            # Initialize all
            sess.run(tf.global_variables_initializer())
            def train_step(x_batch, y_batch):
                """
                A single training step
                :param x_batch:
                :param y_batch:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                _, step, loss, accuracy = sess.run(
                    [fasttext.train_op, fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            def dev_step(x_batch, y_batch):
                """
                Evaluate model on a dev set
                Disable dropout
                :param x_batch:
                :param y_batch:
                :param writer:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                step, loss, accuracy = sess.run(
                    [fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("dev results:{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            # generate batches
            batches = data_process.batch_iter(list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # training loop
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, fasttext.global_step)
                if current_step % FLAGS.validate_every == 0:
                    print(‘\n Evaluation:‘)
                    dev_step(x_dev, y_dev)
                    print(‘‘)
            path = saver.save(sess, checkpoint_prefix, global_step=current_step)
            print(‘Save model checkpoint to {} \n‘.format(path))
def main(argv=None):
    x_train, y_train, vocab_processor, x_dev, y_dev = prepocess()
    train(x_train, y_train, vocab_processor, x_dev, y_dev)
if __name__ == ‘__main__‘:
    tf.app.run()

本文参考资料

[1] Bag of Tricks for Efficient Text Classification: https://arxiv.org/abs/1607.01759

The End本文参考原文-http://bjbsair.com/2020-03-25/tech-info/6300/

**写在前面

**



今天的教程是基于FAIR的Bag of Tricks for Efficient Text Classification[1]。也就是我们常说的fastText。

最让人欣喜的这篇论文配套提供了fasttext工具包。这个工具包代码质量非常高,论文结果一键还原,目前已经是包装地非常专业了,这是fastText官网和其github代码库,以及提供了python接口,可以直接通过pip安装。这样准确率高又快的模型绝对是实战利器。

为了更好地理解fasttext原理,我们现在直接复现来一遍,但是代码中仅仅实现了最简单的基于单词的词向量求平均,并未使用b-gram的词向量,所以自己实现的文本分类效果会低于facebook开源的库。

论文概览

?

We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.

首先引用论文中的一段话来看看作者们是怎么评价fasttext模型的表现的。

这篇论文的模型非常之简单,之前了解过word2vec的同学可以发现这跟CBOW的模型框架非常相似。

对应上面这个模型,比如输入是一句话,到就是这句话的单词或者是n-gram。每一个都对应一个向量,然后对这些向量取平均就得到了文本向量,然后用这个平均向量取预测标签。当类别不多的时候,就是最简单的softmax;当标签数量巨大的时候,就要用到「hierarchical softmax」了。

模型真的很简单,也没什么可以说的了。下面提一下论文中的两个tricks:

  • 「hierarchical softmax」

    类别数较多时,通过构建一个霍夫曼编码树来加速softmax layer的计算,和之前word2vec中的trick相同

  • 「N-gram features」

    只用unigram的话会丢掉word order信息,所以通过加入N-gram features进行补充 用hashing来减少N-gram的存储

看了论文的实验部分,如此简单的模型竟然能取得这么好的效果 !

但是也有人指出论文中选取的数据集都是对句子词序不是很敏感的数据集,所以得到文中的试验结果并不奇怪。

代码实现

看完阉割版代码大家记得去看看源码噢~ 跟之前系列的一样,定义一个fastTextModel类,然后写网络框架,输入输出placeholder,损失,训练步骤等。

class fastTextModel(BaseModel):
    """
    A simple implementation of fasttext for text classification
    """
    def __init__(self, sequence_length, num_classes, vocab_size,
                 embedding_size, learning_rate, decay_steps, decay_rate,
                 l2_reg_lambda, is_training=True,
                 initializer=tf.random_normal_initializer(stddev=0.1)):
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.num_classes = num_classes
        self.sequence_length = sequence_length
        self.learning_rate = learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate
        self.is_training = is_training
        self.l2_reg_lambda = l2_reg_lambda
        self.initializer = initializer
        self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name=‘input_x‘)
        self.input_y = tf.placeholder(tf.int32, [None, self.num_classes], name=‘input_y‘)
        self.global_step = tf.Variable(0, trainable=False, name=‘global_step‘)
        self.instantiate_weight()
        self.logits = self.inference()
        self.loss_val = self.loss()
        self.train_op = self.train()
        self.predictions = tf.argmax(self.logits, axis=1, name=‘predictions‘)
        correct_prediction = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, ‘float‘), name=‘accuracy‘)
    def instantiate_weight(self):
        with tf.name_scope(‘weights‘):
            self.Embedding = tf.get_variable(‘Embedding‘, shape=[self.vocab_size, self.embedding_size],
                                             initializer=self.initializer)
            self.W_projection = tf.get_variable(‘W_projection‘, shape=[self.embedding_size, self.num_classes],
                                                initializer=self.initializer)
            self.b_projection = tf.get_variable(‘b_projection‘, shape=[self.num_classes])
    def inference(self):
        """
        1. word embedding
        2. average embedding
        3. linear classifier
        :return:
        """
        # embedding layer
        with tf.name_scope(‘embedding‘):
            words_embedding = tf.nn.embedding_lookup(self.Embedding, self.input_x)
            self.average_embedding = tf.reduce_mean(words_embedding, axis=1)
        logits = tf.matmul(self.average_embedding, self.W_projection) +self.b_projection
        return logits
    def loss(self):
        # loss
        with tf.name_scope(‘loss‘):
            losses = tf.nn.softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.logits)
            data_loss = tf.reduce_mean(losses)
            l2_loss = tf.add_n([tf.nn.l2_loss(cand_var) for cand_var in tf.trainable_variables()
                                if ‘bias‘ not in cand_var.name]) * self.l2_reg_lambda
            data_loss += l2_loss * self.l2_reg_lambda
            return data_loss
    def train(self):
        with tf.name_scope(‘train‘):
            learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step,
                                                       self.decay_steps, self.decay_rate,
                                                       staircase=True)
            train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,
                                                      learning_rate=learning_rate, optimizer=‘Adam‘)
        return train_op  

def prepocess():
    """
    For load and process data
    :return:
    """
    print("Loading data...")
    x_text, y = data_process.load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
    # bulid vocabulary
    max_document_length = max(len(x.split(‘ ‘)) for x in x_text)
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    x = np.array(list(vocab_processor.fit_transform(x_text)))
    # shuffle
    np.random.seed(10)
    shuffle_indices = np.random.permutation(np.arange(len(y)))
    x_shuffled = x[shuffle_indices]
    y_shuffled = y[shuffle_indices]
    # split train/test dataset
    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
    del x, y, x_shuffled, y_shuffled
    print(‘Vocabulary Size: {:d}‘.format(len(vocab_processor.vocabulary_)))
    print(‘Train/Dev split: {:d}/{:d}‘.format(len(y_train), len(y_dev)))
    return x_train, y_train, vocab_processor, x_dev, y_dev
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
            # allows TensorFlow to fall back on a device with a certain operation implemented
            allow_soft_placement= FLAGS.allow_soft_placement,
            # allows TensorFlow log on which devices (CPU or GPU) it places operations
            log_device_placement=FLAGS.log_device_placement
        )
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            # initialize cnn
            fasttext = fastTextModel(sequence_length=x_train.shape[1],
                      num_classes=y_train.shape[1],
                      vocab_size=len(vocab_processor.vocabulary_),
                      embedding_size=FLAGS.embedding_size,
                      l2_reg_lambda=FLAGS.l2_reg_lambda,
                      is_training=True,
                      learning_rate=FLAGS.learning_rate,
                      decay_steps=FLAGS.decay_steps,
                      decay_rate=FLAGS.decay_rate
                    )
            # output dir for models and summaries
            timestamp = str(time.time())
            out_dir = os.path.abspath(os.path.join(os.path.curdir, ‘run‘, timestamp))
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            print(‘Writing to {} \n‘.format(out_dir))
            # checkpoint dir. checkpointing – saving the parameters of your model to restore them later on.
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, FLAGS.ckpt_dir))
            checkpoint_prefix = os.path.join(checkpoint_dir, ‘model‘)
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
            # Write vocabulary
            vocab_processor.save(os.path.join(out_dir, ‘vocab‘))
            # Initialize all
            sess.run(tf.global_variables_initializer())
            def train_step(x_batch, y_batch):
                """
                A single training step
                :param x_batch:
                :param y_batch:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                _, step, loss, accuracy = sess.run(
                    [fasttext.train_op, fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            def dev_step(x_batch, y_batch):
                """
                Evaluate model on a dev set
                Disable dropout
                :param x_batch:
                :param y_batch:
                :param writer:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                step, loss, accuracy = sess.run(
                    [fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("dev results:{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            # generate batches
            batches = data_process.batch_iter(list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # training loop
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, fasttext.global_step)
                if current_step % FLAGS.validate_every == 0:
                    print(‘\n Evaluation:‘)
                    dev_step(x_dev, y_dev)
                    print(‘‘)
            path = saver.save(sess, checkpoint_prefix, global_step=current_step)
            print(‘Save model checkpoint to {} \n‘.format(path))
def main(argv=None):
    x_train, y_train, vocab_processor, x_dev, y_dev = prepocess()
    train(x_train, y_train, vocab_processor, x_dev, y_dev)
if __name__ == ‘__main__‘:
    tf.app.run()

本文参考资料

[1] Bag of Tricks for Efficient Text Classification: https://arxiv.org/abs/1607.01759

The End本文参考原文-http://bjbsair.com/2020-03-25/tech-info/6300/

**写在前面

**



今天的教程是基于FAIR的Bag of Tricks for Efficient Text Classification[1]。也就是我们常说的fastText。

最让人欣喜的这篇论文配套提供了fasttext工具包。这个工具包代码质量非常高,论文结果一键还原,目前已经是包装地非常专业了,这是fastText官网和其github代码库,以及提供了python接口,可以直接通过pip安装。这样准确率高又快的模型绝对是实战利器。

为了更好地理解fasttext原理,我们现在直接复现来一遍,但是代码中仅仅实现了最简单的基于单词的词向量求平均,并未使用b-gram的词向量,所以自己实现的文本分类效果会低于facebook开源的库。

论文概览

?

We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.

首先引用论文中的一段话来看看作者们是怎么评价fasttext模型的表现的。

这篇论文的模型非常之简单,之前了解过word2vec的同学可以发现这跟CBOW的模型框架非常相似。

对应上面这个模型,比如输入是一句话,到就是这句话的单词或者是n-gram。每一个都对应一个向量,然后对这些向量取平均就得到了文本向量,然后用这个平均向量取预测标签。当类别不多的时候,就是最简单的softmax;当标签数量巨大的时候,就要用到「hierarchical softmax」了。

模型真的很简单,也没什么可以说的了。下面提一下论文中的两个tricks:

  • 「hierarchical softmax」

    类别数较多时,通过构建一个霍夫曼编码树来加速softmax layer的计算,和之前word2vec中的trick相同

  • 「N-gram features」

    只用unigram的话会丢掉word order信息,所以通过加入N-gram features进行补充 用hashing来减少N-gram的存储

看了论文的实验部分,如此简单的模型竟然能取得这么好的效果 !

但是也有人指出论文中选取的数据集都是对句子词序不是很敏感的数据集,所以得到文中的试验结果并不奇怪。

代码实现

看完阉割版代码大家记得去看看源码噢~ 跟之前系列的一样,定义一个fastTextModel类,然后写网络框架,输入输出placeholder,损失,训练步骤等。

class fastTextModel(BaseModel):
    """
    A simple implementation of fasttext for text classification
    """
    def __init__(self, sequence_length, num_classes, vocab_size,
                 embedding_size, learning_rate, decay_steps, decay_rate,
                 l2_reg_lambda, is_training=True,
                 initializer=tf.random_normal_initializer(stddev=0.1)):
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.num_classes = num_classes
        self.sequence_length = sequence_length
        self.learning_rate = learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate
        self.is_training = is_training
        self.l2_reg_lambda = l2_reg_lambda
        self.initializer = initializer
        self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name=‘input_x‘)
        self.input_y = tf.placeholder(tf.int32, [None, self.num_classes], name=‘input_y‘)
        self.global_step = tf.Variable(0, trainable=False, name=‘global_step‘)
        self.instantiate_weight()
        self.logits = self.inference()
        self.loss_val = self.loss()
        self.train_op = self.train()
        self.predictions = tf.argmax(self.logits, axis=1, name=‘predictions‘)
        correct_prediction = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, ‘float‘), name=‘accuracy‘)
    def instantiate_weight(self):
        with tf.name_scope(‘weights‘):
            self.Embedding = tf.get_variable(‘Embedding‘, shape=[self.vocab_size, self.embedding_size],
                                             initializer=self.initializer)
            self.W_projection = tf.get_variable(‘W_projection‘, shape=[self.embedding_size, self.num_classes],
                                                initializer=self.initializer)
            self.b_projection = tf.get_variable(‘b_projection‘, shape=[self.num_classes])
    def inference(self):
        """
        1. word embedding
        2. average embedding
        3. linear classifier
        :return:
        """
        # embedding layer
        with tf.name_scope(‘embedding‘):
            words_embedding = tf.nn.embedding_lookup(self.Embedding, self.input_x)
            self.average_embedding = tf.reduce_mean(words_embedding, axis=1)
        logits = tf.matmul(self.average_embedding, self.W_projection) +self.b_projection
        return logits
    def loss(self):
        # loss
        with tf.name_scope(‘loss‘):
            losses = tf.nn.softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.logits)
            data_loss = tf.reduce_mean(losses)
            l2_loss = tf.add_n([tf.nn.l2_loss(cand_var) for cand_var in tf.trainable_variables()
                                if ‘bias‘ not in cand_var.name]) * self.l2_reg_lambda
            data_loss += l2_loss * self.l2_reg_lambda
            return data_loss
    def train(self):
        with tf.name_scope(‘train‘):
            learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step,
                                                       self.decay_steps, self.decay_rate,
                                                       staircase=True)
            train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,
                                                      learning_rate=learning_rate, optimizer=‘Adam‘)
        return train_op  

def prepocess():
    """
    For load and process data
    :return:
    """
    print("Loading data...")
    x_text, y = data_process.load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
    # bulid vocabulary
    max_document_length = max(len(x.split(‘ ‘)) for x in x_text)
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    x = np.array(list(vocab_processor.fit_transform(x_text)))
    # shuffle
    np.random.seed(10)
    shuffle_indices = np.random.permutation(np.arange(len(y)))
    x_shuffled = x[shuffle_indices]
    y_shuffled = y[shuffle_indices]
    # split train/test dataset
    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
    del x, y, x_shuffled, y_shuffled
    print(‘Vocabulary Size: {:d}‘.format(len(vocab_processor.vocabulary_)))
    print(‘Train/Dev split: {:d}/{:d}‘.format(len(y_train), len(y_dev)))
    return x_train, y_train, vocab_processor, x_dev, y_dev
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
            # allows TensorFlow to fall back on a device with a certain operation implemented
            allow_soft_placement= FLAGS.allow_soft_placement,
            # allows TensorFlow log on which devices (CPU or GPU) it places operations
            log_device_placement=FLAGS.log_device_placement
        )
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            # initialize cnn
            fasttext = fastTextModel(sequence_length=x_train.shape[1],
                      num_classes=y_train.shape[1],
                      vocab_size=len(vocab_processor.vocabulary_),
                      embedding_size=FLAGS.embedding_size,
                      l2_reg_lambda=FLAGS.l2_reg_lambda,
                      is_training=True,
                      learning_rate=FLAGS.learning_rate,
                      decay_steps=FLAGS.decay_steps,
                      decay_rate=FLAGS.decay_rate
                    )
            # output dir for models and summaries
            timestamp = str(time.time())
            out_dir = os.path.abspath(os.path.join(os.path.curdir, ‘run‘, timestamp))
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            print(‘Writing to {} \n‘.format(out_dir))
            # checkpoint dir. checkpointing – saving the parameters of your model to restore them later on.
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, FLAGS.ckpt_dir))
            checkpoint_prefix = os.path.join(checkpoint_dir, ‘model‘)
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
            # Write vocabulary
            vocab_processor.save(os.path.join(out_dir, ‘vocab‘))
            # Initialize all
            sess.run(tf.global_variables_initializer())
            def train_step(x_batch, y_batch):
                """
                A single training step
                :param x_batch:
                :param y_batch:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                _, step, loss, accuracy = sess.run(
                    [fasttext.train_op, fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            def dev_step(x_batch, y_batch):
                """
                Evaluate model on a dev set
                Disable dropout
                :param x_batch:
                :param y_batch:
                :param writer:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                step, loss, accuracy = sess.run(
                    [fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("dev results:{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            # generate batches
            batches = data_process.batch_iter(list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # training loop
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, fasttext.global_step)
                if current_step % FLAGS.validate_every == 0:
                    print(‘\n Evaluation:‘)
                    dev_step(x_dev, y_dev)
                    print(‘‘)
            path = saver.save(sess, checkpoint_prefix, global_step=current_step)
            print(‘Save model checkpoint to {} \n‘.format(path))
def main(argv=None):
    x_train, y_train, vocab_processor, x_dev, y_dev = prepocess()
    train(x_train, y_train, vocab_processor, x_dev, y_dev)
if __name__ == ‘__main__‘:
    tf.app.run()

本文参考资料

[1] Bag of Tricks for Efficient Text Classification: https://arxiv.org/abs/1607.01759

The End本文参考原文-http://bjbsair.com/2020-03-25/tech-info/6300/

**写在前面

**



今天的教程是基于FAIR的Bag of Tricks for Efficient Text Classification[1]。也就是我们常说的fastText。

最让人欣喜的这篇论文配套提供了fasttext工具包。这个工具包代码质量非常高,论文结果一键还原,目前已经是包装地非常专业了,这是fastText官网和其github代码库,以及提供了python接口,可以直接通过pip安装。这样准确率高又快的模型绝对是实战利器。

为了更好地理解fasttext原理,我们现在直接复现来一遍,但是代码中仅仅实现了最简单的基于单词的词向量求平均,并未使用b-gram的词向量,所以自己实现的文本分类效果会低于facebook开源的库。

论文概览

?

We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.

首先引用论文中的一段话来看看作者们是怎么评价fasttext模型的表现的。

这篇论文的模型非常之简单,之前了解过word2vec的同学可以发现这跟CBOW的模型框架非常相似。

对应上面这个模型,比如输入是一句话,到就是这句话的单词或者是n-gram。每一个都对应一个向量,然后对这些向量取平均就得到了文本向量,然后用这个平均向量取预测标签。当类别不多的时候,就是最简单的softmax;当标签数量巨大的时候,就要用到「hierarchical softmax」了。

模型真的很简单,也没什么可以说的了。下面提一下论文中的两个tricks:

  • 「hierarchical softmax」

    类别数较多时,通过构建一个霍夫曼编码树来加速softmax layer的计算,和之前word2vec中的trick相同

  • 「N-gram features」

    只用unigram的话会丢掉word order信息,所以通过加入N-gram features进行补充 用hashing来减少N-gram的存储

看了论文的实验部分,如此简单的模型竟然能取得这么好的效果 !

但是也有人指出论文中选取的数据集都是对句子词序不是很敏感的数据集,所以得到文中的试验结果并不奇怪。

代码实现

看完阉割版代码大家记得去看看源码噢~ 跟之前系列的一样,定义一个fastTextModel类,然后写网络框架,输入输出placeholder,损失,训练步骤等。

class fastTextModel(BaseModel):
    """
    A simple implementation of fasttext for text classification
    """
    def __init__(self, sequence_length, num_classes, vocab_size,
                 embedding_size, learning_rate, decay_steps, decay_rate,
                 l2_reg_lambda, is_training=True,
                 initializer=tf.random_normal_initializer(stddev=0.1)):
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.num_classes = num_classes
        self.sequence_length = sequence_length
        self.learning_rate = learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate
        self.is_training = is_training
        self.l2_reg_lambda = l2_reg_lambda
        self.initializer = initializer
        self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name=‘input_x‘)
        self.input_y = tf.placeholder(tf.int32, [None, self.num_classes], name=‘input_y‘)
        self.global_step = tf.Variable(0, trainable=False, name=‘global_step‘)
        self.instantiate_weight()
        self.logits = self.inference()
        self.loss_val = self.loss()
        self.train_op = self.train()
        self.predictions = tf.argmax(self.logits, axis=1, name=‘predictions‘)
        correct_prediction = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, ‘float‘), name=‘accuracy‘)
    def instantiate_weight(self):
        with tf.name_scope(‘weights‘):
            self.Embedding = tf.get_variable(‘Embedding‘, shape=[self.vocab_size, self.embedding_size],
                                             initializer=self.initializer)
            self.W_projection = tf.get_variable(‘W_projection‘, shape=[self.embedding_size, self.num_classes],
                                                initializer=self.initializer)
            self.b_projection = tf.get_variable(‘b_projection‘, shape=[self.num_classes])
    def inference(self):
        """
        1. word embedding
        2. average embedding
        3. linear classifier
        :return:
        """
        # embedding layer
        with tf.name_scope(‘embedding‘):
            words_embedding = tf.nn.embedding_lookup(self.Embedding, self.input_x)
            self.average_embedding = tf.reduce_mean(words_embedding, axis=1)
        logits = tf.matmul(self.average_embedding, self.W_projection) +self.b_projection
        return logits
    def loss(self):
        # loss
        with tf.name_scope(‘loss‘):
            losses = tf.nn.softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.logits)
            data_loss = tf.reduce_mean(losses)
            l2_loss = tf.add_n([tf.nn.l2_loss(cand_var) for cand_var in tf.trainable_variables()
                                if ‘bias‘ not in cand_var.name]) * self.l2_reg_lambda
            data_loss += l2_loss * self.l2_reg_lambda
            return data_loss
    def train(self):
        with tf.name_scope(‘train‘):
            learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step,
                                                       self.decay_steps, self.decay_rate,
                                                       staircase=True)
            train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,
                                                      learning_rate=learning_rate, optimizer=‘Adam‘)
        return train_op  

def prepocess():
    """
    For load and process data
    :return:
    """
    print("Loading data...")
    x_text, y = data_process.load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
    # bulid vocabulary
    max_document_length = max(len(x.split(‘ ‘)) for x in x_text)
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    x = np.array(list(vocab_processor.fit_transform(x_text)))
    # shuffle
    np.random.seed(10)
    shuffle_indices = np.random.permutation(np.arange(len(y)))
    x_shuffled = x[shuffle_indices]
    y_shuffled = y[shuffle_indices]
    # split into train/dev sets
    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
    del x, y, x_shuffled, y_shuffled
    print('Vocabulary Size: {:d}'.format(len(vocab_processor.vocabulary_)))
    print('Train/Dev split: {:d}/{:d}'.format(len(y_train), len(y_dev)))
    return x_train, y_train, vocab_processor, x_dev, y_dev
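
preprocess() and the train() function that follows call data_process.load_data_and_labels and data_process.batch_iter, which are not included in this excerpt. A rough, hypothetical stand-in for those two helpers, assuming one sentence per line in each file and one-hot labels, could look like this:

import numpy as np

def load_data_and_labels(positive_data_file, negative_data_file):
    """Read one sentence per line and build one-hot labels ([0, 1] = positive, [1, 0] = negative)."""
    positive = [line.strip() for line in open(positive_data_file, encoding='utf-8')]
    negative = [line.strip() for line in open(negative_data_file, encoding='utf-8')]
    x_text = positive + negative
    y = np.array([[0, 1]] * len(positive) + [[1, 0]] * len(negative))
    return x_text, y

def batch_iter(data, batch_size, num_epochs, shuffle=True):
    """Yield shuffled mini-batches over the (x, y) pairs for the given number of epochs."""
    data = list(data)
    data_size = len(data)
    num_batches = (data_size - 1) // batch_size + 1
    for _ in range(num_epochs):
        order = np.random.permutation(data_size) if shuffle else np.arange(data_size)
        for batch_num in range(num_batches):
            start = batch_num * batch_size
            end = min(start + batch_size, data_size)
            yield [data[i] for i in order[start:end]]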
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
            # allow TensorFlow to fall back to another device if an op has no kernel on the requested one
            allow_soft_placement=FLAGS.allow_soft_placement,
            # log which device (CPU or GPU) each operation is placed on
            log_device_placement=FLAGS.log_device_placement
        )
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            # build the fastText model
            fasttext = fastTextModel(sequence_length=x_train.shape[1],
                                     num_classes=y_train.shape[1],
                                     vocab_size=len(vocab_processor.vocabulary_),
                                     embedding_size=FLAGS.embedding_size,
                                     l2_reg_lambda=FLAGS.l2_reg_lambda,
                                     is_training=True,
                                     learning_rate=FLAGS.learning_rate,
                                     decay_steps=FLAGS.decay_steps,
                                     decay_rate=FLAGS.decay_rate)
            # output dir for models and summaries
            timestamp = str(time.time())
            out_dir = os.path.abspath(os.path.join(os.path.curdir, 'run', timestamp))
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            print('Writing to {}\n'.format(out_dir))
            # checkpoint dir: model parameters are saved here so they can be restored later
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, FLAGS.ckpt_dir))
            checkpoint_prefix = os.path.join(checkpoint_dir, 'model')
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
            # Write vocabulary
            vocab_processor.save(os.path.join(out_dir, 'vocab'))
            # Initialize all
            sess.run(tf.global_variables_initializer())
            def train_step(x_batch, y_batch):
                """
                A single training step
                :param x_batch:
                :param y_batch:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                _, step, loss, accuracy = sess.run(
                    [fasttext.train_op, fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            def dev_step(x_batch, y_batch):
                """
                Evaluate model on a dev set
                Disable dropout
                :param x_batch:
                :param y_batch:
                :param writer:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                step, loss, accuracy = sess.run(
                    [fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("dev results:{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            # generate batches
            batches = data_process.batch_iter(list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # training loop
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, fasttext.global_step)
                if current_step % FLAGS.validate_every == 0:
                    print('\nEvaluation:')
                    dev_step(x_dev, y_dev)
                    print('')
            # save a final checkpoint once training finishes
            path = saver.save(sess, checkpoint_prefix, global_step=current_step)
            print('Saved model checkpoint to {}\n'.format(path))
def main(argv=None):
    x_train, y_train, vocab_processor, x_dev, y_dev = preprocess()
    train(x_train, y_train, vocab_processor, x_dev, y_dev)

if __name__ == '__main__':
    tf.app.run()
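
For reference, the officially released fasttext package (pip install fasttext) exposes the same kind of supervised classifier through its Python API, with bigram features and hierarchical softmax available as options. A rough usage sketch, assuming training and dev files with one example per line prefixed by a __label__ tag (the file names and hyper-parameter values below are placeholders, not taken from this article):

import fasttext

# each line of train.txt / dev.txt looks like: "__label__pos this movie was great"
model = fasttext.train_supervised(
    input='train.txt',
    lr=0.5,          # learning rate
    epoch=25,        # number of epochs
    wordNgrams=2,    # add bigram features
    loss='hs',       # hierarchical softmax
)

print(model.test('dev.txt'))                  # (number of examples, precision@1, recall@1)
print(model.predict('this movie was great'))  # (labels, probabilities)
model.save_model('fasttext_classifier.bin')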

References

[1] Bag of Tricks for Efficient Text Classification: https://arxiv.org/abs/1607.01759

The End本文参考原文-http://bjbsair.com/2020-03-25/tech-info/6300/

**写在前面

**



今天的教程是基于FAIR的Bag of Tricks for Efficient Text Classification[1]。也就是我们常说的fastText。

最让人欣喜的这篇论文配套提供了fasttext工具包。这个工具包代码质量非常高,论文结果一键还原,目前已经是包装地非常专业了,这是fastText官网和其github代码库,以及提供了python接口,可以直接通过pip安装。这样准确率高又快的模型绝对是实战利器。

为了更好地理解fasttext原理,我们现在直接复现来一遍,但是代码中仅仅实现了最简单的基于单词的词向量求平均,并未使用b-gram的词向量,所以自己实现的文本分类效果会低于facebook开源的库。

论文概览

?

We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.

首先引用论文中的一段话来看看作者们是怎么评价fasttext模型的表现的。

这篇论文的模型非常之简单,之前了解过word2vec的同学可以发现这跟CBOW的模型框架非常相似。

对应上面这个模型,比如输入是一句话,到就是这句话的单词或者是n-gram。每一个都对应一个向量,然后对这些向量取平均就得到了文本向量,然后用这个平均向量取预测标签。当类别不多的时候,就是最简单的softmax;当标签数量巨大的时候,就要用到「hierarchical softmax」了。

模型真的很简单,也没什么可以说的了。下面提一下论文中的两个tricks:

  • 「hierarchical softmax」

    类别数较多时,通过构建一个霍夫曼编码树来加速softmax layer的计算,和之前word2vec中的trick相同

  • 「N-gram features」

    只用unigram的话会丢掉word order信息,所以通过加入N-gram features进行补充 用hashing来减少N-gram的存储

看了论文的实验部分,如此简单的模型竟然能取得这么好的效果 !

但是也有人指出论文中选取的数据集都是对句子词序不是很敏感的数据集,所以得到文中的试验结果并不奇怪。

代码实现

看完阉割版代码大家记得去看看源码噢~ 跟之前系列的一样,定义一个fastTextModel类,然后写网络框架,输入输出placeholder,损失,训练步骤等。

class fastTextModel(BaseModel):
    """
    A simple implementation of fasttext for text classification
    """
    def __init__(self, sequence_length, num_classes, vocab_size,
                 embedding_size, learning_rate, decay_steps, decay_rate,
                 l2_reg_lambda, is_training=True,
                 initializer=tf.random_normal_initializer(stddev=0.1)):
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.num_classes = num_classes
        self.sequence_length = sequence_length
        self.learning_rate = learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate
        self.is_training = is_training
        self.l2_reg_lambda = l2_reg_lambda
        self.initializer = initializer
        self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name=‘input_x‘)
        self.input_y = tf.placeholder(tf.int32, [None, self.num_classes], name=‘input_y‘)
        self.global_step = tf.Variable(0, trainable=False, name=‘global_step‘)
        self.instantiate_weight()
        self.logits = self.inference()
        self.loss_val = self.loss()
        self.train_op = self.train()
        self.predictions = tf.argmax(self.logits, axis=1, name=‘predictions‘)
        correct_prediction = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, ‘float‘), name=‘accuracy‘)
    def instantiate_weight(self):
        with tf.name_scope(‘weights‘):
            self.Embedding = tf.get_variable(‘Embedding‘, shape=[self.vocab_size, self.embedding_size],
                                             initializer=self.initializer)
            self.W_projection = tf.get_variable(‘W_projection‘, shape=[self.embedding_size, self.num_classes],
                                                initializer=self.initializer)
            self.b_projection = tf.get_variable(‘b_projection‘, shape=[self.num_classes])
    def inference(self):
        """
        1. word embedding
        2. average embedding
        3. linear classifier
        :return:
        """
        # embedding layer
        with tf.name_scope(‘embedding‘):
            words_embedding = tf.nn.embedding_lookup(self.Embedding, self.input_x)
            self.average_embedding = tf.reduce_mean(words_embedding, axis=1)
        logits = tf.matmul(self.average_embedding, self.W_projection) +self.b_projection
        return logits
    def loss(self):
        # loss
        with tf.name_scope(‘loss‘):
            losses = tf.nn.softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.logits)
            data_loss = tf.reduce_mean(losses)
            l2_loss = tf.add_n([tf.nn.l2_loss(cand_var) for cand_var in tf.trainable_variables()
                                if ‘bias‘ not in cand_var.name]) * self.l2_reg_lambda
            data_loss += l2_loss * self.l2_reg_lambda
            return data_loss
    def train(self):
        with tf.name_scope(‘train‘):
            learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step,
                                                       self.decay_steps, self.decay_rate,
                                                       staircase=True)
            train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,
                                                      learning_rate=learning_rate, optimizer=‘Adam‘)
        return train_op  

def prepocess():
    """
    For load and process data
    :return:
    """
    print("Loading data...")
    x_text, y = data_process.load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
    # bulid vocabulary
    max_document_length = max(len(x.split(‘ ‘)) for x in x_text)
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    x = np.array(list(vocab_processor.fit_transform(x_text)))
    # shuffle
    np.random.seed(10)
    shuffle_indices = np.random.permutation(np.arange(len(y)))
    x_shuffled = x[shuffle_indices]
    y_shuffled = y[shuffle_indices]
    # split train/test dataset
    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
    del x, y, x_shuffled, y_shuffled
    print(‘Vocabulary Size: {:d}‘.format(len(vocab_processor.vocabulary_)))
    print(‘Train/Dev split: {:d}/{:d}‘.format(len(y_train), len(y_dev)))
    return x_train, y_train, vocab_processor, x_dev, y_dev
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
            # allows TensorFlow to fall back on a device with a certain operation implemented
            allow_soft_placement= FLAGS.allow_soft_placement,
            # allows TensorFlow log on which devices (CPU or GPU) it places operations
            log_device_placement=FLAGS.log_device_placement
        )
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            # initialize cnn
            fasttext = fastTextModel(sequence_length=x_train.shape[1],
                      num_classes=y_train.shape[1],
                      vocab_size=len(vocab_processor.vocabulary_),
                      embedding_size=FLAGS.embedding_size,
                      l2_reg_lambda=FLAGS.l2_reg_lambda,
                      is_training=True,
                      learning_rate=FLAGS.learning_rate,
                      decay_steps=FLAGS.decay_steps,
                      decay_rate=FLAGS.decay_rate
                    )
            # output dir for models and summaries
            timestamp = str(time.time())
            out_dir = os.path.abspath(os.path.join(os.path.curdir, ‘run‘, timestamp))
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            print(‘Writing to {} \n‘.format(out_dir))
            # checkpoint dir. checkpointing – saving the parameters of your model to restore them later on.
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, FLAGS.ckpt_dir))
            checkpoint_prefix = os.path.join(checkpoint_dir, ‘model‘)
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
            # Write vocabulary
            vocab_processor.save(os.path.join(out_dir, ‘vocab‘))
            # Initialize all
            sess.run(tf.global_variables_initializer())
            def train_step(x_batch, y_batch):
                """
                A single training step
                :param x_batch:
                :param y_batch:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                _, step, loss, accuracy = sess.run(
                    [fasttext.train_op, fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            def dev_step(x_batch, y_batch):
                """
                Evaluate model on a dev set
                Disable dropout
                :param x_batch:
                :param y_batch:
                :param writer:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                step, loss, accuracy = sess.run(
                    [fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("dev results:{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            # generate batches
            batches = data_process.batch_iter(list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # training loop
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, fasttext.global_step)
                if current_step % FLAGS.validate_every == 0:
                    print(‘\n Evaluation:‘)
                    dev_step(x_dev, y_dev)
                    print(‘‘)
            path = saver.save(sess, checkpoint_prefix, global_step=current_step)
            print(‘Save model checkpoint to {} \n‘.format(path))
def main(argv=None):
    x_train, y_train, vocab_processor, x_dev, y_dev = prepocess()
    train(x_train, y_train, vocab_processor, x_dev, y_dev)
if __name__ == ‘__main__‘:
    tf.app.run()

本文参考资料

[1] Bag of Tricks for Efficient Text Classification: https://arxiv.org/abs/1607.01759

The End本文参考原文-http://bjbsair.com/2020-03-25/tech-info/6300/

**写在前面

**



今天的教程是基于FAIR的Bag of Tricks for Efficient Text Classification[1]。也就是我们常说的fastText。

最让人欣喜的这篇论文配套提供了fasttext工具包。这个工具包代码质量非常高,论文结果一键还原,目前已经是包装地非常专业了,这是fastText官网和其github代码库,以及提供了python接口,可以直接通过pip安装。这样准确率高又快的模型绝对是实战利器。

为了更好地理解fasttext原理,我们现在直接复现来一遍,但是代码中仅仅实现了最简单的基于单词的词向量求平均,并未使用b-gram的词向量,所以自己实现的文本分类效果会低于facebook开源的库。

论文概览

?

We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.

首先引用论文中的一段话来看看作者们是怎么评价fasttext模型的表现的。

这篇论文的模型非常之简单,之前了解过word2vec的同学可以发现这跟CBOW的模型框架非常相似。

对应上面这个模型,比如输入是一句话,到就是这句话的单词或者是n-gram。每一个都对应一个向量,然后对这些向量取平均就得到了文本向量,然后用这个平均向量取预测标签。当类别不多的时候,就是最简单的softmax;当标签数量巨大的时候,就要用到「hierarchical softmax」了。

模型真的很简单,也没什么可以说的了。下面提一下论文中的两个tricks:

  • 「hierarchical softmax」

    类别数较多时,通过构建一个霍夫曼编码树来加速softmax layer的计算,和之前word2vec中的trick相同

  • 「N-gram features」

    只用unigram的话会丢掉word order信息,所以通过加入N-gram features进行补充 用hashing来减少N-gram的存储

看了论文的实验部分,如此简单的模型竟然能取得这么好的效果 !

但是也有人指出论文中选取的数据集都是对句子词序不是很敏感的数据集,所以得到文中的试验结果并不奇怪。

代码实现

看完阉割版代码大家记得去看看源码噢~ 跟之前系列的一样,定义一个fastTextModel类,然后写网络框架,输入输出placeholder,损失,训练步骤等。

class fastTextModel(BaseModel):
    """
    A simple implementation of fasttext for text classification
    """
    def __init__(self, sequence_length, num_classes, vocab_size,
                 embedding_size, learning_rate, decay_steps, decay_rate,
                 l2_reg_lambda, is_training=True,
                 initializer=tf.random_normal_initializer(stddev=0.1)):
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.num_classes = num_classes
        self.sequence_length = sequence_length
        self.learning_rate = learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate
        self.is_training = is_training
        self.l2_reg_lambda = l2_reg_lambda
        self.initializer = initializer
        self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name=‘input_x‘)
        self.input_y = tf.placeholder(tf.int32, [None, self.num_classes], name=‘input_y‘)
        self.global_step = tf.Variable(0, trainable=False, name=‘global_step‘)
        self.instantiate_weight()
        self.logits = self.inference()
        self.loss_val = self.loss()
        self.train_op = self.train()
        self.predictions = tf.argmax(self.logits, axis=1, name=‘predictions‘)
        correct_prediction = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, ‘float‘), name=‘accuracy‘)
    def instantiate_weight(self):
        with tf.name_scope(‘weights‘):
            self.Embedding = tf.get_variable(‘Embedding‘, shape=[self.vocab_size, self.embedding_size],
                                             initializer=self.initializer)
            self.W_projection = tf.get_variable(‘W_projection‘, shape=[self.embedding_size, self.num_classes],
                                                initializer=self.initializer)
            self.b_projection = tf.get_variable(‘b_projection‘, shape=[self.num_classes])
    def inference(self):
        """
        1. word embedding
        2. average embedding
        3. linear classifier
        :return:
        """
        # embedding layer
        with tf.name_scope(‘embedding‘):
            words_embedding = tf.nn.embedding_lookup(self.Embedding, self.input_x)
            self.average_embedding = tf.reduce_mean(words_embedding, axis=1)
        logits = tf.matmul(self.average_embedding, self.W_projection) +self.b_projection
        return logits
    def loss(self):
        # loss
        with tf.name_scope(‘loss‘):
            losses = tf.nn.softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.logits)
            data_loss = tf.reduce_mean(losses)
            l2_loss = tf.add_n([tf.nn.l2_loss(cand_var) for cand_var in tf.trainable_variables()
                                if ‘bias‘ not in cand_var.name]) * self.l2_reg_lambda
            data_loss += l2_loss * self.l2_reg_lambda
            return data_loss
    def train(self):
        with tf.name_scope(‘train‘):
            learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step,
                                                       self.decay_steps, self.decay_rate,
                                                       staircase=True)
            train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,
                                                      learning_rate=learning_rate, optimizer=‘Adam‘)
        return train_op  

def prepocess():
    """
    For load and process data
    :return:
    """
    print("Loading data...")
    x_text, y = data_process.load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
    # bulid vocabulary
    max_document_length = max(len(x.split(‘ ‘)) for x in x_text)
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    x = np.array(list(vocab_processor.fit_transform(x_text)))
    # shuffle
    np.random.seed(10)
    shuffle_indices = np.random.permutation(np.arange(len(y)))
    x_shuffled = x[shuffle_indices]
    y_shuffled = y[shuffle_indices]
    # split train/test dataset
    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
    del x, y, x_shuffled, y_shuffled
    print(‘Vocabulary Size: {:d}‘.format(len(vocab_processor.vocabulary_)))
    print(‘Train/Dev split: {:d}/{:d}‘.format(len(y_train), len(y_dev)))
    return x_train, y_train, vocab_processor, x_dev, y_dev
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
            # allows TensorFlow to fall back on a device with a certain operation implemented
            allow_soft_placement= FLAGS.allow_soft_placement,
            # allows TensorFlow log on which devices (CPU or GPU) it places operations
            log_device_placement=FLAGS.log_device_placement
        )
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            # initialize cnn
            fasttext = fastTextModel(sequence_length=x_train.shape[1],
                      num_classes=y_train.shape[1],
                      vocab_size=len(vocab_processor.vocabulary_),
                      embedding_size=FLAGS.embedding_size,
                      l2_reg_lambda=FLAGS.l2_reg_lambda,
                      is_training=True,
                      learning_rate=FLAGS.learning_rate,
                      decay_steps=FLAGS.decay_steps,
                      decay_rate=FLAGS.decay_rate
                    )
            # output dir for models and summaries
            timestamp = str(time.time())
            out_dir = os.path.abspath(os.path.join(os.path.curdir, ‘run‘, timestamp))
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            print(‘Writing to {} \n‘.format(out_dir))
            # checkpoint dir. checkpointing – saving the parameters of your model to restore them later on.
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, FLAGS.ckpt_dir))
            checkpoint_prefix = os.path.join(checkpoint_dir, ‘model‘)
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
            # Write vocabulary
            vocab_processor.save(os.path.join(out_dir, ‘vocab‘))
            # Initialize all
            sess.run(tf.global_variables_initializer())
            def train_step(x_batch, y_batch):
                """
                A single training step
                :param x_batch:
                :param y_batch:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                _, step, loss, accuracy = sess.run(
                    [fasttext.train_op, fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            def dev_step(x_batch, y_batch):
                """
                Evaluate model on a dev set
                Disable dropout
                :param x_batch:
                :param y_batch:
                :param writer:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                step, loss, accuracy = sess.run(
                    [fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("dev results:{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            # generate batches
            batches = data_process.batch_iter(list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # training loop
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, fasttext.global_step)
                if current_step % FLAGS.validate_every == 0:
                    print(‘\n Evaluation:‘)
                    dev_step(x_dev, y_dev)
                    print(‘‘)
            path = saver.save(sess, checkpoint_prefix, global_step=current_step)
            print(‘Save model checkpoint to {} \n‘.format(path))
def main(argv=None):
    x_train, y_train, vocab_processor, x_dev, y_dev = prepocess()
    train(x_train, y_train, vocab_processor, x_dev, y_dev)
if __name__ == ‘__main__‘:
    tf.app.run()

本文参考资料

[1] Bag of Tricks for Efficient Text Classification: https://arxiv.org/abs/1607.01759

The End本文参考原文-http://bjbsair.com/2020-03-25/tech-info/6300/

**写在前面

**



今天的教程是基于FAIR的Bag of Tricks for Efficient Text Classification[1]。也就是我们常说的fastText。

最让人欣喜的这篇论文配套提供了fasttext工具包。这个工具包代码质量非常高,论文结果一键还原,目前已经是包装地非常专业了,这是fastText官网和其github代码库,以及提供了python接口,可以直接通过pip安装。这样准确率高又快的模型绝对是实战利器。

为了更好地理解fasttext原理,我们现在直接复现来一遍,但是代码中仅仅实现了最简单的基于单词的词向量求平均,并未使用b-gram的词向量,所以自己实现的文本分类效果会低于facebook开源的库。

论文概览

?

We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.

首先引用论文中的一段话来看看作者们是怎么评价fasttext模型的表现的。

这篇论文的模型非常之简单,之前了解过word2vec的同学可以发现这跟CBOW的模型框架非常相似。

对应上面这个模型,比如输入是一句话,到就是这句话的单词或者是n-gram。每一个都对应一个向量,然后对这些向量取平均就得到了文本向量,然后用这个平均向量取预测标签。当类别不多的时候,就是最简单的softmax;当标签数量巨大的时候,就要用到「hierarchical softmax」了。

模型真的很简单,也没什么可以说的了。下面提一下论文中的两个tricks:

  • 「hierarchical softmax」

    类别数较多时,通过构建一个霍夫曼编码树来加速softmax layer的计算,和之前word2vec中的trick相同

  • 「N-gram features」

    只用unigram的话会丢掉word order信息,所以通过加入N-gram features进行补充 用hashing来减少N-gram的存储

看了论文的实验部分,如此简单的模型竟然能取得这么好的效果 !

但是也有人指出论文中选取的数据集都是对句子词序不是很敏感的数据集,所以得到文中的试验结果并不奇怪。

代码实现

看完阉割版代码大家记得去看看源码噢~ 跟之前系列的一样,定义一个fastTextModel类,然后写网络框架,输入输出placeholder,损失,训练步骤等。

class fastTextModel(BaseModel):
    """
    A simple implementation of fasttext for text classification
    """
    def __init__(self, sequence_length, num_classes, vocab_size,
                 embedding_size, learning_rate, decay_steps, decay_rate,
                 l2_reg_lambda, is_training=True,
                 initializer=tf.random_normal_initializer(stddev=0.1)):
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.num_classes = num_classes
        self.sequence_length = sequence_length
        self.learning_rate = learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate
        self.is_training = is_training
        self.l2_reg_lambda = l2_reg_lambda
        self.initializer = initializer
        self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name=‘input_x‘)
        self.input_y = tf.placeholder(tf.int32, [None, self.num_classes], name=‘input_y‘)
        self.global_step = tf.Variable(0, trainable=False, name=‘global_step‘)
        self.instantiate_weight()
        self.logits = self.inference()
        self.loss_val = self.loss()
        self.train_op = self.train()
        self.predictions = tf.argmax(self.logits, axis=1, name=‘predictions‘)
        correct_prediction = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, ‘float‘), name=‘accuracy‘)
    def instantiate_weight(self):
        with tf.name_scope(‘weights‘):
            self.Embedding = tf.get_variable(‘Embedding‘, shape=[self.vocab_size, self.embedding_size],
                                             initializer=self.initializer)
            self.W_projection = tf.get_variable(‘W_projection‘, shape=[self.embedding_size, self.num_classes],
                                                initializer=self.initializer)
            self.b_projection = tf.get_variable(‘b_projection‘, shape=[self.num_classes])
    def inference(self):
        """
        1. word embedding
        2. average embedding
        3. linear classifier
        :return:
        """
        # embedding layer
        with tf.name_scope(‘embedding‘):
            words_embedding = tf.nn.embedding_lookup(self.Embedding, self.input_x)
            self.average_embedding = tf.reduce_mean(words_embedding, axis=1)
        logits = tf.matmul(self.average_embedding, self.W_projection) +self.b_projection
        return logits
    def loss(self):
        # loss
        with tf.name_scope(‘loss‘):
            losses = tf.nn.softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.logits)
            data_loss = tf.reduce_mean(losses)
            l2_loss = tf.add_n([tf.nn.l2_loss(cand_var) for cand_var in tf.trainable_variables()
                                if ‘bias‘ not in cand_var.name]) * self.l2_reg_lambda
            data_loss += l2_loss * self.l2_reg_lambda
            return data_loss
    def train(self):
        with tf.name_scope(‘train‘):
            learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step,
                                                       self.decay_steps, self.decay_rate,
                                                       staircase=True)
            train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,
                                                      learning_rate=learning_rate, optimizer=‘Adam‘)
        return train_op  

def prepocess():
    """
    For load and process data
    :return:
    """
    print("Loading data...")
    x_text, y = data_process.load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
    # bulid vocabulary
    max_document_length = max(len(x.split(‘ ‘)) for x in x_text)
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    x = np.array(list(vocab_processor.fit_transform(x_text)))
    # shuffle
    np.random.seed(10)
    shuffle_indices = np.random.permutation(np.arange(len(y)))
    x_shuffled = x[shuffle_indices]
    y_shuffled = y[shuffle_indices]
    # split train/test dataset
    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
    del x, y, x_shuffled, y_shuffled
    print(‘Vocabulary Size: {:d}‘.format(len(vocab_processor.vocabulary_)))
    print(‘Train/Dev split: {:d}/{:d}‘.format(len(y_train), len(y_dev)))
    return x_train, y_train, vocab_processor, x_dev, y_dev
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
            # allows TensorFlow to fall back on a device with a certain operation implemented
            allow_soft_placement= FLAGS.allow_soft_placement,
            # allows TensorFlow log on which devices (CPU or GPU) it places operations
            log_device_placement=FLAGS.log_device_placement
        )
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            # initialize cnn
            fasttext = fastTextModel(sequence_length=x_train.shape[1],
                      num_classes=y_train.shape[1],
                      vocab_size=len(vocab_processor.vocabulary_),
                      embedding_size=FLAGS.embedding_size,
                      l2_reg_lambda=FLAGS.l2_reg_lambda,
                      is_training=True,
                      learning_rate=FLAGS.learning_rate,
                      decay_steps=FLAGS.decay_steps,
                      decay_rate=FLAGS.decay_rate
                    )
            # output dir for models and summaries
            timestamp = str(time.time())
            out_dir = os.path.abspath(os.path.join(os.path.curdir, ‘run‘, timestamp))
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            print(‘Writing to {} \n‘.format(out_dir))
            # checkpoint dir. checkpointing – saving the parameters of your model to restore them later on.
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, FLAGS.ckpt_dir))
            checkpoint_prefix = os.path.join(checkpoint_dir, ‘model‘)
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
            # Write vocabulary
            vocab_processor.save(os.path.join(out_dir, ‘vocab‘))
            # Initialize all
            sess.run(tf.global_variables_initializer())
            def train_step(x_batch, y_batch):
                """
                A single training step
                :param x_batch:
                :param y_batch:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                _, step, loss, accuracy = sess.run(
                    [fasttext.train_op, fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            def dev_step(x_batch, y_batch):
                """
                Evaluate model on a dev set
                Disable dropout
                :param x_batch:
                :param y_batch:
                :param writer:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                step, loss, accuracy = sess.run(
                    [fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("dev results:{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            # generate batches
            batches = data_process.batch_iter(list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # training loop
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, fasttext.global_step)
                if current_step % FLAGS.validate_every == 0:
                    print(‘\n Evaluation:‘)
                    dev_step(x_dev, y_dev)
                    print(‘‘)
            path = saver.save(sess, checkpoint_prefix, global_step=current_step)
            print(‘Save model checkpoint to {} \n‘.format(path))
def main(argv=None):
    x_train, y_train, vocab_processor, x_dev, y_dev = prepocess()
    train(x_train, y_train, vocab_processor, x_dev, y_dev)
if __name__ == ‘__main__‘:
    tf.app.run()

本文参考资料

[1] Bag of Tricks for Efficient Text Classification: https://arxiv.org/abs/1607.01759

The End本文参考原文-http://bjbsair.com/2020-03-25/tech-info/6300/

**写在前面

**



今天的教程是基于FAIR的Bag of Tricks for Efficient Text Classification[1]。也就是我们常说的fastText。

最让人欣喜的这篇论文配套提供了fasttext工具包。这个工具包代码质量非常高,论文结果一键还原,目前已经是包装地非常专业了,这是fastText官网和其github代码库,以及提供了python接口,可以直接通过pip安装。这样准确率高又快的模型绝对是实战利器。

为了更好地理解fasttext原理,我们现在直接复现来一遍,但是代码中仅仅实现了最简单的基于单词的词向量求平均,并未使用b-gram的词向量,所以自己实现的文本分类效果会低于facebook开源的库。

论文概览

?

We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.

首先引用论文中的一段话来看看作者们是怎么评价fasttext模型的表现的。

这篇论文的模型非常之简单,之前了解过word2vec的同学可以发现这跟CBOW的模型框架非常相似。

对应上面这个模型,比如输入是一句话,到就是这句话的单词或者是n-gram。每一个都对应一个向量,然后对这些向量取平均就得到了文本向量,然后用这个平均向量取预测标签。当类别不多的时候,就是最简单的softmax;当标签数量巨大的时候,就要用到「hierarchical softmax」了。

模型真的很简单,也没什么可以说的了。下面提一下论文中的两个tricks:

  • 「hierarchical softmax」

    类别数较多时,通过构建一个霍夫曼编码树来加速softmax layer的计算,和之前word2vec中的trick相同

  • 「N-gram features」

    只用unigram的话会丢掉word order信息,所以通过加入N-gram features进行补充 用hashing来减少N-gram的存储

看了论文的实验部分,如此简单的模型竟然能取得这么好的效果 !

但是也有人指出论文中选取的数据集都是对句子词序不是很敏感的数据集,所以得到文中的试验结果并不奇怪。

代码实现

看完阉割版代码大家记得去看看源码噢~ 跟之前系列的一样,定义一个fastTextModel类,然后写网络框架,输入输出placeholder,损失,训练步骤等。

class fastTextModel(BaseModel):
    """
    A simple implementation of fasttext for text classification
    """
    def __init__(self, sequence_length, num_classes, vocab_size,
                 embedding_size, learning_rate, decay_steps, decay_rate,
                 l2_reg_lambda, is_training=True,
                 initializer=tf.random_normal_initializer(stddev=0.1)):
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.num_classes = num_classes
        self.sequence_length = sequence_length
        self.learning_rate = learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate
        self.is_training = is_training
        self.l2_reg_lambda = l2_reg_lambda
        self.initializer = initializer
        self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name=‘input_x‘)
        self.input_y = tf.placeholder(tf.int32, [None, self.num_classes], name=‘input_y‘)
        self.global_step = tf.Variable(0, trainable=False, name=‘global_step‘)
        self.instantiate_weight()
        self.logits = self.inference()
        self.loss_val = self.loss()
        self.train_op = self.train()
        self.predictions = tf.argmax(self.logits, axis=1, name=‘predictions‘)
        correct_prediction = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, ‘float‘), name=‘accuracy‘)
    def instantiate_weight(self):
        with tf.name_scope(‘weights‘):
            self.Embedding = tf.get_variable(‘Embedding‘, shape=[self.vocab_size, self.embedding_size],
                                             initializer=self.initializer)
            self.W_projection = tf.get_variable(‘W_projection‘, shape=[self.embedding_size, self.num_classes],
                                                initializer=self.initializer)
            self.b_projection = tf.get_variable(‘b_projection‘, shape=[self.num_classes])
    def inference(self):
        """
        1. word embedding
        2. average embedding
        3. linear classifier
        :return:
        """
        # embedding layer
        with tf.name_scope(‘embedding‘):
            words_embedding = tf.nn.embedding_lookup(self.Embedding, self.input_x)
            self.average_embedding = tf.reduce_mean(words_embedding, axis=1)
        logits = tf.matmul(self.average_embedding, self.W_projection) +self.b_projection
        return logits
    def loss(self):
        # loss
        with tf.name_scope(‘loss‘):
            losses = tf.nn.softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.logits)
            data_loss = tf.reduce_mean(losses)
            l2_loss = tf.add_n([tf.nn.l2_loss(cand_var) for cand_var in tf.trainable_variables()
                                if ‘bias‘ not in cand_var.name]) * self.l2_reg_lambda
            data_loss += l2_loss * self.l2_reg_lambda
            return data_loss
    def train(self):
        with tf.name_scope(‘train‘):
            learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step,
                                                       self.decay_steps, self.decay_rate,
                                                       staircase=True)
            train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,
                                                      learning_rate=learning_rate, optimizer=‘Adam‘)
        return train_op  

def prepocess():
    """
    For load and process data
    :return:
    """
    print("Loading data...")
    x_text, y = data_process.load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
    # bulid vocabulary
    max_document_length = max(len(x.split(‘ ‘)) for x in x_text)
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    x = np.array(list(vocab_processor.fit_transform(x_text)))
    # shuffle
    np.random.seed(10)
    shuffle_indices = np.random.permutation(np.arange(len(y)))
    x_shuffled = x[shuffle_indices]
    y_shuffled = y[shuffle_indices]
    # split train/test dataset
    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
    del x, y, x_shuffled, y_shuffled
    print(‘Vocabulary Size: {:d}‘.format(len(vocab_processor.vocabulary_)))
    print(‘Train/Dev split: {:d}/{:d}‘.format(len(y_train), len(y_dev)))
    return x_train, y_train, vocab_processor, x_dev, y_dev
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
            # allows TensorFlow to fall back on a device with a certain operation implemented
            allow_soft_placement= FLAGS.allow_soft_placement,
            # allows TensorFlow log on which devices (CPU or GPU) it places operations
            log_device_placement=FLAGS.log_device_placement
        )
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            # initialize cnn
            fasttext = fastTextModel(sequence_length=x_train.shape[1],
                      num_classes=y_train.shape[1],
                      vocab_size=len(vocab_processor.vocabulary_),
                      embedding_size=FLAGS.embedding_size,
                      l2_reg_lambda=FLAGS.l2_reg_lambda,
                      is_training=True,
                      learning_rate=FLAGS.learning_rate,
                      decay_steps=FLAGS.decay_steps,
                      decay_rate=FLAGS.decay_rate
                    )
            # output dir for models and summaries
            timestamp = str(time.time())
            out_dir = os.path.abspath(os.path.join(os.path.curdir, ‘run‘, timestamp))
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            print(‘Writing to {} \n‘.format(out_dir))
            # checkpoint dir. checkpointing – saving the parameters of your model to restore them later on.
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, FLAGS.ckpt_dir))
            checkpoint_prefix = os.path.join(checkpoint_dir, ‘model‘)
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
            # Write vocabulary
            vocab_processor.save(os.path.join(out_dir, ‘vocab‘))
            # Initialize all
            sess.run(tf.global_variables_initializer())
            def train_step(x_batch, y_batch):
                """
                A single training step
                :param x_batch:
                :param y_batch:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                _, step, loss, accuracy = sess.run(
                    [fasttext.train_op, fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            def dev_step(x_batch, y_batch):
                """
                Evaluate model on a dev set
                Disable dropout
                :param x_batch:
                :param y_batch:
                :param writer:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                step, loss, accuracy = sess.run(
                    [fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("dev results:{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            # generate batches
            batches = data_process.batch_iter(list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # training loop
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, fasttext.global_step)
                if current_step % FLAGS.validate_every == 0:
                    print(‘\n Evaluation:‘)
                    dev_step(x_dev, y_dev)
                    print(‘‘)
            path = saver.save(sess, checkpoint_prefix, global_step=current_step)
            print(‘Save model checkpoint to {} \n‘.format(path))
def main(argv=None):
    x_train, y_train, vocab_processor, x_dev, y_dev = prepocess()
    train(x_train, y_train, vocab_processor, x_dev, y_dev)
if __name__ == ‘__main__‘:
    tf.app.run()

本文参考资料

[1] Bag of Tricks for Efficient Text Classification: https://arxiv.org/abs/1607.01759

The End本文参考原文-http://bjbsair.com/2020-03-25/tech-info/6300/

**写在前面

**



今天的教程是基于FAIR的Bag of Tricks for Efficient Text Classification[1]。也就是我们常说的fastText。

最让人欣喜的这篇论文配套提供了fasttext工具包。这个工具包代码质量非常高,论文结果一键还原,目前已经是包装地非常专业了,这是fastText官网和其github代码库,以及提供了python接口,可以直接通过pip安装。这样准确率高又快的模型绝对是实战利器。

为了更好地理解fasttext原理,我们现在直接复现来一遍,但是代码中仅仅实现了最简单的基于单词的词向量求平均,并未使用b-gram的词向量,所以自己实现的文本分类效果会低于facebook开源的库。

论文概览

?

We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.

首先引用论文中的一段话来看看作者们是怎么评价fasttext模型的表现的。

这篇论文的模型非常之简单,之前了解过word2vec的同学可以发现这跟CBOW的模型框架非常相似。

对应上面这个模型,比如输入是一句话,到就是这句话的单词或者是n-gram。每一个都对应一个向量,然后对这些向量取平均就得到了文本向量,然后用这个平均向量取预测标签。当类别不多的时候,就是最简单的softmax;当标签数量巨大的时候,就要用到「hierarchical softmax」了。

模型真的很简单,也没什么可以说的了。下面提一下论文中的两个tricks:

  • 「hierarchical softmax」

    类别数较多时,通过构建一个霍夫曼编码树来加速softmax layer的计算,和之前word2vec中的trick相同

  • 「N-gram features」

    只用unigram的话会丢掉word order信息,所以通过加入N-gram features进行补充 用hashing来减少N-gram的存储

看了论文的实验部分,如此简单的模型竟然能取得这么好的效果 !

但是也有人指出论文中选取的数据集都是对句子词序不是很敏感的数据集,所以得到文中的试验结果并不奇怪。

代码实现

看完阉割版代码大家记得去看看源码噢~ 跟之前系列的一样,定义一个fastTextModel类,然后写网络框架,输入输出placeholder,损失,训练步骤等。

class fastTextModel(BaseModel):
    """
    A simple implementation of fasttext for text classification
    """
    def __init__(self, sequence_length, num_classes, vocab_size,
                 embedding_size, learning_rate, decay_steps, decay_rate,
                 l2_reg_lambda, is_training=True,
                 initializer=tf.random_normal_initializer(stddev=0.1)):
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size
        self.num_classes = num_classes
        self.sequence_length = sequence_length
        self.learning_rate = learning_rate
        self.decay_steps = decay_steps
        self.decay_rate = decay_rate
        self.is_training = is_training
        self.l2_reg_lambda = l2_reg_lambda
        self.initializer = initializer
        self.input_x = tf.placeholder(tf.int32, [None, self.sequence_length], name=‘input_x‘)
        self.input_y = tf.placeholder(tf.int32, [None, self.num_classes], name=‘input_y‘)
        self.global_step = tf.Variable(0, trainable=False, name=‘global_step‘)
        self.instantiate_weight()
        self.logits = self.inference()
        self.loss_val = self.loss()
        self.train_op = self.train()
        self.predictions = tf.argmax(self.logits, axis=1, name=‘predictions‘)
        correct_prediction = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_prediction, ‘float‘), name=‘accuracy‘)
    def instantiate_weight(self):
        with tf.name_scope(‘weights‘):
            self.Embedding = tf.get_variable(‘Embedding‘, shape=[self.vocab_size, self.embedding_size],
                                             initializer=self.initializer)
            self.W_projection = tf.get_variable(‘W_projection‘, shape=[self.embedding_size, self.num_classes],
                                                initializer=self.initializer)
            self.b_projection = tf.get_variable(‘b_projection‘, shape=[self.num_classes])
    def inference(self):
        """
        1. word embedding
        2. average embedding
        3. linear classifier
        :return:
        """
        # embedding layer
        with tf.name_scope(‘embedding‘):
            words_embedding = tf.nn.embedding_lookup(self.Embedding, self.input_x)
            self.average_embedding = tf.reduce_mean(words_embedding, axis=1)
        logits = tf.matmul(self.average_embedding, self.W_projection) +self.b_projection
        return logits
    def loss(self):
        # loss
        with tf.name_scope(‘loss‘):
            losses = tf.nn.softmax_cross_entropy_with_logits(labels=self.input_y, logits=self.logits)
            data_loss = tf.reduce_mean(losses)
            l2_loss = tf.add_n([tf.nn.l2_loss(cand_var) for cand_var in tf.trainable_variables()
                                if ‘bias‘ not in cand_var.name]) * self.l2_reg_lambda
            data_loss += l2_loss * self.l2_reg_lambda
            return data_loss
    def train(self):
        with tf.name_scope(‘train‘):
            learning_rate = tf.train.exponential_decay(self.learning_rate, self.global_step,
                                                       self.decay_steps, self.decay_rate,
                                                       staircase=True)
            train_op = tf.contrib.layers.optimize_loss(self.loss_val, global_step=self.global_step,
                                                      learning_rate=learning_rate, optimizer=‘Adam‘)
        return train_op  

def prepocess():
    """
    For load and process data
    :return:
    """
    print("Loading data...")
    x_text, y = data_process.load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)
    # bulid vocabulary
    max_document_length = max(len(x.split(‘ ‘)) for x in x_text)
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    x = np.array(list(vocab_processor.fit_transform(x_text)))
    # shuffle
    np.random.seed(10)
    shuffle_indices = np.random.permutation(np.arange(len(y)))
    x_shuffled = x[shuffle_indices]
    y_shuffled = y[shuffle_indices]
    # split train/test dataset
    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))
    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]
    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]
    del x, y, x_shuffled, y_shuffled
    print(‘Vocabulary Size: {:d}‘.format(len(vocab_processor.vocabulary_)))
    print(‘Train/Dev split: {:d}/{:d}‘.format(len(y_train), len(y_dev)))
    return x_train, y_train, vocab_processor, x_dev, y_dev
def train(x_train, y_train, vocab_processor, x_dev, y_dev):
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(
            # allows TensorFlow to fall back on a device with a certain operation implemented
            allow_soft_placement= FLAGS.allow_soft_placement,
            # allows TensorFlow log on which devices (CPU or GPU) it places operations
            log_device_placement=FLAGS.log_device_placement
        )
        sess = tf.Session(config=session_conf)
        with sess.as_default():
            # initialize cnn
            fasttext = fastTextModel(sequence_length=x_train.shape[1],
                      num_classes=y_train.shape[1],
                      vocab_size=len(vocab_processor.vocabulary_),
                      embedding_size=FLAGS.embedding_size,
                      l2_reg_lambda=FLAGS.l2_reg_lambda,
                      is_training=True,
                      learning_rate=FLAGS.learning_rate,
                      decay_steps=FLAGS.decay_steps,
                      decay_rate=FLAGS.decay_rate
                    )
            # output dir for models and summaries
            timestamp = str(time.time())
            out_dir = os.path.abspath(os.path.join(os.path.curdir, ‘run‘, timestamp))
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            print(‘Writing to {} \n‘.format(out_dir))
            # checkpoint dir. checkpointing – saving the parameters of your model to restore them later on.
            checkpoint_dir = os.path.abspath(os.path.join(out_dir, FLAGS.ckpt_dir))
            checkpoint_prefix = os.path.join(checkpoint_dir, ‘model‘)
            if not os.path.exists(checkpoint_dir):
                os.makedirs(checkpoint_dir)
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=FLAGS.num_checkpoints)
            # Write vocabulary
            vocab_processor.save(os.path.join(out_dir, ‘vocab‘))
            # Initialize all
            sess.run(tf.global_variables_initializer())
            def train_step(x_batch, y_batch):
                """
                A single training step
                :param x_batch:
                :param y_batch:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                _, step, loss, accuracy = sess.run(
                    [fasttext.train_op, fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            def dev_step(x_batch, y_batch):
                """
                Evaluate model on a dev set
                Disable dropout
                :param x_batch:
                :param y_batch:
                :param writer:
                :return:
                """
                feed_dict = {
                    fasttext.input_x: x_batch,
                    fasttext.input_y: y_batch,
                }
                step, loss, accuracy = sess.run(
                    [fasttext.global_step, fasttext.loss_val, fasttext.accuracy],
                    feed_dict=feed_dict
                )
                time_str = datetime.datetime.now().isoformat()
                print("dev results:{}: step {}, loss {:g}, acc {:g}".format(time_str, step, loss, accuracy))
            # generate batches
            batches = data_process.batch_iter(list(zip(x_train, y_train)), FLAGS.batch_size, FLAGS.num_epochs)
            # training loop
            for batch in batches:
                x_batch, y_batch = zip(*batch)
                train_step(x_batch, y_batch)
                current_step = tf.train.global_step(sess, fasttext.global_step)
                if current_step % FLAGS.validate_every == 0:
                    print(‘\n Evaluation:‘)
                    dev_step(x_dev, y_dev)
                    print(‘‘)
            path = saver.save(sess, checkpoint_prefix, global_step=current_step)
            print(‘Save model checkpoint to {} \n‘.format(path))
def main(argv=None):
    x_train, y_train, vocab_processor, x_dev, y_dev = prepocess()
    train(x_train, y_train, vocab_processor, x_dev, y_dev)
if __name__ == '__main__':
    tf.app.run()
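
The script above pulls all of its hyperparameters and paths from FLAGS, which are defined elsewhere in the original repository and are not shown in this excerpt. As a reference, here is a minimal sketch of how these flags could be declared with the TF 1.x tf.app.flags API; the default values are illustrative assumptions, not the author's settings.

import tensorflow as tf

flags = tf.app.flags
# Data paths (placeholder defaults, point them at your own files)
flags.DEFINE_string('positive_data_file', './data/rt-polarity.pos', 'File with positive examples')
flags.DEFINE_string('negative_data_file', './data/rt-polarity.neg', 'File with negative examples')
flags.DEFINE_float('dev_sample_percentage', 0.1, 'Fraction of data held out as the dev set')
# Model hyperparameters
flags.DEFINE_integer('embedding_size', 128, 'Dimensionality of the word embeddings')
flags.DEFINE_float('l2_reg_lambda', 0.0001, 'L2 regularization coefficient')
flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate')
flags.DEFINE_integer('decay_steps', 1000, 'Steps between learning-rate decays')
flags.DEFINE_float('decay_rate', 0.9, 'Learning-rate decay factor')
# Training settings
flags.DEFINE_integer('batch_size', 64, 'Batch size')
flags.DEFINE_integer('num_epochs', 10, 'Number of training epochs')
flags.DEFINE_integer('validate_every', 100, 'Run a dev-set evaluation every this many steps')
flags.DEFINE_string('ckpt_dir', 'checkpoints', 'Checkpoint sub-directory under out_dir')
flags.DEFINE_integer('num_checkpoints', 5, 'Maximum number of checkpoints to keep')
# Session settings
flags.DEFINE_boolean('allow_soft_placement', True, 'Allow TensorFlow to fall back to another device')
flags.DEFINE_boolean('log_device_placement', False, 'Log which device each op is placed on')

FLAGS = flags.FLAGS

With these definitions in place, each flag can also be overridden on the command line when launching the script.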

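The script also imports a small data_process module (load_data_and_labels and batch_iter) that is not shown. The following is a hedged reconstruction of what such helpers typically look like for this two-file, binary sentence-classification setup; it illustrates the expected interface and is not the author's original module.

import numpy as np

def load_data_and_labels(positive_data_file, negative_data_file):
    """Read one sentence per line from each file and build one-hot labels."""
    positive = [line.strip() for line in open(positive_data_file, encoding='utf-8')]
    negative = [line.strip() for line in open(negative_data_file, encoding='utf-8')]
    x_text = positive + negative
    # one-hot labels: [0, 1] for positive examples, [1, 0] for negative examples
    y = np.concatenate([[[0, 1]] * len(positive), [[1, 0]] * len(negative)], axis=0)
    return x_text, y

def batch_iter(data, batch_size, num_epochs, shuffle=True):
    """Yield shuffled mini-batches of data for the given number of epochs."""
    data = np.array(data, dtype=object)
    data_size = len(data)
    num_batches_per_epoch = int((data_size - 1) / batch_size) + 1
    for _ in range(num_epochs):
        shuffled = data[np.random.permutation(data_size)] if shuffle else data
        for batch_num in range(num_batches_per_epoch):
            start = batch_num * batch_size
            end = min((batch_num + 1) * batch_size, data_size)
            yield shuffled[start:end]
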
References

[1] Bag of Tricks for Efficient Text Classification: https://arxiv.org/abs/1607.01759
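
As a closing note, the official toolkit accompanying [1] can be installed with pip (pip install fasttext) and trains a supervised classifier in a few lines. The snippet below is a minimal usage sketch; the file names, label strings and hyperparameter values are illustrative assumptions rather than settings from this article.

import fasttext

# Training file format: one example per line, e.g. "__label__positive some tokenized text"
model = fasttext.train_supervised(
    input='train.txt',   # assumed path to the training file
    lr=0.5,              # learning rate
    epoch=25,            # number of epochs
    wordNgrams=2,        # include bigram features (the N-gram trick from the paper)
    dim=100              # embedding dimension
)

# Evaluate on a held-out file: returns (number of examples, precision@1, recall@1)
print(model.test('dev.txt'))

# Predict the label of a single sentence
print(model.predict('this movie is great'))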

The End

Original post: https://www.cnblogs.com/lihanlin/p/12571871.html
