How much training data do you need?

//@樵夫上校: 0. Empirically, the rule of 10 (training data at 10x the number of model parameters) applies to most models, including shallow networks. 1. Linear models can use the rule of 10 directly, with the parameter count taken after feature selection (PCA and similar methods). 2. For neural networks, treat the rule of 10 as a lower bound on the amount of training data.

The quality and amount of training data is often the single most dominant factor that determines the performance of a model. Once you have the training data angle covered, the rest usually follows. But exactly how much training data do you need? The correct answer is: it depends. It depends on the task you are trying to perform, the performance you want to achieve, the input features you have, the noise in the training data, the noise in your extracted features, the complexity of your model, and so on. The way to untangle the interaction of all these variables is to train your model on varying amounts of training data and plot learning curves. But this requires you to already have a decent amount of training data to construct interesting plots. What do you do when you are just starting out? Or when you suspect you have too little training data and want to estimate how big a problem you face?

So instead of the dead accurate “correct” answer to the problem, how about an estimate, a practical rule of thumb? One way out is to take an empirical approach, as follows. First, automatically generate a lot of logistic regression problems. For each generated problem, study the relationship between the amount of training data and the performance of the trained models. By observing this relationship over a range of problems, we can generalize to a simple rule.

Here is the code to generate a range of logistic regression problems and study the effect of varying the amount of training data. The code is based on TensorFlow. Running it doesn’t require any special software or hardware (TensorFlow is open-sourced by Google), and I was able to run the entire experiment on my laptop. Upon running, the code produces the graph described below.
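The original code was shared as a link that hasn’t survived here, so below is a minimal sketch of the same experiment, not the original script. The helper names, the noise level, and the sample-size grid are my assumptions, but the structure matches the description above: random linear problems, one logistic regression trained per sample size, and an f-score measured on held-out data.

```python
import numpy as np
import tensorflow as tf

def make_problem(num_params, num_train, num_test=2000, noise=0.5, seed=0):
    # Sample a random linear decision boundary, then label noisy points with it.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(num_params, 1))
    X = rng.normal(size=(num_train + num_test, num_params))
    logits = X @ w + noise * rng.normal(size=(num_train + num_test, 1))
    y = (logits > 0).astype(np.float32)
    return (X[:num_train], y[:num_train]), (X[num_train:], y[num_train:])

def f_score(y_true, y_pred):
    # Harmonic mean of precision and recall for binary labels.
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)

for num_params in [16, 64, 128]:        # model sizes to compare
    for ratio in [1, 2, 5, 10, 20]:     # training samples per model parameter
        (X_tr, y_tr), (X_te, y_te) = make_problem(num_params, ratio * num_params)
        # A single sigmoid unit is exactly logistic regression.
        model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
        model.compile(optimizer="adam", loss="binary_crossentropy")
        model.fit(X_tr, y_tr, epochs=200, verbose=0)
        y_hat = (model.predict(X_te, verbose=0) > 0.5).astype(np.float32)
        print(f"params={num_params} ratio={ratio} f-score={f_score(y_te, y_hat):.3f}")
```

Plotting f-score against the ratio, with one curve per model size, should reproduce the shape of the graph described next.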

The x-axis is the ratio of the number of training samples to the number of model parameters. The y-axis is the f-score of the trained model. The curves in different colors correspond to models that differ in the number of parameters. For example, the red curve, which corresponds to a model with 128 parameters, indicates how the f-score changes as the number of training samples is varied over 128 x 1, 128 x 2, and so on.

The first observation is that the f-score curves don’t vary as the parameters scale. This is expected given that the models are linear, and it’s good to see that no hidden non-linearity creeps in. Of course, larger models need more training data, but for a given ratio of the number of training samples to the number of model parameters you get the same performance. The second observation is that when the ratio of training samples to model parameters is 10:1, the f-score lands in the vicinity of 0.85, which we take as the definition of a well performing model. This leads us to the rule of 10: the amount of training data you need for a well performing model is 10x the number of parameters in the model.

The rule of 10 transforms the problem of estimating the amount of training data required into knowing the number of parameters in the model, so it deserves some discussion. For linear models such as logistic regression, the number of parameters equals the number of input features, since the model assigns a parameter to each feature. However, there can be some complications:

  • Your features may be sparse, so counting the number of features may not be straightforward.
  • Due to regularization and feature selection techniques a lot of features may be discarded, so the real feature count is much smaller than the number of raw features that are input to the model.

One way to tackle the issue is to observe that you don’t really need labeled data to estimate the number of features; even unlabeled examples are sufficient for that purpose. For example, given a large corpus of text, you can generate histograms of word frequencies to understand your feature space before beginning to label the data for training. Given the histogram, you can discard the words in the long tail to get an estimate of the real feature count, which then gives an estimate of the amount of training data you need by applying the rule of 10.
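As a concrete illustration, here is a small sketch of that estimate. The corpus file name and the tail cutoff are placeholders I chose, not values from the text:

```python
from collections import Counter

# Count word frequencies over an unlabeled corpus (one document per line).
counts = Counter()
with open("corpus.txt") as f:          # hypothetical corpus file
    for line in f:
        counts.update(line.lower().split())

# Discard the long tail: words seen fewer than min_count times.
min_count = 5                          # an assumed cutoff; tune for your corpus
effective_features = sum(1 for c in counts.values() if c >= min_count)

print("estimated feature count:", effective_features)
print("training examples needed (rule of 10):", 10 * effective_features)
```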

Neural networks pose a different set of problems than linear models like logistic regression. To get the number of parameters in a neural network you need to do the following (a concrete sketch follows the list):

  • Count the number of parameters used in the embedding layer if your input is sparse (see the Tensorflow tutorial on word embeddings for example).
  • Count the number of edges in your network.
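For instance, Keras can report this count directly once the model is built. A minimal sketch, with an arbitrary toy architecture (the vocabulary size, embedding width, and layer sizes are placeholders, not values from the text):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # Embedding layer for sparse text input: 10000 * 32 parameters.
    tf.keras.layers.Embedding(input_dim=10000, output_dim=32),
    tf.keras.layers.GlobalAveragePooling1D(),        # no parameters
    tf.keras.layers.Dense(16, activation="relu"),    # 32*16 weights + 16 biases
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 16 weights + 1 bias
])
model.build(input_shape=(None, 100))  # sequence length 100
print(model.count_params())  # embedding table plus every edge and bias: 320545
```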

The problem is that the relationship between a neural network’s parameters and its output is no longer linear, so the empirical study we did based on logistic regression doesn’t really apply anymore. In such cases you can treat the rule of 10 as a lower bound on the amount of training data needed.

Despite the complications above, in my experience the rule of 10 seems to work across a wide range of problems, including shallow neural nets. However, when in doubt, plug your own model and assumptions into the TensorFlow code and run the simulation to study its effects. Please feel free to share if you gain any insight in the process.
