Unsupervised Learning: Use Cases


The features learned by deep neural networks can be used for the purposes of classification, clustering and regression.

Neural nets are simply universal approximators built from non-linearities. They produce "good" features either by learning to reconstruct data during unsupervised pretraining, or through backpropagation, in which case they plug into arbitrary loss functions to map inputs to outputs.

The features learned by neural networks can be fed into a variety of other algorithms, including traditional machine-learning algorithms that group the input, softmax/logistic regression that classifies it, or simple regression that predicts a value.

So you can think of neural networks as feature producers that plug modularly into other functions. For example, you could have a convolutional neural network learn image features on ImageNet with supervised training, then take the activations/features learned by that network and feed them into a second algorithm that learns to group the images.
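Here is a minimal sketch of that pipeline in Python. (This doc's ecosystem is Deeplearning4j/Java; Keras and scikit-learn are used here only because they make the pattern compact. The model choice and the random stand-in images are illustrative assumptions.)

```python
# Sketch: use a pretrained ConvNet as a feature producer, then cluster its features.
# Assumes TensorFlow/Keras and scikit-learn are installed; `images` is a stand-in
# for a real batch of 224x224 RGB images.
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.cluster import KMeans

extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

images = np.random.rand(32, 224, 224, 3) * 255      # stand-in for real images
features = extractor.predict(preprocess_input(images))  # shape: (32, 2048)

groups = KMeans(n_clusters=4, n_init=10).fit_predict(features)
print(groups)  # a cluster label for each image
```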

Here is a list of use cases for features generated by neural networks:

Visualization

t-distributed stochastic neighbor embedding (t-SNE) is an algorithm that reduces high-dimensional data to two or three dimensions, which can then be represented in a scatterplot. t-SNE is useful for finding latent trends in data. Deeplearning4j relies on t-SNE for some visualizations, and it is an interesting end point for neural network features. For more information and downloads, see this page on t-SNE.
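A minimal sketch of the idea, assuming scikit-learn's t-SNE implementation and matplotlib (the random features are stand-ins for activations taken from a trained network):

```python
# Sketch: project high-dimensional features to 2-D with t-SNE and scatter-plot them.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

features = np.random.rand(200, 50)                       # stand-in for learned features
embedded = TSNE(n_components=2, perplexity=30).fit_transform(features)

plt.scatter(embedded[:, 0], embedded[:, 1], s=5)
plt.title("t-SNE projection of learned features")
plt.show()
```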

Renders - Deeplearning4j relies on visual renders as heuristics to monitor how well a neural network is learning; that is, renders are used to debug. They help us visualize activations over time, which indicate what, and how much, the network is learning.
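A framework-agnostic sketch of the "activations over time" idea, assuming matplotlib; the per-epoch activation arrays here are synthetic stand-ins for values you would capture from a real layer during training:

```python
# Sketch: track the mean activation of one layer across training epochs.
import numpy as np
import matplotlib.pyplot as plt

epochs = 20
# Stand-in: one array of layer activations recorded at each epoch.
activation_history = [np.random.rand(256) * (1 - np.exp(-e / 5)) for e in range(epochs)]

mean_activations = [a.mean() for a in activation_history]
plt.plot(range(epochs), mean_activations)
plt.xlabel("epoch")
plt.ylabel("mean activation")
plt.title("Activations over time")
plt.show()
```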

K-Means Clustering

K-means is an algorithm used for automatically labeling activations based on their raw distances from other inputs in a vector space. There is no labeled target; k-means picks so-called centroids, creating them through a repeated averaging of the data points assigned to each cluster, and classifies new data by its proximity to a given centroid. Each centroid is associated with a label. This is an example of unsupervised learning (learning without labeled targets) that nonetheless applies labels.
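A minimal sketch with scikit-learn (an assumption; the two synthetic blobs stand in for real feature vectors):

```python
# Sketch: k-means picks centroids by repeated averaging, then labels new points
# by their nearest centroid.
import numpy as np
from sklearn.cluster import KMeans

points = np.vstack([np.random.randn(50, 2) + [0, 0],
                    np.random.randn(50, 2) + [5, 5]])

kmeans = KMeans(n_clusters=2, n_init=10).fit(points)
print(kmeans.cluster_centers_)          # the learned centroids
print(kmeans.predict([[4.5, 5.2]]))     # new data labeled by its nearest centroid
```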

Transfer Learning

Transfer learning takes the activations of one neural network and puts them to use as features for another algorithm or classifier. For example, you can take a ConvNet model trained on ImageNet and pass fresh images through it, feeding the resulting activations into another algorithm such as k-nearest neighbors. The strict definition of transfer learning is just that: taking a model trained on one set of data and plugging it into another problem.
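Continuing the feature-extraction sketch from earlier (scikit-learn assumed; the random arrays below stand in for features produced by a frozen, pretrained network), the "another algorithm" can be as simple as a k-nearest-neighbors classifier:

```python
# Sketch: classify new inputs by nearest neighbors in a pretrained feature space.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

train_features = np.random.rand(100, 2048)       # stand-in for pretrained-net features
train_labels = np.random.randint(0, 3, size=100)
new_features = np.random.rand(5, 2048)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(train_features, train_labels)   # the ConvNet itself is never retrained
print(knn.predict(new_features))
```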

K-Nearest Neighbors

This algorithm serves the purposes of classification and regression, and is typically implemented with a kd-tree. A kd-tree is a data structure for storing a finite set of points from a k-dimensional space. It subdivides the space into a binary tree by splitting along one coordinate axis at a time; you navigate the tree to find the points closest to a query, and the label associated with those closest points is applied to the input.

Let your input and training examples be vectors. Training vectors might be arranged in a binary tree like so:

If you were to visualize those nodes in two dimensions, partitioning space at each branch, then the kd-tree would look like this:

Now, let's say you place a new input, X, in the tree's partitioned space. This allows you to identify both the parent and child of the region X falls into within the tree. X then becomes the center of a circle whose radius is the distance to that region's child node. By definition, only other nodes within the circle's circumference can be nearer, so any subtree whose region lies entirely outside the circle can be pruned from the search.
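A minimal sketch of such a nearest-neighbor query, assuming SciPy's kd-tree (the random training vectors are illustrative):

```python
# Sketch: build a kd-tree over training vectors and query the nearest neighbors
# of a new input X.
import numpy as np
from scipy.spatial import KDTree

training_vectors = np.random.rand(1000, 2)
tree = KDTree(training_vectors)

X = np.array([0.5, 0.5])
distances, indices = tree.query(X, k=3)   # the 3 closest training points
print(distances, indices)
# A k-NN classifier would now apply the majority label among `indices` to X.
```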

And finally, if you want to make art with kd-trees, you could do a lot worse than this:

(Hat tip to Andrew Moore of CMU for his excellent diagrams.)
