Matrix Factorization (Rank Decomposition): A Roundup of Papers and Code

This post collects nearly all existing matrix factorization algorithms and their applications. Original source: https://sites.google.com/site/igorcarron2/matrixfactorizations

Matrix decomposition has a long history and generally centers around a set of known factorizations such as LU, QR, SVD and eigendecompositions. More recent factorizations have seen the light of day with work that started with the advent of NMF, k-means and related algorithms [1]. However, with the arrival of new methods based on random projections and convex optimization, which started in part in the compressive sensing literature, we are seeing another surge of very diverse algorithms dedicated to many different kinds of matrix factorizations with new constraints based on rank and/or positivity and/or sparsity. As a result of this large increase in interest, I have decided to keep a list of them here, following the success of the big picture in compressive sensing.

The sources for this list include the following most excellent sites: Stephen Becker's page, Raghunandan H. Keshavan's page, Nuclear Norm and Matrix Recovery through SDP by Christoph Helmberg, and Arvind Ganesh's Low-Rank Matrix Recovery and Completion via Convex Optimization, all of which provide more in-depth additional information. Additional codes were also featured on Nuit Blanche. The following people provided additional input: Olivier Grisel and Matthieu Puigt.

Most of the algorithms listed below rely on using the nuclear norm as a proxy for the rank functional. It may not be optimal. Currently, CVX (Michael Grant and Stephen Boyd) allows one to explore other proxies for the rank functional, such as the log-det heuristic of Maryam Fazel, Haitham Hindi and Stephen Boyd. ** is used to show that the algorithm uses a heuristic other than the nuclear norm.

In terms of notation, A refers to a matrix, L to a low-rank matrix, S to a sparse one and N to a noisy one. This page lists the different codes that implement the following matrix factorizations: Matrix Completion, Robust PCA, Noisy Robust PCA, Sparse PCA, NMF, Dictionary Learning, MMV, Randomized Algorithms and other factorizations. Some of these toolboxes implement several of these decompositions and are listed accordingly. Before I list algorithms here, I generally feature them on Nuit Blanche under the MF tag: http://nuit-blanche.blogspot.com/search/label/MF, or you can subscribe to the Nuit Blanche feed.

Matrix Completion: A = H.*L with H a known mask and L unknown; solve for L with the lowest possible rank

The idea of this approach is to complete the unknown coefficients of a matrix based on the fact that the matrix is low rank:
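As a rough illustration of how the nuclear-norm proxy is used in practice, here is a minimal singular value thresholding (SVT)-style sketch in Python/NumPy. It is not any of the packages listed on this page, and the threshold tau, step size and iteration count are illustrative assumptions.

```python
import numpy as np

def svt_complete(A, H, tau=5.0, step=1.2, n_iter=200):
    """Minimal SVT-style matrix completion sketch (illustrative parameters)."""
    H = H.astype(float)                      # mask: 1 where A is observed
    Y = np.zeros_like(A, dtype=float)
    L = np.zeros_like(Y)
    for _ in range(n_iter):
        # shrink the singular values of the running iterate Y
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # gradient step: only the observed entries constrain L
        Y += step * H * (A - L)
    return L

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L_true = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
    H = rng.random(L_true.shape) < 0.5       # observe roughly half the entries
    L_hat = svt_complete(H * L_true, H)
    print(np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```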

Noisy Robust PCA: A = L + S + N with L, S, N unknown; solve for L low rank, S sparse, N noise

Robust PCA: A = L + S with L and S unknown; solve for L low rank, S sparse
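A minimal sketch of the principal component pursuit idea behind many of these codes, via a plain ADMM-style alternation between singular-value shrinkage for L and entrywise soft thresholding for S. The weights lam and mu below are the usual heuristic choices, assumed here rather than taken from any specific package; the noisy variant above is handled by the same kind of scheme, since the residual A - L - S absorbs N.

```python
import numpy as np

def soft(X, t):
    """Entrywise soft thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def rpca(A, n_iter=200):
    """Sketch of principal component pursuit: split A into L (low rank) + S (sparse)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    lam = 1.0 / np.sqrt(max(m, n))                   # standard heuristic weight on ||S||_1
    mu = 0.25 * m * n / (np.abs(A).sum() + 1e-12)    # common ADMM penalty initialization
    L = np.zeros_like(A); S = np.zeros_like(A); Y = np.zeros_like(A)
    for _ in range(n_iter):
        # L-update: singular value thresholding of A - S + Y/mu
        U, s, Vt = np.linalg.svd(A - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # S-update: entrywise shrinkage of A - L + Y/mu
        S = soft(A - L + Y / mu, lam / mu)
        # dual ascent on the constraint A = L + S
        Y += mu * (A - L - S)
    return L, S
```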

Sparse PCA: A = DX with unknown D and X; solve for sparse D

Sparse PCA on wikipedia

  • R. Jenatton, G. Obozinski, F. Bach. Structured Sparse Principal Component Analysis. International Conference on Artificial Intelligence and Statistics (AISTATS). [pdf] [code]
  • SPAMs
  • DSPCA: Sparse PCA using SDP. The code is here.
  • PathPCA: A fast greedy algorithm for Sparse PCA. The code is here.
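In addition to the dedicated packages above, scikit-learn also ships a SparsePCA estimator. A brief usage sketch follows; the random data, number of components and alpha sparsity weight are illustrative assumptions, not recommendations.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))          # rows = samples, columns = variables

# alpha controls how sparse the loadings become; the value here is illustrative
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0)
X = spca.fit_transform(A)                   # coefficients, shape (100, 5)
D = spca.components_                        # sparse loadings, shape (5, 30)

print("fraction of zero loadings:", np.mean(D == 0))
```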

Dictionary Learning: A = DX with unknown D and X; solve for sparse X

Some implementations of dictionary learning also implement NMF.
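As a small usage sketch, not tied to any particular code listed here, scikit-learn's DictionaryLearning performs this alternation between dictionary update and sparse coding. Note that scikit-learn factors the data as codes times dictionary (the transpose of the A = DX convention above); the sizes and the sparsity level are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20))                   # each row is a signal to be coded

dl = DictionaryLearning(n_components=30,             # overcomplete dictionary
                        transform_algorithm="omp",   # sparse coding step
                        transform_n_nonzero_coefs=3,
                        random_state=0)
X = dl.fit_transform(A)      # sparse codes, shape (200, 30)
D = dl.components_           # dictionary atoms, shape (30, 20), so A ~ X @ D

print("average nonzeros per code:", np.mean(np.count_nonzero(X, axis=1)))
```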

NMF: A = DX with unknown D and X; solve for D and X with all elements of D, X > 0

Non-negative Matrix Factorization (NMF) on wikipedia
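A minimal sketch of the classic Lee–Seung multiplicative updates for the Frobenius objective, which keep both factors nonnegative by construction. The rank k, iteration count and the small eps safeguard are illustrative choices; the dedicated codes collected on this page implement far more refined variants.

```python
import numpy as np

def nmf(A, k, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates for A ~ D @ X with D, X >= 0."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    D = rng.random((m, k)) + eps
    X = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # update X, then D; eps guards against division by zero
        X *= (D.T @ A) / (D.T @ D @ X + eps)
        D *= (A @ X.T) / (D @ X @ X.T + eps)
    return D, X
```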

Multiple Measurement Vector (MMV): Y = AX with unknown X, where the rows of X are sparse.
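One quick way to experiment with this joint-sparsity constraint is scikit-learn's MultiTaskLasso, whose group penalty zeroes out entire rows of X across all measurement vectors at once. The dictionary, problem sizes and alpha below are illustrative assumptions, not a reference implementation of any code listed here.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))        # known dictionary / sensing matrix
X_true = np.zeros((100, 8))               # 8 measurement vectors, jointly row-sparse
X_true[rng.choice(100, 5, replace=False), :] = rng.standard_normal((5, 8))
Y = A @ X_true

# the l2/l1 group penalty keeps or kills whole rows of X across all columns of Y
mmv = MultiTaskLasso(alpha=0.05).fit(A, Y)
X_hat = mmv.coef_.T                       # coef_ is stored as (n_tasks, n_features)

print("rows surviving the group penalty:",
      np.flatnonzero(np.linalg.norm(X_hat, axis=1) > 1e-6))
```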

Blind Source Separation (BSS): Y = AX with unknown A and X, and statistical independence between the columns of X or between subspaces of columns of X.

This includes Independent Component Analysis (ICA), Independent Subspace Analysis (ISA) and Sparse Component Analysis (SCA). There are many available codes for ICA and some for SCA. Here is a non-exhaustive list of some famous ones (which are not limited to linear instantaneous mixtures). TBC
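As a starting point before reaching for the dedicated toolboxes below, scikit-learn's FastICA covers the linear instantaneous ICA case. The two-source mixing setup here is a toy example chosen for illustration only.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
# two independent sources: a sine wave and a square wave
S = np.c_[np.sin(3 * t), np.sign(np.sin(5 * t))]
A_mix = np.array([[1.0, 0.5], [0.4, 1.2]])    # unknown mixing matrix
X_obs = S @ A_mix.T                           # observed mixtures, one per column

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X_obs)              # estimated sources (up to scale/permutation)
A_hat = ica.mixing_                           # estimated mixing matrix
```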

ICA:

SCA:

Randomized Algorithms

These algorithms generally use random projections to shrink very large problems into smaller ones that are amenable to traditional matrix factorization methods.
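The core trick in most of these methods is the randomized range finder of Halko, Martinsson and Tropp: sample the range of the matrix with a small random test matrix, then perform the expensive factorization on the reduced problem. A minimal randomized SVD sketch follows; the rank, oversampling and power-iteration counts are illustrative assumptions.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, n_power=2, seed=0):
    """Randomized SVD sketch: approximate the top-k singular triplets of A."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # sample the range of A with a random Gaussian test matrix
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega
    # a few power iterations sharpen the basis when singular values decay slowly
    for _ in range(n_power):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # factor the small projected matrix, then lift the left factor back
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]
```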

Resource
Randomized algorithms for matrices and data by Michael W. Mahoney
Randomized Algorithms for Low-Rank Matrix Decomposition

Other factorization

D(T(.)) = L + E with L and E unknown and an unknown transformation T; solve for the transformation T, the low-rank L and the noise E

Frameworks featuring advanced Matrix factorizations

For the time being, few have integrated the most recent factorizations.

GraphLab / Hadoop

Books

Example of use

Sources

Arvind Ganesh‘s Low-Rank Matrix Recovery and Completion via Convex Optimization

Relevant links

Reference:

A Unified View of Matrix Factorization Models by Ajit P. Singh and Geoffrey J. Gordon

