sklearn.feature_extraction.FeatureHasher(n_features=1048576, input_type="dict", dtype=<class 'numpy.float64'>, alternate_sign=True, non_negative=False):
Implementation class for feature hashing.
This class turns sequences of symbolic feature names (strings) into scipy.sparse matrices, using a hash function to compute the matrix column corresponding to each name. The hash function used is the signed 32-bit version of MurmurHash3.
Feature names given as byte strings are used as-is. Unicode strings are first converted to UTF-8, but no Unicode normalization is performed. Feature values must be (finite) numbers.
This class is a low-memory alternative to DictVectorizer and CountVectorizer, intended for large-scale (online) learning and for memory-constrained situations, e.g. when running prediction code on embedded devices.
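To make the online-learning use case concrete, here is a minimal sketch (not from the original post; the batch data is invented for illustration): because the hasher keeps no state or vocabulary, each incoming batch can be hashed independently and fed to SGDClassifier.partial_fit.

from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import SGDClassifier

# Stateless hashing: every batch is transformed independently, no fit needed.
hasher = FeatureHasher(n_features=2 ** 18, input_type="string")
clf = SGDClassifier()  # hinge loss by default

# Toy batches of (token list, label), made up for illustration.
batches = [
    (["buy", "cheap", "pills", "now"], 1),
    (["meeting", "agenda", "tomorrow"], 0),
]
for toks, label in batches:
    X = hasher.transform([toks])   # no vocabulary to build or store
    clf.partial_fit(X, [label], classes=[0, 1])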
n_features: integer
The number of features (columns) in the output matrix. A small number of features is likely to cause hash collisions, while a large number inflates the coefficient dimension for linear learners (see the collision sketch after the parameter list).
input_type:
"dict" means the input is a sequence of dicts, [{feature_name: value}, …];
"pair" means the input is a sequence of lists of pairs, [[(feature_name1, value1), (feature_name2, value2)], …];
"string" means the input is a sequence of lists of raw strings, [[feature_name1, feature_name1, feature_name2], …], where feature_name1 occurring value1 times encodes the value value1, and so on.
In all cases feature_name must be a string and value must be a number. In the "string" case, each feature_name carries an implicit value of 1. Feature names are hashed to compute the matrix column for each name; the sign of value may be flipped in the output (see alternate_sign below). Example 1 below shows the same data in all three formats.
dtype:
The type of the feature values. It is passed to the scipy.sparse matrix constructor as the dtype argument. This parameter must not be bool, np.boolean, or any unsigned integer type.
alternate_sign:
If True, an alternating sign is applied to the computed feature hashes (some values become negative), so that inner products are approximately preserved in the hashed space. This is similar in spirit to sparse random projection; see the sign sketch after the parameter list.
non_negative:
If True, the absolute value of the feature matrix is taken before it is returned. When combined with alternate_sign=True, this significantly degrades the inner-product preservation.
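As a hedged illustration of the collision trade-off mentioned under n_features (this snippet is not from the original post): with six distinct names and only four columns, at least two names must share a column by the pigeonhole principle; exactly which names collide depends on the hash.

from sklearn.feature_extraction import FeatureHasher

h = FeatureHasher(n_features=4, input_type="string", alternate_sign=False)
X = h.transform([["dog", "cat", "elephant", "run", "spam", "eggs"]])
# 6 distinct names into 4 columns: at least one column receives >= 2 names,
# so some entry of the row is >= 2 even though every name occurs only once.
print(X.toarray())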
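And a small sketch of the alternate_sign behaviour (also not from the original post; non_negative is omitted here because it was deprecated and later removed from sklearn, so only alternate_sign works across versions):

from sklearn.feature_extraction import FeatureHasher

d = [{"dog": 1, "cat": 2, "elephant": 4}]

signed = FeatureHasher(n_features=10, alternate_sign=True).transform(d)
print(signed.toarray())   # some of the values 1, 2, 4 may appear negated

plain = FeatureHasher(n_features=10, alternate_sign=False).transform(d)
print(plain.toarray())    # all values keep their original signs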
The methods of this class follow the same interface as the other feature-extraction classes, so it can be used anywhere those classes can.
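For instance, a minimal sketch (not from the original post) of dropping FeatureHasher into a Pipeline like any other transformer; fit is effectively a no-op because hashing needs no vocabulary:

from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ("hash", FeatureHasher(n_features=2 ** 10, input_type="string")),
    ("clf", LogisticRegression()),
])
pipe.fit([["dog", "cat"], ["run", "run", "run"]], [0, 1])
print(pipe.predict([["dog", "cat"]]))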
The code examples below come from the official sklearn API documentation.
URL: https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.FeatureHasher.html#sklearn.feature_extraction.FeatureHasher
Example 1:
from sklearn.feature_extraction import FeatureHasher

h = FeatureHasher(n_features=10, input_type='string', dtype=int, alternate_sign=False)

# The same data in the three input formats; the last assignment wins,
# matching input_type='string' above.
# input_type='dict':
d = [{'dog': 1, 'cat': 2, 'elephant': 4}, {'dog': 2, 'run': 5}]
# input_type='pair':
d = [[('dog', 1), ('cat', 2), ('elephant', 4)], [('dog', 2), ('run', 5)]]
# input_type='string': a name occurring n times encodes the value n.
d = [['dog', 'cat', 'cat', 'elephant', 'elephant', 'elephant', 'elephant'],
     ['dog', 'dog', 'run', 'run', 'run', 'run', 'run'],
     ['run', 'run']]

f = h.transform(d)
print(f.toarray())
print(h.get_params())
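With alternate_sign=False and dtype=int, each row of f.toarray() is a plain occurrence count: the first row contains the values 1, 2 and 4 (dog, cat, elephant), the second 2 and 5 (dog, run), and the third a single 2 (run). Which of the 10 columns each value lands in depends on the hash.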
Example 2:
from __future__ import print_function
from collections import defaultdict
import re
import sys
from time import time

import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction import DictVectorizer, FeatureHasher
from memory_profiler import profile


def n_nonzero_columns(X):
    """Returns the number of non-zero columns in a CSR matrix X."""
    return len(np.unique(X.nonzero()[1]))


def tokens(doc):
    """Naively split doc into tokens, dropping non-word characters and lowercasing."""
    return (tok.lower() for tok in re.findall(r"\w+", doc))


def token_freqs(doc):
    """Count the frequency of each token in doc."""
    freq = defaultdict(int)
    for tok in tokens(doc):
        freq[tok] += 1
    return freq


@profile
def dict_vectorizer(raw_data, data_size_mb):
    print("DictVectorizer")
    t0 = time()
    vectorizer = DictVectorizer()
    X = vectorizer.fit_transform(token_freqs(d) for d in raw_data)
    duration = time() - t0
    print("done in %fs at %0.3fMB/s" % (duration, data_size_mb / duration))
    print("Found %d unique terms\n" % len(vectorizer.get_feature_names()))
    print("X.shape: ", X.shape)


@profile
def feature_hasher_freq(raw_data, data_size_mb, n_features):
    print("FeatureHasher on frequency dicts")
    t0 = time()
    hasher = FeatureHasher(n_features=n_features)
    X = hasher.transform(token_freqs(d) for d in raw_data)
    duration = time() - t0
    print("done in %fs at %0.3fMB/s" % (duration, data_size_mb / duration))
    print("Found %d unique terms\n" % n_nonzero_columns(X))
    print("X.shape: ", X.shape)
    del X


@profile
def feature_hasher_terms(raw_data, data_size_mb, n_features):
    print("FeatureHasher on raw tokens")
    t0 = time()
    hasher = FeatureHasher(n_features=n_features, input_type="string")
    X = hasher.transform(tokens(d) for d in raw_data)
    duration = time() - t0
    print("done in %fs at %0.3fMB/s" % (duration, data_size_mb / duration))
    print("Found %d unique terms" % n_nonzero_columns(X))
    print("X.shape: ", X.shape)
    del X


@profile
def compare():
    # 1. Use only a subset of the categories
    categories = [
        'alt.atheism',
        'comp.graphics',
        'comp.sys.ibm.pc.hardware',
        'misc.forsale',
        'rec.autos',
        'sci.space',
        'talk.religion.misc',
    ]

    print("Usage: %s [n_features_for_hashing]" % sys.argv[0])
    print("    The default number of features is 2**18.\n\n")

    try:
        n_features = int(sys.argv[1])
    except IndexError:
        n_features = 2 ** 18
    except ValueError:
        print("not a valid number of features: %r" % sys.argv[1])
        sys.exit(1)

    print("Loading 20 newsgroups training data")
    # 2. The first run takes a while because the dataset has to be downloaded.
    # data_home: where the downloaded files are stored.
    # If data_home is empty and download_if_missing=True, the files are
    # downloaded automatically into data_home.
    raw_data = fetch_20newsgroups(data_home=r"D:\学习\sklearn_dataset\20newsbydate",
                                  subset='train',
                                  categories=categories,
                                  download_if_missing=True
                                  ).data

    # 3. Compute the size of the text
    data_size_mb = sum(len(s.encode('utf-8')) for s in raw_data) / 1e6
    print("%d documents - %0.3fMB\n" % (len(raw_data), data_size_mb))

    dict_vectorizer(raw_data, data_size_mb)
    feature_hasher_freq(raw_data, data_size_mb, n_features)
    feature_hasher_terms(raw_data, data_size_mb, n_features)


if __name__ == '__main__':
    compare()
Output of Example 2:
Usage: D:/Project/nlplearn/sklearn_learn/plot_hashing_vs_dictvectorizer.py [n_features_for_hashing]
    The default number of features is 2**18.

Loading 20 newsgroups training data
3803 documents - 6.245MB

DictVectorizer
done in 16.495944s at 0.379MB/s
Found 47928 unique terms

X.shape:  (3803, 47928)
Filename: D:/Project/nlplearn/sklearn_learn/plot_hashing_vs_dictvectorizer.py

Line #    Mem usage    Increment   Line Contents
================================================
    42     98.9 MiB     98.9 MiB   @profile
    43                             def dict_vectorizer(raw_data, data_size_mb):
    44     98.9 MiB      0.0 MiB       print("DictVectorizer")
    45     98.9 MiB      0.0 MiB       t0 = time()
    46     98.9 MiB      0.0 MiB       vectorizer = DictVectorizer()
    47    130.7 MiB      1.3 MiB       X = vectorizer.fit_transform(token_freqs(d) for d in raw_data)
    48    130.7 MiB      0.0 MiB       duration = time() - t0
    49    130.7 MiB      0.0 MiB       print("done in %fs at %0.3fMB/s" % (duration, data_size_mb / duration))
    50    130.7 MiB      0.0 MiB       print("Found %d unique terms\n" % len(vectorizer.get_feature_names()))
    51    130.7 MiB      0.0 MiB       print("X.shape: ", X.shape)

FeatureHasher on frequency dicts
done in 8.953512s at 0.697MB/s
Found 43873 unique terms

X.shape:  (3803, 262144)
Filename: D:/Project/nlplearn/sklearn_learn/plot_hashing_vs_dictvectorizer.py

Line #    Mem usage    Increment   Line Contents
================================================
    53    106.5 MiB    106.5 MiB   @profile
    54                             def feature_hasher_freq(raw_data, data_size_mb, n_features):
    55    106.5 MiB      0.0 MiB       print("FeatureHasher on frequency dicts")
    56    106.5 MiB      0.0 MiB       t0 = time()
    57    106.5 MiB      0.0 MiB       hasher = FeatureHasher(n_features=n_features)
    58    116.8 MiB      4.0 MiB       X = hasher.transform(token_freqs(d) for d in raw_data)
    59    116.8 MiB      0.0 MiB       duration = time() - t0
    60    116.8 MiB      0.0 MiB       print("done in %fs at %0.3fMB/s" % (duration, data_size_mb / duration))
    61    116.8 MiB      0.0 MiB       print("Found %d unique terms\n" % n_nonzero_columns(X))
    62    116.8 MiB      0.0 MiB       print("X.shape: ", X.shape)
    63    106.6 MiB      0.0 MiB       del X

FeatureHasher on raw tokens
done in 9.989571s at 0.625MB/s
Found 43873 unique terms
X.shape:  (3803, 262144)
Filename: D:/Project/nlplearn/sklearn_learn/plot_hashing_vs_dictvectorizer.py

Line #    Mem usage    Increment   Line Contents
================================================
    65    106.6 MiB    106.6 MiB   @profile
    66                             def feature_hasher_terms(raw_data, data_size_mb, n_features):
    67    106.6 MiB      0.0 MiB       print("FeatureHasher on raw tokens")
    68    106.6 MiB      0.0 MiB       t0 = time()
    69    106.6 MiB      0.0 MiB       hasher = FeatureHasher(n_features=n_features, input_type="string")
    70    118.6 MiB      4.0 MiB       X = hasher.transform(tokens(d) for d in raw_data)
    71    118.6 MiB      0.0 MiB       duration = time() - t0
    72    118.6 MiB      0.0 MiB       print("done in %fs at %0.3fMB/s" % (duration, data_size_mb / duration))
    73    118.6 MiB      0.0 MiB       print("Found %d unique terms" % n_nonzero_columns(X))
    74    118.6 MiB      0.0 MiB       print("X.shape: ", X.shape)
    75    106.7 MiB      0.0 MiB       del X

Filename: D:/Project/nlplearn/sklearn_learn/plot_hashing_vs_dictvectorizer.py

Line #    Mem usage    Increment   Line Contents
================================================
    78     71.5 MiB     71.5 MiB   @profile
    79                             def compare():
    80                                 # 1. Use only a subset of the categories
    81                                 categories = [
    82     71.5 MiB      0.0 MiB           'alt.atheism',
    83     71.5 MiB      0.0 MiB           'comp.graphics',
    84     71.5 MiB      0.0 MiB           'comp.sys.ibm.pc.hardware',
    85     71.5 MiB      0.0 MiB           'misc.forsale',
    86     71.5 MiB      0.0 MiB           'rec.autos',
    87     71.5 MiB      0.0 MiB           'sci.space',
    88     71.5 MiB      0.0 MiB           'talk.religion.misc',
    89                                 ]
    90
    91     71.5 MiB      0.0 MiB       print("Usage: %s [n_features_for_hashing]" % sys.argv[0])
    92     71.5 MiB      0.0 MiB       print("    The default number of features is 2**18.\n\n")
    93
    94     71.5 MiB      0.0 MiB       try:
    95     71.5 MiB      0.0 MiB           n_features = int(sys.argv[1])
    96     71.5 MiB      0.0 MiB       except IndexError:
    97     71.5 MiB      0.0 MiB           n_features = 2 ** 18
    98                                 except ValueError:
    99                                     print("not a valid number of features: %r" % sys.argv[1])
   100                                     sys.exit(1)
   101
   102     71.5 MiB      0.0 MiB       print("Loading 20 newsgroups training data")
   103                                 # 2. The first run takes a while because the dataset has to be downloaded.
   104                                 # data_home: where the downloaded files are stored.
   105                                 # If data_home is empty and download_if_missing=True, the files are downloaded automatically into data_home.
   106     71.5 MiB      0.0 MiB       raw_data = fetch_20newsgroups(data_home=r"D:\学习\sklearn_dataset\20newsbydate",
   107     71.5 MiB      0.0 MiB                                     subset='train',
   108     71.5 MiB      0.0 MiB                                     categories=categories,
   109     98.0 MiB     26.5 MiB                                     download_if_missing=True
   110                                                               ).data
   111
   112                                 # 3. Compute the size of the text
   113     98.9 MiB      0.1 MiB       data_size_mb = sum(len(s.encode('utf-8')) for s in raw_data) / 1e6
   114     98.9 MiB      0.0 MiB       print("%d documents - %0.3fMB\n" % (len(raw_data), data_size_mb))
   115
   116    106.5 MiB      7.6 MiB       dict_vectorizer(raw_data, data_size_mb)
   117    106.6 MiB      0.1 MiB       feature_hasher_freq(raw_data, data_size_mb, n_features)
   118    106.7 MiB      0.1 MiB       feature_hasher_terms(raw_data, data_size_mb, n_features)
From this output we can see, comparing FeatureHasher with DictVectorizer:
1. FeatureHasher transforms faster (8.95 s vs 16.50 s above). Changing n_features changes FeatureHasher's speed, but it remains faster than DictVectorizer.
2. FeatureHasher ends up with fewer distinct features (43873 non-zero columns vs 47928 unique terms): some terms are compressed into the same column by hash collisions. The flip side of this compression is sketched below.
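A minimal sketch of that flip side (not from the original post): DictVectorizer keeps a vocabulary and can map columns back to feature names, whereas FeatureHasher cannot, because hashing is one-way.

from sklearn.feature_extraction import DictVectorizer, FeatureHasher

docs = [{"dog": 1, "cat": 2}]

dv = DictVectorizer()
dv.fit_transform(docs)
print(dv.get_feature_names_out())  # ['cat' 'dog']; use get_feature_names() on old versions

fh = FeatureHasher(n_features=8)
fh.transform(docs)
# No vocabulary is stored, so there is no column -> name mapping to recover.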
Original article: https://www.cnblogs.com/hufulinblog/p/10600156.html