Mahout Deployment in Practice

1. Download and extract Mahout

unzip mahout-distribution-0.9-src.zip

2. Set the environment variables

1) About the variables

JAVA_HOME: Mahout needs this to locate the JDK.

MAHOUT_JAVA_HOME: if set, overrides the value of JAVA_HOME.

HADOOP_HOME: if set, Mahout runs on the Hadoop cluster; otherwise it runs standalone.

HADOOP_CONF_DIR: the directory containing the Hadoop configuration files.

MAHOUT_LOCAL: if this variable is non-empty, Mahout always runs standalone.

MAHOUT_CONF_DIR: the path to Mahout's configuration files; the default is $MAHOUT_HOME/src/conf.

MAHOUT_HEAPSIZE: the maximum heap size available to Mahout at runtime.
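How these variables interact can be sketched in a few lines of shell. This is a simplified reading of the mode-selection behavior described above, not the literal code of $MAHOUT_HOME/bin/mahout:

```shell
# Simplified sketch (an assumption, not the actual script) of how the
# mahout launcher chooses its run mode: MAHOUT_LOCAL takes precedence;
# otherwise a set HADOOP_HOME selects cluster mode.
MAHOUT_LOCAL=""                          # empty: local mode is not forced
HADOOP_HOME="/home/hadoop/hadoop-1.2.1"  # set: prefer running on Hadoop

if [ -n "$MAHOUT_LOCAL" ]; then
  MODE="local"    # MAHOUT_LOCAL non-empty: always run standalone
elif [ -n "$HADOOP_HOME" ]; then
  MODE="hadoop"   # HADOOP_HOME set: submit jobs to the cluster
else
  MODE="local"    # neither set: fall back to standalone
fi
echo "$MODE"
```

With the settings used in this tutorial (HADOOP_HOME set, MAHOUT_LOCAL unset) this prints hadoop, which matches the "Running on hadoop" line in the verification output later.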

2) Steps

[email protected]:~/mahout-distribution-0.9$ sudo vim /etc/profile

Append the following environment variable settings at the end of the file:

export JAVA_HOME=/usr/programs/jdk1.7.0_65

export HADOOP_HOME=/home/hadoop/hadoop-1.2.1

export HADOOP_CONF_DIR=/home/hadoop/hadoop-1.2.1/conf

export MAHOUT_HOME=/home/hadoop/mahout-distribution-0.9

export MAHOUT_CONF_DIR=/home/hadoop/mahout-distribution-0.9/conf

export PATH=$MAHOUT_CONF_DIR:$MAHOUT_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH

Then run source /etc/profile to apply the changes.
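After reloading the profile, a quick echo confirms the variables took effect. The exports are repeated here only so the snippet is self-contained; in a real session `source /etc/profile` already sets them:

```shell
# Re-create the environment from /etc/profile (repeated here so the
# snippet stands alone; normally `source /etc/profile` does this)
export JAVA_HOME=/usr/programs/jdk1.7.0_65
export HADOOP_HOME=/home/hadoop/hadoop-1.2.1
export HADOOP_CONF_DIR=$HADOOP_HOME/conf
export MAHOUT_HOME=/home/hadoop/mahout-distribution-0.9
export MAHOUT_CONF_DIR=$MAHOUT_HOME/conf
export PATH=$MAHOUT_CONF_DIR:$MAHOUT_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH

# A correct setup echoes the expected paths back
echo "$MAHOUT_HOME"      # /home/hadoop/mahout-distribution-0.9
echo "$MAHOUT_CONF_DIR"  # /home/hadoop/mahout-distribution-0.9/conf
```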

3) A common problem

If you see the following error when running mahout:

Could not find mahout-examples-*.job in /home/hadoop/mahout-distribution-0.9 or /home/hadoop/mahout-distribution-0.9/examples/target, please run 'mvn install' to create the .job file

the cause is that you downloaded the wrong package: the source distribution (the archive with -src in its name) does not contain the prebuilt .job file. Download the binary distribution (without -src) and try again.

3. Verify the installation

[email protected]:~$ mahout
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Warning: $HADOOP_HOME is deprecated.

Running on hadoop, using /home/hadoop/hadoop-1.2.1/bin/hadoop and HADOOP_CONF_DIR=/home/hadoop/hadoop-1.2.1/conf
MAHOUT-JOB: /home/hadoop/mahout-distribution-0.9/mahout-examples-0.9-job.jar
Warning: $HADOOP_HOME is deprecated.

An example program must be given as the first argument.
Valid program names are:
  arff.vector: : Generate Vectors from an ARFF file or directory
  baumwelch: : Baum-Welch algorithm for unsupervised HMM training
  canopy: : Canopy clustering
  cat: : Print a file or resource as the logistic regression models would see it
  cleansvd: : Cleanup and verification of SVD output
  clusterdump: : Dump cluster output to text
  clusterpp: : Groups Clustering Output In Clusters
  cmdump: : Dump confusion matrix in HTML or text formats
  concatmatrices: : Concatenates 2 matrices of same cardinality into a single matrix
  cvb: : LDA via Collapsed Variation Bayes (0th deriv. approx)
  cvb0_local: : LDA via Collapsed Variation Bayes, in memory locally.
  evaluateFactorization: : compute RMSE and MAE of a rating matrix factorization against probes
  fkmeans: : Fuzzy K-means clustering
  hmmpredict: : Generate random sequence of observations by given HMM
  itemsimilarity: : Compute the item-item-similarities for item-based collaborative filtering
  kmeans: : K-means clustering
  lucene.vector: : Generate Vectors from a Lucene index
  lucene2seq: : Generate Text SequenceFiles from a Lucene index
  matrixdump: : Dump matrix in CSV format
  matrixmult: : Take the product of two matrices
  parallelALS: : ALS-WR factorization of a rating matrix
  qualcluster: : Runs clustering experiments and summarizes results in a CSV
  recommendfactorized: : Compute recommendations using the factorization of a rating matrix
  recommenditembased: : Compute recommendations using item-based collaborative filtering
  regexconverter: : Convert text files on a per line basis based on regular expressions
  resplit: : Splits a set of SequenceFiles into a number of equal splits
  rowid: : Map SequenceFile<Text,VectorWritable> to {SequenceFile<IntWritable,VectorWritable>, SequenceFile<IntWritable,Text>}
  rowsimilarity: : Compute the pairwise similarities of the rows of a matrix
  runAdaptiveLogistic: : Score new production data using a probably trained and validated AdaptivelogisticRegression model
  runlogistic: : Run a logistic regression model against CSV data
  seq2encoded: : Encoded Sparse Vector generation from Text sequence files
  seq2sparse: : Sparse Vector generation from Text sequence files
  seqdirectory: : Generate sequence files (of Text) from a directory
  seqdumper: : Generic Sequence File dumper
  seqmailarchives: : Creates SequenceFile from a directory containing gzipped mail archives
  seqwiki: : Wikipedia xml dump to sequence file
  spectralkmeans: : Spectral k-means clustering
  split: : Split Input data into test and train sets
  splitDataset: : split a rating dataset into training and probe parts
  ssvd: : Stochastic SVD
  streamingkmeans: : Streaming k-means clustering
  svd: : Lanczos Singular Value Decomposition
  testnb: : Test the Vector-based Bayes classifier
  trainAdaptiveLogistic: : Train an AdaptivelogisticRegression model
  trainlogistic: : Train a logistic regression using stochastic gradient descent
  trainnb: : Train the Vector-based Bayes classifier
  transpose: : Take the transpose of a matrix
  validateAdaptiveLogistic: : Validate an AdaptivelogisticRegression model against hold-out data set
  vecdist: : Compute the distances between a set of Vectors (or Cluster or Canopy, they must fit in memory) and a list of Vectors
  vectordump: : Dump vectors from a sequence file to text
  viterbi: : Viterbi decoding of hidden states from given output states sequence

If you see the usage listing above, Mahout is installed correctly.

4. Test the k-means algorithm

1) Download the test data

[email protected]:~$ wget http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
--2014-11-08 06:40:16--  http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data
Resolving archive.ics.uci.edu (archive.ics.uci.edu)... 128.195.1.87
Connecting to archive.ics.uci.edu (archive.ics.uci.edu)|128.195.1.87|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 288374 (282K) [text/plain]
Saving to: `synthetic_control.data'

100%[=======================================================================>] 288,374     79.5K/s   in 3.5s

2014-11-08 06:40:20 (79.5 KB/s) - `synthetic_control.data' saved [288374/288374]

2) Put the test data into HDFS

[email protected]:~$ hadoop fs -mkdir ./testdata
Warning: $HADOOP_HOME is deprecated.

[email protected]:~$ hadoop fs -ls
Warning: $HADOOP_HOME is deprecated.

Found 4 items
drwxr-xr-x   - hadoop        supergroup          0 2014-11-06 07:48 /user/hadoop/input
drwxr-xr-x   - hadoop        supergroup          0 2014-11-06 07:49 /user/hadoop/output
drwxr-xr-x   - Administrator supergroup          0 2014-11-06 08:01 /user/hadoop/output1
drwxr-xr-x   - hadoop        supergroup          0 2014-11-08 06:41 /user/hadoop/testdata
[email protected]:~$ hadoop fs -put synthetic_control.data  ./testdata
Warning: $HADOOP_HOME is deprecated.

[email protected]:~$ hadoop fs -ls
Warning: $HADOOP_HOME is deprecated.

Found 4 items
drwxr-xr-x   - hadoop        supergroup          0 2014-11-06 07:48 /user/hadoop/input
drwxr-xr-x   - hadoop        supergroup          0 2014-11-06 07:49 /user/hadoop/output
drwxr-xr-x   - Administrator supergroup          0 2014-11-06 08:01 /user/hadoop/output1
drwxr-xr-x   - hadoop        supergroup          0 2014-11-08 06:42 /user/hadoop/testdata
[email protected]:~$ hadoop fs -ls ./testdata
Warning: $HADOOP_HOME is deprecated.

Found 1 items
-rw-r--r--   1 hadoop supergroup     288374 2014-11-08 06:42 /user/hadoop/testdata/synthetic_control.data

3) Run the test

[email protected]:~$ mahout org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.
Warning: $HADOOP_HOME is deprecated.

Running on hadoop, using /home/hadoop/hadoop-1.2.1/bin/hadoop and HADOOP_CONF_DIR=/home/hadoop/hadoop-1.2.1/conf
MAHOUT-JOB: /home/hadoop/mahout-distribution-0.9/mahout-examples-0.9-job.jar
Warning: $HADOOP_HOME is deprecated.

14/11/08 06:47:25 WARN driver.MahoutDriver: No org.apache.mahout.clustering.syntheticcontrol.kmeans.Job.props found on classpath, will use command-line arguments only
14/11/08 06:47:25 INFO kmeans.Job: Running with default arguments
14/11/08 06:47:27 INFO kmeans.Job: Preparing Input
14/11/08 06:47:27 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/11/08 06:47:30 INFO input.FileInputFormat: Total input paths to process : 1
14/11/08 06:47:30 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/11/08 06:47:30 WARN snappy.LoadSnappy: Snappy native library not loaded
14/11/08 06:47:31 INFO mapred.JobClient: Running job: job_201411080632_0002
14/11/08 06:47:32 INFO mapred.JobClient:  map 0% reduce 0%
14/11/08 06:48:18 INFO mapred.JobClient:  map 100% reduce 0%
14/11/08 06:48:21 INFO mapred.JobClient: Job complete: job_201411080632_0002
14/11/08 06:48:21 INFO mapred.JobClient: Counters: 19
14/11/08 06:48:21 INFO mapred.JobClient:   Job Counters 
14/11/08 06:48:21 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=19688
14/11/08 06:48:21 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/11/08 06:48:21 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/11/08 06:48:21 INFO mapred.JobClient:     Rack-local map tasks=1
14/11/08 06:48:21 INFO mapred.JobClient:     Launched map tasks=1
14/11/08 06:48:21 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
14/11/08 06:48:21 INFO mapred.JobClient:   File Output Format Counters 
14/11/08 06:48:21 INFO mapred.JobClient:     Bytes Written=335470
14/11/08 06:48:21 INFO mapred.JobClient:   FileSystemCounters
14/11/08 06:48:21 INFO mapred.JobClient:     HDFS_BYTES_READ=288503
14/11/08 06:48:21 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=58838
14/11/08 06:48:21 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=335470
14/11/08 06:48:21 INFO mapred.JobClient:   File Input Format Counters 
14/11/08 06:48:21 INFO mapred.JobClient:     Bytes Read=288374
14/11/08 06:48:21 INFO mapred.JobClient:   Map-Reduce Framework
14/11/08 06:48:21 INFO mapred.JobClient:     Map input records=600
14/11/08 06:48:21 INFO mapred.JobClient:     Physical memory (bytes) snapshot=38473728
14/11/08 06:48:21 INFO mapred.JobClient:     Spilled Records=0
14/11/08 06:48:21 INFO mapred.JobClient:     CPU time spent (ms)=910
14/11/08 06:48:21 INFO mapred.JobClient:     Total committed heap usage (bytes)=16252928
14/11/08 06:48:21 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=347992064
14/11/08 06:48:21 INFO mapred.JobClient:     Map output records=600
14/11/08 06:48:21 INFO mapred.JobClient:     SPLIT_RAW_BYTES=129
14/11/08 06:48:21 INFO kmeans.Job: Running random seed to get initial clusters
14/11/08 06:48:21 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
14/11/08 06:48:21 INFO compress.CodecPool: Got brand-new compressor
14/11/08 06:48:22 INFO kmeans.RandomSeedGenerator: Wrote 6 Klusters to output/random-seeds/part-randomSeed
14/11/08 06:48:22 INFO kmeans.Job: Running KMeans with k = 6
14/11/08 06:48:22 INFO kmeans.KMeansDriver: Input: output/data Clusters In: output/random-seeds/part-randomSeed Out: output
14/11/08 06:48:22 INFO kmeans.KMeansDriver: convergence: 0.5 max Iterations: 10
14/11/08 06:48:22 INFO compress.CodecPool: Got brand-new decompressor
14/11/08 06:48:23 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/11/08 06:48:24 INFO input.FileInputFormat: Total input paths to process : 1
14/11/08 06:48:25 INFO mapred.JobClient: Running job: job_201411080632_0003
14/11/08 06:48:26 INFO mapred.JobClient:  map 0% reduce 0%
14/11/08 06:48:56 INFO mapred.JobClient:  map 100% reduce 0%
14/11/08 06:49:09 INFO mapred.JobClient:  map 100% reduce 100%
14/11/08 06:49:12 INFO mapred.JobClient: Job complete: job_201411080632_0003
14/11/08 06:49:12 INFO mapred.JobClient: Counters: 29
14/11/08 06:49:12 INFO mapred.JobClient:   Job Counters 
14/11/08 06:49:12 INFO mapred.JobClient:     Launched reduce tasks=1
14/11/08 06:49:12 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=21258
14/11/08 06:49:12 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/11/08 06:49:12 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/11/08 06:49:12 INFO mapred.JobClient:     Launched map tasks=1
14/11/08 06:49:12 INFO mapred.JobClient:     Data-local map tasks=1
14/11/08 06:49:12 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=12821
14/11/08 06:49:12 INFO mapred.JobClient:   File Output Format Counters 
14/11/08 06:49:12 INFO mapred.JobClient:     Bytes Written=7581
14/11/08 06:49:12 INFO mapred.JobClient:   FileSystemCounters
14/11/08 06:49:12 INFO mapred.JobClient:     FILE_BYTES_READ=10650
14/11/08 06:49:12 INFO mapred.JobClient:     HDFS_BYTES_READ=358672
14/11/08 06:49:12 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=141341
14/11/08 06:49:12 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=7581
14/11/08 06:49:12 INFO mapred.JobClient:   File Input Format Counters 
14/11/08 06:49:12 INFO mapred.JobClient:     Bytes Read=335470
14/11/08 06:49:12 INFO mapred.JobClient:   Map-Reduce Framework
14/11/08 06:49:12 INFO mapred.JobClient:     Map output materialized bytes=10650
14/11/08 06:49:12 INFO mapred.JobClient:     Map input records=600
14/11/08 06:49:12 INFO mapred.JobClient:     Reduce shuffle bytes=10650
14/11/08 06:49:12 INFO mapred.JobClient:     Spilled Records=12
14/11/08 06:49:12 INFO mapred.JobClient:     Map output bytes=10620
14/11/08 06:49:12 INFO mapred.JobClient:     Total committed heap usage (bytes)=132190208
14/11/08 06:49:12 INFO mapred.JobClient:     CPU time spent (ms)=7490
14/11/08 06:49:12 INFO mapred.JobClient:     Combine input records=0
14/11/08 06:49:12 INFO mapred.JobClient:     SPLIT_RAW_BYTES=122
14/11/08 06:49:12 INFO mapred.JobClient:     Reduce input records=6
14/11/08 06:49:12 INFO mapred.JobClient:     Reduce input groups=6
14/11/08 06:49:12 INFO mapred.JobClient:     Combine output records=0
14/11/08 06:49:12 INFO mapred.JobClient:     Physical memory (bytes) snapshot=183877632
14/11/08 06:49:12 INFO mapred.JobClient:     Reduce output records=6
14/11/08 06:49:12 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=696659968
14/11/08 06:49:12 INFO mapred.JobClient:     Map output records=6
14/11/08 06:49:12 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/11/08 06:49:14 INFO input.FileInputFormat: Total input paths to process : 1
14/11/08 06:49:15 INFO mapred.JobClient: Running job: job_201411080632_0004
14/11/08 06:49:16 INFO mapred.JobClient:  map 0% reduce 0%
14/11/08 06:50:02 INFO mapred.JobClient:  map 100% reduce 0%
14/11/08 06:50:15 INFO mapred.JobClient:  map 100% reduce 100%
14/11/08 06:50:19 INFO mapred.JobClient: Job complete: job_201411080632_0004
14/11/08 06:50:19 INFO mapred.JobClient: Counters: 29
14/11/08 06:50:19 INFO mapred.JobClient:   Job Counters 
14/11/08 06:50:19 INFO mapred.JobClient:     Launched reduce tasks=1
14/11/08 06:50:19 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=25946
14/11/08 06:50:19 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/11/08 06:50:19 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/11/08 06:50:19 INFO mapred.JobClient:     Rack-local map tasks=1
14/11/08 06:50:19 INFO mapred.JobClient:     Launched map tasks=1
14/11/08 06:50:19 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=12738
14/11/08 06:50:19 INFO mapred.JobClient:   File Output Format Counters 
14/11/08 06:50:19 INFO mapred.JobClient:     Bytes Written=7581
14/11/08 06:50:19 INFO mapred.JobClient:   FileSystemCounters
14/11/08 06:50:19 INFO mapred.JobClient:     FILE_BYTES_READ=13890
14/11/08 06:50:19 INFO mapred.JobClient:     HDFS_BYTES_READ=351142
14/11/08 06:50:19 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=147821
14/11/08 06:50:19 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=7581
14/11/08 06:50:19 INFO mapred.JobClient:   File Input Format Counters 
14/11/08 06:50:19 INFO mapred.JobClient:     Bytes Read=335470
14/11/08 06:50:19 INFO mapred.JobClient:   Map-Reduce Framework
14/11/08 06:50:19 INFO mapred.JobClient:     Map output materialized bytes=13890
14/11/08 06:50:19 INFO mapred.JobClient:     Map input records=600
14/11/08 06:50:19 INFO mapred.JobClient:     Reduce shuffle bytes=13890
14/11/08 06:50:19 INFO mapred.JobClient:     Spilled Records=12
14/11/08 06:50:19 INFO mapred.JobClient:     Map output bytes=13860
14/11/08 06:50:19 INFO mapred.JobClient:     Total committed heap usage (bytes)=132190208
14/11/08 06:50:19 INFO mapred.JobClient:     CPU time spent (ms)=6550
14/11/08 06:50:19 INFO mapred.JobClient:     Combine input records=0
14/11/08 06:50:19 INFO mapred.JobClient:     SPLIT_RAW_BYTES=122
14/11/08 06:50:19 INFO mapred.JobClient:     Reduce input records=6
14/11/08 06:50:19 INFO mapred.JobClient:     Reduce input groups=6
14/11/08 06:50:19 INFO mapred.JobClient:     Combine output records=0
14/11/08 06:50:19 INFO mapred.JobClient:     Physical memory (bytes) snapshot=183332864
14/11/08 06:50:19 INFO mapred.JobClient:     Reduce output records=6
14/11/08 06:50:19 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=695611392
14/11/08 06:50:19 INFO mapred.JobClient:     Map output records=6
14/11/08 06:50:19 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/11/08 06:50:20 INFO input.FileInputFormat: Total input paths to process : 1
14/11/08 06:50:21 INFO mapred.JobClient: Running job: job_201411080632_0005
14/11/08 06:50:22 INFO mapred.JobClient:  map 0% reduce 0%
14/11/08 06:50:48 INFO mapred.JobClient:  map 100% reduce 0%
14/11/08 06:50:59 INFO mapred.JobClient:  map 100% reduce 33%
14/11/08 06:51:01 INFO mapred.JobClient:  map 100% reduce 100%
14/11/08 06:51:05 INFO mapred.JobClient: Job complete: job_201411080632_0005
14/11/08 06:51:05 INFO mapred.JobClient: Counters: 29
14/11/08 06:51:05 INFO mapred.JobClient:   Job Counters 
14/11/08 06:51:05 INFO mapred.JobClient:     Launched reduce tasks=1
14/11/08 06:51:05 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=17047
14/11/08 06:51:05 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/11/08 06:51:05 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/11/08 06:51:05 INFO mapred.JobClient:     Rack-local map tasks=1
14/11/08 06:51:05 INFO mapred.JobClient:     Launched map tasks=1
14/11/08 06:51:05 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=12804
14/11/08 06:51:05 INFO mapred.JobClient:   File Output Format Counters 
14/11/08 06:51:05 INFO mapred.JobClient:     Bytes Written=7581
14/11/08 06:51:05 INFO mapred.JobClient:   FileSystemCounters
14/11/08 06:51:05 INFO mapred.JobClient:     FILE_BYTES_READ=13890
14/11/08 06:51:05 INFO mapred.JobClient:     HDFS_BYTES_READ=351142
14/11/08 06:51:05 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=147821
14/11/08 06:51:05 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=7581
14/11/08 06:51:05 INFO mapred.JobClient:   File Input Format Counters 
14/11/08 06:51:05 INFO mapred.JobClient:     Bytes Read=335470
14/11/08 06:51:05 INFO mapred.JobClient:   Map-Reduce Framework
14/11/08 06:51:05 INFO mapred.JobClient:     Map output materialized bytes=13890
14/11/08 06:51:05 INFO mapred.JobClient:     Map input records=600
14/11/08 06:51:05 INFO mapred.JobClient:     Reduce shuffle bytes=13890
14/11/08 06:51:05 INFO mapred.JobClient:     Spilled Records=12
14/11/08 06:51:05 INFO mapred.JobClient:     Map output bytes=13860
14/11/08 06:51:05 INFO mapred.JobClient:     Total committed heap usage (bytes)=132190208
14/11/08 06:51:05 INFO mapred.JobClient:     CPU time spent (ms)=3280
14/11/08 06:51:05 INFO mapred.JobClient:     Combine input records=0
14/11/08 06:51:05 INFO mapred.JobClient:     SPLIT_RAW_BYTES=122
14/11/08 06:51:05 INFO mapred.JobClient:     Reduce input records=6
14/11/08 06:51:05 INFO mapred.JobClient:     Reduce input groups=6
14/11/08 06:51:05 INFO mapred.JobClient:     Combine output records=0
14/11/08 06:51:05 INFO mapred.JobClient:     Physical memory (bytes) snapshot=183197696
14/11/08 06:51:05 INFO mapred.JobClient:     Reduce output records=6
14/11/08 06:51:05 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=695611392
14/11/08 06:51:05 INFO mapred.JobClient:     Map output records=6
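As the log shows, the example Job ran with built-in defaults (k = 6, at most 10 iterations, convergence delta 0.5). The same clustering can be driven by hand through the kmeans program from the list in section 3. The flag names below are the commonly documented short options for Mahout 0.9 and should be checked against `mahout kmeans --help` before use:

```shell
# Hand-driven equivalent of the example Job's defaults (a sketch; verify
# the flag names with `mahout kmeans --help` for your Mahout version):
#   -i   input vectors (prepared in HDFS by the example Job)
#   -c   initial cluster seeds
#   -o   output directory
#   -k   number of clusters
#   -x   maximum number of iterations
#   -cd  convergence delta
#   -cl  also assign each input point to its final cluster
mahout kmeans \
  -i output/data \
  -c output/random-seeds \
  -o output \
  -k 6 -x 10 -cd 0.5 -cl
```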

4) Inspect the output

[email protected]:~$ hadoop fs -ls ./output
Warning: $HADOOP_HOME is deprecated.

Found 15 items
-rw-r--r--   1 hadoop supergroup        194 2014-11-08 06:56 /user/hadoop/output/_policy
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:57 /user/hadoop/output/clusteredPoints
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:48 /user/hadoop/output/clusters-0
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:49 /user/hadoop/output/clusters-1
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:56 /user/hadoop/output/clusters-10-final
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:50 /user/hadoop/output/clusters-2
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:51 /user/hadoop/output/clusters-3
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:51 /user/hadoop/output/clusters-4
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:52 /user/hadoop/output/clusters-5
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:53 /user/hadoop/output/clusters-6
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:54 /user/hadoop/output/clusters-7
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:54 /user/hadoop/output/clusters-8
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:55 /user/hadoop/output/clusters-9
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:48 /user/hadoop/output/data
drwxr-xr-x   - hadoop supergroup          0 2014-11-08 06:48 /user/hadoop/output/random-seeds
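The final cluster centers live in clusters-10-final and the per-point assignments in clusteredPoints, both stored as SequenceFiles. The clusterdump program from the list in section 3 converts them to readable text. The flags below are the commonly documented ones for Mahout 0.9; confirm them with `mahout clusterdump --help` before relying on this:

```shell
# Dump the final clusters, with their assigned points, to a local text
# file (a sketch; confirm flags with `mahout clusterdump --help`):
#   -i  the cluster SequenceFiles to read
#   -p  the directory of clustered points
#   -o  the local text file to write
mahout clusterdump \
  -i output/clusters-10-final \
  -p output/clusteredPoints \
  -o clusteranalyze.txt
cat clusteranalyze.txt
```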
