Setting up a Hadoop development environment with Eclipse on Windows 7

1. Download the dedicated Hadoop plugin jar

The Hadoop version is 2.3.0, with the cluster built on CentOS 6.x. The plugin jar, named hadoop-eclipse-plugin-2.3.0, can be downloaded from http://download.csdn.net/detail/mchdba/8267181 and works with the Hadoop 2.x series.

2. Put the plugin jar into the eclipse/plugins directory

For convenience later on, I copied in as many jars as possible here, as shown in the screenshot:

3. Restart Eclipse and configure the Hadoop installation directory

If the plugin installed successfully, open Window → Preferences: a Hadoop Map/Reduce entry appears in the left pane. Click it, and set the Hadoop installation path in the right pane.

4. Configure Map/Reduce Locations

Open Window → Open Perspective → Other.

Select Map/Reduce and click OK; a Map/Reduce Locations view appears at the bottom right, as shown in the screenshot:

Click the Map/Reduce Locations tab, then click the small elephant icon on its right to open the Hadoop Location configuration window:

Enter a Location Name (any name will do), then configure Map/Reduce Master and DFS Master: the Host and Port must match the settings in core-site.xml.

Check the core-site.xml configuration:

<property>
    <name>fs.default.name</name>
    <value>hdfs://name01:9000</value>
</property>

Configure the dialog accordingly:
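For reference, the values for this setup would look like the following. The Map/Reduce Master port is an assumption (it must match your mapred-site.xml), while the DFS Master values come from core-site.xml above:

    Location name:     myhadoop
    Map/Reduce Master: Host = name01, Port = 9001  (assumed)
    DFS Master:        Host = name01, Port = 9000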

点击"Finish"按钮,关闭窗口。点击左侧的DFSLocations—>myhadoop(上一步配置的location name),如能看到user,表示安装成功,但是进去看到报错信息:Error: Permission denied: user=root,access=READ_EXECUTE,inode="/tmp";hadoop:supergroup:drwx---------,如下图所示:

This looks like a permissions problem: make all the Hadoop-related directories under /tmp/ owned by the hadoop user, then grant them 777 permissions.

cd /tmp/

chmod 777 /tmp/

chown -R hadoop.hadoop /tmp/hsperfdata_root

After that, reconnecting and opening DFS Locations displays normally.
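Note that the hadoop:supergroup owner shown in the error message suggests the offending inode lives in HDFS rather than on the local filesystem; if the error persists after fixing the local /tmp, the equivalent HDFS-side fix (acceptable for a lab setup only — 777 is far too permissive for production) would be:

    hadoop fs -chmod -R 777 /tmp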

Map/Reduce Master is the Map/Reduce address of the Hadoop cluster; it should match the mapred.job.tracker setting in mapred-site.xml.
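For illustration, a typical mapred-site.xml entry looks like the following; the host and port here are illustrative assumptions for this cluster, not values taken from the original configuration:

    <property>
        <name>mapred.job.tracker</name>
        <value>name01:9001</value>
    </property>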

(1) Clicking it produced an error:

An internal error occurred during: "Connecting to DFS hadoopname01".

java.net.UnknownHostException: name01

Setting the Host field to the IP address 192.168.52.128 directly fixes this, and the location opens normally, as shown in the screenshot:
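An alternative to typing the raw IP is to let Windows resolve the hostname itself: add a mapping to C:\Windows\System32\drivers\etc\hosts (this assumes name01 is the 192.168.52.128 machine from core-site.xml):

    192.168.52.128  name01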

5. Create the WordCount project

File → New → Project, select Map/Reduce Project, and enter the project name WordCount.

Create a new class named WordCount inside the WordCount project. This raised the error: Invalid Hadoop Runtime specified; please click 'Configure Hadoop install directory' or fill in library location input field. The cause was a bad directory choice: the root-level directory E:\hadoop does not work; switching to E:\u\hadoop\ fixed it, as shown below:

Click Next through the wizard and press Finish to complete the project. The Eclipse console then shows the following:

14-12-9 04:03:10 PM: Eclipse is running in a JRE, but a JDK is required
  Some Maven plugins may not work when importing projects or updating source folders.

14-12-9 04:03:13 PM: Refreshing [/WordCount/pom.xml]

14-12-9 04:03:14 PM: Refreshing [/WordCount/pom.xml]

14-12-9 04:03:14 PM: Refreshing [/WordCount/pom.xml]

14-12-9 04:03:14 PM: Updating index central|http://repo1.maven.org/maven2

14-12-9 04:04:10 PM: Updated index for central|http://repo1.maven.org/maven2
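The "Eclipse is running in a JRE, but a JDK is required" warning can be fixed by pointing Eclipse at a JDK in eclipse.ini; a sketch, assuming a JDK 7 installed at the path below (adjust for your machine), with the -vm entry placed before any -vmargs line:

    -vm
    C:\Program Files\Java\jdk1.7.0_40\bin\javaw.exe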

6. Import the library jars

The Hadoop jars that need to be added are:

all jars under /hadoop-2.3.0/share/hadoop/common, plus all jars in its lib subdirectory;
all jars under /hadoop-2.3.0/share/hadoop/hdfs, excluding those in its lib subdirectory;
all jars under /hadoop-2.3.0/share/hadoop/mapreduce, excluding those in its lib subdirectory;
all jars under /hadoop-2.3.0/share/hadoop/yarn, excluding those in its lib subdirectory;
about 18 jars in total.
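To gather those jars on the server before copying them to Windows, a quick sketch (run from the extracted hadoop-2.3.0 directory; it mirrors the list above):

    ls share/hadoop/common/*.jar share/hadoop/common/lib/*.jar
    ls share/hadoop/hdfs/*.jar share/hadoop/mapreduce/*.jar share/hadoop/yarn/*.jar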

7. The environment-setup code needed for Eclipse to submit MapReduce jobs directly is shown below:

package wc;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class W2 {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Tokenize each input line and emit a (word, 1) pair per token.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum all the counts emitted for this word.
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    // The original listing breaks off after "System.setProperty("; the rest of
    // main() is reconstructed from the standard WordCount driver. The
    // HADOOP_USER_NAME value is an assumption: it makes the job access HDFS as
    // the hadoop user, avoiding the permission errors seen earlier.
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        System.setProperty("HADOOP_USER_NAME", "hadoop"); // assumed value
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(W2.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
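To run the job from inside Eclipse, supply the input and output HDFS paths as program arguments (Run → Run Configurations → Arguments tab). The paths below match the ones in the step 8 logs and assume the NameNode address from core-site.xml:

    hdfs://192.168.52.128:9000/data/input hdfs://192.168.52.128:9000/data/output

Note that the output directory must not already exist; FileOutputFormat refuses to overwrite it.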

8. Run

8.1 Create the input directory on HDFS

[hadoop@name01 hadoop-2.3.0]$ hadoop fs -ls /

[hadoop@name01 hadoop-2.3.0]$ hadoop fs -mkdir input
mkdir: `input': No such file or directory

Note: hadoop fs needs an absolute path when creating a directory.

If the Apache Hadoop version is 0.x or 1.x:

bin/hadoop fs -mkdir /in

bin/hadoop fs -put /home/du/input /in

If the Apache Hadoop version is 2.x:

bin/hdfs dfs -mkdir -p /in

bin/hdfs dfs -put /home/du/input /in

For vendor distributions such as Cloudera CDH, IBM BI, or Hortonworks HDP, the first form works as well. Mind the full path when creating directories, and note that the HDFS root directory is /.
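For this walkthrough the data actually lands in /data/input (see the copy step below), so the concrete Hadoop 2.x command for this cluster would be:

    bin/hdfs dfs -mkdir -p /data/input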

8.2 Copy the local README.txt into the HDFS input directory

[hadoop@name01 hadoop-2.3.0]$ find . -name README.txt
./share/doc/hadoop/common/README.txt

[hadoop@name01 ~]$ hadoop fs -copyFromLocal ./src/hadoop-2.3.0/share/doc/hadoop/common/README.txt /data/input

[hadoop@name01 ~]$ hadoop fs -ls /
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2014-12-15 23:34 /data
-rw-r--r--   3 hadoop supergroup         88 2014-08-26 02:21 /input
You have new mail in /var/spool/mail/root

8.3 After the Hadoop job finishes, inspect the output

(1) Directly on the Hadoop server:

[hadoop@name01 ~]$ hadoop fs -ls /data/
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2014-12-15 23:29 /data/input
drwxr-xr-x   - hadoop supergroup          0 2014-12-15 23:34 /data/output
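To see the actual word counts rather than just the directory listing, read the reducer output file (named part-r-00000 by default under the new MapReduce API):

    [hadoop@name01 ~]$ hadoop fs -cat /data/output/part-r-00000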

(2) In Eclipse:

(3) Console output:

2014-12-16 15:34:01,303 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2014-12-16 15:34:01,309 INFO [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2014-12-16 15:34:02,047 INFO [main] input.FileInputFormat (FileInputFormat.java:listStatus(287)) - Total input paths to process : 1
2014-12-16 15:34:02,120 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:1
2014-12-16 15:34:02,323 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_local1764589720_0001
2014-12-16 15:34:02,367 WARN [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/staging/hadoop1764589720/.staging/job_local1764589720_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-12-16 15:34:02,368 WARN [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/staging/hadoop1764589720/.staging/job_local1764589720_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-12-16 15:34:02,682 WARN [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local1764589720_0001/job_local1764589720_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-12-16 15:34:02,682 WARN [main] conf.Configuration (Configuration.java:loadProperty(2345)) - file:/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local1764589720_0001/job_local1764589720_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-12-16 15:34:02,703 INFO [main] mapreduce.Job (Job.java:submit(1289)) - The url to track the job: http://localhost:8080/
2014-12-16 15:34:02,704 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) - Running job: job_local1764589720_0001
2014-12-16 15:34:02,707 INFO [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) - OutputCommitter set in config null
2014-12-16 15:34:02,719 INFO [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2014-12-16 15:34:02,853 INFO [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for map tasks
2014-12-16 15:34:02,857 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local1764589720_0001_m_000000_0
2014-12-16 15:34:02,919 INFO [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(129)) - ProcfsBasedProcessTree currently is supported only on Linux.
2014-12-16 15:34:03,281 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(581)) - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.ProcfsBasedProcessTree@...
2014-12-16 15:34:03,287 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(733)) - Processing split: hdfs://192.168.52.128:9000/data/input/README.txt:0+1366
2014-12-16 15:34:03,304 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(388)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2014-12-16 15:34:03,340 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1181)) - (EQUATOR) 0 kvi 26214396(104857584)
2014-12-16 15:34:03,341 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(975)) - mapreduce.task.io.sort.mb: 100
2014-12-16 15:34:03,341 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(976)) - soft limit at 83886080
2014-12-16 15:34:03,341 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(977)) - bufstart = 0; bufvoid = 104857600
2014-12-16 15:34:03,341 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(978)) - kvstart = 26214396; length = 6553600
2014-12-16 15:34:03,708 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) - Job job_local1764589720_0001 running in uber mode : false
2014-12-16 15:34:03,710 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) - map 0% reduce 0%
2014-12-16 15:34:04,121 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) -
2014-12-16 15:34:04,128 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1435)) - Starting flush of map output
2014-12-16 15:34:04,128 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1453)) - Spilling map output
2014-12-16 15:34:04,128 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1454)) - bufstart = 0; bufend = 2055; bufvoid = 104857600
2014-12-16 15:34:04,128 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1456)) - kvstart = 26214396(104857584); kvend = 26213684(104854736); length = 713/6553600
2014-12-16 15:34:04,179 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1639)) - Finished spill 0
2014-12-16 15:34:04,194 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(995)) - Task:attempt_local1764589720_0001_m_000000_0 is done. And is in the process of committing
2014-12-16 15:34:04,207 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - map
2014-12-16 15:34:04,208 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1115)) - Task 'attempt_local1764589720_0001_m_000000_0' done.
2014-12-16 15:34:04,208 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) - Finishing task: attempt_local1764589720_0001_m_000000_0
2014-12-16 15:34:04,208 INFO [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete.
2014-12-16 15:34:04,211 INFO [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for reduce tasks
2014-12-16 15:34:04,211 INFO [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(302)) - Starting task: attempt_local1764589720_0001_r_000000_0
2014-12-16 15:34:04,221 INFO [pool-6-thread-1] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(129)) - ProcfsBasedProcessTree currently is supported only on Linux.
2014-12-16 15:34:04,478 INFO [pool-6-thread-1] mapred.Task (Task.java:initialize(581)) - Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.ProcfsBasedProcessTree@...
2014-12-16 15:34:04,483 INFO [pool-6-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@...
2014-12-16 15:34:04,500 INFO [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(193)) - MergerManager: memoryLimit=949983616, maxSingleShuffleLimit=237495904, mergeThreshold=626989184, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2014-12-16 15:34:04,503 INFO [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) - attempt_local1764589720_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2014-12-16 15:34:04,543 INFO [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(140)) - localfetcher#1 about to shuffle output of map attempt_local1764589720_0001_m_000000_0 decomp: 1832 len: 1836 to MEMORY
2014-12-16 15:34:04,548 INFO [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100)) - Read 1832 bytes from map-output for attempt_local1764589720_0001_m_000000_0
2014-12-16 15:34:04,553 INFO [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(307)) - closeInMemoryFile -> map-output of size: 1832, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->1832
2014-12-16 15:34:04,564 INFO [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) - EventFetcher is interrupted.. Returning
2014-12-16 15:34:04,566 INFO [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2014-12-16 15:34:04,566 INFO [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(667)) - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2014-12-16 15:34:04,585 INFO [pool-6-thread-1] mapred.Merger (Merger.java:merge(589)) - Merging 1 sorted segments
2014-12-16 15:34:04,585 INFO [pool-6-thread-1] mapred.Merger (Merger.java:merge(688)) - Down to the last merge-pass, with 1 segments left of total size: 1823 bytes
2014-12-16 15:34:04,605 INFO [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(742)) - Merged 1 segments, 1832 bytes to disk to satisfy reduce memory limit
2014-12-16 15:34:04,605 INFO [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(772)) - Merging 1 files, 1836 bytes from disk
2014-12-16 15:34:04,606 INFO [pool-6-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(787)) - Merging 0 segments, 0 bytes from memory into reduce
2014-12-16 15:34:04,607 INFO [pool-6-thread-1] mapred.Merger (Merger.java:merge(589)) - Merging 1 sorted segments
2014-12-16 15:34:04,608 INFO [pool-6-thread-1] mapred.Merger (Merger.java:merge(688)) - Down to the last merge-pass, with 1 segments left of total size: 1823 bytes
2014-12-16 15:34:04,608 INFO [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2014-12-16 15:34:04,643 INFO [pool-6-thread-1] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(996)) - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2014-12-16 15:34:04,714 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) - map 100% reduce 0%
2014-12-16 15:34:04,842 INFO [pool-6-thread-1] mapred.Task (Task.java:done(995)) - Task:attempt_local1764589720_0001_r_000000_0 is done. And is in the process of committing
2014-12-16 15:34:04,850 INFO [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1 / 1 copied.
2014-12-16 15:34:04,850 INFO [pool-6-thread-1] mapred.Task (Task.java:commit(1156)) - Task attempt_local1764589720_0001_r_000000_0 is allowed to commit now
2014-12-16 15:34:04,881 INFO [pool-6-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(439)) - Saved output of task 'attempt_local1764589720_0001_r_000000_0' to hdfs://192.168.52.128:9000/data/output/_temporary/0/task_local1764589720_0001_r_000000
2014-12-16 15:34:04,884 INFO [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - reduce > reduce
2014-12-16 15:34:04,884 INFO [pool-6-thread-1] mapred.Task (Task.java:sendDone(1115)) - Task 'attempt_local1764589720_0001_r_000000_0' done.
2014-12-16 15:34:04,885 INFO [pool-6-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(325)) - Finishing task: attempt_local1764589720_0001_r_000000_0
2014-12-16 15:34:04,885 INFO [Thread-4] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - reduce task executor complete.
2014-12-16 15:34:05,714 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) - map 100% reduce 100%
2014-12-16 15:34:05,714 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1373)) - Job job_local1764589720_0001 completed successfully
2014-12-16 15:34:05,733 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Counters: 38
File System Counters
    FILE: Number of bytes read=34542
    FILE: Number of bytes written=470650
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=2732
    HDFS: Number of bytes written=1306
    HDFS: Number of read operations=15
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=4
Map-Reduce Framework
    Map input records=31
    Map output records=179
    Map output bytes=2055
    Map output materialized bytes=1836
    Input split bytes=113
    Combine input records=179
    Combine output records=131
    Reduce input groups=131
    Reduce shuffle bytes=1836
    Reduce input records=131
    Reduce output records=131
    Spilled Records=262
    Shuffled Maps =1
    Failed Shuffles=0
    Merged Map outputs=1
    GC time elapsed (ms)=13
    CPU time spent (ms)=0
    Physical memory (bytes) snapshot=0
    Virtual memory (bytes) snapshot=0
    Total committed heap usage (bytes)=440664064
Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
File Input Format Counters
    Bytes Read=1366
File Output Format Counters
    Bytes Written=1306
