I installed Sqoop2 from a package built from source, mainly because I may want to trim or extend some of its features; the build itself is covered in another post, 《编译Sqoop2错误解决》 (Resolving Sqoop2 Build Errors). After the build, copy the contents of the sqoop-1.99.3.tar.gz tarball under dist/target into /usr/lib/sqoop.
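In case it helps, the copy step is just an extract-and-copy. A minimal sketch follows; the name of the extracted directory depends on the build profile, so treat it as an assumption:

# Unpack the freshly built tarball and move its contents under /usr/lib/sqoop
mkdir -p /usr/lib/sqoop
tar -xzf dist/target/sqoop-1.99.3.tar.gz -C /tmp
cp -r /tmp/sqoop-1.99.3*/* /usr/lib/sqoop/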
Next comes the configuration. Note that Sqoop2 is split into a server and a client. We install the server first; it must sit on a machine that can run the hadoop command line, so the simplest choice is a node of the Hadoop cluster itself. The client, on the other hand, can run on any machine that is able to reach the Sqoop server over the network.
1. Server installation and configuration
Go into /usr/lib/sqoop/server/conf; it contains several configuration files, some for the embedded Tomcat container and some for Sqoop itself. Start with catalina.properties, where the Hadoop jars need to be added to the class loader path (if you also need Hive or HBase, add their jars in the same way):
common.loader=${catalina.base}/lib,${catalina.base}/lib/*.jar,${catalina.home}/lib,${catalina.home}/lib/*.jar,${catalina.home}/../lib/*.jar,/home/cdh/hadoop-2.3.0-cdh5.1.2/share/hadoop/common/*.jar,/home/cdh/hadoop-2.3.0-cdh5.1.2/share/hadoop/common/lib/*.jar,/home/cdh/hadoop-2.3.0-cdh5.1.2/share/hadoop/hdfs/*.jar,/home/cdh/hadoop-2.3.0-cdh5.1.2/share/hadoop/hdfs/lib/*.jar,/home/cdh/hadoop-2.3.0-cdh5.1.2/share/hadoop/mapreduce/*.jar,/home/cdh/hadoop-2.3.0-cdh5.1.2/share/hadoop/mapreduce/lib/*.jar,/home/cdh/hadoop-2.3.0-cdh5.1.2/share/hadoop/tools/*.jar,/home/cdh/hadoop-2.3.0-cdh5.1.2/share/hadoop/tools/lib/*.jar,/home/cdh/hadoop-2.3.0-cdh5.1.2/share/hadoop/yarn/*.jar,/home/cdh/hadoop-2.3.0-cdh5.1.2/share/hadoop/yarn/lib/*.jar
Then edit sqoop.properties, which covers things like the log output paths and the embedded Derby metadata repository. One thing to watch out for: change the Derby database name to SQOOP; the default is something else, yet the code has SQOOP hard-coded. The parts we mainly change are:
# JDBC repository provider configuration
org.apache.sqoop.repository.jdbc.handler=org.apache.sqoop.repository.derby.DerbyRepositoryHandler
org.apache.sqoop.repository.jdbc.transaction.isolation=READ_COMMITTED
org.apache.sqoop.repository.jdbc.maximum.connections=10
org.apache.sqoop.repository.jdbc.url=jdbc:derby:@BASEDIR@/repository/SQOOP;create=true
org.apache.sqoop.repository.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
org.apache.sqoop.repository.jdbc.user=sa
org.apache.sqoop.repository.jdbc.password=
#
# Configuration for Mapreduce submission engine (applicable if it's configured)
#
# Hadoop configuration directory
org.apache.sqoop.submission.engine.mapreduce.configuration.directory=/home/cdh/hadoop/etc/hadoop/
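Before starting the server it is worth a quick sanity check that the directory configured above really contains the cluster's client configuration; this is just an illustrative check, not a required step:

# The MapReduce submission engine reads core-site.xml / mapred-site.xml / yarn-site.xml from this directory
ls /home/cdh/hadoop/etc/hadoop/ | grep -E 'core-site|mapred-site|yarn-site'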
The rest of the settings can stay at their defaults for now; you can always fine-tune them later, but the priority is to get things running first.
The last thing to do is drop the MySQL driver, mysql-connector-java-5.1.20.jar, into /usr/lib/sqoop/server/lib:
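Assuming the driver jar is in the current directory, that is a one-line copy:

# Put the MySQL JDBC driver on the server classpath so JDBC-based jobs can load it
cp mysql-connector-java-5.1.20.jar /usr/lib/sqoop/server/lib/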
Then start the server (I installed it on 192.168.69.16):
/usr/lib/sqoop/bin/sqoop.sh server start
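If the server started cleanly, its REST endpoint should answer a version query; port 12000 and the /sqoop web app are the 1.99.3 defaults, so adjust the URL if you changed them:

# Quick health check against the Sqoop2 server REST API
curl http://192.168.69.16:12000/sqoop/version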
The client needs no configuration at all: just ship the same tarball to the target machine, unpack it, and run bin/sqoop.sh client to get the interactive shell. For details on its usage, see the official documentation:
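As a quick illustration (not taken verbatim from the official docs), a first client session usually just points the shell at the server and checks the connection; the host and port below are the ones used in this setup:

bin/sqoop.sh client
# inside the shell: tell the client where the server lives, then verify both sides report a version
sqoop:000> set server --host 192.168.69.16 --port 12000 --webapp sqoop
sqoop:000> show version --all
sqoop:000> show connector --all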
One open issue remains: startup always prints the error below, which looks like a conflict between log4j jars on the classpath. I have not managed to fix it yet, so if you know how to handle it, feel free to get in touch:
log4j: Finished configuring.
log4j:ERROR A "org.apache.log4j.xml.DOMConfigurator" object is not assignable to a "org.apache.log4j.spi.Configurator" variable.
log4j:ERROR The class "org.apache.log4j.spi.Configurator" was loaded by
log4j:ERROR [[email protected]] whereas object of type
log4j:ERROR "org.apache.log4j.xml.DOMConfigurator" was loaded by [WebappClassLoader
  context: /sqoop
  delegate: false
  repositories:
    /WEB-INF/classes/
  ----------> Parent Classloader:
  [email protected]
].
log4j:ERROR Could not instantiate configurator [org.apache.log4j.xml.DOMConfigurator].
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.