Installing and Configuring Flume
1. Downloads
Downloads page: http://flume.apache.org/download.html
Binary package: http://apache.fayea.com/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz
Source package: http://mirrors.hust.edu.cn/apache/flume/1.6.0/apache-flume-1.6.0-src.tar.gz
2. Installation
(1) Pre-built binary package:
Just extract it into the installation directory (renaming is optional):
cd /usr/local/
tar -zxvf apache-flume-1.6.0-bin.tar.gz
mv apache-flume-1.6.0-bin flume
(2) Building from source:
This route is more involved: first download all the required dependencies, then build with Maven:
- Compile only: mvn clean compile
- Compile and run the unit tests: mvn clean test
- Run individual unit tests: mvn clean test -Dtest=<Test1>,<Test2>,... -DfailIfNoTests=false
- Build the distribution archive: mvn clean install
- Build the archive, skipping the unit tests: mvn clean install -DskipTests
Once the build finishes, the resulting package is used the same way as the pre-built binary package above.
3. Running and Configuration
(1) Flume configuration
# example.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /flume/test.log
# Describe the sink
a1.sinks.k1.type = hdfs
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sinks.k1.hdfs.path = hdfs://192.168.15.135:9000/flume/events/%y-%m-%d/%H%M/%S
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
The configuration file has four parts: sources, sinks, channels, and the bindings between them. The source collects data from its origin (for example, a web server); the sink writes the collected, formatted logs to disk, another file system, or a downstream log system; and the channel sits between them, buffering events. Because the channel decouples them, sources and sinks can be combined many-to-many.
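As an illustration of that many-to-many wiring, a single source can replicate its events into several channels, each drained by its own sink. A minimal sketch (the names c2 and k2 and the replicating-selector setup are illustrative additions, not part of the example config above):

```
# Hypothetical fan-out: one source feeding two channels/sinks
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2

# The replicating selector copies every event into both channels
a1.sources.r1.selector.type = replicating
a1.sources.r1.channels = c1 c2

# Each sink drains its own channel
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
```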
Line by line:

# example.conf: A single-node Flume configuration

# Name the components on this agent ("a1" is the agent's name)
a1.sources = r1                            # define one source, r1
a1.sinks = k1                              # define one sink, k1
a1.channels = c1                           # define one channel, c1

# Describe/configure the source
a1.sources.r1.type = exec                  # exec source: run a command and read its output
a1.sources.r1.command = tail -F /flume/test.log   # the command to run: tail test.log

# Describe the sink
a1.sinks.k1.type = hdfs                    # sink type is HDFS

# Use a channel which buffers events in memory
a1.channels.c1.type = memory               # channel keeps events in memory
a1.channels.c1.capacity = 1000             # channel capacity: 1000 events
a1.channels.c1.transactionCapacity = 100   # events per transaction: 100

a1.sinks.k1.hdfs.path = hdfs://192.168.15.135:9000/flume/events/%y-%m-%d/%H%M/%S
                                           # where sink k1 stores files on HDFS
a1.sinks.k1.hdfs.filePrefix = events-      # prefix for the files the sink writes
a1.sinks.k1.hdfs.round = true              # round event timestamps when expanding path escapes
a1.sinks.k1.hdfs.roundValue = 10           # round down to the nearest 10...
a1.sinks.k1.hdfs.roundUnit = minute        # ...minutes
a1.sinks.k1.hdfs.useLocalTimeStamp = true  # use the local time as the event timestamp

# Bind the source and sink to the channel
a1.sources.r1.channels = c1                # bind source r1 to channel c1
a1.sinks.k1.channel = c1                   # bind sink k1 to channel c1
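The round/roundValue/roundUnit settings round the event timestamp down before the path escapes are expanded, so events are grouped into 10-minute directories. A quick sketch of the arithmetic in plain shell (independent of Flume; the 10:53:27 timestamp is just an example):

```shell
# With roundUnit=minute and roundValue=10, the minute field is
# rounded DOWN to the nearest multiple of 10, and finer fields
# (here %S) collapse to zero.
minute=53                          # e.g. an event at 10:53:27
rounded=$(( minute / 10 * 10 ))    # integer division floors: 50
printf '10:%02d:27 -> .../10%02d/00\n' "$minute" "$rounded"
# prints: 10:53:27 -> .../1050/00
```

So an event at 10:53:27 lands under .../15-06-01/1050/00 rather than getting its own per-second directory.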
(2) Running Flume:
$ bin/flume-ng agent -n $agent_name -c conf -f conf/flume-conf.properties
-n sets the agent name;
-c sets the configuration directory (mainly for logging and other auxiliary configuration files);
-f sets the Flume configuration file for this run, including its path (by default relative to the Flume home directory, flume/).
For example:
$ bin/flume-ng agent -n a1 -c conf -f conf/example.conf
Once the agent is running, you can follow its log in logs/flume.log.
Alternatively, start it as follows to control where the log output goes:
$ bin/flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console
--conf: same as -c;
--conf-file: same as -f;
--name: same as -n;
flume.root.logger sets the log level and destination. The command above logs at INFO to the console; if this option is omitted (as in the earlier command), the default is INFO to LOGFILE.
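With the agent running, the exec source only emits events when new lines reach the tailed file, so to see data flow end to end you need to append to it. A sketch (LOG=/tmp/test.log is an assumption so the snippet runs anywhere; on the agent host, point it at /flume/test.log to match a1.sources.r1.command):

```shell
# Append a few sample events to the file the exec source tails.
# /tmp/test.log is a stand-in path for illustration.
LOG=/tmp/test.log
for i in 1 2 3; do
  echo "sample event $i" >> "$LOG"
done
tail -n 3 "$LOG"   # the three lines just appended
```

Each appended line becomes one Flume event, and within a few seconds files prefixed events- should appear under the configured HDFS path.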
4. Notes
(1) Available sources:
- Avro Source
- Thrift Source
- Exec Source
- JMS Source
- Spooling Directory Source
- Twitter 1% firehose Source (experimental)
- Kafka Source
- NetCat Source
- Sequence Generator Source
- Syslog Sources
- HTTP Source
- Stress Source
- Legacy Sources
- Custom Source
- Scribe Source
(2) Available sinks:
- HDFS Sink
- Hive Sink
- Logger Sink
- Avro Sink
- Thrift Sink
- IRC Sink
- File Roll Sink
- Null Sink
- HBaseSinks
- MorphlineSolrSink
- ElasticSearchSink
- Kite Dataset Sink
- Kafka Sink
- Custom Sink
(3) Available channels:
- Memory Channel
- JDBC Channel
- Kafka Channel
- File Channel
- Spillable Memory Channel
- Custom Channel
Detailed configuration reference: http://flume.apache.org/FlumeUserGuide.html#flume-sources