Flume configuration in detail

Flume:
=====================
Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data.
It has a simple and flexible architecture based on streaming data flows.
It is robust and fault tolerant, with tunable reliability mechanisms and many failover and recovery mechanisms.
It uses a simple extensible data model that allows for online analytic applications.

source:
Acts as a producer relative to the channel: it receives data in various formats and hands it to the channel for transport.

channel:
Acts as a data buffer: it receives events from the source and passes them on to the sink.

sink:
Acts as a consumer relative to the channel: it takes events from the channel and delivers them, in the specified format, to the specified destination.
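The source → channel → sink hand-off is essentially a producer/consumer pattern around a bounded buffer. A minimal Python sketch of the idea (the function names are illustrative, not Flume's API):

```python
from queue import Queue

# The channel is a bounded buffer between the producer (source)
# and the consumer (sink).
channel = Queue(maxsize=1000)

def source_put(event):
    """Source side: hand an event to the channel."""
    channel.put(event)

def sink_take():
    """Sink side: take an event from the channel for delivery."""
    return channel.get()

source_put({"headers": {}, "body": b"hello"})
event = sink_take()
print(event["body"])  # b'hello'
```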

Event:
===============
The basic unit of transfer in Flume:
header + body

Flume installation:
================
1. Unpack the tarball
2. Create a symbolic link
3. Configure the environment variables and source them
4. Edit the configuration files
1) Rename flume-env.ps1.template to flume-env.ps1
2) Rename flume-env.sh.template to flume-env.sh
3) Edit flume-env.sh and point it at the JDK by adding
export JAVA_HOME=/soft/jdk

5. Check the Flume version
flume-ng version

Using Flume:
=========================
// Flume can also read its configuration from ZooKeeper

// Running Flume (-n agent name, -f configuration file)
flume-ng agent -n a1 -f xxx.conf

agent:   a1
source:  r1
channel: c1
sink:    k1

How to use it:
1. Write a configuration file, r_nc.conf
# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 8888

# Configure the sink
a1.sinks.k1.type = logger

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

2. Start Flume with that configuration file
flume-ng agent -n a1 -f r_nc.conf

3. Open another session and test it
nc localhost 8888

// User guide
http://flume.apache.org/FlumeUserGuide.html

Running the agent in the background:
=============================================

Ctrl+Z suspends the foreground job =====> [1]+ Stopped flume-ng agent -n a1 -f r_nc.conf

Resume it in the background with: bg %1

List background jobs with: jobs

Bring it back to the foreground with: fg %1

flume:
Collects, aggregates, and moves massive volumes of log data

flume-ng agent -n a1 -f xxx.conf

source
    producer relative to the channel   // netcat
channel
    acts as a buffer                   // memory
sink
    consumer relative to the channel   // logger

Event:
header + body
(key/value header pairs + payload data)

source:
============================================
1. Sequence (seq) source: mostly used for testing
# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = seq
# Total number of events to send
a1.sources.r1.totalEvents = 1000

# Configure the sink
a1.sinks.k1.type = logger

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

2. Stress source: mostly used for load testing
# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = org.apache.flume.source.StressSource
# Size of a single event, in bytes
a1.sources.r1.size = 10240
# Total number of events
a1.sources.r1.maxTotalEvents = 1000000

# Configure the sink
a1.sinks.k1.type = logger

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

3. Spooling-directory (spooldir) source: watches a directory for newly created files and sends their contents as events
# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = spooldir
# Directory to watch
a1.sources.r1.spoolDir = /home/centos/spooldir

# Optional: suffix appended to files once they have been fully consumed
#a1.sources.r1.fileSuffix = .COMPLETED

# Configure the sink
a1.sinks.k1.type = logger

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

4. exec source // produces data by running a Linux command
// typical use: tail -F (follow a file and emit whatever is appended to it)
// no delivery guarantee: data can easily be lost

# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = exec
# The Linux command to run
a1.sources.r1.command = tail -F /home/centos/readme.txt

# Configure the sink
a1.sinks.k1.type = logger

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

5. Taildir source // watches the files in a directory
// file names can be selected with a regex
// fault tolerant: progress is tracked in a position file

# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = TAILDIR
# Declare the file groups; several can be listed
a1.sources.r1.filegroups = f1
# Directory and file pattern for the group (regex); only files can be matched
a1.sources.r1.filegroups.f1 = /home/centos/taildir/.*

# Location of the position file
# a1.sources.r1.positionFile = ~/.flume/taildir_position.json

# Configure the sink
a1.sinks.k1.type = logger

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

sink:
====================================
1. File sink // commonly used for data collection
# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 8888

# Configure the sink
a1.sinks.k1.type = file_roll
# Target directory
a1.sinks.k1.sink.directory = /home/centos/file
# Roll interval in seconds; default 30, 0 disables rolling so all data goes to a single file
a1.sinks.k1.sink.rollInterval = 0

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

2. HDFS sink // writes SequenceFile format by default
// key:   LongWritable
// value: BytesWritable

# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 8888

# Configure the sink
a1.sinks.k1.type = hdfs
# Target directory
a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/
# File name prefix
a1.sinks.k1.hdfs.filePrefix = events-
# Roll interval, in seconds
a1.sinks.k1.hdfs.rollInterval = 0
# File size that triggers a roll, in bytes
a1.sinks.k1.hdfs.rollSize = 1024
# Use the local timestamp instead of an event header
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Output file type; the default is SequenceFile
# DataStream: plain text, no compression codec allowed
# CompressedStream: compressed text, requires a codec
a1.sinks.k1.hdfs.fileType = DataStream

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
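The %y-%m-%d escapes in hdfs.path are expanded from the event's timestamp (here taken from the local clock, because useLocalTimeStamp = true). Roughly equivalent logic in Python, as a sketch only (Flume supports many more escapes than these three):

```python
import time

# Expand Flume-style date escapes from a millisecond timestamp,
# the way hdfs.path = /flume/events/%y-%m-%d/ is resolved.
def expand_path(pattern: str, ts_millis: int) -> str:
    t = time.localtime(ts_millis / 1000)
    return (pattern.replace("%y", time.strftime("%y", t))
                   .replace("%m", time.strftime("%m", t))
                   .replace("%d", time.strftime("%d", t)))

print(expand_path("/flume/events/%y-%m-%d/", int(time.time() * 1000)))
```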

3. Hive sink: // hiveserver help: hive --service help
// 1. Start the Hive metastore service with hive --service metastore; metastore address: thrift://localhost:9083
// 2. Copy the hcatalog dependencies into /hive/lib: cp hive-hcatalog* /soft/hive/lib (from /soft/hive/hcatalog/share/hcatalog)
// 3. Create a transactional Hive table:
SET hive.support.concurrency=true;
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.compactor.initiator.on=true;
SET hive.compactor.worker.threads=1;

create table myhive.weblogs(id int, name string, age int)
clustered by(id) into 2 buckets
row format delimited
fields terminated by '\t'
stored as orc
tblproperties('transactional'='true');

# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 8888

# Configure the sink
a1.sinks.k1.type = hive
a1.sinks.k1.hive.metastore = thrift://127.0.0.1:9083
a1.sinks.k1.hive.database = myhive
a1.sinks.k1.hive.table = weblogs
a1.sinks.k1.useLocalTimeStamp = true
# Input format: DELIMITED or json
# DELIMITED: plain text
# json: JSON data
a1.sinks.k1.serializer = DELIMITED
# Input field delimiter (in double quotes)
a1.sinks.k1.serializer.delimiter = ","
# Output field delimiter (in single quotes)
a1.sinks.k1.serializer.serdeSeparator = '\t'
# Field names, comma-separated, no spaces allowed
a1.sinks.k1.serializer.fieldnames = id,name,age

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

4. HBase sink // SimpleHbaseEventSerializer uses built-in defaults for the rowKey and column; they cannot be customized
// RegexHbaseEventSerializer lets you name the rowKey and column fields yourself

# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 8888

# Configure the sink
a1.sinks.k1.type = hbase
a1.sinks.k1.table = flume_hbase
a1.sinks.k1.columnFamily = f1
a1.sinks.k1.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer

# Columns are extracted with a regex
# rowKeyIndex picks the capture group used as the rowKey (0-based)
a1.sinks.k1.serializer.colNames = ROW_KEY,name,age
a1.sinks.k1.serializer.regex = (.*),(.*),(.*)
a1.sinks.k1.serializer.rowKeyIndex = 0

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
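What RegexHbaseEventSerializer does with the colNames/regex/rowKeyIndex settings above can be sketched in Python (illustrative only, not the real implementation):

```python
import re

# Mimic RegexHbaseEventSerializer: split the event body with a regex,
# map capture groups to column names, and use one group as the row key.
col_names = ["ROW_KEY", "name", "age"]
pattern = re.compile(r"(.*),(.*),(.*)")
row_key_index = 0

def to_row(body: str):
    groups = pattern.match(body).groups()
    row_key = groups[row_key_index]
    cells = {c: g for i, (c, g) in enumerate(zip(col_names, groups))
             if i != row_key_index}
    return row_key, cells

rk, cells = to_row("row1,tom,20")
print(rk, cells)  # row1 {'name': 'tom', 'age': '20'}
```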

5. Async HBase sink // asynchronous HBase sink
// writes asynchronously, so it is faster

# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 8888

# Configure the sink
a1.sinks.k1.type = asynchbase
a1.sinks.k1.table = flume_hbase
a1.sinks.k1.columnFamily = f1
a1.sinks.k1.serializer = org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

channel: the buffer
=====================================
1. Memory channel
a1.channels.c1.type = memory
# Maximum number of events held in the channel
a1.channels.c1.capacity = 1000
# Maximum number of events per transaction, both when the channel
# takes events from the source and when it hands them to the sink
a1.channels.c1.transactionCapacity = 100
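transactionCapacity caps how many events one channel transaction may move, while capacity caps the channel as a whole. A rough Python illustration of a sink draining a channel in transaction-sized batches (helper names invented for the sketch):

```python
from queue import Queue, Empty

capacity = 1000
transaction_capacity = 100
channel = Queue(maxsize=capacity)

for i in range(250):            # source side: put 250 events
    channel.put(f"event-{i}")

def take_batch():
    """Sink side: one transaction drains at most transactionCapacity events."""
    batch = []
    while len(batch) < transaction_capacity:
        try:
            batch.append(channel.get_nowait())
        except Empty:
            break
    return batch

sizes = []
while True:
    b = take_batch()
    if not b:
        break
    sizes.append(len(b))
print(sizes)  # [100, 100, 50]
```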

2. File channel: // if several file channels keep their checkpoints and data in the default locations at the same time,
// the files collide and the other channels crash

# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 8888

# Configure the sink
a1.sinks.k1.type = logger

# Configure the channel
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /home/centos/flume/checkpoint
a1.channels.c1.dataDirs = /home/centos/flume/data

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

memoryChannel: fast, but events are lost if the machine loses power

fileChannel: slower, but events survive even a power failure

Avro
===============================================
source
# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4444

# Configure the sink
a1.sinks.k1.type = logger

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start an Avro client and send data:
flume-ng avro-client -H localhost -p 4444 -R ~/avro/header.txt -F ~/avro/user0.txt
(-H host, -p port, -R header file, -F data file)

sink
# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = TAILDIR
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /home/centos/taildir/.*

# Configure the sink (the Avro sink property is hostname, not bind)
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.23.101
a1.sinks.k1.port = 4444

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Flume multi-hop:
=====================================
1. Distribute the Flume install from s101 to the other nodes
xsync.sh /soft/flume
xsync.sh /soft/apache-flume-1.8.0-bin/

2. Switch to root and distribute the environment file
su root
xsync.sh /etc/profile
exit

3. Configuration files
1) Configure s101 // hop.conf
   source: avro
   sink:   hdfs

# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4444

# Configure the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/hop/%y-%m-%d/
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.rollInterval = 0
a1.sinks.k1.hdfs.rollSize = 1024
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.fileType = DataStream

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

2) Configure s102-s104 // hop2.conf
   source: taildir
   sink:   avro

# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = TAILDIR
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /home/centos/taildir/.*

# Configure the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.23.101
a1.sinks.k1.port = 4444

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

4. Create the ~/taildir directory on s102-s104
xcall.sh "mkdir ~/taildir"

5. Start Flume on s101
flume-ng agent -n a1 -f /soft/flume/conf/hop.conf

6. Start Flume on each of s102-s104 and put it in the background
flume-ng agent -n a1 -f /soft/flume/conf/hop2.conf &

7. Test: write data into taildir on each of s102-s104 and watch the data arrive in HDFS
s102]$ echo 102 > taildir/1.txt
s103]$ echo 103 > taildir/1.txt
s104]$ echo 104 > taildir/1.txt

interceptor:
==================================
A source-side component that modifies or drops events.
Each source can be configured with several interceptors ===> an interceptor chain
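An interceptor chain is just a pipeline of functions that each take an event and return it (possibly modified) or drop it. A rough Python sketch of the contract, with invented names:

```python
import socket
import time

# Each interceptor takes an event dict and returns it (possibly modified),
# or returns None to drop it -- mirroring Flume's interceptor contract.
def timestamp_interceptor(event):
    event["headers"]["timestamp"] = str(int(time.time() * 1000))
    return event

def host_interceptor(event):
    event["headers"]["host"] = socket.gethostname()
    return event

def static_interceptor(event, key="location", value="NEW_YORK"):
    event["headers"][key] = value
    return event

def run_chain(event, chain):
    for interceptor in chain:
        event = interceptor(event)
        if event is None:      # dropped by an interceptor
            return None
    return event

e = run_chain({"headers": {}, "body": b"data"},
              [timestamp_interceptor, host_interceptor, static_interceptor])
print(sorted(e["headers"]))  # ['host', 'location', 'timestamp']
```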

1. Timestamp Interceptor // adds a timestamp to the event header

# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 8888
# Name the interceptor
a1.sources.r1.interceptors = i1
# Set the interceptor type
a1.sources.r1.interceptors.i1.type = timestamp

# Configure the sink
a1.sinks.k1.type = logger

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

2. Static Interceptor // adds a fixed key/value pair to the event header

3. Host Interceptor // adds the agent's host to the event header

4. Configuring an interceptor chain:

# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 8888

a1.sources.r1.interceptors = i1 i2 i3
a1.sources.r1.interceptors.i1.type = timestamp
a1.sources.r1.interceptors.i2.type = host
a1.sources.r1.interceptors.i3.type = static
a1.sources.r1.interceptors.i3.key = location
a1.sources.r1.interceptors.i3.value = NEW_YORK

# Configure the sink
a1.sinks.k1.type = logger

# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

channel selector:
====================================
A source-side component that decides which channel(s) each event goes to, much like partitioning.

When one source feeds several channels, the default behavior is to replicate: every channel receives a copy of each event.

1. Replicating channel selector // the default; the source sends a copy of every event to every channel
// setup: 1 source, 3 channels, 3 sinks
// netcat -> memory -> file_roll
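The two selector types differ only in how they pick target channels. A rough Python sketch of both policies (invented helper names; the "country" header matches the multiplexing example later in this section):

```python
# Replicating: every channel gets a copy of the event.
def replicating_select(event, channels):
    return list(channels)

# Multiplexing: route by a header value, with a default for unmapped values.
def multiplexing_select(event, mapping, default):
    return [mapping.get(event["headers"].get("country"), default)]

channels = ["c1", "c2", "c3"]
mapping = {"CN": "c1", "US": "c2"}

e = {"headers": {"country": "US"}, "body": b"x"}
print(replicating_select(e, channels))        # ['c1', 'c2', 'c3']
print(multiplexing_select(e, mapping, "c3"))  # ['c2']
print(multiplexing_select({"headers": {}, "body": b""}, mapping, "c3"))  # ['c3']
```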

# Name the agent's components
a1.sources = r1
a1.sinks = k1 k2 k3
a1.channels = c1 c2 c3

# Configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 8888
a1.sources.r1.selector.type = replicating

# Configure the channels
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

a1.channels.c3.type = memory
a1.channels.c3.capacity = 1000
a1.channels.c3.transactionCapacity = 100

# Configure the sinks
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /home/centos/file1
a1.sinks.k1.sink.rollInterval = 0

a1.sinks.k2.type = file_roll
a1.sinks.k2.sink.directory = /home/centos/file2
a1.sinks.k2.sink.rollInterval = 0

a1.sinks.k3.type = file_roll
a1.sinks.k3.sink.directory = /home/centos/file3
a1.sinks.k3.sink.rollInterval = 0

# Bind the source and sinks to the channels
a1.sources.r1.channels = c1 c2 c3
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
a1.sinks.k3.channel = c3

2. Multiplexing channel selector // routes events by header value; an Avro source is used here to send files with headers

# Name the agent's components
a1.sources = r1
a1.sinks = k1 k2 k3
a1.channels = c1 c2 c3

# Configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4444
# Configure the channel selector
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = country
a1.sources.r1.selector.mapping.CN = c1
a1.sources.r1.selector.mapping.US = c2
a1.sources.r1.selector.default = c3

# Configure the channels
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

a1.channels.c3.type = memory
a1.channels.c3.capacity = 1000
a1.channels.c3.transactionCapacity = 100

# Configure the sinks
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /home/centos/file1
a1.sinks.k1.sink.rollInterval = 0

a1.sinks.k2.type = file_roll
a1.sinks.k2.sink.directory = /home/centos/file2
a1.sinks.k2.sink.rollInterval = 0

a1.sinks.k3.type = file_roll
a1.sinks.k3.sink.directory = /home/centos/file3
a1.sinks.k3.sink.rollInterval = 0

# Bind the source and sinks to the channels
a1.sources.r1.channels = c1 c2 c3
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
a1.sinks.k3.channel = c3

1. Create the file1, file2, and file3 directories in the home directory
mkdir file1 file2 file3

2. Create a country directory holding the header files and the data file
Header files CN.txt, US.txt, OTHER.txt:
CN.txt    ===> country CN
US.txt    ===> country US
OTHER.txt ===> country OTHER

Data file 1.txt:
1.txt ====> helloworld

3. Start Flume
flume-ng agent -n a1 -f /soft/flume/selector_multi.conf

4. Run the Avro client
flume-ng avro-client -H localhost -p 4444 -R ~/country/US.txt -F ~/country/1.txt ===> check file2
flume-ng avro-client -H localhost -p 4444 -R ~/country/CN.txt -F ~/country/1.txt ===> check file1
flume-ng avro-client -H localhost -p 4444 -R ~/country/OTHER.txt -F ~/country/1.txt ===> check file3

sinkProcessor
=================================
A sink runner runs a sink group.

A sink group consists of one or more sinks.

The sink runner tells the sink group to process the next batch of events.

The sink group contains a sink processor, which picks the sink that handles that batch.

failover // gives every sink a priority
// the larger the number, the higher the priority
// incoming data is handled by the highest-priority sink
// if that sink dies, the next-highest-priority sink takes over and keeps processing
// channels and sinks must be paired one-to-one

a1.sources = r1
a1.sinks = s1 s2 s3
a1.channels = c1 c2 c3

# Describe/configure the source
a1.sources.r1.type = seq

a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = s1 s2 s3
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.s1 = 5
a1.sinkgroups.g1.processor.priority.s2 = 10
a1.sinkgroups.g1.processor.priority.s3 = 15
a1.sinkgroups.g1.processor.maxpenalty = 10000

# Describe the sink
a1.sinks.s1.type = file_roll
a1.sinks.s1.sink.directory = /home/centos/file1
a1.sinks.s2.type = file_roll
a1.sinks.s2.sink.directory = /home/centos/file2
a1.sinks.s3.type = file_roll
a1.sinks.s3.sink.directory = /home/centos/file3

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c2.type = memory
a1.channels.c3.type = memory

# Bind the source and sink to the channel
a1.sources.r1.channels = c1 c2 c3
a1.sinks.s1.channel = c1
a1.sinks.s2.channel = c2
a1.sinks.s3.channel = c3
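The failover processor's selection logic boils down to "highest live priority wins". A simplified Python sketch (names invented for the sketch; the real processor also penalizes failed sinks for a backoff period governed by maxpenalty):

```python
# Sinks with priorities, as in the config above: s3 > s2 > s1.
priorities = {"s1": 5, "s2": 10, "s3": 15}
alive = {"s1": True, "s2": True, "s3": True}

def pick_sink():
    """Pick the live sink with the highest priority."""
    live = [s for s, ok in alive.items() if ok]
    return max(live, key=lambda s: priorities[s]) if live else None

print(pick_sink())      # s3
alive["s3"] = False     # s3 fails...
print(pick_sink())      # s2 takes over
```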

An Event is built on the source side by wrapping the incoming data's byte array:
Event event = EventBuilder.withBody(body);

A sink's process method returns one of two statuses:
1. READY   // one or more events were successfully delivered
2. BACKOFF // the channel had no data to give the sink

Lifecycle of a Flume transaction:

tx.begin()    // open the transaction, then do the work
tx.commit()   // commit once the work completes
tx.rollback() // roll back if an exception occurs
tx.close()    // always close the transaction at the end
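The usual pattern around that lifecycle: begin, do the work, commit on success, roll back on failure, and always close. A Python sketch of the shape of the code (a toy stand-in, not Flume's Transaction API; mapping failure to BACKOFF here is purely for illustration):

```python
class Transaction:
    """Toy stand-in for a channel transaction."""
    def __init__(self):
        self.state = "new"
    def begin(self):    self.state = "open"
    def commit(self):   self.state = "committed"
    def rollback(self): self.state = "rolled_back"
    def close(self):    self.state = "closed"

def process(tx, do_work):
    tx.begin()
    try:
        do_work()          # e.g. take events from the channel and deliver them
        tx.commit()
        return "READY"
    except Exception:
        tx.rollback()
        return "BACKOFF"
    finally:
        tx.close()         # the transaction is always closed

def failing():
    raise RuntimeError("delivery failed")

print(process(Transaction(), lambda: None))  # READY
print(process(Transaction(), failing))       # BACKOFF
```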

Original article: https://www.cnblogs.com/zhengchunyuan/p/12204763.html

