Hadoop in Action - Flume Source regex_extractor (Part 12)

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
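# Interceptors are applied in the order listed below: i1 (regex_extractor), i2 (host), i3 (static), i4 (timestamp)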
a1.sources.r1.interceptors = i1 i2 i3 i4
a1.sources.r1.interceptors.i4.type = timestamp
a1.sources.r1.interceptors.i2.type = host
a1.sources.r1.interceptors.i3.type = static
a1.sources.r1.interceptors.i3.key = datacenter
a1.sources.r1.interceptors.i3.value = NEW_YORK
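# i1: extract three single-digit groups from the event body (e.g. "1:2:3") into headers named one, two and three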
a1.sources.r1.interceptors.i1.type = regex_extractor
a1.sources.r1.interceptors.i1.regex = (\\d):(\\d):(\\d)
a1.sources.r1.interceptors.i1.serializers = s1 s2 s3
a1.sources.r1.interceptors.i1.serializers.s1.name = one
a1.sources.r1.interceptors.i1.serializers.s2.name = two
a1.sources.r1.interceptors.i1.serializers.s3.name = three

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
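
To try this out, save the configuration to a file (the file name regex_extractor.conf below is just an example) and start the agent with the standard flume-ng command, then send a matching line to the netcat source from another terminal:

bin/flume-ng agent --conf conf --conf-file conf/regex_extractor.conf --name a1 -Dflume.root.logger=INFO,console

# in a second terminal
telnet localhost 44444
1:2:3

For a body like 1:2:3, the logger sink output should show headers roughly of the form one=1, two=2, three=3 extracted by the regex_extractor interceptor, along with the host, datacenter=NEW_YORK and timestamp headers added by the other interceptors. The exact log line format may vary by Flume version.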
