Flume HDFS sink error

When using an HDFS sink, Flume reports the following error: org.apache.flume.EventDeliveryException: java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null

Solution: the sink is HDFS and the path uses time-based escape sequences to generate directories automatically. When the error above appears, the official documentation explains that each event must carry a timestamp in its Flume headers so those escapes can be resolved, but getting a properly formatted timestamp into every event can be awkward. A simpler option is to set the hdfs.useLocalTimeStamp parameter so the sink uses the agent's local time instead, for example to write one directory per hour or, as in the configuration below, per day:

  

# Describe the sink
a1.sinks.k1.type=hdfs
# The %Y-%m-%d escapes in the path normally require a timestamp in the event headers
a1.sinks.k1.hdfs.path=hdfs://node0:8020/user/flume/%Y-%m-%d
# Roll files by size only (about 10 MB); disable interval- and count-based rolling
a1.sinks.k1.hdfs.rollSize=10240000
a1.sinks.k1.hdfs.rollInterval=0
a1.sinks.k1.hdfs.rollCount=0
# Close files that have been idle for 5 seconds
a1.sinks.k1.hdfs.idleTimeout=5
# Write plain text instead of SequenceFiles
a1.sinks.k1.hdfs.fileType=DataStream
# Resolve the path escapes with the agent's local time instead of the event header timestamp
a1.sinks.k1.hdfs.useLocalTimeStamp=true
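
If you would rather satisfy the sink's requirement by putting a timestamp into the event headers themselves, Flume's built-in timestamp interceptor can be attached to the source. The sketch below is a minimal example under assumptions (the source name r1 does not appear in the original configuration); the interceptor stamps each event with the time it passes through the source, so the %Y-%m-%d escapes resolve even without hdfs.useLocalTimeStamp:

# Alternative: stamp events on the source side (source name r1 is assumed)
a1.sources.r1.interceptors=i1
a1.sources.r1.interceptors.i1.type=timestamp
# Keep an existing timestamp header instead of overwriting it
a1.sources.r1.interceptors.i1.preserveExisting=true

With the interceptor in place, the header the sink expects is always present, which also lets you partition by the time the data was collected rather than the time it was written to HDFS.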

Posted: 2024-11-06 17:48:30

Related articles on flume-hdfs sink errors

Flume collection fails to start: insufficient permissions

18/04/18 16:47:12 WARN source.EventReader: Could not find file: /home/hadoop/king/flume/103104/data/HD20180417213353.data java.io.FileNotFoundException: /home/hadoop/king/flume/103104/trackerDir/.flumespool-main.meta (Permission denied) at java.io.Fi

Eclipse reports permission denied when writing a file to HDFS

Environment: Win7, Eclipse, Hadoop 1.1.2. When creating a file, i.e. fileSystem.mkdirs(Path); // creating a directory on Hadoop, the call fails with: org.apache.hadoop.security.AccessControlException: Permission denied: user=Administrator, access=WRITE, inode="tmp":root:supergroup:rwxr-xr-x. Cause: 1. the current user

Reformatting HDFS in an HA environment fails

The DataNodes fail to start; nodes 136, 137 and 138 are all DataNodes and none of them comes up. The DataNode logs show two errors. The first is Incompatible clusterIDs in /home/hadoop/data/datanode, which requires deleting the contents of the temporary directory configured as hadoop.tmp.dir in core-site.xml. The second requires deleting the contents of the directory configured as dfs.datanode.data.dir in hdfs-site.xml. Then, after reformatting HDFS and restarting it

Error accessing HDFS from Hue

Accessing HDFS from Hue reports: Cannot access: /. Note: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup". Cause and solution: see the original article at https://www.cnblogs.com/mediocreWorld/p/11148875.html

Errors in a Filebeat + Kafka + Spark Streaming program and how to fix them

17/07/01 03:07:21 WARN RandomBlockReplicationPolicy: Expecting 1 replicas with only 0 peer/s. 17/07/01 03:07:21 WARN BlockManager: Block input-0-1498849640800 replicated to only 0 peer(s) instead of 1 peers 17/07/01 03:07:26 ERROR Executor: Exception

Running a Hadoop program from Java fails with: org.apache.hadoop.fs.LocalFileSystem cannot be cast to org.apache.

Running a Hadoop example program from Java fails with: org.apache.hadoop.fs.LocalFileSystem cannot be cast to org.apache. The code is as follows: package com.pcitc.hadoop; import java.io.IOException; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.h

Error reading files on the Hadoop platform

Background: in production, a script hits an IO error while reading ST-layer table data; the files under the table directory are all compressed. Details: Task with the most failures(4): ----- Task ID: task_201408301703_172845_m_003505 URL: http://master:50030/taskdetails.jsp?jobid=job_201408301703_172845&tipid=task_201408301703_172845_m_003505 -

A simple Hive query fails

The error is as follows: checking where the table data is stored shows that the .gz compressed file on HDFS has a problem. Re-import the data: load data local inpath '/home/dp/db_apptrack_mobile_product.csv' overwrite into table stage.mobile_product_temp;

Starting the Spark history server fails with "Logging directory must be specified": how to fix it

I recently installed Spark in standalone mode on my own machine. Spark itself starts without problems, but starting the Spark history server keeps failing with the following error: Spark assembly has been built with Hive, including Datanucleus jars on classpath Spark Command: /usr/local/java/jdk1.7.0_67/bin/java -cp ::/usr/local/spark/conf:/usr/local/spark/li