HBase write-performance test results with KFC data

Two fields: one column holds a KFC data record, the other holds the literal string "same".

Every record is flushed as soon as it is written.
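The write loop behind this first run presumably looked like the sketch below. This is a reconstruction against the HBase 0.96-era HTable client, not the original benchmark source; the table name "kfc", the column family/qualifiers, and the payload are assumptions, and running it requires the HBase client jars plus a live cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PerPutFlushWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "kfc");  // table name assumed
        // autoFlush defaults to true on HTable: every put() below is its own RPC
        for (int i = 0; i < 460000; i++) {
            Put put = new Put(Bytes.toBytes(String.format("row-%09d", i)));
            put.add(Bytes.toBytes("f"), Bytes.toBytes("data"),
                    Bytes.toBytes("kfc-" + i));                 // KFC payload placeholder
            put.add(Bytes.toBytes("f"), Bytes.toBytes("tag"),
                    Bytes.toBytes("same"));                     // constant second column
            table.put(put);  // flushed to the RegionServer immediately
        }
        table.close();
    }
}
```

With one RPC per row, network round-trip latency dominates, which is consistent with the roughly 18-second blocks per 10,000 records in the output below.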

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2014-08-08 17:07:46,898 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-08-08 17:07:47,049 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0xd4159f connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:07:47,412 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xd4159f connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:07:47,481 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-08-08 17:07:48,743 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0xd4159f connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
create table success!
has been write 10000 record 20414 total milliseconds
has been write 20000 record 18707 total milliseconds
has been write 30000 record 18629 total milliseconds
has been write 40000 record 18413 total milliseconds
has been write 50000 record 18332 total milliseconds
has been write 60000 record 18233 total milliseconds
has been write 70000 record 18290 total milliseconds
has been write 80000 record 18422 total milliseconds
has been write 90000 record 18439 total milliseconds
has been write 100000 record 19525 total milliseconds
has been write 110000 record 18534 total milliseconds
has been write 120000 record 18421 total milliseconds
has been write 130000 record 18413 total milliseconds
has been write 140000 record 18017 total milliseconds
has been write 150000 record 18618 total milliseconds
has been write 160000 record 19550 total milliseconds
has been write 170000 record 18546 total milliseconds
has been write 180000 record 18636 total milliseconds
has been write 190000 record 18201 total milliseconds
has been write 200000 record 18178 total milliseconds
has been write 210000 record 18044 total milliseconds
has been write 220000 record 17923 total milliseconds
has been write 230000 record 18356 total milliseconds
has been write 240000 record 18626 total milliseconds
has been write 250000 record 18766 total milliseconds
has been write 260000 record 18783 total milliseconds
has been write 270000 record 18354 total milliseconds
has been write 280000 record 18632 total milliseconds
has been write 290000 record 18365 total milliseconds
has been write 300000 record 18347 total milliseconds
has been write 310000 record 18467 total milliseconds
has been write 320000 record 18390 total milliseconds
has been write 330000 record 22061 total milliseconds
has been write 340000 record 18059 total milliseconds
has been write 350000 record 18703 total milliseconds
has been write 360000 record 18620 total milliseconds
has been write 370000 record 18527 total milliseconds
has been write 380000 record 18596 total milliseconds
has been write 390000 record 18534 total milliseconds
has been write 400000 record 18756 total milliseconds
has been write 410000 record 18690 total milliseconds
has been write 420000 record 18712 total milliseconds
has been write 430000 record 18782 total milliseconds
has been write 440000 record 18725 total milliseconds
has been write 450000 record 18458 total milliseconds
has been write 460000 record 18478 total milliseconds
873298 total milliseconds

==================================================

Commit once every 10,000 records

(To commit in batches, besides calling table.setAutoFlush(false); you also need to size the client-side write buffer: table.setWriteBufferSize(1024 * 1024 * 50); // 50 MB)
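The batched variant presumably looked like the following sketch. Again this is an illustrative reconstruction against the old HTable client, not the original source; names are assumptions and it needs a live cluster to run.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchedWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "kfc");         // table name assumed
        table.setAutoFlush(false);                      // buffer puts on the client
        table.setWriteBufferSize(1024 * 1024 * 50);     // 50 MB client write buffer
        long start = System.currentTimeMillis();
        for (int i = 0; i < 460000; i++) {
            Put put = new Put(Bytes.toBytes(String.format("row-%09d", i)));
            put.add(Bytes.toBytes("f"), Bytes.toBytes("data"),
                    Bytes.toBytes("kfc-" + i));         // payload placeholder
            put.add(Bytes.toBytes("f"), Bytes.toBytes("tag"), Bytes.toBytes("same"));
            table.put(put);                             // buffered, no RPC yet
            if ((i + 1) % 10000 == 0) {
                table.flushCommits();                   // one batched send per 10,000 puts
                long now = System.currentTimeMillis();
                System.out.println("has been write " + (i + 1) + " record "
                        + (now - start) + " total milliseconds");
                start = now;
            }
        }
        table.flushCommits();                           // push any remainder
        table.close();
    }
}
```

Both settings matter: with setAutoFlush(false) alone, a too-small write buffer would still trigger frequent implicit flushes before the explicit flushCommits() call.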

Space usage under /hbase (bytes):

0          /hbase/.tmp
7595732    /hbase/WALs
0          /hbase/archive
0          /hbase/corrupt
49270766   /hbase/data
42         /hbase/hbase.id
7          /hbase/hbase.version
208169150  /hbase/oldWALs

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2014-08-08 17:51:58,199 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-08-08 17:51:58,497 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0x1af0db6 connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:51:58,977 INFO [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=catalogtracker-on-hconnection-0x1af0db6 connecting to ZooKeeper ensemble=h139:2181,h135:2181,openstack:2181
2014-08-08 17:51:59,066 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
table Exists!
has been write 10000 record 148 total milliseconds
has been write 20000 record 1465 total milliseconds
has been write 30000 record 699 total milliseconds
has been write 40000 record 999 total milliseconds
has been write 50000 record 882 total milliseconds
has been write 60000 record 644 total milliseconds
has been write 70000 record 808 total milliseconds
has been write 80000 record 725 total milliseconds
has been write 90000 record 612 total milliseconds
has been write 100000 record 709 total milliseconds
has been write 110000 record 588 total milliseconds
has been write 120000 record 600 total milliseconds
has been write 130000 record 813 total milliseconds
has been write 140000 record 545 total milliseconds
has been write 150000 record 750 total milliseconds
has been write 160000 record 769 total milliseconds
has been write 170000 record 771 total milliseconds
has been write 180000 record 761 total milliseconds
has been write 190000 record 622 total milliseconds
has been write 200000 record 723 total milliseconds
has been write 210000 record 625 total milliseconds
has been write 220000 record 777 total milliseconds
has been write 230000 record 635 total milliseconds
has been write 240000 record 707 total milliseconds
has been write 250000 record 604 total milliseconds
has been write 260000 record 804 total milliseconds
has been write 270000 record 735 total milliseconds
has been write 280000 record 624 total milliseconds
has been write 290000 record 615 total milliseconds
has been write 300000 record 727 total milliseconds
has been write 310000 record 613 total milliseconds
has been write 320000 record 665 total milliseconds
has been write 330000 record 703 total milliseconds
has been write 340000 record 622 total milliseconds
has been write 350000 record 620 total milliseconds
has been write 360000 record 933 total milliseconds
has been write 370000 record 885 total milliseconds
has been write 380000 record 861 total milliseconds
has been write 390000 record 989 total milliseconds
has been write 400000 record 833 total milliseconds
has been write 410000 record 991 total milliseconds
has been write 420000 record 736 total milliseconds
has been write 430000 record 586 total milliseconds
has been write 440000 record 590 total milliseconds
has been write 450000 record 690 total milliseconds
has been write 460000 record 617 total milliseconds
34145 total milliseconds
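Putting the two totals side by side (460,000 rows in 873,298 ms with per-put flushing vs. 34,145 ms batched), batching improves throughput by roughly 25x. The arithmetic:

```java
public class ThroughputCompare {
    // rows per second for a run of `rows` records taking `millis` milliseconds
    static double rowsPerSecond(long rows, long millis) {
        return rows * 1000.0 / millis;
    }

    public static void main(String[] args) {
        double perPut  = rowsPerSecond(460000, 873298); // flush on every put
        double batched = rowsPerSecond(460000, 34145);  // commit every 10,000 puts
        System.out.printf("per-put flush: %.0f rows/s%n", perPut);   // ~527 rows/s
        System.out.printf("batched:       %.0f rows/s%n", batched);  // ~13472 rows/s
        System.out.printf("speedup:       %.1fx%n", batched / perPut);
    }
}
```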

