Hadoop 2.5.2 HA Cluster Setup on CentOS 7

1. Environment

OS          CentOS 7.0, 64-bit

namenode01    192.168.0.220

namenode02    192.168.0.221

datanode01    192.168.0.222

datanode02    192.168.0.223

datanode03    192.168.0.224

2. Configure the base environment

Add the local hosts entries on all of the machines.

[root@namenode01 ~]# tail -5 /etc/hosts
192.168.0.220	namenode01
192.168.0.221	namenode02
192.168.0.222	datanode01
192.168.0.223	datanode02
192.168.0.224	datanode03
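
The same entries can be pushed to the remaining machines in one pass. A minimal sketch, assuming root SSH to each host is allowed (password prompts are fine at this stage) and that overwriting the target /etc/hosts is acceptable:
for h in namenode02 datanode01 datanode02 datanode03; do
    scp /etc/hosts $h:/etc/hosts
done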

Create a hadoop user on all five machines and set its password to hadoop; only namenode01 is shown as the example.

[root@namenode01 ~]# useradd hadoop
[root@namenode01 ~]# passwd hadoop
Changing password for user hadoop.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.
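
The remaining four machines need the same user. A minimal sketch that creates it remotely, assuming root SSH to each host (passwd --stdin is available on CentOS):
for h in namenode02 datanode01 datanode02 datanode03; do
    ssh $h 'useradd hadoop && echo hadoop | passwd --stdin hadoop'
done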

Configure passwordless SSH logins between the hadoop users on all five machines.

#On namenode01
[root@namenode01 ~]# su - hadoop
[hadoop@namenode01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
1c:7e:89:9d:14:9a:10:fc:69:1e:11:3d:6d:18:a5:01 hadoop@namenode01
The key's randomart image is:
+--[ RSA 2048]----+
|     .o.E++=.    |
|      ...o++o    |
|       .+ooo     |
|       o== o     |
|       oS.=      |
|        ..       |
|                 |
|                 |
|                 |
+-----------------+
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

#Verify the result
[hadoop@namenode01 ~]$ ssh namenode01 hostname
namenode01
[hadoop@namenode01 ~]$ ssh namenode02 hostname
namenode02
[hadoop@namenode01 ~]$ ssh datanode01 hostname
datanode01
[hadoop@namenode01 ~]$ ssh datanode02 hostname
datanode02
[hadoop@namenode01 ~]$ ssh datanode03 hostname
datanode03
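
The five ssh-copy-id commands and the verification are identical on every node, so they can also be run as a small loop in the hadoop user's shell (hostnames as above):
for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub $h
done
for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    ssh $h hostname
done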

#On namenode02
[root@namenode02 ~]# su - hadoop
[hadoop@namenode02 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
a9:f5:0d:cb:c9:88:7b:71:f5:71:d8:a9:23:c6:85:6a hadoop@namenode02
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|            .  o.|
|         . ...o.o|
|        S +....o |
|       +.E.O o.  |
|      o ooB o .  |
|       ..        |
|      ..         |
+-----------------+

[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@namenode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

#Verify the result
[hadoop@namenode02 ~]$ ssh namenode01 hostname
namenode01
[hadoop@namenode02 ~]$ ssh namenode02 hostname
namenode02
[hadoop@namenode02 ~]$ ssh datanode01 hostname
datanode01
[hadoop@namenode02 ~]$ ssh datanode02 hostname
datanode02
[hadoop@namenode02 ~]$ ssh datanode03 hostname
datanode03

#On datanode01
[root@datanode01 ~]# su - hadoop
[hadoop@datanode01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
48:72:20:69:64:e7:81:b7:03:64:41:5e:fa:88:db:5e hadoop@datanode01
The key's randomart image is:
+--[ RSA 2048]----+
| +O+=            |
| +=*.o           |
| .ooo.o          |
| . oo+ .         |
|. . ... S        |
| o               |
|. . E            |
| . .             |
|  .              |
+-----------------+

[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

#Verify the result
[hadoop@datanode01 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode01 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode01 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode01 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode01 ~]$ ssh datanode03 hostname
datanode03

#On datanode02
[hadoop@datanode02 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
32:aa:88:fa:ce:ec:51:6f:de:f4:06:c9:4e:9c:10:31 hadoop@datanode02
The key's randomart image is:
+--[ RSA 2048]----+
|      E.         |
|      ..         |
|       .         |
|      .          |
|    . o+So       |
|   . o oB        |
|  . . oo..       |
|.+ o o o...      |
|=+B   . ...      |
+-----------------+

[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode02 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

#Verify the result
[hadoop@datanode02 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode02 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode02 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode02 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode02 ~]$ ssh datanode03 hostname
datanode03

#On datanode03
[root@datanode03 ~]# su - hadoop
[hadoop@datanode03 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
f3:f3:3c:85:61:c6:e4:82:58:10:1f:d8:bf:71:89:b4 hadoop@datanode03
The key's randomart image is:
+--[ RSA 2048]----+
|      o=.        |
|      ..o.. .    |
|       o.+ * .   |
|      . . E O    |
|        S  B o   |
|         o. . .  |
|          o  .   |
|           +.    |
|            o.   |
+-----------------+

[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@datanode03 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03

#Verify the result
[hadoop@datanode03 ~]$ ssh namenode01 hostname
namenode01
[hadoop@datanode03 ~]$ ssh namenode02 hostname
namenode02
[hadoop@datanode03 ~]$ ssh datanode01 hostname
datanode01
[hadoop@datanode03 ~]$ ssh datanode02 hostname
datanode02
[hadoop@datanode03 ~]$ ssh datanode03 hostname
datanode03

3. Install the JDK

[root@namenode01 ~]# wget http://download.oracle.com/otn-pub/java/jdk/8u74-b02/jdk-8u74-linux-x64.tar.gz?AuthParam=1461828883_648d68bc6c7b0dfd253a6332a5871e06
[root@namenode01 ~]# tar xf jdk-8u74-linux-x64.tar.gz -C /usr/local/

#Create the JDK environment variable file
[root@namenode01 ~]# cat /etc/profile.d/java.sh
JAVA_HOME=/usr/local/jdk1.8.0_74
JAVA_BIN=/usr/local/jdk1.8.0_74/bin
JRE_HOME=/usr/local/jdk1.8.0_74/jre
PATH=$PATH:/usr/local/jdk1.8.0_74/bin:/usr/local/jdk1.8.0_74/jre/bin
CLASSPATH=/usr/local/jdk1.8.0_74/jre/lib:/usr/local/jdk1.8.0_74/lib:/usr/local/jdk1.8.0_74/jre/lib/charsets.jar
export JAVA_HOME JAVA_BIN JRE_HOME PATH CLASSPATH

#Load the environment variables
[root@namenode01 ~]# source /etc/profile.d/java.sh
[root@namenode01 ~]# which java
/usr/local/jdk1.8.0_74/bin/java

#Test the result
[root@namenode01 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)

#Copy the environment variable file and the unpacked JDK to the other 4 machines
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 namenode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode01:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode03:/usr/local/
[root@namenode01 ~]# scp /etc/profile.d/java.sh namenode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode01:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode03:/etc/profile.d/

#Test the result, taking namenode02 as the example
[root@namenode02 ~]# source /etc/profile.d/java.sh
[root@namenode02 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
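
Because passwordless SSH is already in place for the hadoop user, the JDK can also be checked on all of the remaining nodes from namenode01 in one loop (a quick sketch; note that java -version prints to stderr):
for h in namenode02 datanode01 datanode02 datanode03; do
    echo "== $h =="
    ssh $h 'source /etc/profile.d/java.sh; java -version'
done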

4. Install Hadoop

#Download the Hadoop software
[root@namenode01 ~]# wget http://apache.fayea.com/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz
[root@namenode01 ~]# tar xf hadoop-2.5.2.tar.gz -C /usr/local/
[root@namenode01 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[root@namenode01 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
‘/usr/local/hadoop’ -> ‘/usr/local/hadoop-2.5.2/’

#Add the Hadoop environment variable file
[root@namenode01 ~]# cat /etc/profile.d/hadoop.sh
HADOOP_HOME=/usr/local/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH

#Switch to the hadoop user and check that the JDK environment works
[root@namenode01 ~]# su - hadoop
Last login: Thu Apr 28 15:17:16 CST 2016 from datanode01 on pts/1
[hadoop@namenode01 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)

#Start editing the Hadoop configuration files
#Edit the Hadoop environment script first
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_74        #point JAVA_HOME at the JDK installed above

#Edit core-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/temp</value>
        </property>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://mycluster</value>
        </property>
        <property>
                <name>io.file.buffer.size</name>
                <value>131072</value>
        </property>
</configuration>

#Edit hdfs-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>/data/hdfs/dfs/name</value>    <!-- namenode metadata directory -->
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>/data/hdfs/data</value>        <!-- datanode data directory -->
	</property>
	<property>
		<name>dfs.permissions</name>
		<value>false</value>
	</property>
	<property>
		<name>dfs.nameservices</name>
		<value>mycluster</value>        <!-- must match fs.defaultFS in core-site.xml -->
	</property>
	<property>
		<name>dfs.ha.namenodes.mycluster</name>
		<value>namenode01,namenode02</value>        <!-- the two namenode IDs -->
	</property>
	<property>
		<name>dfs.namenode.rpc-address.mycluster.namenode01</name>
		<value>namenode01:8020</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.mycluster.namenode02</name>
		<value>namenode02:8020</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.mycluster.namenode01</name>
		<value>namenode01:50070</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.mycluster.namenode02</name>
		<value>namenode02:50070</value>
	</property>
	<property>
		<!-- the namenodes write their edits to the journalnodes; list every journalnode host here -->
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485;datanode02:8485;datanode03:8485/mycluster</value>
	</property>
	<property>
		<name>dfs.journalnode.edits.dir</name>
		<value>/data/hdfs/journal</value>    <!-- journalnode directory -->
	</property>
	<property>
		<name>dfs.client.failover.proxy.provider.mycluster</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>sshfence</value>        <!-- fencing method for the failed-over namenode -->
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/hadoop/.ssh/id_rsa</value>    <!-- private key used for SSH fencing between the hosts -->
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.connect-timeout</name>
		<value>6000</value>
	</property>
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>false</value>    <!-- disable automatic failover for now; ZooKeeper-based failover is added later -->
	</property>
	<property>
		<name>dfs.replication</name>
		<value>3</value>        <!-- replication factor; the default is 3 -->
	</property>
	<property>
		<name>dfs.webhdfs.enabled</name>
		<value>true</value>
	</property>
</configuration>
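
Once hdfs-site.xml is saved, the HA settings can be sanity-checked with the getconf tool. A quick optional check, run as the hadoop user on namenode01 (the full path is used because hadoop.sh has not been distributed to the other nodes yet):
/usr/local/hadoop/bin/hdfs getconf -confKey dfs.nameservices
/usr/local/hadoop/bin/hdfs getconf -namenodes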

#Edit yarn-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.resourcemanager.address</name>
		<value>namenode01:8032</value>
	</property>
	<property>
		<name>yarn.resourcemanager.scheduler.address</name>
		<value>namenode01:8030</value>
	</property>
	<property>
		<name>yarn.resourcemanager.resource-tracker.address</name>
		<value>namenode01:8031</value>
	</property>
	<property>
		<name>yarn.resourcemanager.admin.address</name>
		<value>namenode01:8033</value>
	</property>
	<property>
		<name>yarn.resourcemanager.webapp.address</name>
		<value>namenode01:8088</value>
	</property>
	<property>
		<name>yarn.nodemanager.resource.memory-mb</name>
		<value>15360</value>
	</property>
</configuration>

#Edit mapred-site.xml
[hadoop@namenode01 ~]$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<property>
		<name>mapreduce.jobtracker.http.address</name>
		<value>namenode01:50030</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.address</name>
		<value>namenode01:10020</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>namenode01:19888</value>
	</property>
</configuration>

#Edit the slaves file
[hadoop@namenode01 ~]$ cat /usr/local/hadoop/etc/hadoop/slaves
datanode01
datanode02
datanode03

#On namenode01, switch back to the root user and create the data directory
[root@namenode01 ~]# mkdir /data/hdfs
[root@namenode01 ~]# chown hadoop.hadoop /data/hdfs/

#Copy the Hadoop environment variable file to the other 4 machines
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh namenode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode01:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode03:/etc/profile.d/

#Copy the Hadoop installation to the other 4 machines
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ namenode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode01:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode03:/usr/local/
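
The distribution plus the per-node fix-ups shown below (ownership and symlink) can also be done in one loop. A minimal sketch, assuming root SSH to each node (password prompts are fine) and the hostnames above:
for h in namenode02 datanode01 datanode02 datanode03; do
    scp /etc/profile.d/hadoop.sh $h:/etc/profile.d/
    scp -r /usr/local/hadoop-2.5.2/ $h:/usr/local/
    ssh $h 'chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/ && ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop'
done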

#Fix ownership and create the symlink, taking namenode02 as the example
[root@namenode02 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[root@namenode02 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
‘/usr/local/hadoop’ -> ‘/usr/local/hadoop-2.5.2/’
[root@namenode02 ~]# ll /usr/local |grep hadoop
lrwxrwxrwx  1 root   root     24 Apr 28 17:19 hadoop -> /usr/local/hadoop-2.5.2/
drwxr-xr-x  9 hadoop hadoop  139 Apr 28 17:16 hadoop-2.5.2

#Create the data directory
[root@namenode02 ~]# mkdir /data/hdfs
[root@namenode02 ~]# chown -R hadoop.hadoop /data/hdfs/
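
The datanodes need the same directory, since it backs dfs.datanode.data.dir in hdfs-site.xml. A minimal sketch, assuming root SSH to each datanode (password prompts are fine):
for h in datanode01 datanode02 datanode03; do
    ssh $h 'mkdir -p /data/hdfs && chown -R hadoop.hadoop /data/hdfs'
done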

#Check the JDK environment
[root@namenode02 ~]# su - hadoop
Last login: Thu Apr 28 15:12:24 CST 2016 on pts/0
[hadoop@namenode02 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
[hadoop@namenode02 ~]$ which hadoop
/usr/local/hadoop/bin/hadoop

5. Start Hadoop

#Run hadoop-daemon.sh start journalnode on all servers, as the hadoop user
#Only the output from namenode01 is shown
[hadoop@namenode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-hadoop-journalnode-namenode01.out
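
The same start command can be pushed to every node from namenode01 over the passwordless SSH configured earlier. A sketch (the source line is only there so that jps can find the JDK in a non-interactive shell):
for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    ssh $h 'source /etc/profile.d/java.sh; /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode'
done
#confirm that a JournalNode process is running on each host
for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    ssh $h 'source /etc/profile.d/java.sh; jps | grep JournalNode'
done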

#Run the following on namenode01 to format the namenode
[hadoop@namenode01 ~]$ hadoop namenode -format