SaltStack Cluster Setup, Part 2

Function module: keepalived

Before writing the Salt states, pick one host and do a test install from source.

http://www.keepalived.org/software/keepalived-1.2.19.tar.gz

[[email protected] tools]# tar xf keepalived-1.2.19.tar.gz

[[email protected] tools]# cd keepalived-1.2.19

[[email protected] keepalived-1.2.19]# ./configure --prefix=/usr/local/keepalived --disable-fwmark

[[email protected] keepalived-1.2.19]# make && make install

keepalived-1.2.19/keepalived/etc/init.d/keepalived.init       # init script

keepalived-1.2.19/keepalived/etc/keepalived/keepalived.conf    # configuration file

Set up the keepalived module directory and its files

[[email protected] ~]# mkdir /srv/salt/prod/keepalived

[[email protected] ~]# mkdir /srv/salt/prod/keepalived/files

[[email protected] keepalived]# cp ~/tools/keepalived-1.2.19.tar.gz /srv/salt/prod/keepalived/files/

[[email protected] tools]# cp keepalived-1.2.19/keepalived/etc/init.d/keepalived.init /srv/salt/prod/keepalived/files/        # copy the init script

[[email protected] tools]# cp keepalived-1.2.19/keepalived/etc/keepalived/keepalived.conf /srv/salt/prod/keepalived/files/           # copy the config file

[[email protected] tools]# cp keepalived-1.2.19/keepalived/etc/init.d/keepalived.sysconfig /srv/salt/prod/keepalived/files/

[[email protected] tools]# cd /srv/salt/prod/keepalived/files/

[[email protected] files]# vim keepalived.init     # fix the daemon path in the init script

daemon /usr/local/keepalived/sbin/keepalived ${KEEPALIVED_OPTIONS}
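The same edit can be scripted. This is a hypothetical sed equivalent of the manual change above, shown against a stub copy of the stock daemon line, since the real init script may differ slightly:

```shell
# Write a stub of the stock init-script line, then rewrite the daemon call
# to use the source-install binary path from the ./configure prefix above.
printf 'daemon keepalived ${KEEPALIVED_OPTIONS}\n' > /tmp/keepalived.init
sed -i 's|daemon keepalived |daemon /usr/local/keepalived/sbin/keepalived |' /tmp/keepalived.init
cat /tmp/keepalived.init
```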

1. The keepalived function module

[[email protected] keepalived]# cd /srv/salt/prod/keepalived/

[[email protected] keepalived]# cat install.sls

include:
  - pkg.pkg-init

keepalived-install:
  file.managed:
    - name: /usr/local/src/keepalived-1.2.19.tar.gz
    - source: salt://keepalived/files/keepalived-1.2.19.tar.gz
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: cd /usr/local/src/ && tar xf keepalived-1.2.19.tar.gz && cd keepalived-1.2.19 && ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install
    - unless: test -d /usr/local/keepalived
    - require:
      - pkg: pkg-init
      - file: keepalived-install

keepalived-init:
  file.managed:
    - name: /etc/init.d/keepalived
    - source: salt://keepalived/files/keepalived.init
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: chkconfig --add keepalived
    - unless: chkconfig --list | grep keepalived
    - require:
      - file: keepalived-init

/etc/sysconfig/keepalived:
  file.managed:
    - source: salt://keepalived/files/keepalived.sysconfig
    - user: root
    - group: root
    - mode: 644

/etc/keepalived:
  file.directory:
    - user: root
    - group: root
    - mode: 755

[[email protected] files]# salt '*' state.sls keepalived.install env=prod     # run it manually to test
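The `unless` guard in keepalived-install is what makes the state idempotent: the compile runs only while `test -d /usr/local/keepalived` fails, so a second run skips the build. A minimal sketch of that behaviour, simulated with a temporary directory:

```shell
# Simulate Salt's `unless` semantics: the command fires only when the
# guard test fails (install directory absent).
prefix=$(mktemp -d)/keepalived
build_unless() { test -d "$prefix" || echo "building"; }
build_unless                 # first run: directory absent, build would run
mkdir -p "$prefix"           # pretend make install created the prefix
build_unless                 # second run: guard passes, nothing happens
```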

2. The keepalived business module

[[email protected] ~]# cd /srv/salt/prod/cluster/files/

[[email protected] files]# cat haproxy-outside-keepalived.cfg    # keepalived config file; it uses Jinja variables

# configuration file for keepalived

global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id {{ROUTEID}}
}

vrrp_instance haproxy_ha {
    state {{STATEID}}
    interface eth2
    virtual_router_id 36
    priority {{PRIORITYID}}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.130
    }
}
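When Salt manages this file with `- template: jinja`, it fills the three placeholders from the values passed in the state. As a rough stand-in (a simple sed substitution, not real Jinja rendering), the expansion works like this:

```shell
# Hypothetical stand-in for Salt's Jinja rendering: substitute the three
# placeholders the template declares with per-node values.
render() { sed -e "s/{{ROUTEID}}/$1/" -e "s/{{STATEID}}/$2/" -e "s/{{PRIORITYID}}/$3/"; }
echo 'router_id {{ROUTEID}}; state {{STATEID}}; priority {{PRIORITYID}}' \
    | render haproxy_ha MASTER 150
```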

[[email protected] ~]# cd /srv/salt/prod/cluster/

[[email protected] cluster]# cat haproxy-outside-keepalived.sls

include:
  - keepalived.install

keepalived-service:
  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://cluster/files/haproxy-outside-keepalived.cfg
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    {% if grains['fqdn'] == 'node1' %}
    - ROUTEID: haproxy_ha
    - STATEID: MASTER
    - PRIORITYID: 150
    {% elif grains['fqdn'] == 'node2' %}
    - ROUTEID: haproxy_ha
    - STATEID: BACKUP
    - PRIORITYID: 100
    {% endif %}
  service.running:
    - name: keepalived
    - enable: True
    - watch:
      - file: keepalived-service

[[email protected] cluster]# salt '*' state.sls cluster.haproxy-outside-keepalived env=prod  # test it
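The reason node1 ends up holding the VIP is VRRP's election rule: the instance advertising the highest priority becomes MASTER, here 150 (node1) against 100 (node2). A toy illustration of that rule:

```shell
# Toy VRRP election: given name:priority pairs, the highest priority wins
# MASTER. Mirrors PRIORITYID 150 (node1) vs 100 (node2) in the state above.
elect() { printf '%s\n' "$@" | sort -t: -k2,2 -rn | head -n1 | cut -d: -f1; }
elect node1:150 node2:100
```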


Assign the keepalived module to specific servers

[[email protected] salt]# cat /srv/salt/base/top.sls

base:
  '*':
    - init.env_init

prod:
  'node1':
    - cluster.haproxy-outside
    - cluster.haproxy-outside-keepalived
  'node2':
    - cluster.haproxy-outside
    - cluster.haproxy-outside-keepalived

[[email protected] salt]# salt '*' state.highstate    # if this succeeds, keepalived + haproxy is in place

Problem encountered: the keepalived virtual IP (VIP) never gets assigned.

Checking the log with cat /var/log/messages turns up this line:

Aug 11 15:10:12 node1 Keepalived_vrrp[29442]: VRRP_Instance(haproxy_ha{) sending 0 priority

The opening brace had been glued onto the instance name in the config; adding a space after haproxy_ha fixed it:

vrrp_instance haproxy_ha {
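A quick way to catch this class of mistake before restarting keepalived is a grep heuristic (my own check, not a keepalived tool) for a vrrp_instance line whose brace touches the name:

```shell
# Flag vrrp_instance lines where a non-space character directly precedes
# '{' -- the exact shape that produced VRRP_Instance(haproxy_ha{) above.
check_braces() { if grep -q 'vrrp_instance .*[^ ]{' "$1"; then echo BAD; else echo OK; fi; }
printf 'vrrp_instance haproxy_ha{\n' > /tmp/bad.conf
printf 'vrrp_instance haproxy_ha {\n' > /tmp/good.conf
check_braces /tmp/bad.conf
check_braces /tmp/good.conf
```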

Posted: 2024-10-13 04:38:23
