
Linux Enterprise Training: High-Availability Load Balancing with HAProxy and Pacemaker


Note: the related HAProxy and fence configuration can be found in my earlier blog posts. >_< ~~~

1. Introduction

Pacemaker is a cluster resource manager. It uses the messaging and membership capabilities provided by the cluster infrastructure layer (OpenAIS, Heartbeat, or Corosync) to detect node- and resource-level failures and recover from them, maximizing the availability of cluster services (also called resources). Corosync is part of the cluster management suite; the way it passes messages, and the protocols it uses, can be defined through a simple configuration file. [1]

Corosync is usually combined with a resource manager. Although it is a fairly recent release, it is not a genuinely new piece of software: back in 2002 there was a project called OpenAIS which, having grown too large, was split into two sub-projects. The sub-project that implements HA heartbeat message transport is Corosync, and roughly 60% of its code comes from OpenAIS.

Corosync on its own provides complete HA functionality, but implementing richer, more complex features still requires OpenAIS. Corosync is the direction of future development and is what new projects generally adopt. On the management side, hb_gui offers good graphical HA administration; other graphical tools include the RHCS suite (luci + ricci) and the Java-based LCMC cluster management tool.

2. Experiment Environment

HAProxy and Pacemaker servers:

server1: 172.25.2.1/24
server2: 172.25.2.2/24

Backend servers:

server3: 172.25.2.3/24
server4: 172.25.2.4/24
Physical host: 172.25.2.250/24

Download link for the packages used in this experiment: /s/1nCyPkqyomRDHjWG__X0lcw (password: wmxq)

3. Experiment

3.1 Pacemaker + Corosync configuration

The environment on server2 should be identical to server1's; the HAProxy configuration parameters are covered in the previous post.

3.1.1 Configure the same HAProxy environment on server2 as on server1

[root@server2 x86_64]# rpm -ivh haproxy-1.6.11-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:haproxy                ########################################### [100%]
[root@server1 ~]# scp /etc/haproxy/haproxy.cfg server2:/etc/haproxy/haproxy.cfg
root@server2's password:
haproxy.cfg                                   100% 1897
[root@server2 x86_64]# /etc/init.d/haproxy start
Starting haproxy:                                          [  OK  ]
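The haproxy.cfg being copied here comes from the previous post and is not reproduced in this one. For reference, a minimal sketch of what such a configuration could look like for this topology (the section names, balance algorithm, and check options are assumptions, not the original file):

global
    daemon                              # run haproxy in the background
    maxconn 4096
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
listen web
    bind *:80                           # the VIP 172.25.2.100 will answer on this port
    balance roundrobin                  # assumed algorithm; the original may differ
    server web1 172.25.2.3:80 check     # server3, assumed to run a web server on port 80
    server web2 172.25.2.4:80 check     # server4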

3.1.2 Install the pacemaker and corosync packages

[root@server2 ~]# yum install -y pacemaker corosync
[root@server1 ~]# cd /etc/corosync/
[root@server1 corosync]# ls
corosync.conf.example  corosync.conf.example.udpu  service.d  uidgid.d
[root@server1 corosync]# cp corosync.conf.example corosync.conf    // copy the example configuration
[root@server1 corosync]# vim corosync.conf

[root@server1 corosync]# scp corosync.conf server2:/etc/corosync/    // copy the configuration to server2
root@server2's password:
corosync.conf                                 100%  480     0.5KB/s   00:00
[root@server1 ~]# yum install crmsh-1.2.6-0.rc2.2.1.x86_64.rpm pssh-2.3.1-2.1.x86_64.rpm -y    // install the management tools
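The edit made in vim is not shown above. For an RHEL 6 cluster like this one, the two essential changes are binding the totem ring to the 172.25.2.0/24 network and telling corosync to start Pacemaker in plugin mode, which is what produces the "classic openais (with plugin)" stack seen in the monitor below. A sketch of the relevant sections (the multicast address and port are assumptions):

totem {
        version: 2
        secauth: off
        interface {
                ringnumber: 0
                bindnetaddr: 172.25.2.0    # the cluster network
                mcastaddr: 226.94.1.1      # assumed multicast address
                mcastport: 5405            # assumed port
        }
}
service {
        name: pacemaker                    # launch pacemaker from corosync (ver: 0 = plugin mode)
        ver: 0
}

After editing, start the service on both nodes with /etc/init.d/corosync start.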

3.1.3 View the pacemaker configuration

[root@server1 ~]# crm    // enter the management shell
crm(live)# configure
crm(live)configure# show    // view the default configuration
node server1
node server2
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-14.el6-368c726" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2"
crm(live)configure#

The cluster state can also be monitored in real time from a node:

[root@server1 ~]# crm_mon    // bring up the monitor
Last updated: Sat Aug 4 15:07:13
Last change: Sat Aug 4 15:00:04 via crmd on server1
Stack: classic openais (with plugin)
Current DC: server1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
// press Ctrl+C to exit the monitor

3.1.4 Disable STONITH

[root@server1 ~]# crm    // enter the management shell
crm(live)# configure
crm(live)configure# property stonith-enabled=false
// corosync enables stonith by default, but this cluster has no stonith device yet,
// so we disable it for now
crm(live)configure# commit    // save

Note: every policy change must be committed, otherwise it does not take effect.

3.1.5 Add the VIP

[root@server2 rpmbuild]# crm_verify -VL    // check the syntax
[root@server2 ~]# crm    // enter the management shell
crm(live)# configure
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=172.25.2.100 cidr_netmask=24 op monitor interval=1min    // add the VIP
crm(live)configure# commit    // save

// the monitor now shows:
Last updated: Sat Aug 4 15:26:06
Last change: Sat Aug 4 15:25:34 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured
Online: [ server1 server2 ]
vip (ocf::heartbeat:IPaddr2): Started server2    // the VIP has been placed on server2
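To double-check that the IPaddr2 agent really added the address, inspect the interface on the node holding the VIP (a quick sanity check, assuming the cluster network sits on eth0; the output line is illustrative):

[root@server2 ~]# ip addr show eth0 | grep 172.25.2.100
    inet 172.25.2.100/24 scope global secondary eth0    // the address added by the cluster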

[root@server2 ~]# /etc/init.d/corosync stop    // stop the service on server2
Signaling Corosync Cluster Engine (corosync) to terminate: [  OK  ]

server1:
Last updated: Sat Aug 4 15:28:31
Last change: Sat Aug 4 15:25:34 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server1 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured
Online: [ server1 ]
OFFLINE: [ server2 ]
vip (ocf::heartbeat:IPaddr2): Started server1    // the VIP has failed over to server1

[root@server2 ~]# /etc/init.d/corosync start    // start the service again
Starting Corosync Cluster Engine (corosync): [  OK  ]

server1:
Last updated: Sat Aug 4 15:31:27
Last change: Sat Aug 4 15:25:34 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured
Online: [ server1 server2 ]    // server2 is back online
vip (ocf::heartbeat:IPaddr2): Started server1

3.1.6 Set the quorum policy

In a two-node cluster the surviving node loses quorum the moment its peer goes down, and by default Pacemaker would then stop all resources; setting no-quorum-policy=ignore keeps services running in that situation.

[root@server2 x86_64]# crm
crm(live)# configure
crm(live)configure# show
node server1
node server2
primitive vip ocf:heartbeat:IPaddr2 \
        params ip="172.25.2.100" cidr_netmask="24" \
        op monitor interval="1min"
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-14.el6-368c726" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2" \
        stonith-enabled="false"
crm(live)configure# property no-quorum-policy=ignore    // ignore loss of quorum
crm(live)configure# verify    // check the syntax
crm(live)configure# commit    // save

3.1.7 Add haproxy as a cluster resource

[root@server1 ~]# crm
crm(live)# configure
crm(live)configure# primitive haproxy lsb:haproxy op monitor interval=1min
crm(live)configure# commit

// the monitor shows that haproxy is now running on server1:
Last updated: Sat Aug 4 15:45:04
Last change: Sat Aug 4 15:44:58 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ server1 server2 ]
vip (ocf::heartbeat:IPaddr2): Started server2
haproxy (lsb:haproxy): Started server1
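The lsb:haproxy class means Pacemaker drives /etc/init.d/haproxy directly, so the init script must be LSB-compliant: status has to exit 0 while the service runs and 3 while it is stopped, otherwise the monitor operation misbehaves. A quick check worth running before adding the resource (a sketch, not part of the original post):

[root@server1 ~]# /etc/init.d/haproxy stop
[root@server1 ~]# /etc/init.d/haproxy status; echo $?    // an LSB-compliant script exits 3 here
[root@server1 ~]# /etc/init.d/haproxy start
[root@server1 ~]# /etc/init.d/haproxy status; echo $?    // and exits 0 here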

3.1.8 Create a group combining the VIP and haproxy

The monitor above shows the VIP on server2 but haproxy on server1; putting the two resources in a group forces them to run together on one node and fail over as a unit.

[root@server1 ~]# crm
crm(live)# configure
crm(live)configure# group hagroup vip haproxy
crm(live)configure# commit

Last updated: Sat Aug 4 15:46:21
Last change: Sat Aug 4 15:46:02 via cibadmin on server1
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ server1 server2 ]
Resource Group: hagroup    // the new group
    vip (ocf::heartbeat:IPaddr2): Started server2
    haproxy (lsb:haproxy): Started server1

3.1.9 Test failover

[root@server1 ~]# crm node standby    // put the current node into standby
Last updated: Sat Aug 4 16:05:30
Last change: Sat Aug 4 16:05:26 via crm_attribute on server1
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Node server1: standby
Online: [ server2 ]    // only server2 is still online
Resource Group: hagroup
    vip (ocf::heartbeat:IPaddr2): Started server2
    haproxy (lsb:haproxy): Started server2    // the whole group now runs on server2

[root@server1 ~]# crm node online    // bring the node back online
Last updated: Sat Aug 4 16:15:27
Last change: Sat Aug 4 16:15:11 via crm_attribute on server1
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
[root@server2 ~]# crm_mon
Online: [ server1 server2 ]    // both nodes are online again
Resource Group: hagroup
    vip (ocf::heartbeat:IPaddr2): Started server2
    haproxy (lsb:haproxy): Started server2

3.2 Configure fencing

1. On server1 and server2:

[root@server1 ~]# yum install fence-virt-0.2.3-15.el6.x86_64 -y    // install the fence agent

On the physical host:

[root@foundation2 cluster]# scp fence_xvm.key server1:/etc/cluster/    // copy the key
root@server1's password:
fence_xvm.key                                 100%  128     0.1KB/s
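Generating fence_xvm.key and configuring fence_virtd on the physical host are covered in the earlier post mentioned at the top; roughly, the steps look like this (a sketch, with the package set and service command as assumptions about the host):

[root@foundation2 ~]# yum install -y fence-virtd fence-virtd-libvirt fence-virtd-multicast
[root@foundation2 ~]# mkdir -p /etc/cluster
[root@foundation2 ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1    // create a 128-byte key
[root@foundation2 ~]# fence_virtd -c    // interactive setup: multicast listener, libvirt backend,
                                        // and the bridge that faces the 172.25.2.0/24 network
[root@foundation2 ~]# systemctl restart fence_virtd

The key must end up on both nodes, so copy it to server2:/etc/cluster/ as well.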

2. Turn STONITH back on for fencing

[root@server1 ~]# crm
crm(live)# configure
crm(live)configure# show
node server1 \
        attributes standby="off"
node server2
primitive haproxy lsb:haproxy \
        op monitor interval="1min"
primitive vip ocf:heartbeat:IPaddr2 \
        params ip="172.25.2.100" cidr_netmask="24" \
        op monitor interval="1min"
group hagroup vip haproxy
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-14.el6-368c726" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
crm(live)configure# property stonith-enabled=true    // turn fencing back on
crm(live)configure# commit

3. Create the vmfence resource

(Pressing Tab in the crm shell after "primitive vmfence" lists the available resource classes — lsb:, ocf:, service:, stonith: — then the stonith agents fence_legacy, fence_pcmk, fence_virt and fence_xvm, and finally the fence_xvm parameters. The completed command is:)

[root@server2 ~]# crm
crm(live)# configure
crm(live)configure# primitive vmfence stonith:fence_xvm params pcmk_host_map="server1:westos1;server2:westos2" op monitor interval=1min
// the name after each node must match that virtual machine's name on the physical host
crm(live)configure# commit

Monitor:

Last updated: Sat Aug 4 16:36:52
Last change: Sat Aug 4 16:36:00 via cibadmin on server2
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ server1 server2 ]
Resource Group: hagroup
    vip (ocf::heartbeat:IPaddr2): Started server2
    haproxy (lsb:haproxy): Started server2
vmfence (stonith:fence_xvm): Started server1    // the fence resource has been added
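Before crashing a node it is worth verifying the fencing channel itself. fence_xvm on a cluster node multicasts to fence_virtd on the physical host, so listing the VMs proves the key and network path work (a sanity check not shown in the original post; <uuid> stands for the real domain UUIDs):

[root@server1 ~]# fence_xvm -o list    // uses /etc/cluster/fence_xvm.key by default
westos1    <uuid>    on
westos2    <uuid>    on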

3.3 Test

[root@server1 ~]# echo c > /proc/sysrq-trigger    // crash the kernel on server1

server2:
Last updated: Sat Aug 4 16:39:20
Last change: Sat Aug 4 16:36:00 via cibadmin on server2
Stack: classic openais (with plugin)
Current DC: server2 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ server2 ]
OFFLINE: [ server1 ]
Resource Group: hagroup
    vip (ocf::heartbeat:IPaddr2): Started server2
    haproxy (lsb:haproxy): Started server2
vmfence (stonith:fence_xvm): Started server2    // the fence resource has moved to server2

server1 is fenced and reboots.

[root@server1 ~]# /etc/init.d/corosync start    // rejoin the cluster after the reboot
Starting Corosync Cluster Engine (corosync): [  OK  ]

Last updated: Sat Aug 4 16:40:58
Last change: Sat Aug 4 16:36:00 via cibadmin on server2
Stack: classic openais (with plugin)
Current DC: server2 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ server1 server2 ]
Resource Group: hagroup
    vip (ocf::heartbeat:IPaddr2): Started server2
    haproxy (lsb:haproxy): Started server2
vmfence (stonith:fence_xvm): Started server1    // vmfence has moved back to server1

4. Access the VIP
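The original post stops here without showing the final check. Assuming server3 and server4 each serve a page identifying themselves (for example a one-line index.html under Apache), requests to the VIP from the physical host should alternate between the two backends; the output below is illustrative, not captured from the lab:

[root@foundation2 ~]# curl 172.25.2.100
server3
[root@foundation2 ~]# curl 172.25.2.100
server4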
