A User's Guide to keepalived: Basic Usage

As the name suggests, keepalived is about keeping a host alive: it is an HA (high availability) cluster solution that uses high availability / hot standby to prevent single points of failure. keepalived implements the VRRP protocol as a daemon on Linux hosts; it can also generate ipvs rules automatically from its configuration file and perform health checks on each RS (real server).

About VRRP:

Before digging into keepalived, we first need to understand VRRP. VRRP (Virtual Router Redundancy Protocol) is a fault-tolerance protocol for making routers highly available (other devices can use it too): N routers providing the same function form a Router Group, with one master and several backups, all organized under the same VRID. The VRID (virtual router identifier) is what defines a VRRP group; only devices sharing a VRID form one virtual router.

A VRRP group owns one virtual IP (VIP). Each device still has its own real IP; the VIP is simply an extra address attached to the master and exposed to the outside world. The device holding the VIP, the master, is the one that actually answers ARP requests and forwards IP packets, while the other routers in the group stand by as backups. The master sends multicast advertisements; if a backup does not receive an advertisement within the timeout, it assumes the master is dead, and a new master is elected from the backups based on VRRP priority, keeping the router highly available.

The description above is for routers, but keepalived does essentially the same thing on hosts: two machines define a virtual IP, one of them becomes the master and holds the VIP, and when the master goes down, the slave, which no longer receives the master's advertisements, takes over the VIP. Note: there is only one VIP, and it is the address exposed to the outside world.
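If you are curious what those advertisements look like on the wire: VRRP is IP protocol 112 and the master multicasts to 224.0.0.18, so a capture along these lines (the interface name is just an example taken from the hosts used later in this post) will show the current master advertising roughly once per second by default:

[root@localhost ~] tcpdump -nn -i enp0s3 host 224.0.0.18
#you should see one VRRP advertisement from the current master about every second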

Now that we know what VRRP is, let's look at the architecture of keepalived itself.

How keepalived is structured:

To make things easier for myself (read: because I'm lazy), I found a perfectly fitting diagram on Baidu and shamelessly reused it. Actually, it turns out the diagram is from the official site... so Baidu copied it too and even slapped its own watermark on it! But that's not today's topic. Let me walk through the diagram:

keepalived consists of one main process (the Control Plane), which spawns two child processes (the Checkers and the VRRP Stack):

VRRP Stack: the core child process that implements keepalived's main functionality.

Checkers: mainly responsible for checking the health of the real servers; several methods are supported, such as TCP_CHECK, HTTP_GET, SSL_GET and MISC_CHECK.

Besides these two important child processes, there are a few supporting components:

Control component: the main process (Control Plane). Besides spawning the two worker children, it is also the configuration file parser.

ipvs wrapper: keepalived was originally created for ipvs, and the ipvs wrapper is the component that generates ipvs rules (you do not need ipvsadm installed for the rules to be generated).

Watchdog: the VRRP Stack or the Checkers process may well crash, so a watchdog process monitors the health of the two children (Checkers and VRRP Stack) on behalf of the main process.
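Once keepalived is running (we start it later in this post), you can see this process layout for yourself; a quick way on the CentOS 7 host used below is something like:

[root@localhost ~] ps -ef | grep '[k]eepalived'
#you should normally see three processes: the parent (control plane)
#plus the two children it forked (the VRRP stack and the checkers)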

That is probably enough keepalived theory; you must be bored by now, so let's move on to some examples of how keepalived is actually used. The good news is that configuring keepalived is quite painless!

Installing keepalived:

Installation is trivial: keepalived is already in the base repository, so a plain yum install is all you need.

[root@localhost ~] yum install keepalived
Loaded plugins: fastestmirror
base | 3.6 kB 00:00:00
extras | 3.4 kB 00:00:00
updates | 3.4 kB 00:00:00
=======================================================================================================
 Package Arch Version Repository Size
=======================================================================================================
Installing:
 keepalived x86_64 1.2.13-8.el7 base 224 k
Installing for dependencies:
 lm_sensors-libs x86_64 3.4.0-4.20160601gitf9185e5.el7 base 41 k
 net-snmp-agent-libs x86_64 1:5.7.2-24.el7_2.1 base 702 k
 net-snmp-libs x86_64 1:5.7.2-24.el7_2.1 base 747 k
 perl x86_64 4:5.16.3-291.el7 base 8.0 M
 perl-Carp noarch 1.26-244.el7 base 19 k
 perl-Encode x86_64 2.51-7.el7 base 1.5 M
 perl-Exporter noarch 5.68-3.el7 base 28 k
 perl-File-Path noarch 2.09-2.el7 base 26 k
 perl-File-Temp noarch 0.23.01-3.el7 base 56 k
 perl-Filter x86_64 1.49-3.el7 base 76 k
 perl-Getopt-Long noarch 2.40-2.el7 base 56 k
 perl-HTTP-Tiny noarch 0.033-3.el7 base 38 k
 perl-PathTools x86_64 3.40-5.el7 base 82 k
 perl-Pod-Escapes noarch 1:1.04-291.el7 base 51 k
 perl-Pod-Perldoc noarch 3.20-4.el7 base 87 k
 perl-Pod-Simple noarch 1:3.28-4.el7 base 216 k
 perl-Pod-Usage noarch 1.63-3.el7 base 27 k
 perl-Scalar-List-Utils x86_64 1.27-248.el7 base 36 k
 perl-Socket x86_64 2.010-4.el7 base 49 k
 perl-Storable x86_64 2.45-3.el7 base 77 k
 perl-Text-ParseWords noarch 3.29-4.el7 base 14 k
 perl-Time-HiRes x86_64 4:1.9725-3.el7 base 45 k
 perl-Time-Local noarch 1.2300-2.el7 base 24 k
 perl-constant noarch 1.27-2.el7 base 19 k
 perl-libs x86_64 4:5.16.3-291.el7 base 688 k
 perl-macros x86_64 4:5.16.3-291.el7 base 43 k
 perl-parent noarch 1:0.225-244.el7 base 12 k
 perl-podlators noarch 2.5.1-3.el7 base 112 k
 perl-threads x86_64 1.87-4.el7 base 49 k
 perl-threads-shared x86_64 1.43-6.el7 base 39 k
Transaction Summary
=======================================================================================================
Install 1 Package (+30 Dependent packages)
#most of the installed packages are Perl dependencies; note that keepalived itself is written in C, not Perl — the Perl packages are only pulled in as dependencies.

With keepalived installed, the next step, per time-honored tradition, is to look at the configuration file. Note that if you want to check the keepalived version, keepalived -v will tell you.
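Just as a quick sanity check after installation (the exact output wording differs between builds):

[root@localhost ~] keepalived -v
#prints the compiled-in version, e.g. a 1.2.13 build from this CentOS 7 base repo
[root@localhost ~] systemctl enable keepalived   #optional: have it start on boot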

About the keepalived configuration file:

The default configuration file is /etc/keepalived/keepalived.conf; in fact keepalived has only this one configuration file. Because keepalived is simple by design yet practical, there is not much to configure. Opening the file, you will find it already ships with an example:

! Configuration File for keepalived
global_defs {
 notification_email {
 acassen@firewall.loc
 failover@firewall.loc
 sysadmin@firewall.loc
 }
 notification_email_from Alexandre.Cassen@firewall.loc
 smtp_server 192.168.200.1
 smtp_connect_timeout 30
 router_id LVS_DEVEL
}

vrrp_instance VI_1 {
 state MASTER
 interface eth0
 virtual_router_id 51
 priority 100
 advert_int 1
 authentication {
 auth_type PASS
 auth_pass 1111
 }
 virtual_ipaddress {
 192.168.200.16
 192.168.200.17
 192.168.200.18
 }
}

virtual_server 192.168.200.100 443 {
 delay_loop 6
 lb_algo rr
 lb_kind NAT
 nat_mask 255.255.255.0
 persistence_timeout 50
 protocol TCP

 real_server 192.168.201.100 443 {
 weight 1
 SSL_GET {
 url {
 path /
 digest ff20ad2481f97b1754ef3e12ecd3a9cc
 }
 url {
 path /mrtg/
 digest 9b3a0c85a887a256d6939da88aabd8cd
 }
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
 }
 }
}
#the rest is omitted ...

The configuration syntax is just like nginx's block style, so it is easy to divide the contents of the file into three categories:

① Global configuration: mostly mail-notification settings... I'll come back to these later.

② VRRPD configuration: configuration of the VRRP daemon. This section is further split into two sub-sections:

vrrp instance: defines a VRRP instance (this is where the virtual IP address is defined and bound to an interface).

vrrp synchronization group: defines a VRRP sync group (see the sketch right after this list).

③ LVS configuration: defines the hosts proxied to on the back end (it can appear more than once); this is really the virtual server definition, and don't forget the nested real server section for the back-end servers.
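For completeness, a vrrp synchronization group ties several instances together so they fail over as one unit. A minimal sketch (VI_1 and VI_2 are placeholder instance names; we will not use a sync group in the examples below):

vrrp_sync_group VG_1 {
 group {
 VI_1   #names of the vrrp_instance blocks that must fail over together
 VI_2
 }
}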

Alright, now that we have seen the sample configuration file and its sections, on to actually using keepalived!

keepalived single-master configuration:

First, the topology diagram:

Configuration on the master:

[root@localhost ~] vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
 notification_email { #which mailbox should keepalived alert mail go to?
 root@localhost
 }
 notification_email_from keepalive@localhost #the From address keepalived uses when sending mail
 smtp_server 127.0.0.1 #address or hostname of the mail server
 smtp_connect_timeout 30
 router_id LVS_DEVEL #an identifier for this node
}

vrrp_instance test1 {
 state MASTER #initial state; MASTER on this node
 interface enp0s3 #interface to bind the instance to
 virtual_router_id 16 #the VRRP virtual router id; must be identical on every member of the group!
 priority 100 #node priority, default is 100
 advert_int 1
 authentication { #authentication settings
 auth_type PASS #simple password authentication
 auth_pass b89a0c18 #a random password; must match on both nodes
 }
 virtual_ipaddress { #the VIP address(es)
 172.16.1.50/24 dev enp0s3
 }
}
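
Rather than typing the backup's file from scratch, you can simply copy the master's configuration over and change two fields; this is just a convenience step, assuming the nodes can reach each other over SSH as root (as the test commands below do):

[root@localhost ~] scp /etc/keepalived/keepalived.conf 172.16.1.30:/etc/keepalived/
#then on 172.16.1.30 edit only two lines: state MASTER -> BACKUP, priority 100 -> 99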

Then configure the slave:

! Configuration File for keepalived

global_defs {
 notification_email {
 root@localhost
 }
 notification_email_from keepalive@localhost
 smtp_server 127.0.0.1
 smtp_connect_timeout 30
 router_id LVS_DEVEL
}

vrrp_instance test1 {
 state BACKUP #changed to BACKUP on this node.
 interface enp0s3
 virtual_router_id 16
 priority 99 #lower the priority a bit; anything below the master's works.
 advert_int 1
 authentication {
 auth_type PASS
 auth_pass b89a0c18
 }
 virtual_ipaddress {
 172.16.1.50/24 dev enp0s3
  }
}

Now let's test it:

[root@localhost ~] systemctl start keepalived ; ssh 172.16.1.30 "systemctl start keepalived"
#first start the service on both nodes
[root@localhost ~] ip addr show ; ssh 172.16.1.30 "ip addr show"
#then check the IP addresses on both
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 08:00:27:ac:9d:0e brd ff:ff:ff:ff:ff:ff
 inet 172.16.1.10/24 brd 172.16.1.255 scope global enp0s3
 valid_lft forever preferred_lft forever
#see, the master holds the VIP and can serve outside traffic
 inet 172.16.1.50/24 scope global secondary enp0s3
 valid_lft forever preferred_lft forever
 inet6 fe80::a00:27ff:feac:9d0e/64 scope link
 valid_lft forever preferred_lft forever
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 08:00:27:d4:d3:0d brd ff:ff:ff:ff:ff:ff
 inet 172.16.1.30/24 brd 172.16.1.255 scope global enp0s3
#the slave does not hold the VIP, so it is just a standby backup
 valid_lft forever preferred_lft forever
 inet6 fe80::a00:27ff:fed4:d30d/64 scope link
 valid_lft forever preferred_lft forever

Next, stop the service on the master node and see what happens:

[root@localhost ~] systemctl stop keepalived
#stop the service on the master node
[root@localhost ~] ip addr show ; ssh 172.16.1.30 "ip addr show"
#check the state again:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 08:00:27:ac:9d:0e brd ff:ff:ff:ff:ff:ff
 inet 172.16.1.10/24 brd 172.16.1.255 scope global enp0s3
#the VIP has been taken away from the master!
 valid_lft forever preferred_lft forever
 inet6 fe80::a00:27ff:feac:9d0e/64 scope link
 valid_lft forever preferred_lft forever
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 08:00:27:d4:d3:0d brd ff:ff:ff:ff:ff:ff
 inet 172.16.1.30/24 brd 172.16.1.255 scope global enp0s3
 valid_lft forever preferred_lft forever
 inet 172.16.1.50/24 scope global secondary enp0s3
#the backup has picked up the .50 VIP!
 valid_lft forever preferred_lft forever
 inet6 fe80::a00:27ff:fed4:d30d/64 scope link
 valid_lft forever preferred_lft forever
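
If you want to see the takeover from the backup node's point of view rather than just the address list, keepalived logs its state transitions to syslog; on CentOS 7 something like this should do (the exact message wording can vary a little between versions):

[root@localhost ~] ssh 172.16.1.30 "grep Keepalived_vrrp /var/log/messages | tail"
#expect lines along the lines of:
#  Keepalived_vrrp[...]: VRRP_Instance(test1) Transition to MASTER STATE
#  Keepalived_vrrp[...]: VRRP_Instance(test1) Entering MASTER STATE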

See, the single-master model really is that easy, and it is also the model most commonly used in production!
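One behavioral detail worth noting: with the configuration above, the original master grabs the VIP back as soon as its keepalived comes up again (preemption). If you would rather have the VIP stay put until the current holder actually fails, keepalived offers nopreempt; a rough sketch of how the higher-priority node would be written (nopreempt only takes effect when the initial state is BACKUP):

vrrp_instance test1 {
 state BACKUP    #nopreempt requires the initial state to be BACKUP
 nopreempt       #do not snatch the VIP back after this node recovers
 priority 100    #still the higher priority
 ...             #everything else stays as before
}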

With the single-master model done, let's see what a dual-master model looks like:

Dual-master example:

The single-master model defines one VRRP VIP, but some shops may not have a back-end proxy layer at all: they run two applications on two servers and want each server to back up the other. That is where the dual-master model comes in, with one VIP per application. The architecture diagram is as follows:

The two instances carry different applications but back each other up; when one host fails, all applications are handed over to the other host. We will use the same two hosts as before, 172.16.1.10 and 172.16.1.30.

Configuration on 172.16.1.10:

[root@localhost ~] vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
 notification_email {
 root@localhost
 }
 notification_email_from keepalive@localhost 
 smtp_server 127.0.0.1 
 smtp_connect_timeout 30
 router_id LVS_DEVEL 
}

vrrp_instance test1 {
 state MASTER 
 interface enp0s3 
 virtual_router_id 16 
 priority 100 
 advert_int 1 
 authentication { 
 auth_type PASS 
 auth_pass b89a0c18 
 }
 virtual_ipaddress { 
 172.16.1.50/24 dev enp0s3
 }
}

vrrp_instance test2 {
#add a second instance, test2; on this node it is the BACKUP
 state BACKUP
 interface enp0s3
 virtual_router_id 17
 priority 99
 authentication {
 auth_type PASS
 auth_pass 075bf718
 }
 virtual_ipaddress {
 172.16.1.60/24 dev enp0s3
 }
}

Configuration on 172.16.1.30:

! Configuration File for keepalived

global_defs {
 notification_email {
 root@localhost
 }
 notification_email_from keepalive@localhost
 smtp_server 127.0.0.1
 smtp_connect_timeout 30
 router_id LVS_DEVEL
}

vrrp_instance test1 { 
#just swap state and priority relative to the other node; nothing else needs to change
 state BACKUP
 interface enp0s3
 virtual_router_id 16
 priority 99
 advert_int 1
 authentication {
 auth_type PASS
 auth_pass b89a0c18
 }
 virtual_ipaddress {
 172.16.1.50/24 dev enp0s3
 }
}

vrrp_instance test2 {
 state MASTER
 interface enp0s3
 virtual_router_id 17
 priority 100
 authentication {
 auth_type PASS
 auth_pass 075bf718
 }
 virtual_ipaddress {
 172.16.1.60/24 dev enp0s3
 }
}

Then start the service and check where the VIPs land:

[root@localhost ~] systemctl start keepalived ; ssh 172.16.1.30 "systemctl start keepalived"
[root@localhost ~] ip addr show ; ssh 172.16.1.30 "ip addr show"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 08:00:27:ac:9d:0e brd ff:ff:ff:ff:ff:ff
 inet 172.16.1.10/24 brd 172.16.1.255 scope global enp0s3
 valid_lft forever preferred_lft forever
 inet 172.16.1.50/24 scope global secondary enp0s3
#node 172.16.1.10 holds the .50 VIP
 valid_lft forever preferred_lft forever
 inet6 fe80::a00:27ff:feac:9d0e/64 scope link
 valid_lft forever preferred_lft forever
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 08:00:27:d4:d3:0d brd ff:ff:ff:ff:ff:ff
 inet 172.16.1.30/24 brd 172.16.1.255 scope global enp0s3
 valid_lft forever preferred_lft forever
 inet 172.16.1.60/24 scope global secondary enp0s3
#node 172.16.1.30 holds the .60 VIP
 valid_lft forever preferred_lft forever
 inet6 fe80::a00:27ff:fed4:d30d/64 scope link
 valid_lft forever preferred_lft forever

Then stop the keepalived service on 172.16.1.10 and check again:

[root@localhost ~]# !30 #recalling the earlier command from shell history
ip addr show ; ssh 172.16.1.30 "ip addr show"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 08:00:27:ac:9d:0e brd ff:ff:ff:ff:ff:ff
 inet 172.16.1.10/24 brd 172.16.1.255 scope global enp0s3
 valid_lft forever preferred_lft forever
 inet6 fe80::a00:27ff:feac:9d0e/64 scope link
 valid_lft forever preferred_lft forever
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 08:00:27:d4:d3:0d brd ff:ff:ff:ff:ff:ff
 inet 172.16.1.30/24 brd 172.16.1.255 scope global enp0s3
 valid_lft forever preferred_lft forever
 inet 172.16.1.60/24 scope global secondary enp0s3
 valid_lft forever preferred_lft forever
 inet 172.16.1.50/24 scope global secondary enp0s3
#as you can see, both VIPs have now moved over to 172.16.1.30.
 valid_lft forever preferred_lft forever
 inet6 fe80::a00:27ff:fed4:d30d/64 scope link
 valid_lft forever preferred_lft forever
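
To double-check which node answers for each VIP from a client's perspective, you can ARP-ping the two addresses from a third machine on the 172.16.1.0/24 network (the client hostname and interface here are hypothetical; arping ships with the iputils package on CentOS 7):

[root@client ~] arping -c 3 -I enp0s3 172.16.1.50
[root@client ~] arping -c 3 -I enp0s3 172.16.1.60
#the MAC address in the replies tells you which node currently owns each VIP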

That wraps up the basics of keepalived. Its main job is building high-availability clusters, but it can also be used for load-balancing clusters; I'll cover that configuration and a few tips and tricks in the next installment!
