An HA MySQL Cluster with DRBD

Now let's turn our attention to highly available storage.

A brief overview of DRBD

What is DRBD? The full name is Distributed Replicated Block Device. It is built from a kernel module plus userspace scripts and is used to build high-availability clusters. Broadly speaking, it works like network-based RAID 1: it replicates data at the block level, and since it sits below the filesystem layer, it naturally has no notion of filesystem semantics.

The overall flow is as follows:

[Figure: the DRBD replication path through the kernel I/O stack]

The shaded block is kernel space, and above it is user space. When a userspace application issues a write system call, the data would normally travel through FS -> BC -> DS -> DD (filesystem, buffer cache, disk scheduler, disk driver) and land on the hardware. DRBD inserts itself into the middle of this path, makes an identical copy of every write, and sends the copy out through the TCP/IP stack via the NIC. On the other side, the receiver hands the data to its own DRBD layer, which passes it through the scheduler and driver to disk.

This model also dictates that DRBD runs single-primary: all reads and writes happen on one node only. If both sides write, the data diverges. A dual-primary setup is not impossible, but it requires the help of a cluster filesystem and its distributed lock mechanism.

The process above is essentially mirroring data, and it has three key characteristics:

  • Real time: replication happens immediately, as the data is written
  • Transparency: applications are unaware that the data is stored on more than one host
  • Synchronous or asynchronous operation

By now we have a general picture of DRBD. Doesn't it resemble a kernel module we discussed before, LVS? Back then we managed rules with ipvsadm; likewise, DRBD ships userspace management tools. drbdadm is the high-level, user-facing one. The lower-level tools are drbdsetup and drbdmeta: the former does the low-level configuration, the latter manipulates the metadata. drbdadm is essentially a high-level wrapper around them.
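For instance, drbdadm can print the low-level drbdsetup/drbdmeta calls it would make without executing them. A quick illustration (assuming a resource named mysql, which we define later):

[root@VM-node3 ~]# drbdadm -d up mysql    # -d (dry-run): print the underlying commands instead of running them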

With these front-end tools we can put DRBD to work, but before we get to configuration, let's look at DRBD's replication modes.

DRBD offers three replication modes, which we can summarize as synchronous, semi-synchronous, and asynchronous.

Recall that DRBD is network-based, and that, as we said before, once a process issues a write system call it enters uninterruptible sleep until the result comes back. The replication modes differ precisely in when that result is returned.

If the call returns as soon as the write is handed off locally, replication is asynchronous. If it returns once the data has reached the peer's memory, it is semi-synchronous (memory synchronous). If it returns only after the data has actually been written to the peer's disk, it is fully synchronous.
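These modes correspond to DRBD's replication protocols A, B, and C, selected with the protocol keyword in the net section; a sketch of the choices:

net {
        protocol C;    # C: fully synchronous - return after the peer's disk write completes
        # protocol B;  # B: memory synchronous - return once the data reaches the peer's RAM
        # protocol A;  # A: asynchronous - return once the data is in the local TCP send buffer
}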

Now we can talk about what a DRBD resource consists of. Every resource definition needs the following parts:

  • A name: ASCII characters only, no whitespace
  • A DRBD device: /dev/drbd#
    • major device number: 147
  • A disk configuration: the disk or partition each host contributes to the DRBD device
  • A network configuration: the properties of the communication between the nodes

Using DRBD

DRBD is a bit of a pain to set up. We will implement this on CentOS 6.

First install the necessary software, mainly two rpm packages: drbd84-utils and kmod-drbd84. The second one is a prebuilt kernel module, so it must match the running kernel.
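On CentOS 6 both packages are available from the ELRepo repository, so with that repository enabled the installation is a single command (a sketch, assuming ELRepo is already configured):

[root@VM-node3 ~]# yum install -y drbd84-utils kmod-drbd84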

Once the installation is done, let's look at the files that were laid down:

[root@VM-node3 ~]# rpm -ql drbd84-utils
/etc/bash_completion.d/drbdadm
/etc/drbd.conf
/etc/drbd.d
/etc/drbd.d/global_common.conf
/etc/ha.d/resource.d/drbddisk
/etc/ha.d/resource.d/drbdupper
/etc/rc.d/init.d/drbd
/etc/xen/scripts/block-drbd
/lib/drbd/drbdadm-84
/lib/drbd/drbdsetup-84
/lib/udev/rules.d/65-drbd.rules
/sbin/drbdadm
/sbin/drbdmeta
/sbin/drbdsetup
/usr/lib/drbd
/usr/lib/drbd/crm-fence-peer.sh
/usr/lib/drbd/crm-unfence-peer.sh
/usr/lib/drbd/notify-emergency-reboot.sh
/usr/lib/drbd/notify-emergency-shutdown.sh
/usr/lib/drbd/notify-io-error.sh
/usr/lib/drbd/notify-out-of-sync.sh
/usr/lib/drbd/notify-pri-lost-after-sb.sh
/usr/lib/drbd/notify-pri-lost.sh
/usr/lib/drbd/notify-pri-on-incon-degr.sh
/usr/lib/drbd/notify-split-brain.sh
/usr/lib/drbd/notify.sh
/usr/lib/drbd/outdate-peer.sh
/usr/lib/drbd/rhcs_fence
/usr/lib/drbd/snapshot-resync-target-lvm.sh
/usr/lib/drbd/stonith_admin-fence-peer.sh
/usr/lib/drbd/unsnapshot-resync-target-lvm.sh
/usr/lib/ocf/resource.d/linbit/drbd
/usr/sbin/drbd-overview
/usr/sbin/drbdadm
/usr/sbin/drbdmeta
/usr/sbin/drbdsetup
/usr/share/cluster/drbd.metadata
/usr/share/cluster/drbd.sh
...(omitted)
/var/lib/drbd

All three management programs come from this package; the kmod package, on the other hand, just installs a single .ko kernel module.
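You can verify that the module is loadable before going further (the init script will also load it for you; this is just a sanity check):

[root@VM-node3 ~]# modprobe drbd
[root@VM-node3 ~]# lsmod | grep drbd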

Now let's look at the configuration file:

include "drbd.d/global_common.conf";
include "drbd.d/*.res";

The configuration is modular: global_common.conf obviously holds the global settings, while every file ending in .res defines one of our resources.

First, the global configuration:

[root@VM-node3 drbd.d]# grep "{$" global_common.conf 
global {
common {
handlers {
startup {
options {
disk {
net {

The hierarchy is clear at a glance: a global section and a common section, where common holds the settings shared by every DRBD device.

global defines DRBD-wide behavior and rarely needs changing. handlers names the scripts to run when particular events occur. startup defines boot-time options such as timeouts. options specifies synchronization attributes, disk describes the backing-disk behavior, and net covers the network: this is where we can set bandwidth limits and shared-secret peer authentication.

A minimal configuration:

disk {
        on-io-error detach;
}
net {
        protocol C;
        cram-hmac-alg "sha1";
        shared-secret "52f5308f3a2f9b31ba";
}

Next, define the resource (mysql.res):

resource mysql {
        on VM-node3 {
                device /dev/drbd0;
                disk /dev/sdb1;
                meta-disk internal;
                address 192.168.206.22:7789;
        }
        on VM-node4 {
                device /dev/drbd0;
                disk /dev/sdb1;
                meta-disk internal;
                address 192.168.206.23:7789;
        }
}

Settings that are identical on both nodes can be hoisted out, which simplifies the resource definition to:

resource mysql {
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
        on VM-node3 {
                address 192.168.206.22:7789;
        }
        on VM-node4 {
                address 192.168.206.23:7789;
        }
}

Here, meta-disk tells DRBD where to keep the resource's metadata; internal means it lives on the backing disk itself.

OK. After making sure the configuration is identical, copy it to the other node (which, of course, already has DRBD installed).
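For example, with scp (assuming the node names resolve as in the resource definition):

[root@VM-node3 ~]# scp /etc/drbd.d/global_common.conf /etc/drbd.d/mysql.res VM-node4:/etc/drbd.d/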

Now we could start the service directly, but first the resource must be initialized. Since I wrote sdb1 above, a new disk has to be added and partitioned first.
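A rough sketch of preparing the backing device, to be done on each node (the interactive fdisk keystrokes are summarized in the comment; adjust the disk name to your environment):

[root@VM-node3 ~]# fdisk /dev/sdb     # n (new), p (primary), 1, accept the defaults, then w (write)
[root@VM-node3 ~]# partx -a /dev/sdb  # ask the kernel to re-read the partition table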

Then execute:

[root@VM-node4 ~]# drbdadm create-md mysql
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.

This has to be done on both sides; then we can start the service:

[root@VM-node3 ~]# service drbd start
Starting DRBD resources: [
create res: mysql
prepare disk: mysql
adjust disk: mysql
adjust net: mysql
]
......

Note: if you start only one side, it will block waiting for the other; the startup succeeds only once both nodes are up.

Next, we can check DRBD's current state in the following ways:

[root@VM-node3 ~]# cat /proc/drbd
version: 8.4.7-1 (api:1/proto:86-101)
GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:3144572
[root@VM-node3 ~]# drbd-overview
0:mysql/0 Connected Secondary/Secondary Inconsistent/Inconsistent

Notice that both nodes are Secondary and their data is Inconsistent, so we must manually promote one of them to Primary:

[root@VM-node3 ~]# drbdadm primary --force mysql
[root@VM-node3 ~]# cat /proc/drbd
version: 8.4.7-1 (api:1/proto:86-101)
GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:2924 nr:0 dw:0 dr:3756 al:0 bm:0 lo:0 pe:1 ua:3 ap:0 ep:1 wo:f oos:3141692
[>....................] sync'ed: 0.2% (3141692/3144572)K
finish: 0:32:43 speed: 1,440 (1,440) K/sec
[root@VM-node3 ~]# cat /proc/drbd
version: 8.4.7-1 (api:1/proto:86-101)
GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:14512 nr:0 dw:0 dr:15572 al:0 bm:0 lo:0 pe:1 ua:4 ap:0 ep:1 wo:f oos:3130076
[>....................] sync'ed: 0.6% (3130076/3144572)K
finish: 0:21:08 speed: 2,416 (2,416) K/sec

Replication then kicks off. Once the sync completes, the state looks like this:

[root@VM-node3 ~]# cat /proc/drbd
version: 8.4.7-1 (api:1/proto:86-101)
GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:3144572 nr:0 dw:0 dr:3145244 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Our node is Primary, and both sides are in the UpToDate state.

Now we can create a filesystem on it. Because DRBD mirrors the device block for block, the filesystem is automatically replicated to the peer as well.

[root@VM-node3 ~]# mke2fs -t ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
196608 inodes, 786143 blocks
39307 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=805306368
24 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@VM-node3 ~]# mount /dev/drbd0 /mnt/
[root@VM-node3 ~]# ls /mnt/
lost+found

Now the device is ready for normal use. To swap roles, first unmount, then demote the Primary and promote the Secondary, after which the new Primary can mount and use the device; switching back works the same way.
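Spelled out in commands, a manual switchover looks roughly like this (a sketch; /mnt and the resource name mysql come from the steps above):

[root@VM-node3 ~]# umount /mnt
[root@VM-node3 ~]# drbdadm secondary mysql
[root@VM-node4 ~]# drbdadm primary mysql
[root@VM-node4 ~]# mount /dev/drbd0 /mnt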

Switching back and forth by hand is tedious, though, so instead we can define the DRBD device as a cluster resource and let the HA stack schedule it automatically.

OK, that completes the groundwork for our highly available MySQL setup. Now we can begin in earnest.

Project: a highly available MySQL cluster with DRBD + Pacemaker

Right now the cluster is squeaky clean, with no resources configured:

[root@VM-node3 ~]# pcs status
Cluster name:
Stack: classic openais (with plugin)
Current DC: VM-node3 (version 1.1.15-5.el6-e174ec8) - partition with quorum
Last updated: Tue Oct 31 19:09:26 2017 Last change: Mon Oct 30 23:24:09 2017 by root via cibadmin on VM-node4
, 2 expected votes
2 nodes and 0 resources configured

Online: [ VM-node3 VM-node4 ]

No resources


Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/disabled

Next we start configuring the DRBD resource. Let's have a look around first:

crm(live)ra# classes
lsb
ocf / .isolation heartbeat linbit pacemaker
service
stonith

A new provider, linbit, has appeared: that's the resource agent for DRBD. Next, let's see which parameters it needs (shown here via info ocf:linbit:drbd in the ra submenu):

Parameters (*: required, []: default):

drbd_resource* (string): drbd resource name
The name of the drbd resource from the drbd.conf file.

So far nothing is different from before, but what makes a DRBD resource special is its master/slave nature: it is a clone resource with one master and one slave. To configure a master/slave resource in crmsh, we use the ms keyword inside configure. Which parameters apply? They are pacemaker's dedicated meta attributes for clone resources:

  • clone-max: the maximum number of clone instances
  • clone-node-max: the maximum number of clone instances on a single node
  • notify: whether to notify the other nodes when a clone instance starts or stops
  • master-max: the maximum number of clone instances promoted to master
  • master-node-max: the maximum number of master instances on a single node

Let's define it:

[root@VM-node3 ~]# crm
crm(live)# configure
crm(live)configure# primitive drbd_mysql ocf:linbit:drbd drbd_resource="mysql" op monitor role="Slave" interval=20s timeout=20s op monitor role="Master" interval=10s timeout=20s
crm(live)configure# verify
WARNING: drbd_mysql: default timeout 20s for start is smaller than the advised 240
WARNING: drbd_mysql: default timeout 20s for stop is smaller than the advised 100
crm(live)configure# delete drbd_mysql
crm(live)configure# primitive drbd_mysql ocf:linbit:drbd drbd_resource="mysql" op monitor role="Slave" interval=20s timeout=20s op monitor role="Master" interval=10s timeout=20s op start timeout=240s op stop timeout=100
crm(live)configure# verify
crm(live)configure# ms ms_drbd_mysql drbd_mysql meta clone-max="2" clone-node-max="1" master-max="1" master-node-max="1" notify="true"
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Stack: classic openais (with plugin)
Current DC: VM-node3 (version 1.1.15-5.el6-e174ec8) - partition with quorum
Last updated: Tue Oct 31 22:54:17 2017 Last change: Tue Oct 31 22:54:14 2017 by root via cibadmin on VM-node3
, 2 expected votes
2 nodes and 2 resources configured

Online: [ VM-node3 VM-node4 ]

Full list of resources:

Master/Slave Set: ms_drbd_mysql [drbd_mysql]
Masters: [ VM-node4 ]
Slaves: [ VM-node3 ]

crm(live)# exit
bye
[root@VM-node3 ~]# drbd-overview
0:mysql/0 Connected Secondary/Primary UpToDate/UpToDate

The service is already running.

But as we said, DRBD can't do useful work without a filesystem, so we add a Filesystem resource next. It must not only start after DRBD is promoted, it must also run on the master node, which gives us both a colocation constraint and an ordering constraint:

crm(live)configure# primitive drbd_fs ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/data" fstype="ext4" op monitor interval=20s timeout=40s op start timeout=60s op stop timeout=60s
crm(live)configure# verify
crm(live)configure# show
node VM-node3
node VM-node4
primitive drbd_fs Filesystem \
        params device="/dev/drbd0" directory="/data" fstype=ext4 \
        op monitor interval=20s timeout=40s \
        op start timeout=60s interval=0 \
        op stop timeout=60s interval=0
primitive drbd_mysql ocf:linbit:drbd \
        params drbd_resource=mysql \
        op monitor role=Slave interval=20s timeout=20s \
        op monitor role=Master interval=10s timeout=20s \
        op start timeout=240s interval=0 \
        op stop timeout=100 interval=0
ms ms_drbd_mysql drbd_mysql \
        meta clone-max=2 clone-node-max=1 master-max=1 master-node-max=1 notify=true
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.15-5.el6-e174ec8 \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes=2 \
        stonith-enabled=false
crm(live)configure# colocation drbd_fs_with_drbd_mysql inf: drbd_fs ms_drbd_mysql:Master
crm(live)configure# order drbd_fs_after_drbd_mysql Mandatory: ms_drbd_mysql:promote drbd_fs:start
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Stack: classic openais (with plugin)
Current DC: VM-node3 (version 1.1.15-5.el6-e174ec8) - partition with quorum
Last updated: Wed Nov 1 10:23:41 2017 Last change: Wed Nov 1 10:23:37 2017 by root via cibadmin on VM-node3
, 2 expected votes
2 nodes and 3 resources configured

Online: [ VM-node3 VM-node4 ]

Full list of resources:

Master/Slave Set: ms_drbd_mysql [drbd_mysql]
Masters: [ VM-node4 ]
Slaves: [ VM-node3 ]
drbd_fs (ocf::heartbeat:Filesystem): Started VM-node4

As you can see, the filesystem started on VM-node4, naturally because VM-node4 is the master node. Now let's take node4 offline:

[root@VM-node4 scripts]# pcs node standby VM-node4

You can then see the resources move together to VM-node3:

[root@VM-node3 ~]# mount
...(omitted)
/dev/drbd0 on /data type ext4 (rw)

Next up is installing MariaDB. The procedure is familiar, but note that this time the installation and initialization can only be done on the master node:

[root@VM-node3 ~]# crm status
Stack: classic openais (with plugin)
Current DC: VM-node3 (version 1.1.15-5.el6-e174ec8) - partition with quorum
Last updated: Wed Nov 1 11:56:45 2017 Last change: Wed Nov 1 11:56:43 2017 by root via crm_attribute on VM-node4
, 2 expected votes
2 nodes and 3 resources configured

Online: [ VM-node3 VM-node4 ]

Full list of resources:

Master/Slave Set: ms_drbd_mysql [drbd_mysql]
Masters: [ VM-node3 ]
Slaves: [ VM-node4 ]
drbd_fs (ocf::heartbeat:Filesystem): Started VM-node3
[root@VM-node3 ~]# groupadd -r -g 306 mysql
[root@VM-node3 ~]# useradd -r -g 306 -u 306 mysql
[root@VM-node3 mariadb55]# chown root:mysql -R mysql
[root@VM-node3 mysql]# bash scripts/mysql_install_db --user=mysql --datadir=/data/db_data/
[root@VM-node3 db_data]# cp /usr/local/src/mariadb55/mysql/support-files/mysql.server /etc/rc.d/init.d/mysqld
[root@VM-node3 db_data]# chkconfig --add mysqld
[root@VM-node3 db_data]# chkconfig mysqld off
[root@VM-node3 db_data]# mkdir /etc/mysql
[root@VM-node3 db_data]# cp /usr/local/src/mariadb55/mysql/support-files/my-large.cnf /etc/mysql/my.cnf
[root@VM-node3 db_data]# vim /etc/mysql/my.cnf
---(In Vim)
[mysqld]
...(omitted)
datadir = /data/db_data
innodb_file_per_table = on
skip_name_resolve = on
---(Quit)
[root@VM-node3 db_data]# service mysqld start
Starting MySQL.171101 13:29:07 mysqld_safe Logging to '/data/db_data/VM-node3.err'.
171101 13:29:07 mysqld_safe Starting mysqld daemon with databases from /data/db_data
. SUCCESS!
[root@VM-node3 db_data]# /usr/local/src/mariadb55/mysql/bin/mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.58-MariaDB MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> GRANT ALL ON *.* TO 'root'@'192.168.206.%' IDENTIFIED BY 'cluster';
Query OK, 0 rows affected (0.02 sec)

MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> quit
Bye
[root@VM-node3 db_data]# service mysqld stop
Shutting down MySQL.. SUCCESS!

接下来master切换成另一个(让node3成为standby), 这个时候就不需要初始化了, 因为我们的文件是镜像过来的. 直接复制一下配置文件启动一下服务试试:

[root@VM-node4 mysql]# mkdir /etc/mysql
---
[root@VM-node3 ~]# scp /etc/mysql/my.cnf VM-node4:/etc/mysql/my.cnf
---
[root@VM-node4 mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
[root@VM-node4 mysql]# chkconfig --add mysqld
[root@VM-node4 mysql]# chkconfig mysqld off
[root@VM-node4 mysql]# service mysqld start
Starting MySQL.171101 13:40:50 mysqld_safe Logging to '/data/db_data/VM-node4.err'.
171101 13:40:50 mysqld_safe Starting mysqld daemon with databases from /data/db_data
.. [ OK ]

OK, once the database service checks out, we can configure the resources and their constraints. We will, of course, also need a virtual IP address.

crm(live)# configure 
crm(live)configure# primitive ip ocf:heartbeat:IPaddr params ip="192.168.206.20" op monitor interval=10s timeout=20s
crm(live)configure# verify
crm(live)configure# primitive mysql lsb:mysqld op monitor interval=20s timeout=20s
crm(live)configure# verify
crm(live)configure# colocation ip_with_drbd_Master inf: ip ms_drbd_mysql:Master
crm(live)configure# colocation mysql_with_drbd_Master inf: mysql ms_drbd_mysql:Master
crm(live)configure# show
crm(live)configure# order mysql_after_drbd Mandatory: drbd_fs ip ms_drbd_mysql mysql
crm(live)configure# order mysql_after_drbd Mandatory: drbd_fs:start mysql:start
crm(live)configure# verify
crm(live)configure# order mysql_after_ip Mandatory: ip:start mysql:start
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Stack: classic openais (with plugin)
Current DC: VM-node3 (version 1.1.15-5.el6-e174ec8) - partition with quorum
Last updated: Wed Nov 1 21:34:04 2017 Last change: Wed Nov 1 21:33:58 2017 by root via cibadmin on VM-node3
, 2 expected votes
2 nodes and 5 resources configured

Node VM-node4: standby
Online: [ VM-node3 ]

Full list of resources:

Master/Slave Set: ms_drbd_mysql [drbd_mysql]
Masters: [ VM-node3 ]
Stopped: [ VM-node4 ]
drbd_fs (ocf::heartbeat:Filesystem): Started VM-node3
ip (ocf::heartbeat:IPaddr): Started VM-node3
mysql (lsb:mysqld): Started VM-node3

With that, our resources are essentially complete. From another host on the same subnet, let's try to connect:

[root@VM-node0 ~]# mysql -uroot -h192.168.206.20 -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 7
Server version: 5.5.58-MariaDB MariaDB Server

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test |
| testdb |
+--------------------+
5 rows in set (0.01 sec)

MariaDB [(none)]> show databasesl;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.206.20' (111)
ERROR: Can't connect to the server

unknown [(none)]> show databases;
No connection. Trying to reconnect...
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.206.20' (111)
ERROR: Can't connect to the server

unknown [(none)]> show databases;
No connection. Trying to reconnect...
Connection id: 2
Current database: *** NONE ***

+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test |
| testdb |
+--------------------+
5 rows in set (0.01 sec)

MariaDB [(none)]>

You can see the connection dropped partway through. The reason is simple: I put the active node (node3) into standby in the middle of the session; after a moment VM-node4 took over the resources, and the mysql client reconnected.
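To finish the test, bring node3 back online and it will rejoin as the DRBD Secondary (same pcs syntax as we used for standby above):

[root@VM-node3 ~]# pcs node unstandby VM-node3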