Corosync/Pacemaker Cluster Configuration

Let's continue learning about high-availability clusters.

Trying Out PCS

As mentioned before, corosync is a standalone messaging layer (its companion, Pacemaker, is the resource manager that was split out of Heartbeat V3), and it also provides a voting (quorum) system. On CentOS 7 we use the Corosync + Pacemaker combination to build our services.

First let's take a look at pcs, which, as mentioned before, is used for full-lifecycle management of the cluster.

Available Packages
Name : pcs
Arch : x86_64
Version : 0.9.158
Release : 6.el7.centos
Size : 4.8 M
Repo : base/7/x86_64
Summary : Pacemaker Configuration System
URL : https://github.com/ClusterLabs/pcs
License : GPLv2
Description : pcs is a corosync and pacemaker configuration tool. It permits users to
: easily view, modify and create pacemaker based clusters.

We've already covered plenty of background, so let's get straight to the configuration. First install the packages. Since pcs depends on corosync and pacemaker, installing pcs alone is enough. For convenience, create an ansible host group (ha) and put VM-node1 and VM-node2 in it.
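
A minimal inventory sketch for that group (assuming the default /etc/ansible/hosts and that both host names already resolve):

[ha]
VM-node1
VM-node2

With the group in place, installing pcs on both nodes is a one-liner: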

[root@VM-node1 ~]# ansible ha -m yum -a 'name=pcs state=latest'

The two CentOS 7 machines prepared for this already have key-based SSH login, host name resolution, and time synchronization in place.

Once the installation is finished, you'll see that pcs ships with a daemon, pcsd; we need to start it and enable it at boot:

[root@VM-node1 ~]# ansible ha -m service -a 'name=pcsd state=started enabled=yes'
# take a look
[root@VM-node1 ~]# systemctl status pcsd
● pcsd.service - PCS GUI and remote configuration interface
Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2017-10-28 14:04:32 CST; 17s ago
Main PID: 23944 (pcsd)
CGroup: /system.slice/pcsd.service
└─23944 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &

Oct 28 14:04:30 VM-node1 systemd[1]: Starting PCS GUI and remote configuration interface...
Oct 28 14:04:32 VM-node1 systemd[1]: Started PCS GUI and remote configuration interface.

It really is running. It's a ruby program, listening on port 2224/tcp.
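
If you want to double-check that, the listening socket can be confirmed with something like this (a quick sketch):

[root@VM-node1 ~]# ss -tnlp | grep 2224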

Besides the pcsd daemon there is also a command-line program called pcs; configuration changes are made by pcs talking to pcsd. But pcs obviously can't exchange information and make decisions without any checks; authentication is required, so we need a user for pcs to authenticate with:

[root@VM-node1 ~]# ansible ha -m shell -a 'echo "test" | passwd hacluster --stdin'
VM-node2 | SUCCESS | rc=0 >>
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.

VM-node1 | SUCCESS | rc=0 >>
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.

This user already exists (it is created when the packages are installed); we just need to make sure its password is set consistently on all nodes.

Now let's authenticate the two nodes. First, a word about the pcs command format: like the ip command, pcs has a number of sub-modules/sub-commands, and each of those has its own operations. A quick look at the help output makes this clear:

Commands:
cluster Configure cluster options and nodes.
resource Manage cluster resources.
stonith Manage fence devices.
constraint Manage resource constraints.
property Manage pacemaker properties.
acl Manage pacemaker access control lists.
qdevice Manage quorum device provider on the local host.
quorum Manage cluster quorum settings.
booth Manage booth (cluster ticket manager).
status View cluster status.
config View and manage cluster configuration.
pcsd Manage pcs daemon.
node Manage cluster nodes.
alert Manage pacemaker alerts.

Here we use the cluster module's auth command to do the authentication:

[root@VM-node1 ~]# ansible ha -a "pcs cluster auth VM-node1 VM-node2 -u hacluster -p test"
VM-node2 | SUCCESS | rc=0 >>
VM-node1: Authorized
VM-node2: Authorized

VM-node1 | SUCCESS | rc=0 >>
VM-node1: Authorized
VM-node2: Authorized

Once authentication is done, we can create the cluster:

[root@VM-node1 ~]# pcs cluster setup --name testcluster VM-node1 VM-node2
Destroying cluster on nodes: VM-node1, VM-node2...
VM-node1: Stopping Cluster (pacemaker)...
VM-node2: Stopping Cluster (pacemaker)...
VM-node1: Successfully destroyed cluster
VM-node2: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'VM-node1', 'VM-node2'
VM-node1: successful distribution of the file 'pacemaker_remote authkey'
VM-node2: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
VM-node1: Succeeded
VM-node2: Succeeded

Synchronizing pcsd certificates on nodes VM-node1, VM-node2...
VM-node1: Success
VM-node2: Success
Restarting pcsd on the nodes in order to reload the certificates...
VM-node1: Success
VM-node2: Success

All green lights. The cluster has been created, and a corosync configuration file was generated automatically in the process. Let's take a look:

[root@VM-node1 ~]# grep '^[^[:space:]].*{$' /etc/corosync/corosync.conf
totem {
nodelist {
quorum {
logging {

This configuration is divided into several sections. The totem section defines the protocol version in use, the cluster name, whether secure authentication is enabled, and so on. The nodelist section is split into one node block per node, each defining that node's information. After that comes the familiar voting-system configuration: which quorum provider to use, whether this is a two-node cluster, and so on. Finally there is the logging section, which is easy to read: whether to log to a file, whether to log to syslog, and where the log goes.
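
For example, the quorum section that pcs generates for a two-node cluster typically looks something like this (a sketch; two_node relaxes the quorum rules for exactly two nodes):

quorum {
    provider: corosync_votequorum
    two_node: 1
}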

Now we can try to start the cluster:

[root@VM-node1 ~]# pcs cluster start --all
VM-node2: Starting Cluster...
VM-node1: Starting Cluster...

That's the cluster started. There are several ways to look at its current state; let's go through them one by one.

  • Check with pcs status
[root@VM-node1 ~]# pcs status
Cluster name: testcluster
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: VM-node1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sat Oct 28 14:41:58 2017
Last change: Sat Oct 28 14:40:29 2017 by hacluster via crmd on VM-node1

2 nodes configured
0 resources configured

Online: [ VM-node1 VM-node2 ]

No resources


Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled

It tells us that the messaging layer in use is corosync, that the current DC is VM-node1, the timestamps, the nodes' online status, the resource situation, and the state of the daemons.

You can also append other arguments to see different information:

[root@VM-node1 ~]# pcs status corosync

Membership information
----------------------
Nodeid Votes Name
1 1 VM-node1 (local)
2 1 VM-node2
  • corosync-cfgtool
[root@VM-node1 ~]# corosync-cfgtool -s VM-node1
Printing ring status.
Local node ID 1
RING ID 0
id = 192.168.206.9
status = ring 0 active with no faults

Not much to say here; it's just a simple status dump. -s presumably stands for status (my guess).

  • corosync-cmapctl
[root@VM-node1 ~]# corosync-cmapctl 
...(omitted)
runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.206.9)
runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.1.status (str) = joined
runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.206.10)
runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.2.status (str) = joined
...(omitted)

It can dump a large number of runtime parameters.

We can use crm_verify to check whether there is anything wrong with the current cluster configuration:

[root@VM-node1 ~]# crm_verify -LV
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid

Uh oh, errors. We actually saw this before: it's the WARNING printed when we checked the status. We have no STONITH devices, yet the STONITH feature is enabled, so we need to turn it off. That's easy with pcs's property sub-command:

[root@VM-node1 ~]# pcs property list --all
..(omitted)
stonith-enabled: true
..(omitted)
[root@VM-node1 ~]# pcs property set stonith-enabled=false
[root@VM-node1 ~]# crm_verify -LV
[root@VM-node1 ~]#

So that's pcs. Next, let's try crmsh.

Trying Out crmsh

The pcs tool we just used needs pcsd to do its management, i.e. it is agent-based as discussed earlier, whereas crmsh needs no agent: it manages nodes over ssh. A plain yum install crmsh will do; if your repositories don't carry it, look for an additional repository that does. We also need the pssh and python-pssh packages, which provide parallel ssh and its python bindings.
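
Pushed to both nodes through ansible, the installation might look something like this (a sketch, assuming a repository that carries crmsh is already configured):

[root@VM-node1 ~]# ansible ha -m yum -a 'name=crmsh,pssh,python-pssh state=present'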

crmsh supports an interactive mode and offers a shell-like way of working, for example:

[root@VM-node1 ~]# crm
crm(live)# ls
cibstatus help site
cd cluster quit
end script verify
exit ra maintenance
bye ? ls
node configure back
report cib resource
up status corosync
options history
crm(live)# resource
crm(live)resource# cd ..
crm(live)#

That said, crmsh provides so many sub-commands and sub-sub-commands that it's easy to get lost.

Let's just learn it through a concrete example.

Highly Available httpd with crmsh + corosync + pacemaker

This should be very familiar by now. The resources we need are:

  • VIP
  • the httpd service

Deployment is simple; the playbook looks like this:

- hosts: ha
  remote_user: root
  tasks:
    - name: Install httpd service
      yum: name=httpd state=latest
    - name: Copy index.html to nodes
      template: src=/root/index.html dest=/var/www/html/index.html
    - name: Start httpd service and enable it
      service: name=httpd state=started enabled=yes

The template file is a simple one-liner:

<h1>{{ ansible_fqdn }}</h1>

Next, make sure both nodes can be reached normally:

C:\Users\lenovo\Desktop
λ curl 192.168.206.9
<h1>VM-node1</h1>

C:\Users\lenovo\Desktop
λ curl 192.168.206.10
<h1>VM-node2</h1>

You might wonder why we enabled the service at boot this time; the reason is that crmsh only finds the resource among enabled units.

Now for the actual configuration. Enter crmsh's RA module:

[root@VM-node1 ~]# crm ra
crm(live)ra# classes
lsb
ocf / .isolation heartbeat openstack pacemaker
service
systemd
crm(live)ra# list ocf heartbeat
IPaddr IPaddr2 ...(omitted)
crm(live)ra# cd
crm(live)# configure
crm(live)configure# primitive webip ocf:heartbeat:IPaddr params ip=192.168.206.11
crm(live)configure# show
node 1: VM-node1
node 2: VM-node2
primitive webip IPaddr \
params ip=192.168.206.11
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.16-12.el7_4.4-94ff4df \
cluster-infrastructure=corosync \
cluster-name=testcluster \
stonith-enabled=false
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Stack: corosync
Current DC: VM-node1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sat Oct 28 15:54:58 2017
Last change: Sat Oct 28 15:54:55 2017 by root via cibadmin on VM-node1

2 nodes configured
1 resource configured

Online: [ VM-node1 VM-node2 ]

Full list of resources:

webip (ocf::heartbeat:IPaddr): Stopped

The resource is now configured but not yet started (check again a moment later and it will show Started). How did I know the parameters and the syntax? In the ra module we just used, info shows the parameter reference for any RA.
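
For example, something like this prints the parameter reference for the IPaddr RA:

crm(live)ra# info ocf:heartbeat:IPaddr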

How do we migrate the resource? Easy: just put the current node into standby:

crm(live)# status
Stack: corosync
Current DC: VM-node1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sat Oct 28 15:56:50 2017
Last change: Sat Oct 28 15:54:55 2017 by root via cibadmin on VM-node1

2 nodes configured
1 resource configured

Online: [ VM-node1 VM-node2 ]

Full list of resources:

webip (ocf::heartbeat:IPaddr): Started VM-node1

crm(live)# node
crm(live)node# standby
crm(live)node# cd
crm(live)# status
Stack: corosync
Current DC: VM-node1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sat Oct 28 15:57:52 2017
Last change: Sat Oct 28 15:57:46 2017 by root via crm_attribute on VM-node1

2 nodes configured
1 resource configured

Node VM-node1: standby
Online: [ VM-node2 ]

Full list of resources:

webip (ocf::heartbeat:IPaddr): Started VM-node2

It has moved over to VM-node2, and on top of that VM-node1 is now in a "soft offline" (standby) state.

Now bring node1 back online:

crm(live)# node online

At this point the IP resource may well not move back; that's resource stickiness at work.
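
If you prefer to make that behaviour explicit instead of relying on defaults, stickiness can be set as a resource default, roughly like this (a sketch; the value 100 is arbitrary):

crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# commit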

Next, configure the httpd service resource:

crm(live)# configure
crm(live)configure# primitive webserver systemd:httpd
crm(live)configure# verify
WARNING: webserver: default timeout 20s for start is smaller than the advised 100
WARNING: webserver: default timeout 20s for stop is smaller than the advised 100
crm(live)configure# commit
WARNING: webserver: default timeout 20s for start is smaller than the advised 100
WARNING: webserver: default timeout 20s for stop is smaller than the advised 100
crm(live)configure# cd
crm(live)# status
Stack: corosync
Current DC: VM-node1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sat Oct 28 16:35:17 2017
Last change: Sat Oct 28 16:35:09 2017 by root via cibadmin on VM-node1

2 nodes configured
2 resources configured

Online: [ VM-node1 VM-node2 ]

Full list of resources:

webip (ocf::heartbeat:IPaddr): Started VM-node2
webserver (systemd:httpd): Started VM-node1

Let's ignore those warnings for now…

But now the IP and the httpd service are running on different nodes, which is clearly not what we want. What to do? Right: define a resource group.

crm(live)# configure
crm(live)configure# group webservice webip webserver
crm(live)configure# verify
WARNING: webserver: default timeout 20s for start is smaller than the advised 100
WARNING: webserver: default timeout 20s for stop is smaller than the advised 100
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Stack: corosync
Current DC: VM-node1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sat Oct 28 16:37:39 2017
Last change: Sat Oct 28 16:37:33 2017 by root via cibadmin on VM-node1

2 nodes configured
2 resources configured

Online: [ VM-node1 VM-node2 ]

Full list of resources:

Resource Group: webservice
webip (ocf::heartbeat:IPaddr): Started VM-node2
webserver (systemd:httpd): Started VM-node2

And that's it. An access test works fine too:

C:\Users\lenovo\Desktop
λ curl 192.168.206.11
<h1>VM-node2</h1>

Note that the order in which you list the members when defining the group matters: it determines the order in which the resources are started (and stopped in reverse).
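
You can double-check the member order at any time by showing just that group:

[root@VM-node1 ~]# crm configure show webservice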

Configuring corosync Manually

We just configured the cluster with pcs and crmsh; now let's configure corosync by hand, starting, naturally, from corosync's configuration file.

[root@VM-node1 ~]# cp -v /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
cp: overwrite ‘/etc/corosync/corosync.conf’? y
‘/etc/corosync/corosync.conf.example’ -> ‘/etc/corosync/corosync.conf’

Start by overwriting the configuration file that was generated earlier. The generated one used the udpu (unicast) transport; this time let's try udp multicast.

totem {
    version: 2                      # totem section config version; currently always 2
    crypto_cipher: none             # symmetric cipher to use (e.g. aes256)
    crypto_hash: none               # hash algorithm to use (e.g. sha1)
    interface {                     # interface used for cluster messaging
        ringnumber: 0               # rings are used in order 0 -> 1; as long as ring 0 works, ring 1 is not used
        bindnetaddr: 192.168.1.0    # network address the interface binds to
        mcastaddr: 239.255.1.1      # multicast address
        mcastport: 5405             # multicast port
        ttl: 1                      # max hops; keep it at 1 to avoid multicast loops
    }
}
logging {                           # logging settings
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: no
    debug: off
    timestamp: on
    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}
quorum {                            # voting system
    provider: corosync_votequorum
}

Next let's turn on secure authentication (while editing, also point bindnetaddr at the actual cluster network, 192.168.206.0 in this setup):

crypto_cipher: aes128
crypto_hash: sha1
secauth: on

Strictly speaking we could skip the nodelist here, since with multicast the nodes join the membership automatically; but to be safe, or just to make things clearer, let's write it out anyway:

nodelist {

    node {
        ring0_addr: 192.168.206.9
        nodeid: 1
    }

    node {
        ring0_addr: 192.168.206.10
        nodeid: 2
    }
}

Next, generate the authentication key file with corosync-keygen:

[root@VM-node1 corosync]# corosync-keygen 
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Press keys on your keyboard to generate entropy (bits = 920).
Press keys on your keyboard to generate entropy (bits = 1000).
Writing corosync key to /etc/corosync/authkey.

Then copy the key file and the configuration to the other node, keeping the original permissions:

[root@VM-node1 corosync]# scp -p authkey corosync.conf VM-node2:/etc/corosync/
authkey 100% 128 66.4KB/s 00:00
corosync.conf 100% 3015 981.5KB/s 00:00

Now let's start the service by hand; if everything looks fine, we can verify it.
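
The start itself isn't captured below, but pushed through the ha group it would be something along these lines (a sketch):

[root@VM-node1 corosync]# ansible ha -m service -a 'name=corosync state=started'

Then check the corosync log to confirm that the membership formed: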

[root@VM-node1 corosync]# tail /var/log/cluster/corosync.log 
Oct 28 20:57:36 [34337] VM-node1 corosync info [QB ] server name: votequorum
Oct 28 20:57:36 [34337] VM-node1 corosync notice [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Oct 28 20:57:36 [34337] VM-node1 corosync info [QB ] server name: quorum
Oct 28 20:57:36 [34337] VM-node1 corosync notice [TOTEM ] A new membership (192.168.206.9:16) was formed. Members joined: 3232288265
Oct 28 20:57:36 [34337] VM-node1 corosync notice [QUORUM] Members[1]: 3232288265
Oct 28 20:57:36 [34337] VM-node1 corosync notice [MAIN ] Completed service synchronization, ready to provide service.
Oct 28 20:57:36 [34337] VM-node1 corosync notice [TOTEM ] A new membership (192.168.206.9:24) was formed. Members joined: 3232288266
Oct 28 20:57:36 [34337] VM-node1 corosync notice [QUORUM] This node is within the primary component and will provide service.
Oct 28 20:57:36 [34337] VM-node1 corosync notice [QUORUM] Members[2]: 3232288265 3232288266
Oct 28 20:57:36 [34337] VM-node1 corosync notice [MAIN ] Completed service synchronization, ready to provide service.

Now we can start the pacemaker service; but before starting it, let's take a look at its configuration.

pacemaker

Go straight to its startup environment configuration (on CentOS 7 that's /etc/sysconfig/pacemaker):

PCMK_logfile=/var/log/pacemaker.log

There's no real need to turn on any of the other options.

Then just start it:

[root@VM-node1 ~]# systemctl start pacemaker

And with that, the cluster is up. All that's left is to configure some resources, with crmsh, pcs, or whatever tool you prefer.

You can also run crm_verify to check things over; note that this program is not provided by crmsh, it ships with pacemaker itself:

[root@VM-node1 ~]# crm_verify -LV
[root@VM-node1 ~]# rpm -qf `which crm_verify`
pacemaker-cli-1.1.16-12.el7_4.4.x86_64

Project: Building a Highly Available Web Service Cluster

This time we add one more node and use it for NFS shared storage, which gives us three resources: VIP + httpd + Filesystem.

A quick plan: VIP 192.168.206.11 (ocf:heartbeat:IPaddr); httpd (systemd); NFS shared storage (ocf:heartbeat:Filesystem).

First configure the NFS server:

[root@VM-node0 ~]# cat /etc/exports 
/data 192.168.206.0/24(rw,no_root_squash)
[root@VM-node0 ~]# cat /data/index.html
<h1>Test page in VM-node0</h1>

Then test from each node in turn:

[root@VM-node1 ~]# systemctl start httpd
[root@VM-node1 ~]# mount -t nfs VM-node0:/data /var/www/html/
----
C:\Users\lenovo\Desktop
λ curl 192.168.206.9
<h1>Test page in VM-node0</h1>

And the other one:

[root@VM-node2 ~]# systemctl start httpd 
[root@VM-node2 ~]# mount -t nfs VM-node0:/data /var/www/html/
----
C:\Users\lenovo\Desktop
λ curl 192.168.206.10
<h1>Test page in VM-node0</h1>

Once the tests pass, unmount the share and stop the httpd service on both nodes.
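
Pushed to the ha group, the cleanup might look like this (a sketch):

[root@VM-node1 ~]# ansible ha -m shell -a 'systemctl stop httpd && umount /var/www/html'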

OK, now it's time to start for real!

This time we won't use a group resource; instead we'll wire things together with resource constraints:

[root@VM-node0 ~]# crm
crm(live)# configure
crm(live)configure# show
node 3232288264: VM-node0
node 3232288265: VM-node1
node 3232288266: VM-node2
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.16-12.el7_4.4-94ff4df \
cluster-infrastructure=corosync \
stonith-enabled=false
crm(live)configure# cd
crm(live)# configure
crm(live)configure# primitive webip ocf:heartbeat:IPaddr2 params ip=192.168.206.11 op monitor interval=30s timeout=20s
crm(live)configure# verify
crm(live)configure# primitive webserver systemd:httpd op start timeout=30s op stop timeout=30s op monitor interval=30s timeout=20s
crm(live)configure# verify
WARNING: webserver: specified timeout 30s for start is smaller than the advised 100
WARNING: webserver: specified timeout 30s for stop is smaller than the advised 100
WARNING: webserver: specified timeout 20s for monitor is smaller than the advised 100
crm(live)configure# edit
crm(live)configure# verify
crm(live)configure# primitive webstore ocf:heartbeat:Filesystem params device="VM-node0:/data" fstype=nfs directory="/var/www/html"
crm(live)configure# verify
WARNING: webstore: default timeout 20s for start is smaller than the advised 60
WARNING: webstore: default timeout 20s for stop is smaller than the advised 60
crm(live)configure# edit
crm(live)configure# verify
crm(live)configure# order store_after_ip Mandatory: webip webstore
crm(live)configure# order server_after_store Mandatory: webstore webserver
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# exit
bye
[root@VM-node0 ~]# crm status
Stack: corosync
Current DC: VM-node2 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sun Oct 29 07:49:31 2017
Last change: Sun Oct 29 07:49:25 2017 by root via cibadmin on VM-node0

3 nodes configured
3 resources configured

Online: [ VM-node0 VM-node1 VM-node2 ]

Full list of resources:

webip (ocf::heartbeat:IPaddr2): Started VM-node0
webserver (systemd:httpd): Started VM-node0
webstore (ocf::heartbeat:Filesystem): Started VM-node0
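
In this run all three resources happened to land on VM-node0, but order constraints alone don't force them onto the same node; to guarantee that without using a group, colocation constraints could be added as well, roughly like this (a sketch):

crm(live)configure# colocation webstore_with_webip inf: webstore webip
crm(live)configure# colocation webserver_with_webstore inf: webserver webstore
crm(live)configure# commit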

OK, let out a breath… and let's test it.

C:\Users\lenovo\Desktop
λ curl 192.168.206.11
<h1>Web Page on NFS</h1>
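
As a final check, failover can be verified by putting the active node into standby and curling the VIP again (a sketch):

[root@VM-node0 ~]# crm node standby VM-node0
[root@VM-node0 ~]# crm status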

And with that, we're done!!