Setting Up a ZooKeeper Cluster with Docker

Introduction

This article uses Docker and Docker Compose to quickly set up a ZooKeeper cluster (a pseudo-cluster) on a single machine.

Image

  • Official image: zookeeper on Docker Hub (https://hub.docker.com/_/zookeeper)

Cluster Plan

Node Name    Node ID  Client Port  Mapped Port  Version
zookeeper01  1        2181         2181         3.8.4
zookeeper02  2        2181         2182         3.8.4
zookeeper03  3        2181         2183         3.8.4

Preparation

  • On the host, create the directories that ZooKeeper node 1 will use to persist its data
sudo mkdir -p /usr/local/zookeeper/zookeeper01/data
sudo mkdir -p /usr/local/zookeeper/zookeeper01/datalog
  • On the host, create the directories that ZooKeeper node 2 will use to persist its data
sudo mkdir -p /usr/local/zookeeper/zookeeper02/data
sudo mkdir -p /usr/local/zookeeper/zookeeper02/datalog
  • On the host, create the directories that ZooKeeper node 3 will use to persist its data
sudo mkdir -p /usr/local/zookeeper/zookeeper03/data
sudo mkdir -p /usr/local/zookeeper/zookeeper03/datalog
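The six mkdir commands above can be collapsed into a single loop. A minimal sketch (the relative BASE path is an assumption so the loop runs without sudo; substitute /usr/local/zookeeper, with sudo, to match the steps above):

```shell
# Create the data and datalog directories for all three nodes in one pass.
# BASE is a scratch path here; in this article it is /usr/local/zookeeper.
BASE="./zookeeper"
for node in zookeeper01 zookeeper02 zookeeper03; do
  mkdir -p "$BASE/$node/data" "$BASE/$node/datalog"
done
```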

Notes on Data Volumes

  • The official ZooKeeper image declares data volumes at /data and /datalog, which hold the in-memory database snapshots and the transaction logs of database updates, respectively.
  • Pay attention to where the transaction log data is stored: a dedicated transaction log device is key to good performance, and putting the log on a busy disk will hurt performance.
  • If you want to supply your own ZooKeeper configuration, mount a zoo.cfg file from the host to /conf/zoo.cfg inside the ZooKeeper container.
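For example, mounting a custom configuration file only takes one extra volume entry in a service definition. A sketch for node 1 (the host path for zoo.cfg is an assumption; place your customized file there first):

```yaml
    volumes:
      # Hypothetical host path for a customized zoo.cfg
      - /usr/local/zookeeper/zookeeper01/conf/zoo.cfg:/conf/zoo.cfg
      - /usr/local/zookeeper/zookeeper01/data:/data
      - /usr/local/zookeeper/zookeeper01/datalog:/datalog
```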

Creating the Containers

  • Create a docker-compose.yml configuration file with the following content:
version: '3.5'

services:
  zookeeper01:
    image: zookeeper:3.8.4
    container_name: zookeeper01
    restart: always
    hostname: zookeeper01
    ports:
      - "2181:2181"
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_4LW_COMMANDS_WHITELIST: "ruok"
      ZOO_SERVERS: server.1=zookeeper01:2888:3888;2181 server.2=zookeeper02:2888:3888;2181 server.3=zookeeper03:2888:3888;2181
    healthcheck:
      test: ["CMD", "sh", "-c", "echo ruok | nc localhost 2181 | grep imok"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 20s
    volumes:
      - /usr/local/zookeeper/zookeeper01/data:/data
      - /usr/local/zookeeper/zookeeper01/datalog:/datalog
    networks:
      - distributed-network

  zookeeper02:
    image: zookeeper:3.8.4
    container_name: zookeeper02
    restart: always
    hostname: zookeeper02
    ports:
      - "2182:2181"
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 2
      ZOO_PORT: 2181
      ZOO_4LW_COMMANDS_WHITELIST: "ruok"
      ZOO_SERVERS: server.1=zookeeper01:2888:3888;2181 server.2=zookeeper02:2888:3888;2181 server.3=zookeeper03:2888:3888;2181
    healthcheck:
      test: ["CMD", "sh", "-c", "echo ruok | nc localhost 2181 | grep imok"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 20s
    volumes:
      - /usr/local/zookeeper/zookeeper02/data:/data
      - /usr/local/zookeeper/zookeeper02/datalog:/datalog
    networks:
      - distributed-network

  zookeeper03:
    image: zookeeper:3.8.4
    container_name: zookeeper03
    restart: always
    hostname: zookeeper03
    ports:
      - "2183:2181"
    environment:
      TZ: Asia/Shanghai
      ZOO_MY_ID: 3
      ZOO_PORT: 2181
      ZOO_4LW_COMMANDS_WHITELIST: "ruok"
      ZOO_SERVERS: server.1=zookeeper01:2888:3888;2181 server.2=zookeeper02:2888:3888;2181 server.3=zookeeper03:2888:3888;2181
    healthcheck:
      test: ["CMD", "sh", "-c", "echo ruok | nc localhost 2181 | grep imok"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 20s
    volumes:
      - /usr/local/zookeeper/zookeeper03/data:/data
      - /usr/local/zookeeper/zookeeper03/datalog:/datalog
    networks:
      - distributed-network

networks:
  distributed-network:
    driver: bridge

Configuration Notes

  • ZOO_MY_ID: specifies the ID of a ZooKeeper node. It must be unique within the cluster and its value must be between 1 and 255. Note that ZOO_MY_ID: 1 corresponds to server.1 in server.1=zookeeper01:2888:3888;2181.
  • ZOO_4LW_COMMANDS_WHITELIST: "ruok" enables the ruok "four letter word" command (4LW, Four Letter Words), which is used here for a simple health check. By default, some ZooKeeper configurations disable these commands.
  • In server.1=zookeeper01:2888:3888;2181, zookeeper01 is the name of the ZooKeeper service defined in Docker Compose. It also serves as the hostname through which other services (such as Kafka) reach ZooKeeper; Docker Compose automatically resolves this name to the IP address of the ZooKeeper container.
  • Create and start the ZooKeeper containers
sudo docker-compose up -d
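The correspondence described above between ZOO_MY_ID, the hostname, and the client port can be illustrated by pulling a ZOO_SERVERS string apart in the shell (a sketch, using the same value as the compose file):

```shell
# Split ZOO_SERVERS into entries and print id / host / client port for each.
ZOO_SERVERS='server.1=zookeeper01:2888:3888;2181 server.2=zookeeper02:2888:3888;2181 server.3=zookeeper03:2888:3888;2181'
for entry in $ZOO_SERVERS; do
  id="${entry%%=*}"; id="${id#server.}"   # "server.1" -> "1", matches ZOO_MY_ID
  rest="${entry#*=}"                      # "zookeeper01:2888:3888;2181"
  host="${rest%%:*}"                      # hostname, e.g. "zookeeper01"
  client="${rest##*;}"                    # client port after the ';'
  echo "id=$id host=$host clientPort=$client"
done
```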

Testing the Containers

Checking Container Status

  • Check the status of all ZooKeeper containers
sudo docker ps -a
82d40df530ec   zookeeper:3.8.4   "/docker-entrypoint.…"   About a minute ago   Up About a minute (healthy)   2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 8080/tcp   zookeeper01
4ae126d4ea36   zookeeper:3.8.4   "/docker-entrypoint.…"   About a minute ago   Up About a minute (healthy)   2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2182->2181/tcp, :::2182->2181/tcp   zookeeper02
c70d199dd8ce   zookeeper:3.8.4   "/docker-entrypoint.…"   About a minute ago   Up About a minute (healthy)   2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2183->2181/tcp, :::2183->2181/tcp   zookeeper03
  • If a ZooKeeper container fails to start, inspect its startup logs with the following command to troubleshoot the problem
sudo docker logs -f --tail 100 zookeeper01

Checking Cluster Status

  • Check the cluster status of ZooKeeper node 1
sudo docker exec -it zookeeper01 zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
  • Check the cluster status of ZooKeeper node 2
sudo docker exec -it zookeeper02 zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
  • Check the cluster status of ZooKeeper node 3
sudo docker exec -it zookeeper03 zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader

Data Synchronization Test

  • Connect a client to ZooKeeper node 1 and create a znode
# Connect a client to node 1
sudo docker exec -it zookeeper01 zkCli.sh -server localhost:2181
# Create a znode
[zk: localhost:2181(CONNECTED) 0] create /test ""
Created /test

# List the znodes
[zk: localhost:2181(CONNECTED) 1] ls /
[test, zookeeper]
  • Connect a client to any other node (for example, node 2) and you will find that the /test znode has been replicated there; in other words, an operation performed on any node of the cluster is synchronized to all the other nodes
# Connect a client to node 2
sudo docker exec -it zookeeper02 zkCli.sh -server localhost:2181
# List the znodes
[zk: localhost:2181(CONNECTED) 0] ls /
[test, zookeeper]

Leader Re-election Test

  • Stop the leader node (node 3 in this case)
sudo docker stop zookeeper03
  • After the election timeout has elapsed, check the status of the remaining nodes (node 1 and node 2) again; you will find that node 2 has been elected as the new leader.
sudo docker exec -it zookeeper01 zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower

sudo docker exec -it zookeeper02 zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
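The cluster stays available after losing one node because a majority of servers is still running: an ensemble of n servers needs floor(n/2) + 1 of them to form a quorum. A quick sketch of that arithmetic for this three-node cluster:

```shell
# Quorum size for an ensemble of n servers: majority = n/2 + 1 (integer division).
n=3
quorum=$(( n / 2 + 1 ))   # 2 servers needed to elect a leader
alive=$(( n - 1 ))        # one node (the old leader) was stopped
echo "quorum=$quorum alive=$alive"
if [ "$alive" -ge "$quorum" ]; then
  echo "cluster can still elect a leader"
fi
```

This is also why ZooKeeper ensembles use an odd number of nodes: a 4-node cluster tolerates the same single failure as a 3-node cluster but costs one more machine.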
