10. Building a Redis Cluster with Docker


Contents

I. 3-Master 3-Replica Redis Cluster Configuration

1. Create six Docker container instances

2. Enter container redis-node-1 and build the cluster across the six nodes

3. Check cluster status from 6381

II. Master-Replica Failover Case

1. Reading and writing data

2. If 6381 goes down, will its replica 6386 take over?

III. Scaling Out (Adding a Master and Replica)

1. Create and start nodes 6387 and 6388, then confirm there are 8 nodes

2. Enter the 6387 container

3. Add the new (empty-slot) 6387 node to the cluster as a master

4. Check the cluster info

5. Reshard the slots

6. Check the cluster info again

7. Attach replica 6388 to master 6387

8. Check the cluster info again

IV. Scaling In (Removing a Master and Replica)

1. Take 6387 and 6388 offline

2. Check cluster status (1): get 6388's node ID

3. Remove 6388 (delete replica 6388 from the cluster)

4. Empty 6387's slots and reassign them

5. Check cluster status (2)

6. Remove 6387

7. Check cluster status (3)


Redis Cluster (3 masters + 3 replicas, Docker-based walkthrough)

I. 3-Master 3-Replica Redis Cluster Configuration

1. Create six Docker container instances

[root@localhost usr]# docker run -d --name=redis-node-1 --net host --privileged=true -v /usr/redis/share/redis-node-1:/data redis:latest --cluster-enabled yes --appendonly yes --port 6381  
[root@localhost usr]# docker run -d --name=redis-node-2 --net host --privileged=true -v /usr/redis/share/redis-node-2:/data redis:latest --cluster-enabled yes --appendonly yes --port 6382  
[root@localhost usr]# docker run -d --name=redis-node-3 --net host --privileged=true -v /usr/redis/share/redis-node-3:/data redis:latest --cluster-enabled yes --appendonly yes --port 6383
[root@localhost usr]# docker run -d --name=redis-node-4 --net host --privileged=true -v /usr/redis/share/redis-node-4:/data redis:latest --cluster-enabled yes --appendonly yes --port 6384 
[root@localhost usr]# docker run -d --name=redis-node-5 --net host --privileged=true -v /usr/redis/share/redis-node-5:/data redis:latest --cluster-enabled yes --appendonly yes --port 6385
[root@localhost usr]# docker run -d --name=redis-node-6 --net host --privileged=true -v /usr/redis/share/redis-node-6:/data redis:latest --cluster-enabled yes --appendonly yes --port 6386

Parameter details:
--net host #use the host's network stack (IP and ports) directly, no port mapping needed
--privileged=true #run the container with root-equivalent privileges on the host
--cluster-enabled yes #enable Redis Cluster mode
--appendonly yes    #enable AOF data persistence
--port 6386 #Redis port

[root@localhost usr]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
9f9f03c4e2d1   redis:latest   "docker-entrypoint.s…"   6 seconds ago    Up 5 seconds              redis-node-6
2c79b54096cd   redis:latest   "docker-entrypoint.s…"   11 seconds ago   Up 10 seconds             redis-node-5
b1827a5d2e49   redis:latest   "docker-entrypoint.s…"   17 seconds ago   Up 16 seconds             redis-node-4
5ac81bea561a   redis:latest   "docker-entrypoint.s…"   22 seconds ago   Up 21 seconds             redis-node-3
d4115e41eb85   redis:latest   "docker-entrypoint.s…"   27 seconds ago   Up 26 seconds             redis-node-2
b7f06253c3ae   redis:latest   "docker-entrypoint.s…"   39 seconds ago   Up 38 seconds             redis-node-1
[root@localhost usr]# 

2. Enter container redis-node-1 and build the cluster across the six nodes

[root@localhost redis-node-1]# docker exec -it redis-node-1 /bin/bash
root@localhost:/data# 
# create the cluster
root@localhost:/data# redis-cli --cluster create 192.168.153.128:6381 192.168.153.128:6382 192.168.153.128:6383 192.168.153.128:6384 192.168.153.128:6385 192.168.153.128:6386 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...

 # these are the hash slots: each master is allocated a slot range
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.153.128:6385 to 192.168.153.128:6381
Adding replica 192.168.153.128:6386 to 192.168.153.128:6382
Adding replica 192.168.153.128:6384 to 192.168.153.128:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381
   slots:[0-5460] (5461 slots) master
M: a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382
   slots:[5461-10922] (5462 slots) master
M: 1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383
   slots:[10923-16383] (5461 slots) master
S: 9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384
   replicates a84d8fe5084c20bd077b45dcf89250c38dd428b7
S: 79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385
   replicates 1f828dceacb0310fbb55384d88ef13e84a83d310
S: e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386
   replicates 343e013448c5772761c20e7118e73ee95abb6527
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.153.128:6381)
M: 343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385
   slots: (0 slots) slave
   replicates 1f828dceacb0310fbb55384d88ef13e84a83d310
S: e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386
   slots: (0 slots) slave
   replicates 343e013448c5772761c20e7118e73ee95abb6527
S: 9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384
   slots: (0 slots) slave
   replicates a84d8fe5084c20bd077b45dcf89250c38dd428b7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@localhost:/data# 
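The allocation above (0-5460 / 5461-10922 / 10923-16383) is just the 16384 slots split as evenly as possible across the 3 masters. A small sketch of the arithmetic (my own reconstruction, not redis-cli's actual source):

```python
def split_slots(total=16384, masters=3):
    # Place masters+1 evenly spaced boundaries, then turn each pair of
    # adjacent boundaries into an inclusive (start, end) slot range.
    bounds = [round(i * total / masters) for i in range(masters + 1)]
    return [(bounds[i], bounds[i + 1] - 1) for i in range(masters)]

print(split_slots())  # [(0, 5460), (5461, 10922), (10923, 16383)]
```

Note the middle master ends up with 5462 slots and the other two with 5461, exactly as in the transcript: 16384 is not divisible by 3.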

3. Check cluster status from 6381

root@localhost:/data# redis-cli -p 6381 -h 192.168.153.128
192.168.153.128:6381> 
192.168.153.128:6381> 
192.168.153.128:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:368
cluster_stats_messages_pong_sent:406
cluster_stats_messages_sent:774
cluster_stats_messages_ping_received:401
cluster_stats_messages_pong_received:368
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:774
192.168.153.128:6381> 
192.168.153.128:6381> cluster nodes
343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381@16381 myself,master - 0 1719382344000 1 connected 0-5460
a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382@16382 master - 0 1719382345000 2 connected 5461-10922
1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383@16383 master - 0 1719382344000 3 connected 10923-16383
79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385@16385 slave 1f828dceacb0310fbb55384d88ef13e84a83d310 0 1719382347187 3 connected
e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386@16386 slave 343e013448c5772761c20e7118e73ee95abb6527 0 1719382345000 1 connected
9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384@16384 slave a84d8fe5084c20bd077b45dcf89250c38dd428b7 0 1719382348306 2 connected
192.168.153.128:6381> 

Done!

II. Master-Replica Failover Case

1. Reading and writing data

1) The Redis read/write MOVED error
We are currently connected to 6381 and try to set a key:
192.168.153.128:6381> set k1 v1
(error) MOVED 12706 192.168.153.128:6383
192.168.153.128:6381>
The command fails because the cluster computed that k1 hashes to slot 12706, which belongs to node 6383, and a plain (non-cluster) redis-cli connection does not follow the redirect.
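Where does 12706 come from? Redis Cluster maps every key to one of 16384 slots via slot = CRC16(key) mod 16384, using the CRC16-CCITT (XMODEM) variant. A minimal sketch:

```python
def crc16(data: bytes) -> int:
    # CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0 -- the variant Redis uses.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Hash-tag rule: if the key contains a non-empty {...}, only the tag inside is hashed,
    # so keys like {user1}:name and {user1}:age land on the same slot.
    s = key.find("{")
    if s != -1:
        e = key.find("}", s + 1)
        if e > s + 1:
            key = key[s + 1:e]
    return crc16(key.encode()) % 16384

print(key_slot("k1"))  # 12706
```

Since 12706 falls inside 6383's range 10923-16383, the cluster answers MOVED 12706 192.168.153.128:6383.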

Reconnect with the -c flag so the client follows cluster redirects, then add the keys:
root@localhost:/data# redis-cli -p 6381 -h 192.168.153.128 -c
192.168.153.128:6381> 
192.168.153.128:6381> set k1 v1
-> Redirected to slot [12706] located at 192.168.153.128:6383
OK
192.168.153.128:6383>
192.168.153.128:6383> get k1
"v1"
192.168.153.128:6383>
In cluster mode the client switches between nodes transparently as keys are redirected. Perfect.
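Under the hood, -c does what any cluster-aware client does: catch the MOVED reply, parse out the slot and the target address, and retry the command against that node. A hypothetical parser (the function name is my own, not a Redis API):

```python
def parse_moved(error: str):
    """Parse a '(error) MOVED <slot> <host>:<port>' reply into (slot, host, port)."""
    parts = error.replace("(error) ", "").split()
    if parts[0] != "MOVED":
        raise ValueError("not a MOVED redirection")
    host, port = parts[2].rsplit(":", 1)  # rsplit keeps IPv6-style hosts intact
    return int(parts[1]), host, int(port)

print(parse_moved("MOVED 12706 192.168.153.128:6383"))
# (12706, '192.168.153.128', 6383) -- retry the command against this node
```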

Check the cluster info again:
root@localhost:/data# redis-cli  --cluster check 192.168.153.128:6381
192.168.153.128:6381 (343e0134...) -> 2 keys | 5461 slots | 1 slaves.  # 6381 holds 2 keys
192.168.153.128:6382 (a84d8fe5...) -> 1 keys | 5462 slots | 1 slaves.  # 6382 holds 1 key
192.168.153.128:6383 (1f828dce...) -> 1 keys | 5461 slots | 1 slaves.  # 6383 holds 1 key
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.128:6381)
M: 343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385
   slots: (0 slots) slave
   replicates 1f828dceacb0310fbb55384d88ef13e84a83d310
S: e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386
   slots: (0 slots) slave
   replicates 343e013448c5772761c20e7118e73ee95abb6527
S: 9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384
   slots: (0 slots) slave
   replicates a84d8fe5084c20bd077b45dcf89250c38dd428b7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@localhost:/data# 

2. If 6381 goes down, will its replica 6386 take over?

1) First, stop 6381:
[root@localhost redis-node-1]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED       STATUS       PORTS     NAMES
9f9f03c4e2d1   redis:latest   "docker-entrypoint.s…"   4 hours ago   Up 4 hours             redis-node-6
2c79b54096cd   redis:latest   "docker-entrypoint.s…"   4 hours ago   Up 4 hours             redis-node-5
b1827a5d2e49   redis:latest   "docker-entrypoint.s…"   4 hours ago   Up 4 hours             redis-node-4
5ac81bea561a   redis:latest   "docker-entrypoint.s…"   4 hours ago   Up 4 hours             redis-node-3
d4115e41eb85   redis:latest   "docker-entrypoint.s…"   4 hours ago   Up 4 hours             redis-node-2
b7f06253c3ae   redis:latest   "docker-entrypoint.s…"   4 hours ago   Up 4 hours             redis-node-1
[root@localhost redis-node-1]# 
[root@localhost redis-node-1]# docker stop redis-node-1
redis-node-1
[root@localhost redis-node-1]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED       STATUS       PORTS     NAMES
9f9f03c4e2d1   redis:latest   "docker-entrypoint.s…"   4 hours ago   Up 4 hours             redis-node-6
2c79b54096cd   redis:latest   "docker-entrypoint.s…"   4 hours ago   Up 4 hours             redis-node-5
b1827a5d2e49   redis:latest   "docker-entrypoint.s…"   4 hours ago   Up 4 hours             redis-node-4
5ac81bea561a   redis:latest   "docker-entrypoint.s…"   4 hours ago   Up 4 hours             redis-node-3
d4115e41eb85   redis:latest   "docker-entrypoint.s…"   4 hours ago   Up 4 hours             redis-node-2
[root@localhost redis-node-1]# 

6381 is down now; enter 6382 and check the cluster state:
[root@localhost ~]# docker exec -it redis-node-2 bash
root@localhost:/data# 
root@localhost:/data# 
root@localhost:/data# redis-cli -h 192.168.153.128 -p 6382 -c
192.168.153.128:6382> 
192.168.153.128:6382> cluster nodes
9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384@16384 slave a84d8fe5084c20bd077b45dcf89250c38dd428b7 0 1719385316975 2 connected
a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382@16382 myself,master - 0 1719385315000 2 connected 5461-10922
1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383@16383 master - 0 1719385316000 3 connected 10923-16383
79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385@16385 slave 1f828dceacb0310fbb55384d88ef13e84a83d310 0 1719385316000 3 connected
e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386@16386 master - 0 1719385318096 7 connected 0-5460
343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381@16381 master,fail - 1719385170602 1719385163000 1 disconnected
192.168.153.128:6382> 
6381 used to be a master; now that it is down, its replica 6386 has been promoted to master.
Earlier we stored k3 on 6381. Querying it again shows that 6386 (6381's former replica) has it too, so replication worked correctly.
192.168.153.128:6382> get k3
-> Redirected to slot [4576] located at 192.168.153.128:6386
"v3"
192.168.153.128:6386>

Now, if 6381 comes back up, will it be a master or a replica?
[root@localhost redis-node-1]# docker start redis-node-1
redis-node-1
[root@localhost redis-node-1]#
192.168.153.128:6386> cluster nodes
9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384@16384 slave a84d8fe5084c20bd077b45dcf89250c38dd428b7 0 1719385738626 2 connected
343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381@16381 slave e6b026a690cae99466085985f9891cefd57fb10c 0 1719385738000 7 connected
a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382@16382 master - 0 1719385738000 2 connected 5461-10922
79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385@16385 slave 1f828dceacb0310fbb55384d88ef13e84a83d310 0 1719385739725 3 connected
e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386@16386 myself,master - 0 1719385739000 7 connected 0-5460
1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383@16383 master - 0 1719385736398 3 connected 10923-16383
192.168.153.128:6386>

As shown, 6381 is now a replica: a master that goes down and rejoins can only come back as a replica. To make 6381 the master again, stop 6386 and then start it again; 6381 is promoted in the meantime, and 6386 rejoins as 6381's replica.
[root@localhost redis-node-1]# docker stop redis-node-6
redis-node-6
[root@localhost redis-node-1]# 
[root@localhost redis-node-1]# docker start redis-node-6
redis-node-6
[root@localhost redis-node-1]# 
[root@localhost redis-node-1]#
192.168.153.128:6382> cluster nodes
9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384@16384 slave a84d8fe5084c20bd077b45dcf89250c38dd428b7 0 1719385960341 2 connected
a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382@16382 myself,master - 0 1719385956000 2 connected 5461-10922
1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383@16383 master - 0 1719385957000 3 connected 10923-16383
79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385@16385 slave 1f828dceacb0310fbb55384d88ef13e84a83d310 0 1719385959222 3 connected
e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386@16386 slave 343e013448c5772761c20e7118e73ee95abb6527 0 1719385957000 8 connected
343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381@16381 master - 0 1719385959000 8 connected 0-5460
192.168.153.128:6382> 
192.168.153.128:6382>

III. Scaling Out (Adding a Master and Replica)

        Suppose traffic has grown and three masters can no longer keep up: we scale out by adding two nodes, 6387 (master) and 6388 (replica). How is that done?

And if we add 6387, how should the hash slots be redistributed, and how many should it get?
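A back-of-the-envelope answer before running any tool: with 4 masters an even share is 16384 / 4 = 4096 slots, so each of the 3 existing masters should hand over roughly a third of that:

```python
total_slots = 16384
new_master_count = 4
target = total_slots // new_master_count  # slots the new master 6387 should receive
per_donor = target / 3                    # rough contribution from each existing master
print(target, round(per_donor))           # 4096 1365
```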

1. Create and start nodes 6387 and 6388, then confirm there are 8 nodes

[root@localhost redis-node-1]# docker run -d --name=redis-node-7 --net host --privileged=true  -v /usr/redis/share/redis-node-7:/data redis:latest --cluster-enabled yes --appendonly yes --port 6387
ba3fd656217e38fc7743529846a1356cd58d7c021edf8482b8a0302d963d1402
[root@localhost redis-node-1]# 
[root@localhost redis-node-1]# docker run -d --name=redis-node-8 --net host --privileged=true  -v /usr/redis/share/redis-node-8:/data redis:latest --cluster-enabled yes --appendonly yes --port 6388
f5589a1830a14e944d84954226a811231472bc5b55e0aa1e0d5d2c6806bcd5bd
[root@localhost redis-node-1]# 
[root@localhost redis-node-1]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
f5589a1830a1   redis:latest   "docker-entrypoint.s…"   4 seconds ago    Up 3 seconds              redis-node-8
ba3fd656217e   redis:latest   "docker-entrypoint.s…"   23 seconds ago   Up 22 seconds             redis-node-7
9f9f03c4e2d1   redis:latest   "docker-entrypoint.s…"   4 hours ago      Up 19 minutes             redis-node-6
2c79b54096cd   redis:latest   "docker-entrypoint.s…"   4 hours ago      Up 4 hours                redis-node-5
b1827a5d2e49   redis:latest   "docker-entrypoint.s…"   4 hours ago      Up 4 hours                redis-node-4
5ac81bea561a   redis:latest   "docker-entrypoint.s…"   4 hours ago      Up 4 hours                redis-node-3
d4115e41eb85   redis:latest   "docker-entrypoint.s…"   4 hours ago      Up 4 hours                redis-node-2
b7f06253c3ae   redis:latest   "docker-entrypoint.s…"   4 hours ago      Up 22 minutes             redis-node-1
[root@localhost redis-node-1]#

2. Enter the 6387 container

[root@localhost redis-node-1]# docker exec -it redis-node-7 bash
root@localhost:/data#

3. Add the new (empty-slot) 6387 node to the cluster as a master

Use the add-node subcommand.
Command: redis-cli --cluster add-node actualIP:6387 actualIP:6381
That is, 6387 joins as a new master, with 6381 acting as the introducing node: 6387 registers through 6381 and is thereby admitted to the cluster.
root@localhost:/data# redis-cli --cluster add-node 192.168.153.128:6387 192.168.153.128:6381
>>> Adding node 192.168.153.128:6387 to cluster 192.168.153.128:6381
>>> Performing Cluster Check (using node 192.168.153.128:6381)
M: 343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384
   slots: (0 slots) slave
   replicates a84d8fe5084c20bd077b45dcf89250c38dd428b7
S: 79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385
   slots: (0 slots) slave
   replicates 1f828dceacb0310fbb55384d88ef13e84a83d310
M: a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386
   slots: (0 slots) slave
   replicates 343e013448c5772761c20e7118e73ee95abb6527
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.153.128:6387 to make it join the cluster.
[OK] New node added correctly.
root@localhost:/data# 

4. Check the cluster info

root@localhost:/data# redis-cli --cluster check 192.168.153.128:6381
192.168.153.128:6381 (343e0134...) -> 2 keys | 5461 slots | 1 slaves.
192.168.153.128:6383 (1f828dce...) -> 1 keys | 5461 slots | 1 slaves.
192.168.153.128:6387 (a6f8bf49...) -> 0 keys | 0 slots | 0 slaves.
# this line shows 6387 has joined the cluster successfully, but has no slots and no replica yet
192.168.153.128:6382 (a84d8fe5...) -> 1 keys | 5462 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.128:6381)
M: 343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384
   slots: (0 slots) slave
   replicates a84d8fe5084c20bd077b45dcf89250c38dd428b7
M: a6f8bf499c0ca429260af33c46ba58d13a4da3f4 192.168.153.128:6387
   slots: (0 slots) master
S: 79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385
   slots: (0 slots) slave
   replicates 1f828dceacb0310fbb55384d88ef13e84a83d310
M: a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386
   slots: (0 slots) slave
   replicates 343e013448c5772761c20e7118e73ee95abb6527
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@localhost:/data# 
root@localhost:/data#

5. Reshard the slots

Use the reshard subcommand.
Command: redis-cli --cluster reshard 192.168.153.128:6381

root@localhost:/data# redis-cli --cluster reshard 192.168.153.128:6381
M: a6f8bf499c0ca429260af33c46ba58d13a4da3f4 192.168.153.128:6387
   slots: (0 slots) master
At this point 6387 still owns 0 slots.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? a6f8bf499c0ca429260af33c46ba58d13a4da3f4
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all
Do you want to proceed with the proposed reshard plan (yes/no)? yes

6. Check the cluster info again

root@localhost:/data# redis-cli --cluster check 192.168.153.128:6381
192.168.153.128:6381 (343e0134...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.128:6383 (1f828dce...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.128:6387 (a6f8bf49...) -> 1 keys | 4096 slots | 0 slaves.
192.168.153.128:6382 (a84d8fe5...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.128:6381)
M: 343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: 1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384
   slots: (0 slots) slave
   replicates a84d8fe5084c20bd077b45dcf89250c38dd428b7
M: a6f8bf499c0ca429260af33c46ba58d13a4da3f4 192.168.153.128:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: 79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385
   slots: (0 slots) slave
   replicates 1f828dceacb0310fbb55384d88ef13e84a83d310
M: a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386
   slots: (0 slots) slave
   replicates 343e013448c5772761c20e7118e73ee95abb6527
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@localhost:/data#

6387's slots are now [0-1364],[5461-6826],[10923-12287]:
rather than rehashing everything (which would be too expensive), each of the three original masters handed over a chunk from the front of its own range to 6387.
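A quick sanity check that the three donated ranges really add up to 4096:

```python
# 6387's slot ranges after the reshard, as inclusive (start, end) pairs.
donated = [(0, 1364), (5461, 6826), (10923, 12287)]
sizes = [end - start + 1 for start, end in donated]
print(sizes, sum(sizes))  # [1365, 1366, 1365] 4096
```

Each old master gave up about a third of the 4096 slots, and the 5462-slot master (6382) gave one extra.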

7. Attach replica 6388 to master 6387

Command:
redis-cli --cluster add-node IP:newReplicaPort IP:newMasterPort --cluster-slave --cluster-master-id <new master's node ID>

root@localhost:/data# redis-cli --cluster add-node 192.168.153.128:6388 192.168.153.128:6387 --cluster-slave --cluster-master-id a6f8bf499c0ca429260af33c46ba58d13a4da3f4
>>> Adding node 192.168.153.128:6388 to cluster 192.168.153.128:6387
>>> Performing Cluster Check (using node 192.168.153.128:6387)
M: a6f8bf499c0ca429260af33c46ba58d13a4da3f4 192.168.153.128:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386
   slots: (0 slots) slave
   replicates 343e013448c5772761c20e7118e73ee95abb6527
M: 343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: 9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384
   slots: (0 slots) slave
   replicates a84d8fe5084c20bd077b45dcf89250c38dd428b7
S: 79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385
   slots: (0 slots) slave
   replicates 1f828dceacb0310fbb55384d88ef13e84a83d310
M: 1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.153.128:6388 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 192.168.153.128:6387.
[OK] New node added correctly.
root@localhost:/data# 

8. Check the cluster info again

root@localhost:/data# redis-cli --cluster check 192.168.153.128:6381
192.168.153.128:6381 (343e0134...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.128:6383 (1f828dce...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.128:6387 (a6f8bf49...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.128:6382 (a84d8fe5...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.128:6381)
M: 343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: 1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384
   slots: (0 slots) slave
   replicates a84d8fe5084c20bd077b45dcf89250c38dd428b7
S: ff95cd3de6ede6eb1b5b073248f718e24b4fae6e 192.168.153.128:6388
   slots: (0 slots) slave
   replicates a6f8bf499c0ca429260af33c46ba58d13a4da3f4
M: a6f8bf499c0ca429260af33c46ba58d13a4da3f4 192.168.153.128:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
S: 79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385
   slots: (0 slots) slave
   replicates 1f828dceacb0310fbb55384d88ef13e84a83d310
M: a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386
   slots: (0 slots) slave
   replicates 343e013448c5772761c20e7118e73ee95abb6527
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@localhost:/data#

IV. Scaling In (Removing a Master and Replica)

        During the traffic peak, 3 masters and 3 replicas were not enough, so we scaled out to 4+4. Now the peak has passed and 4+4 is more than we need, so we scale back in.

How? Remove 6387 and 6388 to restore the 3-master 3-replica layout.

1. Take 6387 and 6388 offline

2. Check cluster status (1): get 6388's node ID

root@localhost:/data# redis-cli --cluster check 192.168.153.128:6381
S: ff95cd3de6ede6eb1b5b073248f718e24b4fae6e 192.168.153.128:6388
   slots: (0 slots) slave
   replicates a6f8bf499c0ca429260af33c46ba58d13a4da3f4

3. Remove 6388 (delete replica 6388 from the cluster)

Use the del-node subcommand:
redis-cli --cluster del-node ip:replicaPort <6388's node ID>
root@localhost:/data# redis-cli --cluster del-node 192.168.153.128:6388 ff95cd3de6ede6eb1b5b073248f718e24b4fae6e
>>> Removing node ff95cd3de6ede6eb1b5b073248f718e24b4fae6e from cluster 192.168.153.128:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
root@localhost:/data# 
root@localhost:/data#
root@localhost:/data# redis-cli --cluster check 192.168.153.128:6381
192.168.153.128:6381 (343e0134...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.128:6383 (1f828dce...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.128:6387 (a6f8bf49...) -> 1 keys | 4096 slots | 0 slaves.
192.168.153.128:6382 (a84d8fe5...) -> 1 keys | 4096 slots | 1 slaves.

4. Empty 6387's slots and reassign them

In this example all of the reclaimed slots go to 6381; you could also give every node a share.

Reshuffle:
root@localhost:/data# redis-cli --cluster reshard 192.168.153.128:6381
How many slots do you want to move (from 1 to 16384)? 4096 
What is the receiving node ID? 343e013448c5772761c20e7118e73ee95abb6527   # the receiver: 6381's node ID
Source node #1: a6f8bf499c0ca429260af33c46ba58d13a4da3f4  # the source being emptied: 6387's node ID
Source node #2: done
Do you want to proceed with the proposed reshard plan (yes/no)? yes

5. Check cluster status (2)

root@localhost:/data# redis-cli --cluster check 192.168.153.128:6381
192.168.153.128:6381 (343e0134...) -> 2 keys | 8192 slots | 1 slaves.
192.168.153.128:6383 (1f828dce...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.128:6387 (a6f8bf49...) -> 0 keys | 0 slots | 0 slaves.
192.168.153.128:6382 (a84d8fe5...) -> 1 keys | 4096 slots | 1 slaves.
# 6387 now owns 0 slots, and 6381 holds two blocks of 4096 (8192 total)
 
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.128:6381)
M: 343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
M: 1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384
   slots: (0 slots) slave
   replicates a84d8fe5084c20bd077b45dcf89250c38dd428b7
M: a6f8bf499c0ca429260af33c46ba58d13a4da3f4 192.168.153.128:6387
   slots: (0 slots) master
S: 79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385
   slots: (0 slots) slave
   replicates 1f828dceacb0310fbb55384d88ef13e84a83d310
M: a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386
   slots: (0 slots) slave
   replicates 343e013448c5772761c20e7118e73ee95abb6527
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@localhost:/data# 
root@localhost:/data# 
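6381's new range [0-6826],[10923-12287] is simply its old block merged with 6387's three blocks; a small interval-merge sketch confirms the 8192 total:

```python
def merge(ranges):
    # Merge adjacent or overlapping inclusive (start, end) slot ranges.
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

old_6381 = [(1365, 5460)]                              # 6381 before the reshard
from_6387 = [(0, 1364), (5461, 6826), (10923, 12287)]  # slots reclaimed from 6387
result = merge(old_6381 + from_6387)
print(result, sum(e - s + 1 for s, e in result))  # [(0, 6826), (10923, 12287)] 8192
```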

6. Remove 6387

root@localhost:/data# redis-cli --cluster del-node 192.168.153.128:6387 a6f8bf499c0ca429260af33c46ba58d13a4da3f4
>>> Removing node a6f8bf499c0ca429260af33c46ba58d13a4da3f4 from cluster 192.168.153.128:6387
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
root@localhost:/data#

7. Check cluster status (3)

root@localhost:/data# redis-cli --cluster check 192.168.153.128:6381
192.168.153.128:6381 (343e0134...) -> 2 keys | 8192 slots | 1 slaves.
192.168.153.128:6383 (1f828dce...) -> 1 keys | 4096 slots | 1 slaves.
192.168.153.128:6382 (a84d8fe5...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.153.128:6381)
M: 343e013448c5772761c20e7118e73ee95abb6527 192.168.153.128:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   1 additional replica(s)
M: 1f828dceacb0310fbb55384d88ef13e84a83d310 192.168.153.128:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 9fe421e46bacfe26b3f787fb647bbb68b3cbbedd 192.168.153.128:6384
   slots: (0 slots) slave
   replicates a84d8fe5084c20bd077b45dcf89250c38dd428b7
S: 79a453b20c62c44d461433b508b8ef7de580ce72 192.168.153.128:6385
   slots: (0 slots) slave
   replicates 1f828dceacb0310fbb55384d88ef13e84a83d310
M: a84d8fe5084c20bd077b45dcf89250c38dd428b7 192.168.153.128:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: e6b026a690cae99466085985f9891cefd57fb10c 192.168.153.128:6386
   slots: (0 slots) slave
   replicates 343e013448c5772761c20e7118e73ee95abb6527
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@localhost:/data# 

Source: http://www.coloradmin.cn/o/1943675.html