High availability: keep Redis available at all times, so that even when a failure occurs a standby takes over and the service stays usable.
High concurrency: a single Redis instance can already handle on the order of 110,000 concurrent reads or 81,000 concurrent writes; for applications that need even more, read/write splitting and a cluster deployment address the concurrency problem.
Redis Cluster
Every node in a Redis Cluster is a peer; there is no central node.
Data is distributed across the Redis nodes by slots; the nodes share the data set, and the slot distribution can be adjusted dynamically.
The cluster scales well: nodes can be added or removed on the fly, up to 1000+ nodes.
Each master in the cluster is paired with a replica (as in the sentinel model) to guarantee its high availability.
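Data placement follows a fixed rule: a key is hashed with CRC16 and taken modulo 16384 to pick one of the 16384 slots, and every master owns a share of those slots. Once the cluster built below is running, the mapping can be inspected directly (the key name user:1 is just an example):
# print the slot number this key maps to
[root@VM-4-3-centos cluster-conf]# redis-cli -p 7001 cluster keyslot user:1
# show which node owns which slot range
[root@VM-4-3-centos cluster-conf]# redis-cli -p 7001 cluster slots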
Cluster setup
[root@VM-4-3-centos cluster-conf]# cd /usr/local/redis-5.0.5
[root@VM-4-3-centos redis-5.0.5]# mkdir cluster-conf
[root@VM-4-3-centos redis-5.0.5]# cat redis.conf | grep -v "#"|grep -v "^$" > cluster-conf/redis-7001.conf
[root@VM-4-3-centos redis-5.0.5]# cd cluster-conf/
[root@VM-4-3-centos cluster-conf]# ls
redis-7001.conf
[root@VM-4-3-centos cluster-conf]# vim redis-7001.conf
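The edit itself is not shown; below is a minimal sketch of the cluster-related settings redis-7001.conf needs, with the non-port values being common defaults rather than values from the original:
port 7001
pidfile /var/run/redis_7001.pid
logfile "redis-7001.log"
dbfilename dump-7001.rdb
appendonly yes
# run this instance as a cluster node (shows up as [cluster] in ps)
cluster-enabled yes
# per-node state file maintained by redis itself; must be unique for each instance
cluster-config-file nodes-7001.conf
# milliseconds of unreachability before a node is considered failing
cluster-node-timeout 15000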
Copy it into 6 files, one per port from 7001 to 7006 (sed rewrites every occurrence of the port):
[root@VM-4-3-centos cluster-conf]# sed 's/7001/7002/g' redis-7001.conf > redis-7002.conf
[root@VM-4-3-centos cluster-conf]# sed 's/7001/7003/g' redis-7001.conf > redis-7003.conf
[root@VM-4-3-centos cluster-conf]# sed 's/7001/7004/g' redis-7001.conf > redis-7004.conf
[root@VM-4-3-centos cluster-conf]# sed 's/7001/7005/g' redis-7001.conf > redis-7005.conf
[root@VM-4-3-centos cluster-conf]# sed 's/7001/7006/g' redis-7001.conf > redis-7006.conf
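An optional sanity check that the port was rewritten in every copy (any grep over the generated files works; this is just one way):
[root@VM-4-3-centos cluster-conf]# grep '^port' redis-700[1-6].conf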
Start the 6 redis instances:
[root@VM-4-3-centos cluster-conf]# redis-server redis-7001.conf &
[root@VM-4-3-centos cluster-conf]# redis-server redis-7002.conf &
[root@VM-4-3-centos cluster-conf]# redis-server redis-7003.conf &
[root@VM-4-3-centos cluster-conf]# redis-server redis-7004.conf &
[root@VM-4-3-centos cluster-conf]# redis-server redis-7005.conf &
[root@VM-4-3-centos cluster-conf]# redis-server redis-7006.conf &
Check that all 6 instances are running:
[root@VM-4-3-centos cluster-conf]# ps -ef|grep redis
root 4789 1 0 10:20 ? 00:00:00 redis-server *:7001 [cluster]
root 4794 1 0 10:20 ? 00:00:00 redis-server *:7002 [cluster]
root 4799 1 0 10:20 ? 00:00:00 redis-server *:7003 [cluster]
root 4806 1 0 10:21 ? 00:00:00 redis-server *:7004 [cluster]
root 4811 1 0 10:21 ? 00:00:00 redis-server *:7005 [cluster]
root 4816 1 0 10:21 ? 00:00:00 redis-server *:7006 [cluster]
Create the cluster (--cluster-replicas 1 assigns one replica to each master, so the six instances become three masters and three replicas):
[root@VM-4-3-centos cluster-conf]# redis-cli --cluster create 81.68.82.130:7001 81.68.82.130:7002 81.68.82.130:7003 81.68.82.130:7004 81.68.82.130:7005 81.68.82.130:7006 --cluster-replicas 1
Connect to the cluster:
[root@VM-4-3-centos cluster-conf]# redis-cli -p 7001 -c
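The -c option makes redis-cli follow MOVED redirections, so a key that hashes to a slot owned by another master can still be read and written from whichever node you connected to. A quick smoke test (the key name user:1 is an arbitrary example):
[root@VM-4-3-centos cluster-conf]# redis-cli -p 7001 -c set user:1 hello
[root@VM-4-3-centos cluster-conf]# redis-cli -p 7001 -c get user:1
# show every node, its role (master/slave) and the slot ranges it serves
[root@VM-4-3-centos cluster-conf]# redis-cli -p 7001 cluster nodes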
Cluster management
If cluster creation fails (the create command hangs at "Waiting for the cluster to join"), check the following:
1. On a cloud server, check that the security group opens the redis instance ports as well as those ports +10000 (the cluster bus ports).
2. Check that the Linux firewall allows the redis ports, or disable the firewall (see the firewalld sketch after this list).
3. Check the state of the Linux host (top); if the host itself is unhealthy, reinstall or replace the cloud host's operating system.
4. Check the redis configuration files for errors.
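For item 2, a sketch assuming CentOS with firewalld (adjust the port range to your instances; the cluster bus port is the data port +10000):
[root@VM-4-3-centos cluster-conf]# firewall-cmd --permanent --add-port=7001-7006/tcp
[root@VM-4-3-centos cluster-conf]# firewall-cmd --permanent --add-port=17001-17006/tcp
[root@VM-4-3-centos cluster-conf]# firewall-cmd --reload
Or simply stop the firewall altogether:
[root@VM-4-3-centos cluster-conf]# systemctl stop firewalld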
Create a cluster:
[root@VM-4-3-centos cluster-conf]# redis-cli --cluster create 81.68.82.130:7001 81.68.82.130:7002 81.68.82.130:7003 81.68.82.130:7004 81.68.82.130:7005 81.68.82.130:7006 --cluster-replicas 1
Check the cluster status:
[root@VM-4-3-centos cluster-conf]# redis-cli --cluster info 81.68.82.130:7001
81.68.82.130:7001 (4678478a...) -> 2 keys | 5461 slots | 1 slaves.
81.68.82.130:7002 (e26eaf2a...) -> 0 keys | 5462 slots | 1 slaves.
81.68.82.130:7003 (5752eb20...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 3 keys in 3 masters.
0.00 keys per slot on average.
Rebalance the slot counts across nodes:
[root@VM-4-3-centos cluster-conf]# redis-cli --cluster rebalance 81.68.82.130:7001
>>> Performing Cluster Check (using node 81.68.82.130:7001)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
*** No rebalancing needed! All nodes are within the 2.00% threshold.
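A note on the last line: rebalance only moves slots when a master deviates from the average by more than the threshold (2.00% by default, as the message shows), and it ignores masters that own no slots. To pull an empty master into the distribution or tighten the threshold, options like the following can be added (a sketch; the threshold value is illustrative):
[root@VM-4-3-centos cluster-conf]# redis-cli --cluster rebalance 81.68.82.130:7001 --cluster-use-empty-masters --cluster-threshold 1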
Migrate slots between nodes
The following migrates 5459 of port 7001's slots to the node on port 7002.
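The reshard command itself is not shown in the original; below is a sketch of the usual non-interactive invocation in redis-cli 5.x. The node IDs are taken from later output in this document (the --cluster-from ID is 7001's, the same ID passed to del-node below; the --cluster-to ID is 7002's), --cluster-slots is the number of slots to move, and --cluster-yes skips the interactive confirmation.
[root@VM-4-3-centos cluster-conf]# redis-cli --cluster reshard 81.68.82.130:7001 --cluster-from 2bbb2237941d3cbc51c841bf9b4bd6c6e9db94a3 --cluster-to a5cd5c1f696d367aac2ab511c20401c5ca693df8 --cluster-slots 5459 --cluster-yes
Run without those options, redis-cli --cluster reshard prompts for the same information interactively.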
Delete a node (a master can only be removed after its slots have been migrated away, which is what the reshard above did):
[root@VM-4-3-centos cluster-conf]# redis-cli --cluster del-node 81.68.82.130:7001 2bbb2237941d3cbc51c841bf9b4bd6c6e9db94a3
>>> Removing node 2bbb2237941d3cbc51c841bf9b4bd6c6e9db94a3 from cluster 81.68.82.130:7001
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@VM-4-3-centos cluster-conf]# redis-cli --cluster info 81.68.82.130:7002
81.68.82.130:7002 (a5cd5c1f...) -> 2 keys | 10923 slots | 2 slaves.
81.68.82.130:7003 (609295d3...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 2 keys in 2 masters.
0.00 keys per slot on average.
Add a node:
[root@VM-4-3-centos cluster-conf]# redis-cli --cluster add-node 81.68.82.130:7007 81.68.82.130:7002
>>> Adding node 81.68.82.130:7007 to cluster 81.68.82.130:7002
>>> Performing Cluster Check (using node 81.68.82.130:7002)
M: a5cd5c1f696d367aac2ab511c20401c5ca693df8 81.68.82.130:7002
slots:[0-10922] (10923 slots) master
2 additional replica(s)
S: 55c3cbd57c90a73cac4dd06872ad7fa837fcc5ed 81.68.82.130:7005
slots: (0 slots) slave
replicates a5cd5c1f696d367aac2ab511c20401c5ca693df8
S: a8d7c675745dcfa08f183c66e0c8870fea234180 81.68.82.130:7006
slots: (0 slots) slave
replicates a5cd5c1f696d367aac2ab511c20401c5ca693df8
M: 609295d3d82f46cc77775e0a122733067e87bce2 81.68.82.130:7003
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: c291dddb5fc9f5a9a8da5a22988030b26caa31b4 81.68.82.130:7004
slots: (0 slots) slave
replicates 609295d3d82f46cc77775e0a122733067e87bce2
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 81.68.82.130:7007 to make it join the cluster.
[OK] New node added correctly.
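Note that add-node joins 7007 as a master that owns no slots, so it stores no keys until slots are moved to it. Two common follow-ups, sketched below; the 7008 instance is hypothetical, and the master ID is 7002's ID from the output above:
# move slots onto the new empty master (interactive; or pass --cluster-from/--cluster-to/--cluster-slots as shown earlier)
[root@VM-4-3-centos cluster-conf]# redis-cli --cluster reshard 81.68.82.130:7007
# or join an instance as a replica of an existing master instead of as a new master
[root@VM-4-3-centos cluster-conf]# redis-cli --cluster add-node 81.68.82.130:7008 81.68.82.130:7002 --cluster-slave --cluster-master-id a5cd5c1f696d367aac2ab511c20401c5ca693df8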