Redis
Redis is an open-source, in-memory data store that is frequently used as a cache or database. It keeps data in memory and can also persist it to disk. It supports data types such as string, hash, list, set, and sorted set, and it is popular with companies of every size, from internet giants to small businesses.
Why Redis
- Fast reads and writes; used as a cache, it sharply reduces calls to the backend database
- Cross-platform
- Supports most programming languages, with a rich ecosystem of client libraries
- Open source and stable
Redis Cluster
Redis Cluster is a group of Redis instances that scales Redis horizontally and makes it more resilient. If a master node in the cluster becomes unreachable, one of its replicas can be promoted to master. Three masters form the smallest possible Redis Cluster, and giving each master one replica yields the smallest highly available cluster (the minimum needed for failover). Each master is assigned a range of the hash slots 0 through 16383, and nodes use the gossip protocol internally to propagate cluster state and discover new nodes.
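As a quick illustration of slot assignment: the slot for a key is CRC16(key) mod 16384, and any node can compute it for you. A minimal sketch, assuming a reachable redis-cli and a hypothetical key name:

# Ask Redis which hash slot a key maps to (CRC16(key) mod 16384).
# Any cluster node can answer; the returned integer is in the range 0-16383.
redis-cli cluster keyslot user:1000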
Deploying Redis Cluster on Kubernetes
Each Redis instance relies on a configuration file that tracks the other instances in the cluster and their roles. For that, we need a combination of Kubernetes StatefulSets and PersistentVolumes.
First create redis-pv.yaml; NFS is used as the backing storage here:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-redis0
  labels:
    app: redis-cluster
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: s1001.lab.org
    path: "/data/nfs-data/k8s/redis/pv0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-redis1
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: s1001.lab.org
    path: "/data/nfs-data/k8s/redis/pv1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-redis2
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: s1001.lab.org
    path: "/data/nfs-data/k8s/redis/pv2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-redis3
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: s1001.lab.org
    path: "/data/nfs-data/k8s/redis/pv3"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-redis4
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: s1001.lab.org
    path: "/data/nfs-data/k8s/redis/pv4"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-redis5
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: s1001.lab.org
    path: "/data/nfs-data/k8s/redis/pv5"
Then create redis-statefulset.yaml:
# ConfigMap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
data:
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /data/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
# StatefulSet
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
        - name: redis
          image: redis:6.0.9
          ports:
            - containerPort: 6379
              name: client
            - containerPort: 16379
              name: gossip
          command:
            - "redis-server"
          args:
            - "/conf/redis.conf"
            - "--cluster-announce-ip $(POD_IP)"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: conf
              mountPath: /conf
              readOnly: false
            - name: data
              mountPath: /data
              readOnly: false
      volumes:
        - name: conf
          configMap:
            name: redis-cluster
            defaultMode: 0755
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 200Mi
# Service
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
spec:
  ports:
    - port: 6379
      targetPort: 6379
      name: client
    - port: 16379
      targetPort: 16379
      name: gossip
  clusterIP: None
  selector:
    app: redis-cluster
Apply the two files:
kubectl apply -f redis-pv.yaml
kubectl apply -f redis-statefulset.yaml
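StatefulSet pods are created one at a time, so it can take a moment before all six are up. If you want to block until the rollout finishes, kubectl can wait for you (a convenience step, not required):

# Wait until all 6 replicas of the StatefulSet are ready.
kubectl rollout status statefulset/redis-cluster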
Verify the installation:
kubectl get pods -l app=redis-cluster
NAME              READY   STATUS    RESTARTS   AGE
redis-cluster-0   1/1     Running   0          2m
redis-cluster-1   1/1     Running   0          1m
redis-cluster-2   1/1     Running   0          1m
redis-cluster-3   1/1     Running   0          1m
redis-cluster-4   1/1     Running   0          1m
redis-cluster-5   1/1     Running   0          1m
# check the PV status
kubectl get pv
# check the PVCs (they are created automatically by volumeClaimTemplates)
kubectl get pvc
Next, run the following command to join the nodes into a cluster. The first three nodes become masters and the last three become their replicas; --cluster-replicas 1 means each master is assigned one replica:
kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.244.1.54:6379 to 10.244.1.52:6379
Adding replica 10.244.2.78:6379 to 10.244.2.76:6379
Adding replica 10.244.2.77:6379 to 10.244.1.53:6379
M: d832fbf2d7de1d1411098e361cc273cde5e72ea3 10.244.1.52:6379
slots:[0-5460] (5461 slots) master
M: 0520220e916de7971c12172da5831e03f2778e1b 10.244.2.76:6379
slots:[5461-10922] (5462 slots) master
M: 5e992231b11bdfcd881b20a27ea8fee41ec7cd6f 10.244.1.53:6379
slots:[10923-16383] (5461 slots) master
S: fc24cefc40824e172a85da56074a759166d5b7f3 10.244.2.77:6379
replicates 5e992231b11bdfcd881b20a27ea8fee41ec7cd6f
S: 885f0e873f2cab6dae7ad044606c142d9caa70be 10.244.1.54:6379
replicates d832fbf2d7de1d1411098e361cc273cde5e72ea3
S: d27eafebdb94928437870d8230ef772bbb8916d4 10.244.2.78:6379
replicates 0520220e916de7971c12172da5831e03f2778e1b
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 10.244.1.52:6379)
M: d832fbf2d7de1d1411098e361cc273cde5e72ea3 10.244.1.52:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: d27eafebdb94928437870d8230ef772bbb8916d4 10.244.2.78:6379
slots: (0 slots) slave
replicates 0520220e916de7971c12172da5831e03f2778e1b
M: 5e992231b11bdfcd881b20a27ea8fee41ec7cd6f 10.244.1.53:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 0520220e916de7971c12172da5831e03f2778e1b 10.244.2.76:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: fc24cefc40824e172a85da56074a759166d5b7f3 10.244.2.77:6379
slots: (0 slots) slave
replicates 5e992231b11bdfcd881b20a27ea8fee41ec7cd6f
S: 885f0e873f2cab6dae7ad044606c142d9caa70be 10.244.1.54:6379
slots: (0 slots) slave
replicates d832fbf2d7de1d1411098e361cc273cde5e72ea3
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
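redis-cli --cluster create finishes with its own consistency check, and you can re-run the same audit at any time against a live node. A sketch, assuming redis-cluster-0 is still running:

# Re-check slot coverage and node agreement, using redis-cluster-0's pod IP.
kubectl exec -it redis-cluster-0 -- redis-cli --cluster check \
  $(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379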
The cluster is now deployed and the hash slots are assigned. Verify that everything works:
# query cluster info
kubectl exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:68
cluster_stats_messages_pong_sent:75
cluster_stats_messages_sent:143
cluster_stats_messages_ping_received:70
cluster_stats_messages_pong_received:68
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:143
# query each node's role
for x in $(seq 0 5); do echo "redis-cluster-$x"; kubectl exec redis-cluster-$x -- redis-cli role; echo; done
redis-cluster-0
master
252
10.244.1.54
6379
252
redis-cluster-1
master
252
10.244.2.78
6379
252
redis-cluster-2
master
252
10.244.2.77
6379
238
redis-cluster-3
slave
10.244.1.53
6379
connected
252
redis-cluster-4
slave
10.244.1.52
6379
connected
252
redis-cluster-5
slave
10.244.2.76
6379
connected
252
We can see three masters and three replicas. Now test writing some data; note that the -c flag makes redis-cli connect in cluster mode, so it follows redirects between nodes:
# write a key
kubectl exec -it redis-cluster-0 -- redis-cli -c set hello world
OK
# read it back from another node
kubectl exec -it redis-cluster-1 -- redis-cli -c get hello
"world"
Drawbacks and Open Problems
The master and replica of the same shard should not run on the same machine: if that machine goes down, the shard's data becomes unavailable. The StatefulSet's pod template can use pod anti-affinity (spec.template.spec.affinity.podAntiAffinity, see the sketch below) to guarantee at most one Redis pod per machine, but machine utilization then drops.
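A minimal sketch of such a constraint, added under the StatefulSet's pod template; it uses the standard podAntiAffinity API, and the topology key assumes the default kubernetes.io/hostname node label:

# In redis-statefulset.yaml, under spec.template.spec:
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      # Never schedule two pods labeled app=redis-cluster on the same node.
      - labelSelector:
          matchLabels:
            app: redis-cluster
        topologyKey: kubernetes.io/hostname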
Because of the gossip protocol's overhead, a single Redis Cluster has limited horizontal scalability: once the cluster grows to a certain number of nodes, topology changes such as electing a new master become noticeably slower. One workaround is to put a proxy in front that logically shards keys across several Redis Cluster deployments.
Not every application has been migrated to Kubernetes yet, so how do we expose a StatefulSet-backed service to applications outside the cluster? The usual answer is a NodePort service; is there a better way?
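For reference, a minimal NodePort sketch (30379 is an arbitrary example port). Note that cluster nodes still announce their pod IPs in redirects, so a NodePort alone is usually only enough for clients that talk to a single node:

# A NodePort service alongside the headless one, for clients outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-external
spec:
  type: NodePort
  ports:
    - port: 6379
      targetPort: 6379
      nodePort: 30379
  selector:
    app: redis-cluster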