Install Docker
```
yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io
systemctl enable docker
systemctl start docker
```
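The kubeadm preflight check further down warns that Docker is using the cgroupfs cgroup driver while systemd is recommended. A minimal sketch of switching the driver via /etc/docker/daemon.json (optional; as the warning below shows, the cluster still comes up with cgroupfs):

```
# Switch Docker's cgroup driver to systemd, then restart Docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
```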
Add the Kubernetes yum repository
```
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Install kubelet, kubeadm, and kubectl
```
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
```
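kubeadm also expects swap to be off and the bridge netfilter sysctls to be set; the init command later in this post works around swap with --ignore-preflight-errors=Swap instead. A sketch of the usual preparation, in case you prefer not to ignore that preflight check:

```
# Turn off swap (also comment it out of /etc/fstab to make this permanent)
swapoff -a

# Make bridged traffic visible to iptables
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```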
Pull the required images
The following command lists the Docker images needed to install Kubernetes:
```
[root@s1001 ~]# kubeadm config images list
I0423 00:11:57.214027 28540 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0423 00:11:57.214118 28540 version.go:97] falling back to the local client version: v1.14.1
k8s.gcr.io/kube-apiserver:v1.14.1
k8s.gcr.io/kube-controller-manager:v1.14.1
k8s.gcr.io/kube-scheduler:v1.14.1
k8s.gcr.io/kube-proxy:v1.14.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
```
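The "could not fetch a Kubernetes version from the internet" message above comes from kubeadm trying to look up the latest stable release online. Passing the version explicitly should skip that lookup; a sketch:

```
# List images for a specific version without the online version lookup
kubeadm config images list --kubernetes-version v1.14.1
```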
Pulling these images directly requires getting past the firewall, since they live on k8s.gcr.io. If your network does not allow that, you can try the approach below: pull the images from an alternative registry first, then retag them:
```
# Pull the images from mirror repositories
docker pull mirrorgooglecontainers/kube-apiserver:v1.14.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.1
docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1
# Retag them to the names kubeadm expects with docker tag
docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag docker.io/mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
# Remove the now-unneeded source tags with docker rmi
docker rmi docker.io/mirrorgooglecontainers/kube-proxy:v1.14.1
docker rmi docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.1
docker rmi docker.io/mirrorgooglecontainers/kube-apiserver:v1.14.1
docker rmi docker.io/mirrorgooglecontainers/kube-controller-manager:v1.14.1
docker rmi docker.io/mirrorgooglecontainers/etcd:3.3.10
docker rmi docker.io/mirrorgooglecontainers/pause:3.1
docker rmi docker.io/coredns/coredns:1.3.1
```
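The pull/tag/rmi sequence is repetitive, so it can also be rolled into a small shell script. A minimal sketch, assuming the same mirror repositories and tags as above:

```
#!/bin/bash
# Pull the kubeadm images from mirror repos, retag them to k8s.gcr.io, drop the source tags.
set -e

KUBE_VERSION=v1.14.1
images=(
  "mirrorgooglecontainers/kube-apiserver:${KUBE_VERSION}"
  "mirrorgooglecontainers/kube-controller-manager:${KUBE_VERSION}"
  "mirrorgooglecontainers/kube-scheduler:${KUBE_VERSION}"
  "mirrorgooglecontainers/kube-proxy:${KUBE_VERSION}"
  "mirrorgooglecontainers/pause:3.1"
  "mirrorgooglecontainers/etcd:3.3.10"
  "coredns/coredns:1.3.1"
)

for img in "${images[@]}"; do
  # Target name: strip the mirror repo prefix, keep "name:tag" under k8s.gcr.io
  target="k8s.gcr.io/${img##*/}"
  docker pull "${img}"
  docker tag "${img}" "${target}"
  docker rmi "${img}"
done
```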
Now we can look at the images that have been pulled and retagged:
```
[root@s1001 ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.14.1   20a2d7035165   13 days ago     82.1MB
k8s.gcr.io/kube-apiserver            v1.14.1   cfaa4ad74c37   13 days ago     210MB
k8s.gcr.io/kube-controller-manager   v1.14.1   efb3887b411d   13 days ago     158MB
k8s.gcr.io/kube-scheduler            v1.14.1   8931473d5bdb   13 days ago     81.6MB
k8s.gcr.io/coredns                   1.3.1     eb516548c180   3 months ago    40.3MB
k8s.gcr.io/etcd                      3.3.10    2c4adeb21b4f   4 months ago    258MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   16 months ago   742kB
```
Initialize the cluster with kubeadm
```
kubeadm init \
--kubernetes-version=v1.14.1 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=172.16.10.11 \
--ignore-preflight-errors=Swap

[root@s1001 ~]# kubeadm init \
> --kubernetes-version=v1.14.1 \
> --pod-network-cidr=10.244.0.0/16 \
> --apiserver-advertise-address=172.16.10.11 \
> --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [s1001.lab.org kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.10.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [s1001.lab.org localhost] and IPs [172.16.10.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [s1001.lab.org localhost] and IPs [172.16.10.11 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.503276 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node s1001.lab.org as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node s1001.lab.org as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5mdjxl.o168za45ik8stuv5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.10.11:6443 --token 5mdjxl.o168za45ik8stuv5 \
    --discovery-token-ca-cert-hash sha256:9d468f8a2327bd49105326b8e2056723207f6c079d95d10a0b708dff068ba9b2
```
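Before running kubectl in the next steps, set up the admin kubeconfig exactly as the init output above suggests:

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```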
The kubeadm join command printed at the end is what you run on other worker nodes to join them to the cluster. The token is valid for 24 hours by default and cannot be used after it expires. The fix is to generate a new one:
```
# Generate a new token
kubeadm token create
```
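If you need the full join command (token plus CA certificate hash) rather than just a fresh token, kubeadm can print it directly; the CA hash used by --discovery-token-ca-cert-hash can also be recomputed with openssl. A sketch:

```
# Create a token and print the matching kubeadm join command
kubeadm token create --print-join-command

# Or recompute the CA cert hash on the master
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```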
Check the node status
```
[root@s1001 ~]# kubectl get node
NAME            STATUS     ROLES    AGE   VERSION
s1001.lab.org   NotReady   master   23h   v1.14.1
```
Right after installation the node may still be NotReady. Run journalctl -f -u kubelet and inspect the logs to find the cause.
To follow a log as it is being written, use the -f flag; like tail -f, it keeps watching until you stop it (journalctl -f). The most useful filter is usually by unit, which the -u option provides (journalctl -u). So the command we end up with is journalctl -f -u kubelet.
The output looks like this:
```
[root@s1001 ~]# journalctl -f -u kubelet
-- Logs begin at Sun 2019-04-21 17:03:17 CST. --
Apr 24 00:16:19 s1001.lab.org kubelet[30722]: I0424 00:16:19.410458 30722 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/20b7a920-65e3-11e9-a6ba-000c29a72674-cni") pod "kube-flannel-ds-amd64-zj4gw" (UID: "20b7a920-65e3-11e9-a6ba-000c29a72674")
Apr 24 00:16:19 s1001.lab.org kubelet[30722]: I0424 00:16:19.410499 30722 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/20b7a920-65e3-11e9-a6ba-000c29a72674-flannel-cfg") pod "kube-flannel-ds-amd64-zj4gw" (UID: "20b7a920-65e3-11e9-a6ba-000c29a72674")
Apr 24 00:16:19 s1001.lab.org kubelet[30722]: W0424 00:16:19.757355 30722 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 24 00:16:19 s1001.lab.org kubelet[30722]: E0424 00:16:19.831123 30722 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr 24 00:16:24 s1001.lab.org kubelet[30722]: W0424 00:16:24.758221 30722 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 24 00:16:24 s1001.lab.org kubelet[30722]: E0424 00:16:24.832041 30722 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr 24 00:16:29 s1001.lab.org kubelet[30722]: W0424 00:16:29.758408 30722 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 24 00:16:29 s1001.lab.org kubelet[30722]: E0424 00:16:29.832984 30722 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr 24 00:16:34 s1001.lab.org kubelet[30722]: W0424 00:16:34.758573 30722 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 24 00:16:34 s1001.lab.org kubelet[30722]: E0424 00:16:34.833905 30722 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr 24 00:16:39 s1001.lab.org kubelet[30722]: W0424 00:16:39.758748 30722 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 24 00:16:39 s1001.lab.org kubelet[30722]: E0424 00:16:39.835093 30722 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
^C
```
The logs show that the problem is the missing CNI network plugin, so install one (flannel here):
```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
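Flannel's manifest defaults to the 10.244.0.0/16 pod network, which is why the kubeadm init above used --pod-network-cidr=10.244.0.0/16. Before re-checking the node you can watch the flannel and CoreDNS pods come up, for example:

```
# Wait until kube-flannel-ds-* and coredns-* pods are Running
kubectl -n kube-system get pods -o wide
```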
Check the node status again:
```
[root@s1001 ~]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
s1001.lab.org   Ready    master   23h   v1.14.1
```
Problem solved; a simple single-node Kubernetes cluster is now ready to play with.
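Note that kubeadm tainted the master with node-role.kubernetes.io/master:NoSchedule (see the init output above), so on this single-node cluster ordinary pods will not be scheduled until the taint is removed. A sketch of removing it and running a quick smoke test (the nginx deployment is just an arbitrary example):

```
# Allow workloads to run on the master (single-node clusters only)
kubectl taint nodes --all node-role.kubernetes.io/master-

# Quick smoke test: run an nginx pod and check that it gets scheduled
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
```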