Single-node k8s deployment

Date: 2022-07-22
This article walks through deploying k8s on a single node, covering the installation steps, configuration details, and points to watch out for.

Install docker-ce
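If docker-ce is not installed yet, on CentOS 7 (which the kubernetes-el7 yum repo below implies) the usual install is roughly:

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io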

systemctl enable docker
systemctl start docker

Change the Cgroup Driver to systemd (the default is cgroupfs):

echo 'KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=systemd' > /etc/default/kubelet
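This only sets the kubelet side. If docker info still reports Cgroup Driver: cgroupfs, the usual companion step is to switch Docker itself to systemd as well; a minimal sketch, assuming /etc/docker/daemon.json does not exist yet:

cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF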

Restart Docker:

systemctl restart docker

Temporarily turn off swap:

swapoff -a
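swapoff -a only lasts until the next reboot. To keep swap disabled permanently, the swap entry in /etc/fstab also needs to be commented out, for example with this common one-liner (check the result if your fstab has other lines containing the word swap):

sed -ri 's/.*swap.*/#&/' /etc/fstab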

Configure the yum repository:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the pinned version:

yum clean all

yum makecache

yum -y install kubelet-1.18.3 kubectl-1.18.3 kubeadm-1.18.3
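If those exact 1.18.3 packages cannot be found, you can list the versions the mirror actually provides and adjust accordingly:

yum list kubelet kubeadm kubectl --showduplicates | sort -r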

Several required images cannot be pulled from Google directly, so we pull them from Docker Hub's default registry (the mirrorgcrio mirror) and re-tag them to avoid having to reach k8s.gcr.io:

docker pull mirrorgcrio/kube-apiserver:v1.18.3
docker pull mirrorgcrio/kube-controller-manager:v1.18.3
docker pull mirrorgcrio/kube-scheduler:v1.18.3
docker pull mirrorgcrio/kube-proxy:v1.18.3
docker pull mirrorgcrio/pause:3.2
docker pull mirrorgcrio/etcd:3.4.3-0
docker pull mirrorgcrio/coredns:1.6.7
docker tag mirrorgcrio/kube-apiserver:v1.18.3 k8s.gcr.io/kube-apiserver:v1.18.3
docker tag mirrorgcrio/kube-controller-manager:v1.18.3 k8s.gcr.io/kube-controller-manager:v1.18.3
docker tag mirrorgcrio/kube-scheduler:v1.18.3 k8s.gcr.io/kube-scheduler:v1.18.3
docker tag mirrorgcrio/kube-proxy:v1.18.3 k8s.gcr.io/kube-proxy:v1.18.3
docker tag mirrorgcrio/pause:3.2 k8s.gcr.io/pause:3.2
docker tag mirrorgcrio/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag mirrorgcrio/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker image rm mirrorgcrio/kube-apiserver:v1.18.3
docker image rm mirrorgcrio/kube-controller-manager:v1.18.3
docker image rm mirrorgcrio/kube-scheduler:v1.18.3
docker image rm mirrorgcrio/kube-proxy:v1.18.3
docker image rm mirrorgcrio/pause:3.2
docker image rm mirrorgcrio/etcd:3.4.3-0
docker image rm mirrorgcrio/coredns:1.6.7
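The pull/tag/cleanup sequence above can also be written as a short loop so the image list and version only appear once; a sketch using the same mirrorgcrio images:

for img in kube-apiserver:v1.18.3 kube-controller-manager:v1.18.3 kube-scheduler:v1.18.3 \
           kube-proxy:v1.18.3 pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
    # pull from the mirror, re-tag as k8s.gcr.io, then drop the mirror tag
    docker pull mirrorgcrio/$img
    docker tag mirrorgcrio/$img k8s.gcr.io/$img
    docker image rm mirrorgcrio/$img
done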

systemctl enable kubelet 

At this point kubectl/kubelet will still report errors because no cluster exists yet, which is expected; go straight to initializing the cluster:

kubeadm init --kubernetes-version=v1.18.3 --pod-network-cidr=10.244.0.0/16
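If kubeadm init complains about missing images, you can cross-check the exact names and tags it expects against what was pulled above:

kubeadm config images list --kubernetes-version v1.18.3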

Set up kubeconfig for the current user, then check that the pods come up:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get pods --all-namespaces

By default the Kubernetes master does not schedule user Pods. Since this is a single-node cluster, remove the taint so the master can run Pods:

kubectl taint nodes --all node-role.kubernetes.io/master-
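To confirm the taint is gone before scheduling anything:

kubectl describe nodes | grep -i taints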

kubectl get pods --all-namespaces
mkdir -p /etc/cni/net.d/
cat << EOF > /etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}
EOF
mkdir -p /usr/share/oci-umount/oci-umount.d
mkdir /run/flannel/
cat << EOF > /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
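After applying the manifest, wait for the flannel pod to reach Running before expecting the node to go Ready (the namespace depends on the flannel manifest version; older ones deploy into kube-system):

kubectl get pods --all-namespaces -o wide | grep flannel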

Check the CNI conflist and make sure it looks like this:

vim /etc/cni/net.d/10-flannel.conflist

{
  "name": "cbr0",
  "cniVersion": "0.2.0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

systemctl daemon-reload
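If the CNI config was edited after the cluster was initialized, a kubelet restart is usually also needed for the change to take effect:

systemctl restart kubelet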

Install Kuboard

kubectl apply -f https://kuboard.cn/install-script/v1.18.3/nginx-ingress.yaml
kubectl apply -f https://kuboard.cn/install-script/v1.18.x/nginx-ingress.yaml
kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f https://addons.kuboard.cn/metrics-server/0.3.6/metrics-server.yaml
kubectl get pods -l k8s.kuboard.cn/name=kuboard -n kube-system
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)
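The last command prints the token used to log in to the Kuboard web UI. To find the NodePort the kuboard Service is exposed on (it lives in kube-system, same as the pods above):

kubectl get svc -n kube-system | grep kuboard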