Installing a Kubernetes Cluster with yum (single master, two worker nodes, Aliyun mirror sources)

Published: 2025/1/21

Installing a Kubernetes cluster with yum (single-master topology)

I. Environment preparation

1. System requirements

Three pay-as-you-go Aliyun ECS instances.

Requirement: CentOS 7.6–7.8. The table below shows the compatibility check results from https://kuboard.cn/install/install-k8s.html#%E6%A3%80%E6%9F%A5-centos-hostname .

CentOS version | Compatible with this guide? | Notes
7.8 | 😄 | verified
7.7 | 😄 | verified
7.6 | 😄 | verified
7.5 | 😞 | confirmed: kubelet fails to start
7.4 | 😞 | confirmed: kubelet fails to start
7.3 | 😞 | confirmed: kubelet fails to start
7.2 | 😞 | confirmed: kubelet fails to start

2. Prerequisites (all nodes)

  • CentOS 7.6–7.8 (see the compatibility table above); at least 2 CPU cores and at least 4 GB of RAM
  • hostname is not localhost and contains no underscores, dots, or uppercase letters
  • every node has a fixed internal IP address (all cluster machines on the same internal network)
  • all node IP addresses are mutually reachable without NAT, with no firewall or security-group isolation between them
  • no node runs containers directly with docker run or docker-compose; all workloads run as Kubernetes-managed Pods
# Disable the firewall (alternatively, open the required ports in the Aliyun security group):
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

# Disable swap:
swapoff -a                              # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab     # permanent

# Pass bridged IPv4 traffic to the iptables chains by editing /etc/sysctl.conf.
# If the keys already exist, modify them in place:
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf

# If the keys are missing, append them:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf

# Apply the settings:
sysctl -p
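Since a single missed sysctl key is a common cause of kubeadm preflight failures, a quick sanity check can be run after `sysctl -p`. The loop below is my own addition, not part of the original steps, and only covers the IPv4/bridge keys:

```shell
# Verify that the kernel parameters set above actually took effect.
# Prints FAIL for any key whose live value differs from the expected one.
for kv in net.ipv4.ip_forward=1 \
          net.bridge.bridge-nf-call-iptables=1 \
          net.bridge.bridge-nf-call-ip6tables=1; do
    key=${kv%%=*}      # part before '='
    want=${kv##*=}     # part after '='
    got=$(sysctl -n "$key" 2>/dev/null)
    if [ "$got" = "$want" ]; then
        echo "OK   $key=$got"
    else
        echo "FAIL $key=$got (expected $want)"
    fi
done
```

Note that the bridge-nf keys only exist once the br_netfilter kernel module is loaded.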

II. Install Docker (all nodes)

# 1. Install Docker
## 1.1 Remove old versions
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine

## 1.2 Install base dependencies
yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2

## 1.3 Configure the Docker yum repository
sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

## 1.4 Install and start Docker
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8 containerd.io
systemctl enable docker
systemctl start docker

## 1.5 Configure a registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://t1gbabbr.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
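To confirm Docker came up with the mirror applied, the checks below can be run. These are my own addition: the JSON check assumes python3 is installed on the host, and the mirror URL is the one written above.

```shell
# Is the daemon running?
systemctl is-active docker                  # should print: active

# Did dockerd pick up the registry mirror?
docker info 2>/dev/null | grep -A1 'Registry Mirrors'

# A malformed daemon.json prevents dockerd from starting at all,
# so validating its syntax is a cheap safeguard:
python3 -c 'import json; json.load(open("/etc/docker/daemon.json")); print("daemon.json OK")'
```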

III. Install Kubernetes

1. Install kubelet, kubeadm, and kubectl (all nodes)

# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove old versions
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm, kubectl
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3

# Enable kubelet at boot and start it
systemctl enable kubelet && systemctl start kubelet
# Note: if you check kubelet's status now, it restarts endlessly while it
# waits for cluster commands and initialization. This is expected.
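A quick way to confirm the right versions landed, and to observe the expected kubelet restart loop (my own check, not from the original steps):

```shell
kubeadm version -o short            # expect: v1.17.3
kubectl version --client --short    # expect: Client Version: v1.17.3
systemctl is-enabled kubelet        # expect: enabled

# Until kubeadm init/join runs, kubelet restarts continuously; its status
# line shows "activating (auto-restart)" rather than "active (running)":
systemctl status kubelet | grep 'Active:'
```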

2. Initialize the master node (master only)

# 1. (Optional) Pre-pull the images the master needs.
# Create a .sh file with the following content and run it:
#!/bin/bash
images=(
    kube-apiserver:v1.17.3
    kube-proxy:v1.17.3
    kube-controller-manager:v1.17.3
    kube-scheduler:v1.17.3
    coredns:1.6.5
    etcd:3.4.3-0
    pause:3.1
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

# 2. Initialize the master node
kubeadm init \
    --apiserver-advertise-address=172.26.165.243 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --kubernetes-version v1.17.3 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=192.168.0.0/16
# --service-cidr is the virtual-IP range for Services; --pod-network-cidr is
# the range Pods draw their IPs from. Every Pod gets its own IP address, and
# Pods are routable to each other across the whole cluster.

# 3. Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# 4. Save the join command printed by kubeadm init
kubeadm join 172.26.165.243:6443 --token afb6st.b7jz45ze7zpg65ii \
    --discovery-token-ca-cert-hash sha256:e5e5854508dafd04f0e9cf1f502b5165e25ff3017afd23cade0fe6acb5bc14ab

# 5. Deploy the network plugin (Calico)
# Either upload the manifest and apply it locally:
# kubectl apply -f calico-3.13.1.yaml
# or, when the network is good, apply it directly:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# If the download is slow, pre-pull the Calico images:
# image: calico/cni:v3.14.0
# image: calico/pod2daemon-flexvol:v3.14.0
# image: calico/node:v3.14.0
# image: calico/kube-controllers:v3.14.0

# 6. Watch the system pods until they are all ready
watch kubectl get pod -n kube-system -o wide
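If the discovery hash saved in step 4 is ever lost, it can be recomputed from the cluster CA certificate. This is the standard openssl pipeline from the kubeadm documentation (it assumes an RSA CA key, which kubeadm generates by default):

```shell
# Check that the master registers (it reports Ready once Calico is up):
kubectl get nodes

# Recompute the --discovery-token-ca-cert-hash value from the CA cert:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
```

The printed 64-character hex digest is what goes after `sha256:` in the join command.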

3. Join the worker nodes to the cluster

# 1. Join using the command the master just printed
kubeadm join 172.26.248.150:6443 --token ktnvuj.tgldo613ejg5a3x4 \
    --discovery-token-ca-cert-hash sha256:f66c496cf7eb8aa06e1a7cdb9b6be5b013c613cdcf5d1bbd88a6ea19a2b454ec

# 2. If more than 2 hours have passed and the token was lost:
kubeadm token create --print-join-command          # print a new join command
kubeadm token create --ttl 0 --print-join-command  # create a token that never expires
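After the workers join, everything can be verified from the master. The awk one-liner is my own convenience for counting Ready nodes:

```shell
# List all nodes; with one master and two workers, three entries should
# eventually show STATUS Ready:
kubectl get nodes -o wide

# Count how many are Ready (column 2 of the headerless output):
kubectl get nodes --no-headers | awk '$2=="Ready"{n++} END{print n" node(s) Ready"}'

# Confirm the bootstrap token the workers used is still listed:
kubeadm token list
```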

4. Set up NFS as the default StorageClass

4.1 Configure the NFS server

yum install -y nfs-utils

# Create the exports file (equivalent to running `vi /etc/exports` and typing
# the line by hand):
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
# Alternatively, restrict access to a subnet:
# /nfs/data 172.26.248.0/20(rw,no_root_squash)

# Create the shared directory and start the nfs services:
mkdir -p /nfs/data
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -r

# Check that the export is in effect:
exportfs
# Expected output:
# /nfs/data
#         <world>

# Test a Pod mounting the NFS share directly:
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs
  namespace: default
spec:
  volumes:
    - name: html
      nfs:
        path: /nfs/data               # 1000G
        server: <your NFS server address>
  containers:
    - name: myapp
      image: nginx
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/

4.2 Set up the NFS client

# On the server side, allow TCP/UDP ports 111, 662, 875, 892, and 2049
# through the firewall, otherwise remote clients cannot connect.

# Install the client tools
yum install -y nfs-utils

# Check which directories the NFS server exports
# showmount -e <nfs-server-IP>
showmount -e 172.26.165.243
# Expected output:
# Export list for 172.26.165.243
# /nfs/data *

# Mount the server's shared directory to the local path /root/nfsmount
mkdir /root/nfsmount
# mount -t nfs <nfs-server-IP>:/root/nfs_root /root/nfsmount   # HA backup variant
mount -t nfs 172.26.165.243:/nfs/data /root/nfsmount

# Write a test file
echo "hello nfs server" > /root/nfsmount/test.txt

# On the NFS server, verify the file arrived
cat /nfs/data/test.txt

4.3 Configure dynamic provisioning

4.3.1 Create the provisioner (the NFS environment was set up above)

Field | Value | Notes
Name | nfs-storage | custom name for the storage class
NFS Server | 172.26.165.243 | IP address of the NFS service
NFS Path | /nfs/data | path exported by the NFS service
# First create the RBAC objects
# vi nfs-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
# vi nfs-deployment.yaml : deploy the nfs-client provisioner
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME   # name of the provisioner
              value: storage.pri/nfs   # any name works, but later references must match it
            - name: NFS_SERVER
              value: 172.26.165.243
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.26.165.243
            path: /nfs/data
# In this image the volume's mountPath defaults to /persistentvolumes and must
# not be changed, otherwise the container fails at runtime.

4.3.2 Create the StorageClass

# Create the StorageClass
# vi storageclass-nfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-nfs
provisioner: storage.pri/nfs   # must match the PROVISIONER_NAME set above
reclaimPolicy: Delete

"reclaim policy"有三种方式:Retain、Recycle、Deleted。

  • Retain

    • The PV released by its PVC is kept, along with its data; the PV status changes to "released" and it will not be bound by another PVC. The cluster administrator releases the storage resource manually:
      • Delete the PV. The related backend storage resource (AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists.
      • Manually wipe the data on the backend volume.
      • Delete the backend volume, or reuse it by creating a new PV for it.
  • Delete

    • The PV released by its PVC is deleted together with its backend storage volume. A dynamically provisioned PV inherits its reclaim policy from its StorageClass, and the default is Delete. The cluster administrator is responsible for setting the StorageClass's reclaim policy to whatever users expect; otherwise users have to edit the reclaim policy of each dynamic PV after it is created.
  • Recycle

    • The PV is kept but its data is wiped. Deprecated.
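To illustrate the Retain case above: an existing PV's reclaim policy can be switched with a one-line patch (the PV name below is a placeholder):

```shell
# Switch a PV from Delete to Retain so its data survives PVC deletion.
# Replace <pv-name> with an actual name from `kubectl get pv`.
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# Verify: the RECLAIM POLICY column should now read Retain.
kubectl get pv
```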

4.3.3 Make it the default StorageClass

# Change the cluster's default StorageClass. Background:
# https://kubernetes.io/zh/docs/tasks/administer-cluster/change-default-storage-class/#%e4%b8%ba%e4%bb%80%e4%b9%88%e8%a6%81%e6%94%b9%e5%8f%98%e9%bb%98%e8%ae%a4-storage-class
kubectl patch storageclass storage-nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
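Whether the patch took effect can be checked in two ways (my own verification commands):

```shell
# The default class is flagged "(default)" next to its name:
kubectl get storageclass

# Or read the annotation directly (dots inside the key must be escaped
# in the jsonpath expression):
kubectl get sc storage-nfs \
    -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'
# expected output: true
```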

4.4 Verify NFS dynamic provisioning

4.4.1 Create a PVC

# vi pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim-01
#  annotations:
#    volume.beta.kubernetes.io/storage-class: "storage-nfs"
spec:
  storageClassName: storage-nfs   # must match the StorageClass name exactly
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

4.4.2 Use the PVC

# vi testpod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: pvc-claim-01
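Applying both manifests and then checking three things closes the loop. The per-PVC directory naming below follows the nfs-client provisioner's `<namespace>-<pvc-name>-<pv-name>` convention; treat the exact glob as an assumption:

```shell
kubectl apply -f pvc.yaml -f testpod.yaml

# 1. The claim should bind against a dynamically created PV:
kubectl get pvc pvc-claim-01        # STATUS: Bound

# 2. The pod runs once and exits; its phase should end up Succeeded:
kubectl get pod test-pod -o jsonpath='{.status.phase}'

# 3. On the NFS server, the provisioner made one directory per PVC,
#    and the pod's touch left a SUCCESS file inside it:
ls /nfs/data/default-pvc-claim-01-*/SUCCESS
```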

5. Install metrics-server

# 1. Install metrics-server first (the YAML below already has the image and
# configuration fixed up, so it can be used as-is). This enables resource
# monitoring for pods and nodes. By default only cpu and memory usage is
# reported; for more detailed auditing we will integrate Prometheus later.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
        # mount in tmp so we can safely use from-scratch images and/or read-only containers
        - name: tmp-dir
          emptyDir: {}
      containers:
        - name: metrics-server
          image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
          imagePullPolicy: IfNotPresent
          args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-insecure-tls
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
          ports:
            - name: main-port
              containerPort: 4443
              protocol: TCP
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - name: tmp-dir
              mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
    - port: 443
      protocol: TCP
      targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
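Assuming the manifest above is saved as metrics-server.yaml (the filename is arbitrary), it can be applied and exercised like this:

```shell
kubectl apply -f metrics-server.yaml

# Wait for the deployment to finish rolling out:
kubectl -n kube-system rollout status deployment/metrics-server

# After a minute or two of scraping, the metrics API starts answering:
kubectl top nodes                     # per-node cpu/memory usage
kubectl top pods --all-namespaces     # per-pod usage
```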

Reference link:

https://www.yuque.com/leifengyang/kubesphere/grw8se
