First, Discovering the Anomaly
Troubleshooting was straightforward: kubectl describe node <node> revealed the taint, and the required configuration change followed from there.
$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
homelab   Ready    master   23h   v1.17.0
$ kubectl describe node homelab
Name:               homelab
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=homelab
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 28 Dec 2019 13:28:27 -0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
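If all you need is the taint, you don't have to wade through the full describe output; a quick filter like this does the job (homelab is this cluster's node name, so swap in your own):

```shell
# Filter the verbose describe output down to just the Taints line.
# "homelab" is the node name from this cluster; substitute yours.
kubectl describe node homelab | grep '^Taints:'
```

You could also query the API directly with kubectl get node homelab -o jsonpath='{.spec.taints}', which prints the taints from the node's spec.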
The Official Description
And there it is – by default, kubeadm init configured this node as a Kubernetes master, which would normally take care of managing other Kubernetes "worker" (or "non-master") nodes. The Kubernetes Concepts documentation describes the distinction between the Kubernetes master and non-master nodes as follows:
The Kubernetes Master is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are: kube-apiserver, kube-controller-manager and kube-scheduler. Each individual non-master node in your cluster runs two processes: kubelet, which communicates with the Kubernetes Master, and kube-proxy, a network proxy which reflects Kubernetes networking services on each node.
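On a kubeadm-built cluster, those three master processes actually run as static pods in the kube-system namespace, so you can see them for yourself. A quick check, assuming kubeadm's default pod naming (component name suffixed with the node name):

```shell
# List the control-plane processes, which kubeadm runs as static pods
# in the kube-system namespace alongside cluster add-ons like coredns.
kubectl get pods -n kube-system -o name | grep -E 'kube-(apiserver|controller-manager|scheduler)'
```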
So anyway, as soon as I saw node-role.kubernetes.io/master:NoSchedule I began nodding my head, realizing what the issue was. One Google search returned me straight back to the very installation guide I had skimmed over, and the instruction I had skipped:
Control plane node isolation
By default, your cluster will not schedule pods on the control-plane node for security reasons. If you want to be able to schedule pods on the control-plane node, e.g. for a single-machine Kubernetes cluster for development, run:

kubectl taint nodes --all node-role.kubernetes.io/master-
With output looking something like:
node "test-01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found
This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the control-plane node, meaning that the scheduler will then be able to schedule pods everywhere.
So... one quick kubectl taint nodes --all node-role.kubernetes.io/master- command later, and my single-node K8s cluster was now actually useful for running pods!
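It's also easy to double-check that the taint is really gone after running that command. A sketch, assuming the node is named homelab as above:

```shell
# The trailing "-" on the taint key tells kubectl to remove that taint.
kubectl taint nodes --all node-role.kubernetes.io/master-

# Verify the change: the Taints line should now show <none>.
kubectl describe node homelab | grep '^Taints:'
```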
NOTE: there's a LOT more output from kubectl describe node <node> than this; I've trimmed the rest for brevity, since all we needed was the clue about the configured Taints.