Basic K8s Operation Commands

1. Namespace operations

1) List all API group names/versions in the cluster:

kubectl api-versions
Output:
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
discovery.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1

2) List the cluster's namespaces:

kubectl get namespaces
Output:
NAME              STATUS   AGE
default           Active   2d20h
kube-node-lease   Active   2d20h
kube-public       Active   2d20h
kube-system       Active   2d20h

3) Create a new namespace:

kubectl create namespace new-space-name
In all of the above commands, namespace can be abbreviated to ns, e.g.: kubectl create ns new-space-name
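A namespace can also be defined declaratively in a manifest and applied; a minimal sketch (the file name namespace.yaml is just an example):

apiVersion: v1
kind: Namespace
metadata:
  name: new-space-name

kubectl apply -f namespace.yaml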

4) List all pods in all namespaces of the cluster:

kubectl get pod --all-namespaces
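On recent kubectl versions, --all-namespaces can be shortened to -A:
kubectl get pod -A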

2. Pod operations

1) Create a pod from a configuration manifest:

kubectl apply -f new-pod.yaml
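For reference, a minimal sketch of what a manifest like new-pod.yaml might contain (the pod name new-pod matches output shown later in this article; the image used here is only an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: new-pod
spec:
  containers:
  - name: demo                           # container name, arbitrary
    image: maxidea/flask-demo-app:v1.0   # assumed image for illustration
    ports:
    - containerPort: 80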

2) Delete a pod:

kubectl delete pods pod-name
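If the pod was created from a manifest, it can also be deleted by pointing at the same file, e.g.:
kubectl delete -f new-pod.yaml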

3) List pods:

kubectl get pods

4) Create a pod and a stateless application (Deployment) directly from an image:

kubectl create deployment test1 --image=maxidea/flask-demo-app:v1.0

5) Check the corresponding pod and Deployment:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
test1-86d54d9655-q9lv5   1/1     Running   0          5m15s

$ kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
test1   1/1     1            1           5m30s
To access this service, first look up its IP address; the output also shows which node the pod was scheduled onto (here it landed on worker node 36). The command format is:
kubectl get pods -o wide
$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE   NOMINATED NODE   READINESS GATES
test1-86d54d9655-q9lv5   1/1     Running   0          7m40s   10.244.2.2   36     <none>           <none>

$ curl 10.244.2.2
flask-demo-app v1.0 / ClientIP: 10.244.0.0, ServerName: test1-86d54d9655-q9lv5, ServerIP: 10.244.2.2!

6) View the configuration manifest that would be generated (dry run):

$ kubectl create deployment test2 --image=maxidea/flask-demo-app:v1.0 --dry-run=client -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: test2
  name: test2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test2
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test2
    spec:
      containers:
      - image: maxidea/flask-demo-app:v1.0
        name: flask-demo-app
        resources: {}
status: {}
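The same dry-run output can be redirected to a file and then applied, which is a common way to bootstrap a manifest (the file name test2.yaml is arbitrary):

kubectl create deployment test2 --image=maxidea/flask-demo-app:v1.0 --dry-run=client -o yaml > test2.yaml
kubectl apply -f test2.yaml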

7) View pod labels:

kubectl get pods --show-labels
NAME                     READY   STATUS    RESTARTS   AGE     LABELS
new-pod                  1/1     Running   0          15h     <none>
test1-86d54d9655-q9lv5   1/1     Running   0          18h     app=test1,pod-template-hash=86d54d9655
test2-69444f54b-4phj2    1/1     Running   0          7m40s   app=test2,pod-template-hash=69444f54b

8) View pod resource usage in a specific namespace:

kubectl top pods -n [namespace]
For example:
# kubectl top pods -n momtest
NAME      CPU(cores)   MEMORY(bytes)
r-49b86   132m         249Mi
r-569wf   177m         353Mi
r-5g9jp   168m         284Mi
r-6sp9t   164m         237Mi
r-859fw   154m         263Mi
r-8ggdz   183m         260Mi
To auto-refresh the resource usage of a specific pod:
watch 'kubectl top pods -n momtest r-vqnc2'

9) View the resource usage of each node:

kubectl top node
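Note that kubectl top relies on metrics-server running in the cluster. If it is missing, it can usually be installed with something along these lines (check the metrics-server project for the current manifest URL):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml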

3. Service operations

1) Create a service:

kubectl create service clusterip test1 --tcp=80
Note that if you want the pods to be associated with this service automatically, the service name must match the Deployment name (here test1), because the selector generated by kubectl create service is app=test1.
Check the newly created service:
$ kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   3d4h
test1        ClusterIP   10.102.176.227   <none>        80/TCP    18s
View detailed information about the service:
$ kubectl describe service test1
Name:              test1
Namespace:         default
Labels:            app=test1
Annotations:       <none>
Selector:          app=test1
Type:              ClusterIP
IP:                10.102.176.227
Port:              80  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.2.2:80
Session Affinity:  None
Events:            <none>
Here you can see that the pod test1 is already listed as an Endpoint of the service.
From another pod in the same cluster (enter one with kubectl exec -it pod_name -- /bin/sh), you can access the service by its DNS name. Note the name structure: [service name].[namespace].svc.[cluster domain]. The service's IP address, 10.102.176.227, also works:
# curl test1.default.svc.cluster.local
flask-demo-app v1.0 / ClientIP: 10.244.1.3, ServerName: test1-86d54d9655-q9lv5, ServerIP: 10.244.2.2!
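As an alternative to kubectl create service, an existing Deployment can be exposed directly; this also creates a ClusterIP service whose selector matches the Deployment's pods:

kubectl expose deployment test1 --port=80 --target-port=80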

2) Scale the number of pods behind the service up or down:

For example, to scale the number of test1 replicas up (or down) to three, the same command works either way; just specify the desired count:
kubectl scale deployment/test1 --replicas=3
$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
new-pod                  1/1     Running   0          18h   10.244.1.3   35     <none>           <none>
test1-86d54d9655-c9h5m   1/1     Running   0          84s   10.244.1.4   35     <none>           <none>
test1-86d54d9655-htbkd   1/1     Running   0          84s   10.244.3.4   37     <none>           <none>
test1-86d54d9655-q9lv5   1/1     Running   0          21h   10.244.2.2   36     <none>           <none>
You can see that two more identical pods were created automatically, spread across the three worker nodes. Repeated requests to the service are now load-balanced across all three pods:
flask-demo-app v1.0 / ClientIP: 10.244.1.3, ServerName: test1-86d54d9655-htbkd, ServerIP: 10.244.3.4!
flask-demo-app v1.0 / ClientIP: 10.244.1.3, ServerName: test1-86d54d9655-q9lv5, ServerIP: 10.244.2.2!
flask-demo-app v1.0 / ClientIP: 10.244.1.3, ServerName: test1-86d54d9655-c9h5m, ServerIP: 10.244.1.4!
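Scaling can also be automated with a HorizontalPodAutoscaler instead of a fixed replica count (this relies on metrics-server); a sketch with assumed thresholds:

kubectl autoscale deployment test1 --min=1 --max=5 --cpu-percent=80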

3) The service's iptables rules:

Whenever the cluster creates a new pod, corresponding new rules are added to the iptables of the node hosts, for example:
# iptables -t nat -S KUBE-SERVICES | grep test1
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.102.176.227/32 -p tcp -m comment --comment "default/test1:80 cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.102.176.227/32 -p tcp -m comment --comment "default/test1:80 cluster IP" -m tcp --dport 80 -j KUBE-SVC-VRCN2UH6RARGOJZA

# iptables -t nat -S KUBE-SVC-VRCN2UH6RARGOJZA
-N KUBE-SVC-VRCN2UH6RARGOJZA
-A KUBE-SVC-VRCN2UH6RARGOJZA -m comment --comment "default/test1:80" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-M4LMZLRGTFEOZERX
-A KUBE-SVC-VRCN2UH6RARGOJZA -m comment --comment "default/test1:80" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-LL5YVFAV2NKPJ2W3
-A KUBE-SVC-VRCN2UH6RARGOJZA -m comment --comment "default/test1:80" -j KUBE-SEP-OSFWLWIVOCAZBH6N

# iptables -t nat -S KUBE-SEP-M4LMZLRGTFEOZERX
-N KUBE-SEP-M4LMZLRGTFEOZERX
-A KUBE-SEP-M4LMZLRGTFEOZERX -s 10.244.1.4/32 -m comment --comment "default/test1:80" -j KUBE-MARK-MASQ
-A KUBE-SEP-M4LMZLRGTFEOZERX -p tcp -m comment --comment "default/test1:80" -m tcp -j DNAT [unsupported revision]
The complete set of iptables rules is illustrated in the diagram below.
Source: https://github.com/cilium/k8s-iptables-diagram

4) The service's IPVS rules:

For clusters that use IPVS, the rules above look as follows. Compared with iptables, IPVS is simpler and more efficient. (IPVS mode is enabled by changing the mode field in the kube-proxy ConfigMap to "ipvs"; see section 4.2 below.)
On the worker nodes, install ipvsadm first: apt install ipvsadm
# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.2.31:6443            Masq    1      0          0
  -> 192.168.2.32:6443            Masq    1      0          0
  -> 192.168.2.33:6443            Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0
  -> 10.244.0.3:9153              Masq    1      0          0
TCP  10.102.176.227:80 rr
  -> 10.244.1.4:80                Masq    1      0          0
  -> 10.244.2.2:80                Masq    1      0          0
  -> 10.244.3.4:80                Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0

4. ConfigMap operations

1) List ConfigMap resources (in the kube-system namespace):

$ kubectl get cm -n kube-system
NAME                                 DATA   AGE
coredns                              1      3d23h
extension-apiserver-authentication   6      3d23h
kube-flannel-cfg                     2      3d15h
kube-proxy                           2      3d23h
kubeadm-config                       2      3d23h
kubelet-config-1.18                  1      3d23h

2) Edit kube-proxy:

kubectl edit cm kube-proxy -n kube-system
-n kube-system specifies the namespace.
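For example, to switch the proxy mode to IPVS (as mentioned in section 3.4), the mode field inside the embedded config.conf is what gets changed; a rough sketch of the relevant fragment (other fields omitted):

data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"    # an empty string means the default iptables mode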
After editing kube-proxy, the existing kube-proxy pods under kube-system must be deleted so that they are regenerated with the new configuration (there are 6 of them here, one per master/worker node in this test cluster).
Batch deletion can be done via labels:
$ kubectl get pods -n kube-system --show-labels
NAME                          READY   STATUS    RESTARTS   AGE     LABELS
coredns-7ff77c879f-7t8nw      1/1     Running   0          3d23h   k8s-app=kube-dns,pod-template-hash=7ff77c879f
coredns-7ff77c879f-bpkwh      1/1     Running   0          3d23h   k8s-app=kube-dns,pod-template-hash=7ff77c879f
etcd-31                       1/1     Running   2          3d23h   component=etcd,tier=control-plane
etcd-32                       1/1     Running   0          2d14h   component=etcd,tier=control-plane
etcd-33                       1/1     Running   0          2d14h   component=etcd,tier=control-plane
kube-apiserver-31             1/1     Running   2          3d23h   component=kube-apiserver,tier=control-plane
kube-apiserver-32             1/1     Running   0          2d14h   component=kube-apiserver,tier=control-plane
kube-apiserver-33             1/1     Running   0          2d14h   component=kube-apiserver,tier=control-plane
kube-controller-manager-31    1/1     Running   3          3d23h   component=kube-controller-manager,tier=control-plane
kube-controller-manager-32    1/1     Running   1          2d14h   component=kube-controller-manager,tier=control-plane
kube-controller-manager-33    1/1     Running   0          2d14h   component=kube-controller-manager,tier=control-plane
kube-flannel-ds-amd64-76cdz   1/1     Running   0          3d15h   app=flannel,controller-revision-hash=56c5465959,pod-template-generation=1,tier=node
kube-flannel-ds-amd64-ct8v2   1/1     Running   0          2d17h   app=flannel,controller-revision-hash=56c5465959,pod-template-generation=1,tier=node
kube-flannel-ds-amd64-h6m9w   1/1     Running   0          2d17h   app=flannel,controller-revision-hash=56c5465959,pod-template-generation=1,tier=node
kube-flannel-ds-amd64-l6rfc   1/1     Running   0          2d14h   app=flannel,controller-revision-hash=56c5465959,pod-template-generation=1,tier=node
kube-flannel-ds-amd64-nm5rl   1/1     Running   0          2d17h   app=flannel,controller-revision-hash=56c5465959,pod-template-generation=1,tier=node
kube-flannel-ds-amd64-qlv78   1/1     Running   0          2d14h   app=flannel,controller-revision-hash=56c5465959,pod-template-generation=1,tier=node
kube-proxy-27226              1/1     Running   0          2d14h   controller-revision-hash=55877fc8b6,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-5qmnq              1/1     Running   0          2d17h   controller-revision-hash=55877fc8b6,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-79vk2              1/1     Running   0          2d17h   controller-revision-hash=55877fc8b6,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-hwfss              1/1     Running   2          3d23h   controller-revision-hash=55877fc8b6,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-ms2lx              1/1     Running   0          2d17h   controller-revision-hash=55877fc8b6,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-z68st              1/1     Running   0          2d14h   controller-revision-hash=55877fc8b6,k8s-app=kube-proxy,pod-template-generation=1
kube-scheduler-31             1/1     Running   4          3d23h   component=kube-scheduler,tier=control-plane
kube-scheduler-32             1/1     Running   0          2d14h   component=kube-scheduler,tier=control-plane
kube-scheduler-33             1/1     Running   1          2d14h   component=kube-scheduler,tier=control-plane
All six kube-proxy pods carry the same label, k8s-app=kube-proxy, so, just like the pod deletion in section 2.2, they can be removed in one batch by adding a label selector:
kubectl delete pods -l k8s-app=kube-proxy -n kube-system
Once the kube-proxy pods are deleted, new ones are regenerated from the modified configuration manifest.
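To confirm that the replacement pods come back up with the new configuration, you can watch them being recreated:

kubectl get pods -n kube-system -l k8s-app=kube-proxy -w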

5. Deleting deployments/rc/rs

If you delete the two test pods above directly with kubectl delete pod, new pods will reappear after a while. The reason is that when those pods were created, Kubernetes also created a corresponding Deployment and ReplicaSet to manage the pods' lifecycle. To remove the pods for good, the corresponding deployment/rc/rs must be deleted first.
Check whether any Deployments were created: kubectl get deployments
Check whether any ReplicationControllers were created: kubectl get rc
Check whether any ReplicaSets were created: kubectl get rs
Delete the corresponding Deployment: kubectl delete deployment test1
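Putting it together, a full clean-up of the test resources used in this article might look like the following sketch (deleting a Deployment also removes its ReplicaSet and pods; the service and the standalone pod are deleted separately):

kubectl delete deployment test1 test2
kubectl delete service test1
kubectl delete pod new-pod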