

Contents
  • 1. System environment
  • 2. Preface
  • 3. Taints
    • 3.1 Taint overview
    • 3.2 Adding a taint to a node
  • 4. Tolerations
    • 4.1 Toleration overview
    • 4.2 Setting tolerations

1. System environment

OS version | Docker version | Kubernetes (k8s) version | CPU architecture
CentOS Linux release 7.4.1708 (Core) | Docker version 20.10.12 | v1.21.9 | x86_64

Kubernetes cluster architecture: k8scloude1 is the master node; k8scloude2 and k8scloude3 are worker nodes.

Server | OS version | CPU architecture | Processes | Role
k8scloude1/192.168.110.130 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker,kube-apiserver,etcd,kube-scheduler,kube-controller-manager,kubelet,kube-proxy,coredns,calico | k8s master node
k8scloude2/192.168.110.129 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker,kubelet,kube-proxy,calico | k8s worker node
k8scloude3/192.168.110.128 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker,kubelet,kube-proxy,calico | k8s worker node

2. Preface

This article introduces taints and tolerations, which influence how pods are scheduled.

Using taints and tolerations assumes you already have a working Kubernetes cluster. For installing and deploying a Kubernetes (k8s) cluster, see the blog post "Installing and Deploying a Kubernetes (k8s) Cluster on CentOS 7".

3. Taints

3.1 Taint overview

節(jié)點(diǎn)親和性 是 Pod 的一種屬性,它使 Pod 被吸引到一類特定的節(jié)點(diǎn) (這可能出于一種偏好,也可能是硬性要求)。 污點(diǎn)(Taint) 則相反——它使節(jié)點(diǎn)能夠排斥一類特定的 Pod。

3.2 Adding a taint to a node

The syntax for adding a taint to a node is shown below. The example adds a taint to node node1 with key key1, value value1, and effect NoSchedule. This means no Pod can be scheduled onto node1 unless it has a matching toleration.

#Taint format: key=value:NoSchedule
kubectl taint nodes node1 key1=value1:NoSchedule
#If the key has no value, the format is: key:NoSchedule
kubectl taint nodes node1 key1:NoSchedule

The syntax for removing a taint:

kubectl taint nodes node1 key1=value1:NoSchedule-

節(jié)點(diǎn)的描述信息里有一個Taints字段,Taints字段表示節(jié)點(diǎn)有沒有污點(diǎn)

[root@k8scloude1 deploy]# kubectl get nodes -o wide
NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8scloude1   Ready    control-plane,master   8d    v1.21.0   192.168.110.130   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
k8scloude2   Ready    <none>                 8d    v1.21.0   192.168.110.129   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
k8scloude3   Ready    <none>                 8d    v1.21.0   192.168.110.128   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude1
Name:               k8scloude1
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8scloude1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.110.130/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.158.64
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 09 Jan 2022 16:19:06 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
......

Check whether the nodes have taints. Taints: node-role.kubernetes.io/master:NoSchedule shows that the master node of the k8s cluster carries a taint. This taint exists by default, and it is the reason ordinary application pods do not run on the master node.

[root@k8scloude1 deploy]# kubectl describe nodes k8scloude2 | grep -i Taints
Taints:             <none>
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude1 | grep -i Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@k8scloude1 deploy]# kubectl describe nodes k8scloude3 | grep -i Taints
Taints:             <none>

Create a pod. nodeSelector: kubernetes.io/hostname: k8scloude1 means the pod should run on the node labeled kubernetes.io/hostname=k8scloude1.

For details on pod scheduling, see the blog post "pod (8): Pod scheduling: assigning Pods to nodes".

[root@k8scloude1 pod]# vim schedulepod4.yaml 
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  nodeSelector:
    kubernetes.io/hostname: k8scloude1
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

The node labeled kubernetes.io/hostname=k8scloude1 is the k8scloude1 node:

[root@k8scloude1 pod]# kubectl get nodes -l kubernetes.io/hostname=k8scloude1
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   8d    v1.21.0

Create the pod. Because k8scloude1 has a taint and pod1 has no matching toleration, pod1 cannot run on k8scloude1, so it stays Pending.

[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml 
pod/pod1 created
 #Because k8scloude1 has a taint, pod1 cannot run on it, so pod1 stays Pending
[root@k8scloude1 pod]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod1   0/1     Pending   0          9s    <none>   <none>   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pod -o wide
No resources found in pod namespace.

4. Tolerations

4.1 Toleration overview

Tolerations are applied to Pods. A toleration allows the scheduler to place a Pod onto a node with matching taints. Tolerations allow scheduling but do not guarantee it: the scheduler also evaluates other parameters as part of its function.

Taints and tolerations work together to keep Pods away from inappropriate nodes. One or more taints can be applied to each node; a node will not accept any Pod that does not tolerate those taints.
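The matching rule described above can be sketched in Python. This is a simplified model, not the actual kube-scheduler code: it only handles the Equal and Exists operators and ignores edge cases such as empty keys.

```python
# Simplified sketch of how one toleration is matched against one taint:
# the key and effect must match, and the value is only compared when the
# operator is "Equal" ("Exists" tolerates any value of the key).

def tolerates(toleration: dict, taint: dict) -> bool:
    """Return True if `toleration` matches `taint` (simplified model)."""
    if toleration.get("key") != taint["key"]:
        return False
    # An empty effect in the toleration matches any effect on the taint.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        return True
    return toleration.get("value") == taint.get("value")

taint = {"key": "wudian", "value": "true", "effect": "NoSchedule"}
print(tolerates({"key": "wudian", "operator": "Equal",
                 "value": "true", "effect": "NoSchedule"}, taint))  # True
print(tolerates({"key": "wudian", "operator": "Exists",
                 "effect": "NoSchedule"}, taint))                   # True
print(tolerates({"key": "other", "operator": "Exists",
                 "effect": "NoSchedule"}, taint))                   # False
```

This also previews the two toleration forms used later in the article: Equal compares a specific value, while Exists matches any value of the key.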

4.2 Setting tolerations

Only a Pod with tolerations matching a node's taints can be scheduled onto that node.

Check the taint on the k8scloude1 node:

[root@k8scloude1 pod]# kubectl describe nodes k8scloude1 | grep -i taint
Taints:             node-role.kubernetes.io/master:NoSchedule

You can set tolerations for a Pod in its spec. In the pod below, the tolerations entry tolerates the taint node-role.kubernetes.io/master:NoSchedule, and nodeSelector: kubernetes.io/hostname: k8scloude1 pins the pod to the node labeled kubernetes.io/hostname=k8scloude1.

[root@k8scloude1 pod]# vim schedulepod4.yaml 
[root@k8scloude1 pod]# cat schedulepod4.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
  nodeSelector:
    kubernetes.io/hostname: k8scloude1
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml 
pod/pod1 created

Check the pod: even though the k8scloude1 node carries a taint, the pod runs normally.

Difference between taints and cordon/drain: if a node has a taint, a pod can still run on it by setting a matching toleration; a node that has been cordoned or drained is marked unschedulable and will not be assigned new pods at all.

For details on cordon and drain, see the blog post "Cordoning nodes, draining nodes, deleting nodes".

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          4s    10.244.158.84   k8scloude1   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
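The taint-versus-cordon contrast above can be sketched as a small Python model. This is a deliberately simplified, hypothetical model (in recent Kubernetes versions cordoning also surfaces as a node.kubernetes.io/unschedulable taint, but the practical effect for new pods is as modeled here):

```python
# Simplified model: a cordoned/drained node (unschedulable=True) accepts
# no new pods at all, while a tainted node accepts pods whose
# tolerations match every taint.

def tolerates(tol: dict, taint: dict) -> bool:
    if tol["key"] != taint["key"]:
        return False
    if tol["operator"] == "Exists":
        return True
    return tol.get("value") == taint.get("value")

def admits(node: dict, pod_tolerations: list) -> bool:
    if node.get("unschedulable"):
        return False  # cordon/drain: node receives no new pods
    return all(any(tolerates(t, taint) for t in pod_tolerations)
               for taint in node["taints"])

tainted = {"taints": [{"key": "wudian", "value": "true"}],
           "unschedulable": False}
cordoned = {"taints": [], "unschedulable": True}
tol = [{"key": "wudian", "operator": "Equal", "value": "true"}]

print(admits(tainted, tol))   # True: the toleration lets the pod in
print(admits(cordoned, tol))  # False: tolerations do not help here
```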

Note that there are two ways to write a toleration; use either one:

tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
tolerations:
- key: "key1"
  operator: "Exists"
  effect: "NoSchedule"  

Label the k8scloude2 node:

[root@k8scloude1 pod]# kubectl label nodes k8scloude2 taint=T
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get node --show-labels
NAME         STATUS   ROLES                  AGE   VERSION   LABELS
k8scloude1   Ready    control-plane,master   8d    v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8scloude2   Ready    <none>                 8d    v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude2,kubernetes.io/os=linux,taint=T
k8scloude3   Ready    <none>                 8d    v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude3,kubernetes.io/os=linux

Add a taint to k8scloude2:

#Taint format: key=value:NoSchedule
[root@k8scloude1 pod]# kubectl taint node k8scloude2 wudian=true:NoSchedule
node/k8scloude2 tainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -i Taints
Taints:             wudian=true:NoSchedule

Create a pod. The tolerations entry tolerates the taint wudian=true:NoSchedule, and nodeSelector: taint: T schedules the pod onto the node labeled taint=T.

[root@k8scloude1 pod]# vim schedulepod4.yaml 
[root@k8scloude1 pod]# cat schedulepod4.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl get pod -o wide
No resources found in pod namespace.
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml 
pod/pod1 created

Check the pod: k8scloude2 runs the pod even though it carries a taint.

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          8s    10.244.112.177   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.

The other way to write a toleration: operator: "Exists", with no value field.

[root@k8scloude1 pod]# vim schedulepod4.yaml 
[root@k8scloude1 pod]# cat schedulepod4.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Exists"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml 
pod/pod1 created

Check the pod: k8scloude2 runs the pod even though it carries a taint.

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          10s   10.244.112.178   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.

Add a second taint to k8scloude2. Note that a plain grep shows only the first taint line; use grep -A1 (or -A2) to see all of them:

[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep Taints
Taints:             wudian=true:NoSchedule
[root@k8scloude1 pod]# kubectl taint node k8scloude2 zang=shide:NoSchedule
node/k8scloude2 tainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep Taints
Taints:             wudian=true:NoSchedule
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
Unschedulable:      false
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A1 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule

Create a pod whose tolerations cover both taints, wudian=true:NoSchedule and zang=shide:NoSchedule; nodeSelector: taint: T schedules it onto the node labeled taint=T.

[root@k8scloude1 pod]# vim schedulepod4.yaml 
[root@k8scloude1 pod]# cat schedulepod4.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  - key: "zang"
    operator: "Equal"
    value: "shide"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml 
pod/pod1 created

Check the pod: it runs on k8scloude2 even though the node now has two taints.

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          6s    10.244.112.179   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted

Create a pod that tolerates only wudian=true:NoSchedule; nodeSelector: taint: T again targets the node labeled taint=T.

[root@k8scloude1 pod]# vim schedulepod4.yaml 
[root@k8scloude1 pod]# cat schedulepod4.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml 
pod/pod1 created

Check the pod: the node has two taints but the YAML tolerates only one of them, so the pod cannot be scheduled and stays Pending.

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod1   0/1     Pending   0          8s    <none>   <none>   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
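The Pending result above follows from the rule that a pod must tolerate every NoSchedule taint on a node before it can be scheduled there. A rough Python sketch of that rule (simplified; Equal/Exists only), using the same taints as the example:

```python
# A node is schedulable for a pod only if every one of its taints is
# matched by at least one of the pod's tolerations.

def tolerates(tol: dict, taint: dict) -> bool:
    if tol["key"] != taint["key"]:
        return False
    if tol["operator"] == "Exists":
        return True
    return tol.get("value") == taint.get("value")

def schedulable(node_taints: list, pod_tolerations: list) -> bool:
    return all(any(tolerates(t, taint) for t in pod_tolerations)
               for taint in node_taints)

node_taints = [{"key": "wudian", "value": "true"},
               {"key": "zang", "value": "shide"}]
one_tol = [{"key": "wudian", "operator": "Equal", "value": "true"}]
both_tol = one_tol + [{"key": "zang", "operator": "Equal", "value": "shide"}]

print(schedulable(node_taints, one_tol))   # False: pod stays Pending
print(schedulable(node_taints, both_tol))  # True: pod can be scheduled
```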

Remove the taints from k8scloude2:

[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
Unschedulable:      false
#Remove the taints
[root@k8scloude1 pod]# kubectl taint node k8scloude2 zang-
node/k8scloude2 untainted
[root@k8scloude1 pod]# kubectl taint node k8scloude2 wudian-
node/k8scloude2 untainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude1 | grep -A2 Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             <none>
Unschedulable:      false
Lease:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude3 | grep -A2 Taints
Taints:             <none>
Unschedulable:      false
Lease:

Tip: if you only have a single machine available, you can remove the master node's taint (for example, kubectl taint node k8scloude1 node-role.kubernetes.io/master-) so that pods can run on the master.

This concludes the detailed walkthrough of pod taints and tolerations.
