Episode 262: Container Orchestration

Learning Objectives

  • Understand the concept and importance of container orchestration
  • Master the core concepts and architecture of Kubernetes
  • Become familiar with managing resources such as Pods, Deployments, and Services
  • Learn storage and network configuration for container orchestration
  • Be able to build and maintain a production-grade Kubernetes cluster

Core Topics

1. Container Orchestration Overview

1.1 Why Container Orchestration Is Needed

  • Management at scale: manage hundreds or thousands of container instances
  • High availability: keep services continuously available and automatically recover failed containers
  • Autoscaling: adjust the number of containers automatically based on load
  • Load balancing: distribute traffic across multiple container instances
  • Service discovery: discover and connect to services automatically

1.2 Comparison of Major Container Orchestration Platforms

Platform | Strengths | Typical Use Cases
Kubernetes | Active community, rich ecosystem, highly standardized | Enterprise applications, microservice architectures
Docker Swarm | Simple to use, tight integration with Docker | Small projects, rapid deployment
Nomad | Lightweight, supports multiple workload types | Mixed workloads, resource-constrained environments
ECS | Tight integration with AWS, simple to manage | AWS cloud environments, serverless applications

2. Kubernetes Fundamentals

2.1 Kubernetes Architecture

+-------------------+     +-------------------+     +-------------------+
|   Master Node     |     |   Worker Node     |     |   Worker Node     |
|                   |     |                   |     |                   |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|  | API Server  |  |     |  | Kubelet     |  |     |  | Kubelet     |  |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|  | Scheduler   |  |     |  | Kube-proxy  |  |     |  | Kube-proxy  |  |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|  | Controller  |  |     |  | Container   |  |     |  | Container   |  |
|  | Manager     |  |     |  | Runtime     |  |     |  | Runtime     |  |
|  +-------------+  |     |  +-------------+  |     |  +-------------+  |
|  +-------------+  |     |                   |     |                   |
|  | etcd        |  |     |                   |     |                   |
|  +-------------+  |     |                   |     |                   |
+-------------------+     +-------------------+     +-------------------+
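
On a running cluster these components can be inspected directly. A quick check, assuming a kubeadm-style cluster where the control-plane components run as Pods in the kube-system namespace:

# List control-plane and add-on Pods (API server, scheduler, controller manager, etcd, CoreDNS, ...)
kubectl get pods -n kube-system -o wide

# Show each node's role, kubelet version, and container runtime
kubectl get nodes -o wide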

2.2 Kubernetes Core Concepts

Concept | Description | Role
Pod | Smallest deployable unit, containing one or more containers | Shares network and storage
Node | Worker machine in the cluster | Runs Pods
Deployment | Manages Pod replicas and updates | Declarative application management
Service | Provides stable network access to Pods | Load balancing and service discovery
Ingress | Rules governing external access to the cluster | HTTP/HTTPS routing
ConfigMap | Stores configuration data | Configuration management
Secret | Stores sensitive information | Passwords, certificates, etc.
PV | PersistentVolume | A piece of storage in the cluster
PVC | PersistentVolumeClaim | A request for storage
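
kubectl can enumerate every resource type the cluster knows about and print the schema of any field, which is a convenient way to explore these concepts interactively:

# List all resource types, their short names, and API groups
kubectl api-resources

# Show the documented fields of a resource, e.g. a Pod's spec
kubectl explain pod.spec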

3. Installing Kubernetes

3.1 Installing a Cluster with kubeadm

# Run on all nodes
# Disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Load kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Configure kernel parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system

# Install the container runtime
sudo apt-get update
sudo apt-get install -y containerd

# Configure containerd
sudo mkdir -p /etc/containerd
cat <<EOF | sudo tee /etc/containerd/config.toml
version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          # kubeadm configures the kubelet with the systemd cgroup driver by default,
          # so let containerd use the systemd cgroup driver for runc as well
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
EOF

sudo systemctl restart containerd

# Install kubeadm, kubelet, and kubectl
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

# Make sure the keyring directory exists (it may be missing on older releases)
sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Initialize the cluster on the control-plane (master) node
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# Configure kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a network plugin (Calico as an example)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Join worker nodes to the cluster
# Use the join command printed by kubeadm init
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# Verify the cluster status
kubectl get nodes

3.2 Installing a Single-Node Cluster with Minikube

# Install Minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start Minikube
minikube start --driver=docker

# Check cluster status
minikube status

# Get cluster information
kubectl cluster-info

# Enable addons
minikube addons enable dashboard
minikube addons enable metrics-server

# Open the Dashboard
minikube dashboard

4. Pod Management

4.1 Creating a Pod

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
# Create the Pod
kubectl apply -f pod.yaml

# List Pods
kubectl get pods

# Show Pod details
kubectl describe pod nginx-pod

# View Pod logs
kubectl logs nginx-pod

# Open a shell inside the Pod
kubectl exec -it nginx-pod -- /bin/bash

# Delete the Pod
kubectl delete pod nginx-pod

4.2 Multi-Container Pods

# multi-container-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-generator
    image: busybox
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          echo "Hello from content generator" > /shared/index.html;
          sleep 10;
        done
    volumeMounts:
    - name: shared-data
      mountPath: /shared
  volumes:
  - name: shared-data
    emptyDir: {}
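
After applying this manifest, you can check that the sidecar is writing the page that the nginx container serves. A quick verification using the container names defined above:

# Create the Pod
kubectl apply -f multi-container-pod.yaml

# Read the generated page from inside the nginx container (-c selects the container)
kubectl exec multi-container-pod -c nginx -- cat /usr/share/nginx/html/index.html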

5. Deployment Management

5.1 Creating a Deployment

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        env:
        - name: ENVIRONMENT
          value: "production"
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: log-level
# Create the Deployment
kubectl apply -f deployment.yaml

# List Deployments
kubectl get deployments

# List the Pods it manages
kubectl get pods -l app=nginx

# Scale the Deployment
kubectl scale deployment nginx-deployment --replicas=5

# Update the image
kubectl set image deployment/nginx-deployment nginx=nginx:1.22

# Roll back the Deployment
kubectl rollout undo deployment/nginx-deployment

# Watch the rollout status
kubectl rollout status deployment/nginx-deployment

5.2 Rolling Update Strategy

# deployment-rolling-update.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
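
With this strategy, an update replaces at most one Pod at a time (maxUnavailable: 1) while allowing one extra Pod above the desired count (maxSurge: 1). The rollout can be inspected, paused, and resumed from the command line:

# Show the revision history of the Deployment
kubectl rollout history deployment/nginx-deployment

# Pause a rollout, make further changes, then resume it
kubectl rollout pause deployment/nginx-deployment
kubectl rollout resume deployment/nginx-deployment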

6. Service Management

6.1 ClusterIP Service

# service-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

6.2 NodePort Service

# service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
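
With a NodePort Service, the application is exposed on port 30080 of every node in the cluster. A quick test from outside the cluster (replace <node-ip> with the address of any node):

# Find a node's address
kubectl get nodes -o wide

# Access the Service through any node
curl http://<node-ip>:30080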

6.3 LoadBalancer Service

# service-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
# Create the Service
kubectl apply -f service-loadbalancer.yaml

# List Services
kubectl get services

# Show Service details
kubectl describe service nginx-service

# Test the Service from inside the cluster
kubectl run -it --rm debug --image=busybox --restart=Never -- wget -O- http://nginx-service

7. Ingress Management

7.1 Installing an Ingress Controller

# Install the NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

# Verify the installation
kubectl get pods -n ingress-nginx

7.2 Creating Ingress Rules

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
# Create the Ingress
kubectl apply -f ingress.yaml

# List Ingress resources
kubectl get ingress

# Show Ingress details
kubectl describe ingress nginx-ingress

8. Storage Management

8.1 ConfigMap

# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |
    app.name=My Application
    app.version=1.0.0
  log-level: INFO
  database-url: postgresql://user:password@db:5432/mydb
# Create a ConfigMap from a file
kubectl create configmap app-config --from-file=app.properties

# Create a ConfigMap from literal values
kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm

# List ConfigMaps
kubectl get configmaps

# Show ConfigMap details
kubectl describe configmap app-config
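
A ConfigMap only takes effect once a Pod consumes it. Besides the configMapKeyRef environment variable shown in the Deployment example, a ConfigMap can be mounted as files. A minimal sketch using the app-config ConfigMap above (the Pod name is illustrative); note that values like database-url contain credentials, which are better kept in a Secret (next section):

# pod-with-configmap.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-configmap
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: log-level
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config      # each key becomes a file, e.g. /etc/config/app.properties
  volumes:
  - name: config-volume
    configMap:
      name: app-config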

8.2 Secret

# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  username: YWRtaW4=
  password: cGFzc3dvcmQ=
# Create a Secret from literal values
kubectl create secret generic db-secret --from-literal=username=admin --from-literal=password=password

# Create a Secret from files
kubectl create secret generic tls-secret --from-file=tls.crt=cert.pem --from-file=tls.key=key.pem

# List Secrets
kubectl get secrets

# Read and decode a Secret value
kubectl get secret db-secret -o jsonpath='{.data.password}' | base64 -d
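
Like ConfigMaps, Secrets are consumed by Pods either as environment variables or as mounted files (each key becomes a file whose content is the decoded value). A minimal sketch using the db-secret created above (the Pod name is illustrative):

# pod-with-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-secret
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: password
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: db-secret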

8.3 PersistentVolume

# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /mnt/data

8.4 PersistentVolumeClaim

# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: manual
# Create the PV and PVC
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml

# List PersistentVolumes
kubectl get pv

# List PersistentVolumeClaims
kubectl get pvc

# Use the PVC in a Pod
cat > pod-with-pvc.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-pvc
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: my-pvc
      mountPath: /data
  volumes:
  - name: my-pvc
    persistentVolumeClaim:
      claimName: pvc-claim
EOF

kubectl apply -f pod-with-pvc.yaml

9. Network Management

9.1 Network Policies

# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
# allow-nginx-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 80
# Create the network policies
kubectl apply -f network-policy.yaml
kubectl apply -f allow-nginx-policy.yaml

# List network policies
kubectl get networkpolicies

# Show network policy details
kubectl describe networkpolicy deny-all

9.2 DNS Configuration

# List the CoreDNS Pods
kubectl get pods -n kube-system -l k8s-app=kube-dns

# View the CoreDNS configuration
kubectl get configmap coredns -n kube-system -o yaml

# Test DNS resolution
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup nginx-service
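
Services receive predictable DNS names of the form <service>.<namespace>.svc.cluster.local (cluster.local is the default cluster domain). Within the same namespace the short name is enough; across namespaces, use the longer form. Assuming nginx-service lives in the default namespace:

# Resolve the Service by its fully qualified name
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup nginx-service.default.svc.cluster.local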

Practical Case Studies

Case 1: Deploying a Microservices Application

Scenario

Deploy a microservices application consisting of a frontend, a backend, and a database to a Kubernetes cluster.

Deployment Steps

  1. Create a namespace
# Create the namespace
kubectl create namespace microservices

# Make it the default namespace for the current context
kubectl config set-context --current --namespace=microservices
  2. Deploy the database
# postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:14
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: mydb
        - name: POSTGRES_USER
          value: user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: password
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
# Create the Secret
kubectl create secret generic postgres-secret --from-literal=password=mypassword

# Create the PVC
cat > postgres-pvc.yaml << 'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

kubectl apply -f postgres-pvc.yaml

# Deploy PostgreSQL
kubectl apply -f postgres-deployment.yaml
  3. Deploy the backend service
# backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: my-backend:1.0
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          value: postgresql://user:mypassword@postgres:5432/mydb
        - name: ENVIRONMENT
          value: production
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 8080
# Deploy the backend service
kubectl apply -f backend-deployment.yaml
  4. Deploy the frontend service
# frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: my-frontend:1.0
        ports:
        - containerPort: 80
        env:
        - name: API_URL
          value: http://backend
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
# Deploy the frontend service
kubectl apply -f frontend-deployment.yaml
  5. Configure Ingress
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
# Create the Ingress
kubectl apply -f ingress.yaml
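
A quick end-to-end check of the whole stack, assuming the Ingress controller from section 7 is installed (replace <ingress-ip> with the controller's external address, or point app.example.com at it in DNS):

# Verify that all workloads, Services, and the Ingress are up in the namespace
kubectl get pods,svc,ingress -n microservices

# Send requests through the Ingress controller, overriding the Host header
curl -H "Host: app.example.com" http://<ingress-ip>/
curl -H "Host: app.example.com" http://<ingress-ip>/api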

Case 2: Configuring Autoscaling

Scenario

Configure autoscaling based on CPU and memory usage for a Deployment in Kubernetes.

Configuration Steps

  1. Install the Metrics Server
# Install the Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Verify the installation
kubectl get pods -n kube-system -l k8s-app=metrics-server

# Check node resource usage
kubectl top nodes

# Check Pod resource usage
kubectl top pods
  2. Configure the HPA
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 50
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
      - type: Pods
        value: 2
        periodSeconds: 15
      selectPolicy: Max
# Create the HPA
kubectl apply -f hpa.yaml

# List HPAs
kubectl get hpa

# Show HPA details
kubectl describe hpa backend-hpa

# Generate load for testing
kubectl run -i --rm load-test --image=busybox --restart=Never -- /bin/sh -c "while true; do wget -O- http://backend; sleep 0.1; done"
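
While the load generator runs, you can watch the HPA react and the Deployment scale out (and, after the stabilization window, scale back in):

# Watch the HPA's observed metrics and replica count update
kubectl get hpa backend-hpa -w

# Watch backend Pods being added and removed
kubectl get pods -l app=backend -w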

Exercises

  1. Basic exercises

    • Set up a local Kubernetes cluster with Minikube
    • Create a simple Pod and inspect its status
    • Create a Deployment and scale it to 3 replicas
  2. Intermediate exercises

    • Deploy an application with a Service and an Ingress
    • Configure a ConfigMap and a Secret and use them in a Pod
    • Create a PersistentVolume and a PersistentVolumeClaim
  3. Advanced exercises

    • Deploy a complete microservices application to Kubernetes
    • Configure autoscaling and monitoring
    • Perform a rolling update of an application and roll it back
  4. Questions to think about

    • How do you choose the right container orchestration platform?
    • How can the performance of a Kubernetes cluster be optimized?
    • How do you secure a Kubernetes cluster?

Summary

This episode covered container orchestration on Linux in detail, including Kubernetes fundamentals, Pod and Deployment management, Service and Ingress configuration, storage and network management, and cluster operations. After completing it, you should be able to:

  • Understand the concept and importance of container orchestration
  • Master the core concepts and architecture of Kubernetes
  • Manage resources such as Pods, Deployments, and Services
  • Configure storage and networking for container orchestration
  • Build and maintain a production-grade Kubernetes cluster

Container orchestration is a core technology for deploying and operating modern cloud-native applications, providing automation, scalability, and high availability. In real projects, choose an orchestration approach that fits your application and business requirements, and build solid monitoring, logging, and operations practices so that applications run stably and can be improved continuously.
