⚡ Network Design in the Microservices Era - Service Mesh and K8s

Overview

With the rise of microservice architectures, network design has evolved from the traditional three-tier model into distributed systems with complex service-to-service communication. This article takes a detailed look at modern microservices network design that combines Kubernetes clusters, a Service Mesh, and an API Gateway.

Characteristics of Microservices Networks

Traditional Monolith vs. Microservices

Monolithic Architecture

Client → Load Balancer → Application Server → Database
        (a single monolithic application)

Characteristics:
- Simple network topology
- Internal communication via function calls
- A single point of failure

Microservices Architecture

Client → API Gateway → [Service A] ⟷ [Service B]
                           ↓           ↑
                       [Service C] ⟷ [Service D]
                       [Database]

Characteristics:
- Complex network communication
- Inter-service dependencies
- Classic distributed-systems challenges

Kubernetes Network Foundation Design

GKE Cluster Configuration

# GKE cluster definition (illustrative pseudo-manifest; an actual cluster
# would be created via gcloud, Terraform, or Config Connector)
apiVersion: container.v1
kind: Cluster
metadata:
  name: microservices-cluster
  location: asia-northeast1

spec:
  # VPC settings
  network: microservices-vpc
  subnetwork: gke-subnet
  
  # IP range settings
  ipAllocationPolicy:
    clusterSecondaryRangeName: pods-range
    servicesSecondaryRangeName: services-range
  
  # Private cluster settings
  privateClusterConfig:
    enablePrivateNodes: true
    masterIpv4CidrBlock: 10.0.100.0/28
    enablePrivateEndpoint: false
  
  # Node pool for workloads
  nodePools:
    - name: microservices-pool
      config:
        machineType: e2-standard-4
        diskSizeGb: 50
        preemptible: false
      
      # Network settings
      networkConfig:
        podRange: pods-range
        podIpv4CidrBlock: 10.1.0.0/16

Network Range Design

VPC: microservices-vpc (primary range 10.0.0.0/16)

Subnets:
  gke-subnet: 10.0.1.0/24
    Purpose: GKE node pool
    Size: up to 254 nodes
  
  pods-range: 10.1.0.0/16 (secondary range)
    Purpose: Pod IP addresses
    Size: 65,536 pods
  
  services-range: 10.2.0.0/16 (secondary range)
    Purpose: Kubernetes Services
    Size: 65,536 services
  
  ingress-subnet: 10.0.2.0/24
    Purpose: Ingress Controller
    Internet: Yes
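The capacity figures above can be sanity-checked with Python's `ipaddress` module (a quick verification script, not part of the cluster configuration):

```python
import ipaddress

# Ranges from the design above
vpc = ipaddress.ip_network("10.0.0.0/16")
gke_subnet = ipaddress.ip_network("10.0.1.0/24")
pods_range = ipaddress.ip_network("10.1.0.0/16")
services_range = ipaddress.ip_network("10.2.0.0/16")

# Node subnet: a /24 leaves 254 usable host addresses
assert gke_subnet.num_addresses - 2 == 254
# Pod and Service ranges: a /16 holds 65,536 addresses each
assert pods_range.num_addresses == 65536
assert services_range.num_addresses == 65536

# The node subnet sits inside the primary VPC range; pod/service ranges
# are secondary ranges outside the primary 10.0.0.0/16 block
assert gke_subnet.subnet_of(vpc)
assert not pods_range.subnet_of(vpc)
```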

Kubernetes Network Policies

# Controlling traffic between microservices
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: user-service-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: user-service
  
  policyTypes:
  - Ingress
  - Egress
  
  # Ingress traffic control
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api-gateway
    ports:
    - protocol: TCP
      port: 8080
  
  # Egress traffic control
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database-service
    ports:
    - protocol: TCP
      port: 5432
  # Once Egress is restricted, DNS must be allowed explicitly
  # or name resolution inside the pod will fail
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
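A minimal sketch of how the ingress rule above evaluates (a hypothetical helper, not the Kubernetes API): traffic reaches user-service only when the source pod's labels match the selector and the target port matches.

```python
# Ingress rule from the policy above: allow app=api-gateway on TCP 8080
INGRESS_RULE = {"from_labels": {"app": "api-gateway"}, "port": 8080}

def ingress_allowed(source_labels: dict, port: int) -> bool:
    """Return True when both the label selector and the port match."""
    label_match = all(source_labels.get(k) == v
                      for k, v in INGRESS_RULE["from_labels"].items())
    return label_match and port == INGRESS_RULE["port"]

print(ingress_allowed({"app": "api-gateway"}, 8080))     # True
print(ingress_allowed({"app": "order-service"}, 8080))   # False
```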

Service Mesh Implementation

Istio Architecture

# Istio control plane
Control_Plane:
  Istiod:
    Purpose: 
      - Configuration management
      - Certificate issuance
      - Service discovery
    Location: istio-system namespace
  
# Data plane
Data_Plane:
  Envoy_Proxy:
    Purpose:
      - Traffic management
      - Security
      - Observability
    Deployment: Sidecar pattern

Istio Configuration

# Istio installation
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: control-plane
spec:
  values:
    global:
      meshID: production-mesh
      multiCluster:
        clusterName: microservices-cluster
      network: gke-network
    
    pilot:
      env:
        # Env values must be strings, so booleans are quoted
        PILOT_ENABLE_WORKLOAD_ENTRY_AUTOREGISTRATION: "true"
        PILOT_ENABLE_CROSS_CLUSTER_WORKLOAD_ENTRY: "true"
  
  components:
    # Ingress Gateway
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          type: LoadBalancer
          loadBalancerIP: RESERVED_IP

Traffic Management

# VirtualService - routing control
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: user-service-routing
spec:
  hosts:
  - user-service
  http:
  # Requests carrying header version: v2 route straight to subset v2
  - match:
    - headers:
        version:
          exact: v2
    route:
    - destination:
        host: user-service
        subset: v2
      weight: 100
  
  # Default route: 80/20 canary split between v1 and v2
  - route:
    - destination:
        host: user-service
        subset: v1
      weight: 80
    - destination:
        host: user-service
        subset: v2
      weight: 20

---
# DestinationRule - traffic policy and subsets
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: user-service-destination
spec:
  host: user-service
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        maxRequestsPerConnection: 10
  
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
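The 80/20 default route above can be illustrated with a weighted random choice (a simplified model for intuition; Envoy's actual load balancing and LEAST_CONN selection are more sophisticated):

```python
import random

# Weights from the default route in the VirtualService above
SUBSET_WEIGHTS = [("v1", 80), ("v2", 20)]

def pick_subset(rng: random.Random) -> str:
    """Pick a destination subset in proportion to its route weight."""
    subsets, weights = zip(*SUBSET_WEIGHTS)
    return rng.choices(subsets, weights=weights, k=1)[0]

rng = random.Random(42)
sample = [pick_subset(rng) for _ in range(10_000)]
share_v2 = sample.count("v2") / len(sample)
assert 0.17 < share_v2 < 0.23  # roughly 20% of traffic lands on v2
```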

Security Configuration (mTLS)

# Enforce mTLS for service-to-service traffic
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT

---
# Authorization policy
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: user-service-authz
  namespace: production
spec:
  selector:
    matchLabels:
      app: user-service
  
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/production/sa/api-gateway"]
    to:
    - operation:
        methods: ["GET", "POST"]
        paths: ["/api/users/*"]
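How the authorization rule above evaluates can be sketched as a simple predicate over principal, method, and path (a hypothetical helper; Istio's real path matching has its own semantics, so glob matching here is only an approximation):

```python
import fnmatch

# Rule from the AuthorizationPolicy above: only the api-gateway service
# account may GET/POST under /api/users/
RULE = {
    "principals": ["cluster.local/ns/production/sa/api-gateway"],
    "methods": ["GET", "POST"],
    "paths": ["/api/users/*"],
}

def authz_allowed(principal: str, method: str, path: str) -> bool:
    """Allow the request only if all three rule dimensions match."""
    return (principal in RULE["principals"]
            and method in RULE["methods"]
            and any(fnmatch.fnmatchcase(path, p) for p in RULE["paths"]))

assert authz_allowed("cluster.local/ns/production/sa/api-gateway",
                     "GET", "/api/users/42")
assert not authz_allowed("cluster.local/ns/production/sa/batch-job",
                         "GET", "/api/users/42")
```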

API Gateway Design

Kong API Gateway Configuration

# Kong Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-gateway
  namespace: api-gateway
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kong
  template:
    metadata:
      labels:
        app: kong
    spec:
      containers:
      - name: kong
        image: kong:3.4
        env:
        - name: KONG_DATABASE
          value: "off"
        - name: KONG_DECLARATIVE_CONFIG
          value: /kong/declarative/kong.yml
        - name: KONG_PROXY_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_ACCESS_LOG
          value: /dev/stdout
        ports:
        - containerPort: 8000  # Proxy
        - containerPort: 8443  # Proxy SSL
        - containerPort: 8001  # Admin API

---
# Kong Service
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: api-gateway
spec:
  type: LoadBalancer
  selector:
    app: kong
  ports:
  - name: proxy
    port: 80
    targetPort: 8000
  - name: proxy-ssl
    port: 443
    targetPort: 8443

Kong Declarative Configuration

# Kong declarative configuration file
_format_version: "3.0"

services:
  - name: user-service
    url: http://user-service.production.svc.cluster.local:8080
    plugins:
    - name: rate-limiting
      config:
        minute: 100
        policy: cluster
    - name: prometheus
      config:
        per_consumer: true
  
  - name: order-service  
    url: http://order-service.production.svc.cluster.local:8080
    plugins:
    - name: oauth2
      config:
        scopes: ["read", "write"]
        enable_authorization_code: true

routes:
  - name: user-api
    service: user-service
    paths:
    - /api/users
    methods:
    - GET
    - POST
    - PUT
    - DELETE
  
  - name: order-api
    service: order-service  
    paths:
    - /api/orders
    methods:
    - GET
    - POST
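The rate-limiting plugin above caps each consumer at 100 requests per minute. A fixed-window counter approximates the behavior (a sketch only; Kong's `cluster` policy synchronizes counters across gateway nodes):

```python
from collections import defaultdict

LIMIT_PER_MINUTE = 100  # matches the plugin config above

class FixedWindowLimiter:
    """Count requests per (consumer, minute) window; reject on overflow."""
    def __init__(self, limit: int):
        self.limit = limit
        self.windows = defaultdict(int)  # (consumer, minute) -> count

    def allow(self, consumer: str, now: float) -> bool:
        window = (consumer, int(now // 60))
        if self.windows[window] >= self.limit:
            return False
        self.windows[window] += 1
        return True

limiter = FixedWindowLimiter(LIMIT_PER_MINUTE)
results = [limiter.allow("client-a", 0.0) for _ in range(101)]
assert results.count(True) == 100 and results[-1] is False
```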

Observability Design

Distributed Tracing

# Jaeger configuration
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger
  namespace: observability
spec:
  strategy: production
  
  collector:
    maxReplicas: 3
    resources:
      limits:
        memory: 1Gi
  
  storage:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      storage:
        size: 50Gi
  
  query:
    replicas: 2

Metrics Collection

# Prometheus configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    
    rule_files:
      - "/etc/prometheus/rules/*.yml"
    
    scrape_configs:
    # Kubernetes API Server
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    
    # Istio proxy metrics
    - job_name: 'istio-proxy'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - production
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: .*-metrics;.*-metrics

Log Aggregation

# Fluent Bit configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush         5
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
    
    [INPUT]
        Name              tail
        Path              /var/log/containers/*_production_*.log
        multiline.parser  docker, cri
        Tag               kube.*
        Refresh_Interval  5
        Mem_Buf_Limit     50MB
        Skip_Long_Lines   On
    
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Merge_Log           On
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
    
    [OUTPUT]
        Name        bigquery
        Match       *
        project_id  my-project
        dataset_id  microservices_logs
        table_id    application_logs

Performance Optimization

Load Distribution and Autoscaling

# Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  
  minReplicas: 3
  maxReplicas: 50
  
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 50
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
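The HPA above follows the standard scaling formula desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the min/max bounds. A sketch of the core calculation (the real controller also applies the stabilization windows and scaling policies configured above):

```python
import math

def desired_replicas(current: int, current_util: float,
                     target_util: float, min_r: int = 3,
                     max_r: int = 50) -> int:
    """HPA core formula with the min/max bounds from the manifest above."""
    desired = math.ceil(current * current_util / target_util)
    return max(min_r, min(max_r, desired))

# CPU at 140% of the 70% target -> double the replicas
assert desired_replicas(10, current_util=140, target_util=70) == 20
# CPU at half the target -> halve the replicas
assert desired_replicas(10, current_util=35, target_util=70) == 5
# Never scale below minReplicas
assert desired_replicas(4, current_util=10, target_util=70) == 3
```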

Caching Strategy

# Redis Cluster
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  namespace: cache
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
        command:
        - redis-server
        - /conf/redis.conf
        ports:
        - containerPort: 6379
        - containerPort: 16379
        volumeMounts:
        - name: conf
          mountPath: /conf
        - name: data
          mountPath: /data
      volumes:
      - name: conf
        configMap:
          name: redis-config
  
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
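Redis Cluster shards keys across 16,384 hash slots using CRC16 (XModem variant); keys sharing a `{hash tag}` land on the same slot, which matters when services cache related data together. Python's `binascii.crc_hqx` uses the same polynomial, so the mapping can be reproduced:

```python
import binascii

def hash_slot(key: bytes) -> int:
    """Compute the Redis Cluster hash slot for a key."""
    # Hash tags: only the part between the first {...} is hashed,
    # letting related keys land on the same slot (and the same shard)
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return binascii.crc_hqx(key, 0) % 16384

# Keys with the same hash tag share a slot, so MGET/transactions work
assert hash_slot(b"user:{42}:profile") == hash_slot(b"user:{42}:orders")
assert 0 <= hash_slot(b"foo") < 16384
```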

Security Best Practices

Pod Security Standards

# PodSecurityPolicy (deprecated in 1.21 and removed in Kubernetes 1.25;
# kept here for reference -- current clusters should enforce the equivalent
# Pod Security Standards via Pod Security Admission namespace labels)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: microservices-psp
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'persistentVolumeClaim'
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535

RBAC Configuration

# Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user-service-sa
  namespace: production

---
# Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-service-role
  namespace: production
rules:
- apiGroups: [""]
  resources: ["secrets", "configmaps"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

---
# RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-service-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: user-service-sa
  namespace: production
roleRef:
  kind: Role
  name: user-service-role
  apiGroup: rbac.authorization.k8s.io

Disaster Recovery and BCP

Multi-Region Design

# Disaster recovery cluster setup
Multi_Region_Setup:
  Primary_Region: asia-northeast1
  DR_Region: asia-northeast2
  
  Replication_Strategy:
    Database: Cross-region replica
    Application: Active-passive
    Configuration: GitOps sync
  
  Failover_Process:
    RTO: 15 minutes
    RPO: 5 minutes
    Automation: Ansible playbooks

Backup Strategy

# Velero backup schedule (daily at 02:00, ttl 720h = 30-day retention)
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"
  template:
    includedNamespaces:
    - production
    - api-gateway
    excludedResources:
    - events
    - events.events.k8s.io
    storageLocation: gcs-backup
    volumeSnapshotLocations:
    - gcp-snapshots
    ttl: 720h0m0s

Operations Automation

CI/CD Pipeline

# GitHub Actions workflow
name: Microservices Deployment
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    
    - name: Setup kubectl
      uses: azure/setup-kubectl@v3
      with:
        version: 'v1.28.0'
    
    - name: Deploy to GKE
      env:
        KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
      run: |
        echo "$KUBE_CONFIG_DATA" | base64 -d > kubeconfig
        export KUBECONFIG=kubeconfig
        
        # Rolling update (kubectl set image triggers a standard rolling
        # deploy; a true canary would shift traffic gradually, e.g. via
        # the Istio route weights shown earlier)
        kubectl set image deployment/user-service \
          user-service=gcr.io/project/user-service:${{ github.sha }} \
          -n production
        
        # Wait for rollout
        kubectl rollout status deployment/user-service -n production
        
        # Run health checks
        kubectl exec -n production deployment/user-service -- \
          wget -qO- http://localhost:8080/health

Summary

Key points for network design in the microservices era:

Choosing the foundation:

  • Kubernetes: container orchestration
  • Service Mesh: control of service-to-service communication
  • API Gateway: management of the external interface

Core design principles:

  • Observability: distributed tracing, metrics, and logs
  • Security: mTLS, RBAC, and network policies
  • Scalability: HPA, load balancing, and caching

Operations automation:

  • CI/CD pipeline integration
  • Autoscaling
  • Automated disaster recovery

With the right design, even a complex microservices environment can be operated reliably.


📅 Created: September 9, 2025
