K8s by Example: Logging Patterns

Kubernetes captures container stdout/stderr and exposes it through kubectl logs, but those logs live on the node and disappear with the Pod. For production, ship logs to a centralized system (Elasticsearch, Loki, CloudWatch). Three common patterns: a node-level agent (DaemonSet), a streaming sidecar, and a sidecar with a logging agent. Use structured JSON logs for queryability.

logging-stdout.yaml

The simplest approach: write logs to stdout/stderr. Kubernetes captures them automatically. Use JSON format for structured logs that are easy to parse and query. Include timestamp, level, message, and context.

apiVersion: v1
kind: Pod
metadata:
  name: json-logger
spec:
  containers:
    - name: app
      image: my-app:v1
      env:
        - name: LOG_FORMAT
          value: "json"
        - name: LOG_LEVEL
          value: "info"
logging-node-agent.yaml

Node-level logging agent (DaemonSet) collects logs from all containers on each node. Reads from /var/log/containers. Most efficient: one agent per node instead of per Pod. Fluent Bit is lightweight; Fluentd for complex routing.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      serviceAccountName: fluent-bit
      tolerations:
        - operator: Exists
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:2.2
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: containers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: containers
          hostPath:
            path: /var/lib/docker/containers
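
A minimal sketch of a Fluent Bit configuration such a DaemonSet could use. The ConfigMap name, the es output settings, and the volumeMount wiring it into the container are assumptions not shown in the manifest above; the cri parser is the one shipped in the image's default parsers.conf.

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config   # assumed name; mount it into the fluent-bit container
  namespace: logging
data:
  fluent-bit.conf: |
    [SERVICE]
        Parsers_File  parsers.conf
    [INPUT]
        Name    tail
        Path    /var/log/containers/*.log
        Parser  cri
        Tag     kube.*
    [FILTER]
        Name    kubernetes
        Match   kube.*
    [OUTPUT]
        # destination is an assumption; point at your own backend
        Name    es
        Match   kube.*
        Host    elasticsearch.logging.svc
        Port    9200
        Index   node-logs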
logging-sidecar-streaming.yaml

Sidecar streaming: app writes logs to files; sidecar tails files and streams to stdout. Useful when app can’t write to stdout (legacy apps). Node agent then collects sidecar’s stdout.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-streamer
spec:
  containers:
    - name: legacy-app
      image: legacy-app:v1
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-streamer
      image: busybox:1.36
      args:
        - /bin/sh
        - -c
        - tail -F /var/log/app/*.log
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}
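
With this Pod running, the application's log files are visible as the sidecar's stdout:

$ kubectl logs app-with-log-streamer -c log-streamer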
logging-sidecar-agent.yaml

Sidecar agent: dedicated log shipper per Pod. More resource overhead but allows Pod-specific configuration, parsing, and routing. Useful when different apps need different log processing.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-fluentd
spec:
  containers:
    - name: app
      image: my-app:v1
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: fluentd
      image: fluent/fluentd:v1.16
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
        - name: fluentd-config
          mountPath: /fluentd/etc
      env:
        - name: ELASTICSEARCH_HOST
          value: "elasticsearch.logging.svc"
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
  volumes:
    - name: logs
      emptyDir: {}
    - name: fluentd-config
      configMap:
        name: fluentd-app-config
logging-fluentd-config.yaml

Fluentd configuration: parse logs, add metadata (Pod name, namespace, labels), route to destinations. Use filters to transform logs (add fields, parse JSON, drop debug logs in production). The elasticsearch output requires the fluent-plugin-elasticsearch plugin, which the plain fluent/fluentd image does not bundle; use an image built with it.

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-app-config
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/app/*.log
      pos_file /var/log/app.log.pos
      tag app.logs
      <parse>
        @type json
        time_key ts
        time_format %Y-%m-%dT%H:%M:%S%z
      </parse>
    </source>
    <filter app.logs>
      @type record_transformer
      <record>
        hostname "#{ENV['HOSTNAME']}"
        namespace "#{ENV['POD_NAMESPACE']}"
      </record>
    </filter>
    <match app.logs>
      @type elasticsearch
      host "#{ENV['ELASTICSEARCH_HOST']}"
      port 9200
      index_name app-logs
    </match>
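
To drop debug records before shipping (mentioned above), one option is Fluentd's grep filter. A sketch that could be appended to fluent.conf, assuming each parsed record carries a level field:

# assumes the JSON logs include a "level" field
<filter app.logs>
  @type grep
  <exclude>
    key level
    pattern /^debug$/
  </exclude>
</filter>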
logging-levels.yaml

Configure log levels per environment: debug in dev, info/warn in production. Use a ConfigMap or environment variables so the level can be changed without rebuilding the image. Structured logs should include the level field.

apiVersion: v1
kind: ConfigMap
metadata:
  name: logging-config
data:
  LOG_LEVEL: "info"
  LOG_FORMAT: "json"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  template:
    spec:
      containers:
        - name: api
          image: api-server:v1
          envFrom:
            - configMapRef:
                name: logging-config
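
Values injected with envFrom are read once at container start, so editing the ConfigMap alone does not change running Pods. One way to switch an environment to debug and roll it out (imperative form shown here; updating the manifest and re-applying works just as well):

$ kubectl create configmap logging-config --from-literal=LOG_LEVEL=debug --from-literal=LOG_FORMAT=json --dry-run=client -o yaml | kubectl apply -f -
$ kubectl rollout restart deployment/api-server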
terminal

Access container logs with kubectl. Use -f to follow, -c to select a container, --previous for the last crashed container, --since for a time range. For multi-container Pods, specify the container name.

$ kubectl logs json-logger
{"ts":"2024-01-15T10:30:00Z","level":"info","msg":"Started"}

$ kubectl logs json-logger -f

$ kubectl logs json-logger --previous

$ kubectl logs json-logger --since=1h

$ kubectl logs json-logger -c app

$ kubectl logs -l app=api-server --all-containers

$ kubectl logs deployment/api-server --tail=100
