K8s by Example: Logging Patterns
Kubernetes captures container stdout/stderr and makes logs available via kubectl. For production, ship logs to a centralized system (Elasticsearch, Loki, CloudWatch). There are three common patterns: node-level agent (DaemonSet), sidecar streaming, and sidecar with agent. Use structured JSON logs for queryability.
logging-stdout.yaml

The simplest approach: write logs to stdout/stderr. Kubernetes captures them automatically. Use JSON format for structured logs that are easy to parse and query. Include timestamp, level, message, and context.
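A minimal sketch of what logging-stdout.yaml could contain, assuming a busybox container that emits one JSON log line every few seconds; the Pod name, image, and log fields are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: stdout-logger            # illustrative name
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["/bin/sh", "-c"]
    args:
    - |
      # Emit structured JSON lines to stdout; the kubelet captures them.
      while true; do
        echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"level\":\"info\",\"message\":\"heartbeat\",\"service\":\"demo\"}"
        sleep 5
      done
```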
logging-node-agent.yaml

Node-level logging agent (DaemonSet) collects logs from all containers on each node. Reads the container log files the kubelet and runtime write on the node (under /var/log/containers and /var/log/pods) and forwards them to the logging backend.
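A minimal sketch of a node-level agent, assuming Fluent Bit as the shipper; the namespace, image tag, and ConfigMap name are assumptions, and the ConfigMap is expected to provide a fluent-bit.conf:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:2.2          # assumed image/tag
        volumeMounts:
        - name: varlog                        # container/Pod logs live under /var/log
          mountPath: /var/log
          readOnly: true
        - name: config                        # assumed to contain fluent-bit.conf
          mountPath: /fluent-bit/etc/
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: config
        configMap:
          name: log-agent-config
```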
logging-sidecar-streaming.yaml

Sidecar streaming: the app writes logs to files; a sidecar tails the files and streams them to stdout. Useful when the app can't write to stdout (legacy apps). The node agent then collects the sidecar's stdout.
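A minimal sketch of the streaming-sidecar pattern: the app container appends to a file on a shared emptyDir, and the sidecar tails it to stdout. Images, paths, and names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-streaming
spec:
  containers:
  - name: app                                  # stands in for a legacy app that only writes files
    image: busybox:1.36
    command: ["/bin/sh", "-c"]
    args:
    - while true; do echo "$(date -u) app event" >> /var/log/app/app.log; sleep 5; done
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-streamer                         # sidecar: tail the file to its own stdout
    image: busybox:1.36
    command: ["/bin/sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: app-logs
    emptyDir: {}
```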
logging-sidecar-agent.yaml

Sidecar agent: a dedicated log shipper per Pod. More resource overhead, but it allows Pod-specific configuration, parsing, and routing. Useful when different apps need different log processing.
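A minimal sketch of the sidecar-agent pattern, assuming a Fluent Bit sidecar that reads the shared volume and ships directly to the backend; image tags and the ConfigMap name are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-agent
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["/bin/sh", "-c"]
    args:
    - while true; do echo "$(date -u) app event" >> /var/log/app/app.log; sleep 5; done
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-agent                            # per-Pod shipper with its own parsing/routing
    image: fluent/fluent-bit:2.2               # assumed image/tag
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
    - name: config                             # assumed to contain fluent-bit.conf
      mountPath: /fluent-bit/etc/
  volumes:
  - name: app-logs
    emptyDir: {}
  - name: config
    configMap:
      name: sidecar-agent-config
```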
logging-fluentd-config.yaml

Fluentd configuration: parse logs, add metadata (Pod name, namespace, labels), and route to destinations. Use filters to transform logs (add fields, parse JSON, drop debug logs in production).
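A sketch of a Fluentd configuration in a ConfigMap; it assumes the node agent mounts it, that container log lines are JSON (Docker json-file driver), and that an Elasticsearch Service is reachable at elasticsearch.logging.svc:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: logging
data:
  fluent.conf: |
    # Tail container log files written on the node
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      <parse>
        @type json            # assumes JSON-formatted log lines
      </parse>
    </source>

    # Add Pod name, namespace, and labels to each record
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>

    # Drop debug-level records before shipping
    <filter kubernetes.**>
      @type grep
      <exclude>
        key level
        pattern /^debug$/
      </exclude>
    </filter>

    # Route everything to Elasticsearch
    <match kubernetes.**>
      @type elasticsearch
      host elasticsearch.logging.svc
      port 9200
      logstash_format true
    </match>
```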
logging-levels.yaml

Configure log levels per environment: debug in dev, info/warn in production. Use a ConfigMap or environment variables so the level can change without rebuilding the image. Structured logs should include the level field.
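A minimal sketch of driving the log level from a ConfigMap via an environment variable; the ConfigMap name, key, and image are illustrative, and the app is assumed to read LOG_LEVEL at startup:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-logging
data:
  LOG_LEVEL: "info"           # e.g. "debug" in dev, "warn" in production
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # illustrative image
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-logging
          key: LOG_LEVEL
```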
terminal

Access container logs with kubectl. Use kubectl logs with -c to select a container in multi-container Pods, -f to follow output, --previous for a crashed container's last instance, and --tail/--since to limit output.
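A few common invocations; Pod, container, and label names are illustrative:

```sh
kubectl logs my-pod                          # logs from a single-container Pod
kubectl logs my-pod -c app                   # pick a container in a multi-container Pod
kubectl logs -f my-pod                       # follow (stream) new log lines
kubectl logs --previous my-pod               # previous instance of a crashed container
kubectl logs my-pod --tail=100 --since=1h    # only recent output
kubectl logs -l app=web --all-containers     # all Pods matching a label selector
kubectl logs deploy/web                      # logs via a workload reference
```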