Fix Kubernetes Pod Crash Loops: Root Cause & Solution
Direct answer
Kubernetes pods enter CrashLoopBackOff when the container process exits repeatedly and the kubelet backs off between restart attempts. Common causes are OOMKilled terminations (memory limit exceeded), repeatedly failing liveness probes, and application startup errors. Run kubectl logs <pod-name> --previous and kubectl describe pod <pod-name> to identify the specific failure.
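The "backoff" in CrashLoopBackOff refers to the kubelet's restart delay, which starts at roughly 10 seconds, doubles after each crash, and is capped at five minutes. A minimal sketch of that schedule (the loop and variable names are illustrative, not kubelet code):

```shell
# Illustrative sketch of the CrashLoopBackOff restart schedule:
# the delay starts at 10s, doubles after each crash, and is capped at 300s.
backoff=10
for restart in 1 2 3 4 5 6; do
  echo "restart $restart: next attempt in ${backoff}s"
  backoff=$((backoff * 2))
  if [ "$backoff" -gt 300 ]; then backoff=300; fi
done
```

This is why a crash-looping pod appears to "do nothing" for several minutes between attempts: the delay grows quickly after the first few crashes.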
Structured breakdown
Cause
Pods crash loop due to memory limits, failed probes, or startup errors. Check pod logs with kubectl logs, then inspect events with kubectl describe pod to pinpoint the exact failure reason.
Fix
- Check pod logs using kubectl logs <pod-name> --previous to see crash output
- Run kubectl describe pod <pod-name> and inspect the Events section for OOMKilled or probe failures
- Increase memory limits in your deployment manifest if OOMKilled is the reason
Outcome
Pod restarts stop and the container runs stably with correct resource limits and probe configuration.
Common causes
- Memory limit exceeded (OOMKilled)
- Liveness probe failing repeatedly
- Application crash on startup due to missing config or dependency
- Required dependency or service unavailable at boot
- Image pull errors or wrong container image tag
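The exit code shown in kubectl describe pod helps distinguish these causes. Codes above 128 mean the process was killed by a signal (code minus 128 is the signal number), so 137 corresponds to SIGKILL (9), which is what an OOM kill delivers; low codes like 1 usually indicate an error inside the application. A small sketch of that decoding rule (the helper function is our own, not a kubectl feature):

```shell
# Decode a container exit code: codes > 128 mean "killed by signal
# (code - 128)"; lower codes come from the application itself.
decode_exit_code() {
  code=$1
  if [ "$code" -gt 128 ]; then
    echo "signal $((code - 128))"
  else
    echo "application exit $code"
  fi
}

decode_exit_code 137   # signal 9 (SIGKILL, typical of an OOMKilled pod)
decode_exit_code 1     # application exit 1 (error in the app at startup)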
Fix steps
1. Check pod logs using kubectl logs <pod-name> --previous to see crash output
2. Run kubectl describe pod <pod-name> and inspect the Events section for OOMKilled or probe failures
3. Increase memory limits in your deployment manifest if OOMKilled is the reason
4. Validate liveness and readiness probe configuration; ensure the probe endpoints return HTTP 200
5. Verify environment variables, config maps, and secrets are correctly mounted
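For step 5, a startup check that fails fast when required configuration is missing makes the crash reason explicit in the pod logs instead of surfacing as an opaque error later. A minimal sketch, assuming hypothetical variable names (DATABASE_URL, API_KEY are examples, not from this page):

```shell
# Sketch: verify required env vars at startup so a missing ConfigMap or
# Secret shows up clearly in kubectl logs. Variable names are hypothetical.
unset API_KEY
export DATABASE_URL="postgres://db:5432/app"

check_env() {
  # Indirect lookup of the variable named in $1.
  eval "value=\${$1:-}"
  if [ -z "$value" ]; then
    echo "missing: $1"
  else
    echo "set: $1"
  fi
}

check_env DATABASE_URL   # set: DATABASE_URL
check_env API_KEY        # missing: API_KEY
```

In a real entrypoint script you would exit nonzero after reporting the missing variables, so the container stops with a clear message rather than crashing deeper in the application.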
Related technical context
These examples show the commands, logs, and configuration patterns most often used to verify this issue.
Command examples
kubectl logs <pod-name> --previous
kubectl describe pod <pod-name>
kubectl get events --sort-by='.lastTimestamp'
Log snippet
State:          Waiting
  Reason:       CrashLoopBackOff
Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137
Config snippet
resources:
  limits:
    memory: "512Mi"
  requests:
    memory: "256Mi"
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15