How to Debug a 502 on Kubernetes

Here is a thorough method for checking the health of your application in order to determine the cause of your 502 problem.

You did it: you’ve built a Docker image, created a cluster, set up a Deployment, structured a Service, and configured an Ingress alongside an automated TLS certificate issuer!

Your app is ready to go!

As you load the domain in your browser and prepare to see your completed work, you are instead greeted with an ugly error page reading:

502 Server Error

Error: Server Error

The server encountered a temporary error and could not complete your request. Please try again in 30 seconds.


Aside from deep-level debugging at the app layer, it’s often beneficial to first determine the overall health of your application’s setup, starting from the pod, then the container, then the service, then the ingress, and finishing with application-level debug logs. At the end of the article is a list of common debugging steps to try if you don’t want to run through a full end-to-end health check.

While this guide was completed using Google Cloud Platform (GCP) and Google Kubernetes Engine (GKE), all of these steps will still apply for any platform that is running a recent version of Kubernetes.

Is your pod running?

Sometimes it's just this simple: perhaps your pod failed to start up properly, or one of the containers inside the pod is stuck in an invalid state.

$ kubectl get pods
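If you have a lot of pods, a quick filter for anything not in the Running state can save some squinting. A hedged sketch, with sample data standing in for a live cluster (against a real cluster you would pipe `kubectl get pods` into the `awk` instead):

```shell
# Print only pods whose STATUS column is not "Running".
# The printf lines below are illustrative sample output.
printf '%s\n' \
  'NAME                       READY   STATUS             RESTARTS   AGE' \
  'my-pod-6c966b5bc7-6bcjf    0/1     CrashLoopBackOff   12         2d' \
  'other-pod-7d4f9f8b9-x2km   1/1     Running            0          5d' \
| awk 'NR > 1 && $3 != "Running" { print $1 ": " $3 }'
```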

Are your pod containers healthy?

If your pod looks healthy at a basic level, next you should describe the pod (using the name from the previous step):

$ kubectl describe pod my-pod-6c966b5bc7-6bcjf

Ensure that the port is open and listening on the expected address.

Let's assume the pod that is returning a 502 is supposed to be listening on port 40004; you should search the output for the container and confirm that the port is listed as such.
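A quick way to pull just the port lines out of the description. The lines below are a trimmed sample; on a live cluster you would pipe `kubectl describe pod my-pod-6c966b5bc7-6bcjf` into the `grep` instead:

```shell
# Show only the container Port lines from a pod description.
# The printf lines are a trimmed, illustrative sample.
printf '%s\n' \
  'Name:         my-pod-6c966b5bc7-6bcjf' \
  '    Port:           40004/TCP' \
  '    Host Port:      0/TCP' \
| grep -E '^[[:space:]]*Port:'
```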

Is your service active?

Next, we should verify that the service is active, which is just as simple as the previous checks. You might have a separate service for each container.

$ kubectl get svc

A typical configuration maps a service to a container within a pod and exposes it on port 80. If your exact configuration doesn’t match, that’s fine; just make sure that all of your services are available.

Is your service healthy and mapped correctly?

Using each service name, you can retrieve more details on the current state of the service by once again using kubectl describe. You will usually only need to do this on the service that is associated with the problem pod and ingress URL.

$ kubectl describe svc my-first-svc

You should examine this output and confirm that the Target Port and the Endpoints match the expected listening port on your problem pod; in this example, that port is 40004.
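The two lines worth isolating are TargetPort and Endpoints. A hedged sketch with sample output inlined (on a live cluster, pipe `kubectl describe svc my-first-svc` into the `grep` instead):

```shell
# Extract TargetPort and Endpoints from a service description so they can
# be compared against the pod's listening port. Sample values only.
printf '%s\n' \
  'Name:              my-first-svc' \
  'Port:              <unset>  80/TCP' \
  'TargetPort:        40004/TCP' \
  'Endpoints:         10.48.1.13:40004' \
| grep -E '^(TargetPort|Endpoints):'
```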

Is your ingress healthy?

Before you dive into the details, as always we need to collect some basic information. Run:

$ kubectl get ing

This command will list out all of your ingress points and their associated basic metadata.
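A hedged example of what this listing typically looks like (the name, host, and address below are illustrative, not from a real cluster):

```shell
# Illustrative sample of `kubectl get ing` output.
printf '%s\n' \
  'NAME             HOSTS            ADDRESS        PORTS     AGE' \
  'my-app-ingress   my-app.example   35.201.0.10    80, 443   3d'
```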

Is your ingress configured correctly?

Once you’ve determined the overall health of the ingress resource, you can then look at the detailed view of the ingress in question, using the same describe syntax as before:

$ kubectl describe ing my-app-ingress

This will usually produce detailed output describing the ingress.

The interesting bits to examine in this output are the rules section and the backend section.

First, let's look at rules and confirm that our service and ingress URL sync up.

Under rules, the domain under examination should map to the same endpoint we saw earlier on the related service, with a matching IP address and port.

Here you see it does in fact match (=> my-first-svc:80).

Next, we can look at the backends section, where a clue has been uncovered:

The backend that is associated with our service is marked as unhealthy.
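On GKE, per-backend health is surfaced through the `ingress.kubernetes.io/backends` annotation on the ingress. A hedged sketch of what the unhealthy state looks like, using a trimmed sample of `kubectl describe ing my-app-ingress` output (the backend name hash is made up):

```shell
# Pull the backend-health annotation out of an ingress description.
# Sample output only; pipe a live describe into the grep on a real cluster.
printf '%s\n' \
  'Annotations:' \
  '  ingress.kubernetes.io/backends: {"k8s-be-80--abc123":"UNHEALTHY"}' \
| grep 'backends'
```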

In this case, the Kubernetes Ingress will not forward traffic to an unhealthy backend, and the result is a 502 error. This is often simply due to the pod not passing its health check or not sending back a proper 200 response.
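If the health check is the culprit, a readiness probe on the container is the usual fix. A minimal sketch of the YAML to merge into your Deployment's container spec, assuming the app serves a 200 on /healthz at port 40004 (both the path and port are assumptions; use whatever endpoint your app actually serves):

```yaml
# Hedged sketch: /healthz and 40004 are placeholder values.
readinessProbe:
  httpGet:
    path: /healthz
    port: 40004
  initialDelaySeconds: 5
  periodSeconds: 10
```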


If you’ve made it this far, you need to debug some things now:

  • Are your readiness and liveness checks on the container configured?
  • Is your application logging errors or warning messages?
  • Is your application log empty? Fix this!
  • Is your application functional? Try $ kubectl exec -it {the-pod_id} -- sh and poke around inside the file system to confirm that everything is there as expected.
  • Are you still stuck? Sometimes you might need to just delete the service attached to the failing container and make a new one using kubectl apply; you’d be surprised how often this fixes the problem.

Father, Husband, Engineer, CTO at Libretto, 15+ yrs of software engineering —
