When your Node.js application returns a 404 "Not Found" error for a path like `/api/v1/health` on Kubernetes, here are the likely causes and steps to troubleshoot:
## 1. Application Configuration Issues

**Check Your Application Routes**

Ensure the route `/api/v1/health` is correctly defined in your application. For a typical NestJS application, you may have something like this:
```typescript
import { Controller, Get } from '@nestjs/common';

@Controller('api/v1')
export class HealthController {
  @Get('health')
  getHealth(): string {
    return 'OK';
  }
}
```
If the route is missing or improperly defined, the application won’t recognize the path.
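One NestJS-specific pitfall worth checking: if your bootstrap calls `app.setGlobalPrefix('api/v1')` while the controller also declares `api/v1`, the route actually served becomes `/api/v1/api/v1/health`. A minimal sketch of what to look for in `main.ts` (the module name is the NestJS default; adjust to your project):

```typescript
// main.ts — illustrates the double-prefix pitfall
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // If the prefix lives here, the controller should use @Controller() + @Get('health');
  // combining this with @Controller('api/v1') doubles the prefix.
  app.setGlobalPrefix('api/v1');
  await app.listen(3000);
}
bootstrap();
```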
## 2. Container Issues

**Test Locally**

Run the application locally (outside of Kubernetes) and test the endpoint:

```bash
curl http://localhost:3000/api/v1/health
```
If the endpoint doesn’t work locally, debug your application configuration first.
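It's also worth testing the built container image itself, since what's in the image can differ from your local checkout (the image name and port mapping below mirror the manifests later in this answer; substitute your own):

```bash
docker run --rm -p 3000:3000 your-image
# in a second terminal:
curl http://localhost:3000/api/v1/health
```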
## 3. Kubernetes Service Configuration

**Verify Kubernetes Service**

If your Kubernetes Service is misconfigured, requests might not reach your pod. Check the Service configuration:

```bash
kubectl get svc
kubectl describe svc <your-service-name>
```

Ensure the Service forwards requests to the correct port. For example, if your Node.js app listens on port 3000 inside the container, the Service should target that port.
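Also confirm the Service's selector actually matches your pod labels; if the endpoints list is empty, the Service has no pods to route to:

```bash
kubectl get endpoints <your-service-name>
```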
Example Service YAML:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodejs-service
spec:
  selector:
    app: nodejs-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP
```
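To test the Service in isolation (bypassing any Ingress), you can port-forward to it and curl from your machine:

```bash
kubectl port-forward svc/nodejs-service 8080:80
# in a second terminal:
curl http://localhost:8080/api/v1/health
```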
## 4. Kubernetes Ingress Configuration

**Check Ingress Rules**

If you're using an Ingress to expose the application, ensure the path `/api/v1/health` is correctly configured.
Example Ingress YAML:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodejs-ingress
spec:
  rules:
    - host: your-domain.com
      http:
        paths:
          - path: /api/v1/health
            pathType: Prefix # Prefix matches /api/v1/health and any subpaths
            backend:
              service:
                name: nodejs-service
                port:
                  number: 80
```
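One common cause of 404s at this layer is a path-rewriting annotation. With the NGINX ingress controller, for example, a `nginx.ingress.kubernetes.io/rewrite-target` annotation can rewrite the path before the request reaches your app, so the app sees a path it has no route for. A sketch of what to look for in your Ingress metadata (the annotation value here is illustrative):

```yaml
metadata:
  annotations:
    # If this rewrites /api/v1/health to /, your app receives "/" and returns 404
    nginx.ingress.kubernetes.io/rewrite-target: /
```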
Verify the Ingress:

```bash
kubectl get ingress
kubectl describe ingress <ingress-name>
```
## 5. Logs and Debugging

**Check Pod Logs**

Inspect the logs of your Node.js pod to see if the application is receiving and processing requests:

```bash
kubectl logs <pod-name>
```
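With multiple replicas, you can tail logs across all pods by label (assuming the `app: nodejs-app` label used in the manifests here):

```bash
kubectl logs -l app=nodejs-app --tail=100
```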
**Exec into Pod**

To debug further, you can exec into the pod and curl the app directly (assuming the image includes a shell and curl):

```bash
kubectl exec -it <pod-name> -- sh
curl http://localhost:3000/api/v1/health
```
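If that works but external requests don't, curl the Service by name from inside the pod to isolate the Service layer (port 80 per the Service manifest above, assuming the same namespace):

```bash
curl http://nodejs-service/api/v1/health
```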
## 6. DNS and Networking Issues

**Verify DNS Resolution**

Ensure the service name resolves correctly within the cluster (assuming `nslookup` is available in the image):

```bash
kubectl exec -it <pod-name> -- nslookup nodejs-service
```
**Check Network Policies**

If you're using network policies, ensure they allow traffic to the pod, as in the sketch below.
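A minimal sketch of such a policy, assuming the `app: nodejs-app` label; it allows inbound TCP 3000 from any source, which you can tighten with `from` selectors:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nodejs-ingress
spec:
  podSelector:
    matchLabels:
      app: nodejs-app
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 3000
```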
## 7. Common Fixes

- **Expose Ports Correctly**: Ensure the port your app actually listens on matches the `containerPort` in your Deployment (the `EXPOSE` statement in your Dockerfile is informational, but keeping it in sync avoids confusion).
- **Health Check Configuration**: Kubernetes may mark the pod as healthy even if `/api/v1/health` isn't working. Ensure readiness and liveness probes match the app's configuration.
Example Deployment with a readiness probe:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
        - name: nodejs-app
          image: your-image
          ports:
            - containerPort: 3000
          readinessProbe:
            httpGet:
              path: /api/v1/health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
```
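If you're still unsure which path actually reaches the app (e.g., whether an Ingress rewrite stripped the prefix), a quick way to find out is to log every incoming request. A minimal NestJS sketch using functional middleware (the wiring shown is illustrative; fold it into your existing module):

```typescript
// app.module.ts — log the method and URL of every request the app receives
import { Module, NestModule, MiddlewareConsumer } from '@nestjs/common';
import { Request, Response, NextFunction } from 'express';

@Module({ controllers: [/* HealthController, ... */] })
export class AppModule implements NestModule {
  configure(consumer: MiddlewareConsumer) {
    consumer
      .apply((req: Request, res: Response, next: NextFunction) => {
        // Shows the exact path after any Ingress rewrites
        console.log(`${req.method} ${req.originalUrl}`);
        next();
      })
      .forRoutes('*');
  }
}
```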
If the issue persists, share any logs or configurations, and I’ll help debug further!