Hey there! 👋
If you found this page on Google, chances are that you are trying to fix the FailedGetResourceMetric error in HPA. If that's the case, you are in the right place!
In case you're new to HPA (HorizontalPodAutoscaler), we also have a short introduction to HPA that you may want to check out first.
There are a few different reasons why this error could happen, so let's go through them.
1. Missing Metrics
failed to get cpu utilization: unable to get metrics for resource cpu:
no metrics returned from resource metrics API
The first error is probably the most common one, but also the easiest to fix.
To understand if Kubernetes needs to create more replicas of a service, the Horizontal Pod Autoscaler (HPA) needs to know the current CPU and memory usage of the service. The error above means that it cannot find metrics for the service it's trying to scale.
metrics-server is the most popular solution for this: it collects resource metrics from Kubelets and exposes them in the Kubernetes apiserver through the Metrics API, for use by the Horizontal Pod Autoscaler and Vertical Pod Autoscaler.
Most managed Kubernetes services already have metrics-server (or an alternative metrics service) installed by default. However, if you are self-hosting a Kubernetes cluster, you may need to install it manually.
To install it, you can simply run:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
This will install all the necessary components to get metrics-server up and running. It may take a few seconds to collect the first metrics, but you should be able to validate that it's working by running:
kubectl top pods
Yay! 🎉 You should now be able to see the CPU and memory usage of your pods and nodes, as well as auto scale them using HPA and VPA!
2. Missing Requests
Now that you have metrics server up and running, you may still get the following errors:
the HPA was unable to compute the replica count: failed to get cpu utilization:
missing request for cpu
the HPA was unable to compute the replica count: failed to get memory utilization:
missing request for memory
Let's take a step back for a minute to understand what's going on.
When you create an HPA to scale based on CPU/memory averageUtilization, the HPA needs to know both the current usage and the requested resources across all matching pods.
- Current Usage: This is collected by the metrics-server (or other metrics service like Prometheus) and made available via the Metrics API.
- Requested Resources: The sum of resource requests across all containers in each matching pod.
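To make this concrete, here's a minimal sketch of an HPA that scales on average CPU utilization. The name `my-app` and all the numbers are placeholders, not recommendations:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          # desired average usage, as a percentage of the pods' CPU *requests*
          averageUtilization: 70
```

With `averageUtilization: 70`, the HPA tries to keep average CPU usage at 70% of the summed CPU requests across the matching pods, which is why both pieces of information above are required.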
It's common to think that HPA uses resource limits to calculate the average utilization, but that's not correct. HPA uses resource requests, not limits, which is exactly what this error is trying to tell us.
In summary, resource requests are missing on some or all containers in your pods. To fix this, find the target scale object (Deployment, StatefulSet, etc.) and set resource requests on the containers that are missing them.
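For example, a Deployment container with requests set might look like this (the name, image, and values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: app
          image: my-app:latest
          resources:
            requests:
              cpu: 100m     # required for CPU-based HPA targets
              memory: 128Mi # required for memory-based HPA targets
            limits:
              memory: 256Mi # limits are optional for HPA purposes
```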
Important: This requirement also applies to sidecars that may be automatically injected and not directly visible on the source YAML manifest. This is very common on service meshes like Linkerd and Istio. If you are using a service mesh, make sure that you add the resource requests to the sidecar containers too. This is often done by adding annotations to the target scale object, but check the corresponding documentation for your service mesh.
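As one example, Istio lets you set the injected sidecar's requests via pod template annotations like the ones below; verify the exact annotation names against your mesh's documentation, as they can vary by version:

```yaml
# Pod template annotations (Istio example; values are illustrative)
annotations:
  sidecar.istio.io/proxyCPU: "100m"
  sidecar.istio.io/proxyMemory: "128Mi"
```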
3. Bonus: Scaling limited by maximum replica count
This one is a bit more self-explanatory than the previous ones, and not directly related to FailedGetResourceMetric, but I think it's still worth mentioning!
You may be seeing the following condition on your HPA and wondering why:
ScalingLimited ➝ the desired replica count is more than the maximum replica count
It basically means that Kubernetes is trying to scale a service to more replicas than the maximum replica count allowed by the HPA.
This may or may not be OK, depending on your use case.
If you're noticing a performance or availability degradation in your service, you may want to increase the maxReplicas value in the HPA to allow for higher replica counts. However, if you are getting this error and you are not noticing any degradation, you may want to investigate why Kubernetes is trying to scale to more replicas than the maximum allowed and tweak your resource requests and HPA metric targets accordingly.
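If raising the ceiling is the right call for you, it's a one-line change in the HPA spec (the numbers here are illustrative):

```yaml
spec:
  minReplicas: 2
  maxReplicas: 20 # raised to allow further scale-out under load
```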