failed to garbage collect required amount of images. Wanted to free 6283487641 bytes, but freed 0 bytes

I have searched many websites and articles but have not found a definitive answer. I am using EKS version 1.18. I can see that a few of the pods are "Evicted", and when checking the node I see the error "(combined from similar events): failed to garbage collect required amount of images. Wanted to free 6283487641 bytes, but freed 0 bytes".

Is there any way to find out why it's failing, or how to fix this issue? Any suggestions are most welcome.

I can see that the disk's "overlay" filesystem becomes almost full within a few hours, and I am not sure what's going on. The screenshot below shows my memory utilization.

[screenshot: node memory utilization]

4 Answers

A workaround that can stabilize the situation for a while (giving you time to mount a larger volume for storing images) is to start using the local image cache, by changing imagePullPolicy in your Deployment (or Pod) manifest to:

spec.containers[].imagePullPolicy: IfNotPresent
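
For context, here is a minimal sketch of where that field sits in a Deployment manifest; note that the value is case-sensitive (IfNotPresent, not ifNotPresent). The name my-app and the image reference are placeholders, not taken from the original question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0   # placeholder image reference
          imagePullPolicy: IfNotPresent              # reuse the locally cached image when present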

One situation I've encountered where such quick storage exhaustion can happen is when imagePullPolicy is set to Always and the image then fails to pull completely (one possible reason being that there is not enough space). Kubernetes then enters an image pull loop (not throttled sufficiently by the backoff mechanism), and those incomplete image layers with different checksums, combined with the "always pull" request, can quickly consume all of the storage dedicated to container images (on the partition where containerd keeps them).

See if you can change the Kubernetes garbage collection policies. The issue may be due to recent changes in the kubelet flags: newer versions configure this via the --eviction-* flags. Check whether that applies to your setup and is what is causing the failure to clear space.

Please refer to the docs here

https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/
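
As an illustration only, the relevant image GC and eviction thresholds can be set through a KubeletConfiguration file; the percentage values below are the usual defaults, shown as examples, so adjust them for your nodes:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 85   # image GC kicks in once disk usage exceeds 85%
imageGCLowThresholdPercent: 80    # GC tries to bring usage back down to 80%
evictionHard:
  imagefs.available: "15%"        # hard-evict pods when the image filesystem has less than 15% free
  nodefs.available: "10%"         # hard-evict pods when the node filesystem has less than 10% free

How you apply this depends on how your worker nodes are provisioned; on EKS it is commonly passed through the kubelet bootstrap arguments.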

My local k3d cluster had the same issue; it turned out I was low on space and had a ton of dangling images (https://docs.docker.com/engine/reference/commandline/image_prune/). Running docker image prune -a and recreating the cluster fixed it for me.
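
A quick sketch of that cleanup, assuming a recent k3d release and a local cluster named dev (the cluster name is a placeholder):

docker system df        # see how much space images, containers and volumes use
docker image prune -a   # remove all images not used by any container
k3d cluster delete dev  # recreate the local cluster
k3d cluster create dev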

Put simply: in my case, the disk was almost full on the reported node.

Check if node has disk pressure:

kubectl describe node node-x 
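
If you only want the DiskPressure condition rather than the full describe output, a one-liner like this works (node-x is a placeholder):

kubectl get node node-x -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}'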

Check pods on that node:

kubectl get pods -A -o wide | grep node-x 

Access each pod and check df -m:

kubectl exec -it pod_name -- sh
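
Alternatively, to check disk usage without opening an interactive shell (pod_name remains a placeholder):

kubectl exec pod_name -- df -m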

Some tips:

  • depending on your K8s setup, you may want to focus on the root (/) filesystem of the pods and of node-x, since that is where space needs to be reduced

  • you can switch from node-x to node-y and compare how their disk usage differs by accessing both nodes and their pods (assuming node-y is healthy)

  • try to clean up space on node-x via SSH; maybe Docker is occupying the disk. Quick tips: docker image prune -a --filter "until=48h" removes unused images, journalctl --vacuum-time=2d cleans up old logs, etc. (a fuller cleanup sketch follows after this list)

  • check kubectl logs for each pod on node-x to see whether some pod has written too many log lines
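
A rough cleanup sketch for node-x reached over SSH; the 48-hour and 2-day retention windows are only examples, so tune them for your environment:

# run on node-x over SSH
df -h /                                              # confirm which filesystem is filling up
docker image prune -a --filter "until=48h" --force   # remove images unused for the last 48 hours
docker container prune --force                       # remove stopped containers
journalctl --vacuum-time=2d                          # keep only the last 2 days of journal logs
df -h /                                              # verify that space was reclaimed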
