Efficiency
These checks ensure that CPU and memory settings are configured so that Kubernetes can schedule your workload effectively.
Presence Checks
To ensure that these values have been set, the following attributes are available:
| key | default | description |
|---|---|---|
| `resources.cpuRequestsMissing` | `warning` | Fails when the `resources.requests.cpu` attribute is not configured. |
| `resources.memoryRequestsMissing` | `warning` | Fails when the `resources.requests.memory` attribute is not configured. |
| `resources.cpuLimitsMissing` | `warning` | Fails when the `resources.limits.cpu` attribute is not configured. |
| `resources.memoryLimitsMissing` | `warning` | Fails when the `resources.limits.memory` attribute is not configured. |
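If the defaults don't fit your environment, the severity of each check can typically be overridden in the Polaris configuration file. The exact schema varies between versions, so treat the snippet below as a sketch that assumes the configuration keys mirror the dotted attribute names in the table above:

```yaml
# Sketch of a configuration override -- the exact schema may differ
# between versions; keys here are assumed to mirror the table above.
resources:
  cpuRequestsMissing: error      # escalate from the default "warning"
  memoryRequestsMissing: error
  cpuLimitsMissing: warning      # keep the default severity
  memoryLimitsMissing: warning
```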
Background
Configuring resource requests and limits for containers running in Kubernetes is an important best practice to follow. Setting appropriate resource requests will ensure that all your applications have sufficient compute resources. Setting appropriate resource limits will ensure that your applications do not consume too many resources.
Having these values appropriately configured ensures that:
- Cluster autoscaling can function as intended. New nodes are provisioned once pods can no longer be scheduled on an existing node due to insufficient resources; this cannot happen if resource requests are not configured.
- Each container has sufficient access to compute resources. Without resource requests, a pod may be scheduled on a node that is already overutilized. Without resource limits, a single poorly behaving pod could utilize the majority of resources on a node, significantly impacting the performance of other pods on the same node.
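As a concrete example, a container spec that passes all four checks above sets both requests and limits for CPU and memory. The name, image, and values below are placeholders to adapt to your workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25    # example image
      resources:
        requests:
          cpu: 100m        # minimum CPU the scheduler reserves
          memory: 128Mi    # minimum memory the scheduler reserves
        limits:
          cpu: 500m        # hard CPU cap for this container
          memory: 256Mi    # hard memory cap for this container
```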