Starexe

How to Dynamically Scale Pod Resources in Kubernetes v1.36 Using In-Place Vertical Scaling

Last updated: 2026-05-04 18:05:45 · Intermediate

Introduction

Kubernetes v1.36 brings a powerful new capability to beta: In-Place Vertical Scaling for Pod-Level Resources. This feature, now enabled by default via the InPlacePodLevelResourcesVerticalScaling feature gate, allows you to adjust the aggregate resource budget (.spec.resources) of a running Pod without necessarily restarting its containers. It's a game-changer for managing complex Pods with sidecars, enabling you to expand a shared pool of CPU and memory on the fly. In this guide, we'll walk through the exact steps to leverage this feature, from understanding the underlying model to executing a resize and verifying the result.


What You Need

  • A Kubernetes cluster running v1.36 or later (the feature is beta and enabled by default).
  • kubectl installed and configured to access your cluster.
  • A running Pod that uses Pod-level resources (i.e., spec.resources.limits defined without per-container limits).
  • Optional: a resizePolicy set in your container definitions to control restart behavior (see Step 1).
  • Administrative permissions to patch Pods (usually pods/resize subresource access).
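Before you begin, it's worth confirming that your control plane actually reports the required version (this command assumes kubectl is already pointed at your cluster):

```shell
# Print the API server's reported version (expect v1.36 or later)
kubectl version -o json | grep gitVersion
```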

Step-by-Step Guide

Step 1: Understand Pod-Level Resources and Inheritance

Before you resize, you need a Pod that defines resources at the Pod level rather than per-container. Here's the key concept: when you set spec.resources (limits or requests) on a Pod, all containers that don't specify their own resource limits inherit from this shared pool. This is especially useful for sidecar patterns where you want a collective budget. The resizePolicy inside each container tells the Kubelet whether to attempt a non-disruptive update (via cgroup) or restart the container. Note that resizePolicy is currently not supported at the Pod level – each container must define its own.
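To make the per-container policy concrete, here is a sketch of a container fragment (the image name is a placeholder) that allows live CPU updates but deliberately opts memory changes into a restart:

```yaml
# Hypothetical container fragment: CPU resizes in place,
# memory changes restart the container.
containers:
- name: main-app
  image: my-app:v1        # placeholder image
  resources: {}           # inherits the Pod-level pool
  resizePolicy:
  - resourceName: cpu
    restartPolicy: NotRequired
  - resourceName: memory
    restartPolicy: RestartContainer
```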

Step 2: Create or Identify a Suitable Pod

You'll need a Pod definition that uses Pod-level resources and has at least one container. Consider this example YAML – it defines a shared CPU limit of 2 CPUs and 4Gi memory, with two containers that inherit these limits:

apiVersion: v1
kind: Pod
metadata:
  name: shared-pool-app
spec:
  resources:            # Pod-level limits
    limits:
      cpu: "2"
      memory: "4Gi"
  containers:
  - name: main-app
    image: my-app:v1
    resources: {}       # inherits from Pod
    resizePolicy:
    - resourceName: "cpu"
      restartPolicy: "NotRequired"
  - name: sidecar
    image: logger:v1
    resources: {}       # inherits
    resizePolicy:
    - resourceName: "cpu"
      restartPolicy: "NotRequired"

Apply this to your cluster with kubectl apply -f pod.yaml. Ensure the Pod is in Running state.
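To confirm the Pod came up and was admitted with the shared pool, you can check its phase and effective Pod-level limits (commands assume the shared-pool-app Pod defined above):

```shell
# Check that the Pod is Running
kubectl get pod shared-pool-app -o jsonpath='{.status.phase}'

# Inspect the Pod-level limits it was admitted with
kubectl get pod shared-pool-app -o jsonpath='{.spec.resources.limits}'
```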

Step 3: Perform the In-Place Resize

To double the shared CPU pool from 2 to 4 CPUs, use the kubectl patch command with the --subresource resize flag. The patch targets spec.resources.limits.cpu:

kubectl patch pod shared-pool-app --subresource resize --patch '{ "spec": { "resources": { "limits": { "cpu": "4" } } } }'

The Kubelet will immediately process this request. For each container, it checks the resizePolicy:

  • If restartPolicy: NotRequired, it updates cgroup limits via the CRI without restarting.
  • If restartPolicy: RestartContainer, it restarts the container to apply the new aggregate boundary safely.

In our example, both containers have NotRequired, so the update is nearly instantaneous and non-disruptive.
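The same subresource handles memory. For example, to grow the shared memory pool to 8Gi (note that the containers in Step 2 declare a resizePolicy only for CPU, so memory resizes fall back to the default policy):

```shell
kubectl patch pod shared-pool-app --subresource resize \
  --patch '{ "spec": { "resources": { "limits": { "memory": "8Gi" } } } }'
```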

Step 4: Verify the Resize and Monitor Node Stability

After patching, check the Pod status:

kubectl describe pod shared-pool-app

Look for Conditions – you should briefly see PodResizePending while the request is queued, then PodResizeInProgress once the Kubelet begins applying the change. Verify the new limit by examining the container resource limits on the node (or use kubectl exec to inspect /sys/fs/cgroup). The Kubelet also performs node-level feasibility checks: it confirms the node has enough allocatable capacity and updates the Pod's resource spec only after the resize is deemed safe. If the node is overloaded, the resize is deferred or marked infeasible rather than applied – monitor node resource usage with kubectl top nodes.
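The checks above can be scripted; the commands below assume the Pod and container names from Step 2 and a node running cgroup v2:

```shell
# List the Pod's status conditions, including resize-related ones
kubectl get pod shared-pool-app \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

# Read the effective CPU quota from inside the container (cgroup v2)
kubectl exec shared-pool-app -c main-app -- cat /sys/fs/cgroup/cpu.max
```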

Tips for Success

  • Always define an explicit resizePolicy on each container that inherits from Pod-level resources. The default policy is NotRequired, but stating it explicitly documents your intent and lets you opt specific resources (memory, for example) into RestartContainer deliberately.
  • Start small: test with a non-critical Pod before applying to production. In-place resizing uses the resize subresource, which has its own RBAC rules – ensure your service account has pods/resize permissions.
  • Automate with care: Horizontal Pod Autoscaler (HPA) doesn't currently drive in-place Pod-level resizes directly, so if you want metric-driven vertical scaling you'll need a custom controller that issues resize patches. For manual scaling, use the patch command in scripts.
  • Understand restart implications: If a container requires RestartContainer, the resize will cause a brief downtime. Use NotRequired for sidecars that can handle cgroup updates without restarting (e.g., stateless loggers).
  • Monitor node pressure: The Kubelet reserves resources from the node's allocatable pool, so a resize may fail if the node is near capacity. Re-run the verification steps from Step 4 after every resize to confirm the change actually took effect.
  • Beware of the current limitation: resizePolicy cannot be set at the Pod level – you must define it per container. Future versions may address this.
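If your automation needs the pods/resize permission mentioned above, a minimal Role might look like this (the Role name and namespace are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-resizer          # illustrative name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods/resize"]
  verbs: ["patch"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
```

Bind this Role to the service account that runs your resize scripts or controller.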