

Kubernetes 1.35 will be released soon, bringing 17 changes to its security features.
It includes new validations, the deprecation of old technologies, and broader support for user namespaces, to name a few.
Let’s dig in!
Changes in Kubernetes 1.35 that may break things
#5573 Remove cgroup v1 support
SIG group: sig-node
Stage: Net New to Beta
Kubelet Configuration: failCgroupV1 Default: true
The cgroups feature in the Linux Kernel is the foundation for both container isolation and resource quotas. Kubernetes relies on this feature for many of its internal workings.
cgroups v2 has been available for about a decade, providing enhanced resource management and isolation. So it’s finally time to drop support for the original version.
⚠️ Starting with Kubernetes 1.35, support for cgroups v1 is disabled by default. It is a first step before completely deprecating and removing this support altogether.
✅ Check if your Linux server is using cgroups v2 before upgrading to Kubernetes 1.35:
$ stat -fc %T /sys/fs/cgroup/
cgroup2fs
#2535 Ensure secret pulled images
SIG group: sig-node
Stage: Major Change to Beta
Feature Gate: KubeletEnsureSecretPulledImages Default: true
Up until now, checking if a Pod is authorized to access an image has been done only when the image is pulled. This means that if one image has already been pulled, a Pod may access it even without the proper permissions.
There is a workaround: setting imagePullPolicy to Always indirectly forces the kubelet to perform a verification against the registry. However, this workaround is neither intuitive nor watertight.
This enhancement adds the field imagePullCredentialsVerificationPolicy so cluster admins can define when these authorization verifications should take place.
ℹ️ This kind of extra verification is mandatory for multi-tenant clusters, where Pods from some users should never be able to access someone else’s images. The fact that this configuration exists and can be misconfigured is another reason you shouldn’t include any sensitive information in your container images.
You can keep the current behavior of not performing checks with NeverVerify. Use NeverVerifyAllowlistedImages to allow the images listed in preloadedImagesVerificationAllowlist, or AlwaysVerify to, as the name says, always require a credential re-verification.
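These policies live in the kubelet configuration. A minimal sketch, assuming the field names described in the KEP (the allowlisted registry prefix is hypothetical):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Only skip re-verification for explicitly allowlisted images.
imagePullCredentialsVerificationPolicy: NeverVerifyAllowlistedImages
preloadedImagesVerificationAllowlist:
- "registry.example.com/base"   # hypothetical trusted prefix
```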
⚠️ The new default value is NeverVerifyPreloadedImages, which won’t apply extra verifications to:
- Images pulled outside the kubelet.
- Images pulled by the kubelet while this feature was disabled.
However, it will perform checks on new pulls, which can cause some workloads to break due to pod creation failures.
⚠️ This feature will also cause an increase in image pulls.
✅ Prepare in advance, ensuring your Pods have the credentials to pull all the images they need. You can temporarily disable the feature gate or set imagePullCredentialsVerificationPolicy to NeverVerify until everything is fixed.
✅ Review your monitoring thresholds around pulls so you won’t get alerted unnecessarily. Check the metrics for pod creation failures to identify whether your pods are systemically experiencing credential issues.
#4006 Transition from SPDY to WebSockets
SIG group: sig-api-machinery
Stage: Graduating to Stable
Feature Gate: KUBECTL_REMOTE_COMMAND_WEBSOCKETS Default: true
Feature Gate: TranslateStreamCloseWebsocketRequests Default: true
Feature Gate: KUBECTL_PORT_FORWARD_WEBSOCKETS Default: true
Feature Gate: PortForwardWebsockets Default: true
Feature Gate: AuthorizePodWebsocketUpgradeCreatePermission Default: true
Remember the web back in 2009? The term “Web 2.0” was trending, and Google Wave excited us with promises of a truly interactive web experience. To support this paradigm shift, Google proposed replacing HTTP/1.x with its newly developed SPDY protocol. It was so popular that it became the basis for HTTP/2.
A lot has happened since. The flashy Web 2.0 is now just plain old HTML5, and WebSockets have long surpassed the now-obsolete SPDY in capabilities. Yet the Kubernetes CLI tool still relies on SPDY.
This transition has security implications, though. The WebSocket specification requires every connection to start as an HTTP GET request that is then upgraded. This means that read-only users could escalate their permissions to run commands like kubectl exec.
⚠️ To prevent this privilege escalation, the API server will now require that users have permissions for the create verb whenever a connection upgrade is requested.
✅ Review your RBAC policies before upgrading to Kubernetes 1.35 to ensure that the create permission is granted to those users who need it. You’ll also be able to temporarily disable this new permissions check by setting the AuthorizePodWebsocketUpgradeCreatePermission feature gate to false.
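For reference, a minimal Role sketch granting the verbs these streaming subresources need (the role name is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-streamer   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["pods/exec", "pods/attach", "pods/portforward"]
  verbs: ["get", "create"]   # create is now checked on connection upgrades
```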
Read more about this new check in the KEP, and more details about the implementation in kubectl in our “Kubernetes 1.31 - What’s New?” article.
#4872 Harden Kubelet serving certificate validation in kube-API server
SIG group: sig-auth
Stage: Net New to Alpha
Feature Gate: KubeletCertCNValidation Default: false
kube-apiserver flags: --enable-kubelet-cert-cn-validation, --kubelet-certificate-authority.
There is a security gap in the current API server certificate validation process that could allow:
- An attacker with access to an old node,
- Where that node has a still valid certificate,
- To configure that old node’s IP to match a new node’s IP (via ARP poisoning or other routing attacks),
- And use that certificate to impersonate the new node against the API server,
- Managing to reroute traffic to itself.
It’s not an easy attack, but it’s feasible in cloud environments where machines change faster than their certificates expire.
⚠️ One of the things the API server will do to prevent this from happening is to require the Common Name (CN) of the kubelet's serving certificate to be equal to system:node:<nodename>. In this case, nodename is the name of the Node object as reported by the kubelet.
✅ Before enabling this feature, ensure that all the kubelet’s certificates comply with this requirement. If your kubelets request certificates via a CSR, you’ll be fine, since they were already created that way. However, you’ll need to reissue any non-conforming certificates that you may have created manually or through another mechanism.
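A quick way to check the CN convention is with openssl. This sketch generates a throwaway self-signed certificate just to illustrate the check; on a real node you would point openssl at the kubelet’s actual serving certificate instead (the path in the comment is a common default, not guaranteed):

```shell
# Generate a demo certificate whose CN follows the system:node:<nodename> rule.
# On a real node, skip this step and inspect the kubelet's serving certificate,
# e.g. /var/lib/kubelet/pki/kubelet-server-current.pem.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/kubelet-demo.key -out /tmp/kubelet-demo.crt \
  -subj "/CN=system:node:node01" -days 1 2>/dev/null

# Extract the CN from the certificate subject.
cn=$(openssl x509 -in /tmp/kubelet-demo.crt -noout -subject -nameopt multiline |
  awk '/commonName/ {print $NF}')

echo "$cn"   # system:node:node01
[ "$cn" = "system:node:node01" ] && echo "CN conforms" || echo "certificate must be reissued"
```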
ℹ️ There won’t be documentation for this feature in Kubernetes 1.35 as it’s still a work in progress, but you can check more details in the KEP and start preparing for the future.
Net new enhancements in Kubernetes 1.35
#5284 Constrained impersonation
SIG group: sig-auth
Stage: Net New to Alpha
Feature Gate: ConstrainedImpersonation Default: false
The user impersonation mechanism in Kubernetes allows a user to act as another user.
This is useful, for example, for an admin user debugging an authorization policy. While temporarily impersonating another user, admins can quickly submit requests as that user and check if they are denied.
However, the current mechanism also allows users to impersonate other users with more permissions. That’s why admins should be cautious when granting the impersonation permission.
The new constrained impersonation feature adds an extra check to limit users' permissions when impersonating. With ConstrainedImpersonation enabled, users won’t be able to perform any action while impersonating someone that they couldn’t perform on their own.
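For context, the (unconstrained) impersonation permission is granted through RBAC with the impersonate verb; a minimal sketch (the role name is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: debug-impersonator   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["users", "groups"]
  verbs: ["impersonate"]
```

A user bound to this role can then run, for example, kubectl get pods --as=jane.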
#4828 Flagz for Kubernetes components
SIG group: sig-instrumentation
Stage: Major Change to Alpha
Feature Gate: ComponentFlagz Default: false
Similar to the statusz page, the flagz endpoint provides runtime diagnostics for Kubernetes components. In particular, it exposes the command-line arguments that were used to start a component.
Cluster administrators can use this tool to ensure that all components are running with the expected configuration and that there are no deviations from security policies.
ℹ️ Read more in our “Kubernetes 1.32 - What’s New?” article.
#5607 Allow HostNetwork Pods to use user namespaces
SIG group: sig-node
Stage: Net New to Alpha
Feature Gate: UserNamespacesHostNetworkSupport Default: false
Currently, if a service like the API server wants to access the host network stack directly by using hostNetwork: true, it must also use the host users via hostUsers: true.
This is a security risk: if a Pod running as root gets compromised, it is easy for the attacker to also gain root access on the host.
With this enhancement, the API server will allow pods with hostNetwork: true and hostUsers: false, and Pods will be able to access the host network stack while maintaining the isolation provided by Kubernetes user namespaces.
⚠️ Keep in mind that your underlying container runtime must support this combination; otherwise, the Pod will remain stuck in the ContainerCreating state and report an exception event.
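A minimal sketch of such a Pod (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-userns-demo   # hypothetical name
spec:
  hostNetwork: true   # share the host's network stack
  hostUsers: false    # but keep a private user namespace
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```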
#5538 CSI driver opt-in for service account tokens via secrets field
SIG group: sig-storage
Stage: Net New to Alpha
Feature Gate: CSIServiceAccountTokenSecrets Default: false
If you currently need to provide an account token to mount a volume, like a cloud bucket, that token is stored in VolumeContext alongside other non-sensitive information, like the pod name or the namespace.
This enhancement provides a new Secrets field designed to store this kind of sensitive data. The serviceAccountTokenInSecrets field will tell the CSI driver to find the tokens in the secrets field:
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example-csi-driver
spec:
  # ... existing fields ...
  tokenRequests:
  - audience: "example.com"
    expirationSeconds: 3600
  # New field for opting into secrets delivery
  serviceAccountTokenInSecrets: true # defaults to false
⚠️ The API server will trigger a warning if a CSIDriver has the serviceAccountTokenInSecrets field set to false.
⚠️ Also, keep in mind that CSI driver developers must implement support for these new secrets.
Existing enhancements that will be enabled by default in Kubernetes 1.35
#4317 Pod Certificates
SIG group: sig-auth
Stage: Graduating to Beta
Feature Gate: PodCertificateRequest Default: true
Back in Kubernetes 1.19, the certificate signing request API was graduated to stable, providing a simple mechanism for requesting and obtaining X.509 certificates.
However, there is no simple way to provide those certificates to your workloads.
This enhancement introduces:
- The PodCertificateRequest API, a mechanism to issue certificates for Pods.
- A podCertificate volume source that instructs the kubelet to provision a certificate for the Pod.
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: pod-certificates-example
spec:
  restartPolicy: OnFailure
  automountServiceAccountToken: false
  containers:
  - name: main
    …
    volumeMounts:
    - name: spiffe-credentials
      mountPath: /run/workload-spiffe-credentials
  volumes:
  - name: spiffe-credentials
    projected:
      sources:
      - podCertificate:
          signerName: "row-major.net/spiffe"
          keyType: ED25519
          credentialBundlePath: credentialbundle.pem
This feature turns a burdensome, repetitive task into a breeze while making security easier to implement. What’s not to love?
#127 Support User Namespaces in pods
SIG group: sig-node
Stage: Graduating to Beta
Feature Gate: UserNamespacesSupport Default: true
User namespaces increase Pod isolation by mapping the users that processes run as inside the container to different, unprivileged users on the host.
For example, this is useful for Pods that need to run as root: processes can run as root inside the Pod while actually running unprivileged on the host.
If such a Pod is compromised and the attacker manages to break out of the container, the impact will be limited, as they will run as an unprivileged user.
You can enable this feature by setting hostUsers: false in your Pod description.
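A minimal sketch (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo   # hypothetical name
spec:
  hostUsers: false    # run the Pod in its own user namespace
  containers:
  - name: main
    image: busybox
    command: ["id"]   # reports uid 0 inside, mapped to an unprivileged host UID
```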
ℹ️ Read more in our “Kubernetes 1.25 - What’s New?” article.
#4639 VolumeSource: OCI Artifact and/or Image
SIG group: sig-node
Stage: Graduating to Beta
Feature Gate: ImageVolume Default: true
Kubernetes can now use OCI artifacts and images as volume sources.
This allows developers to separate binaries from configurations and other assets, simplifying deployments and enabling image reuse.
A simple use case would be web servers like nginx. To deploy a website, you currently have to base your image on nginx, then add all your assets on top of that. Doing this for all your websites creates a lot of duplication.
Now, all your website Pods can use the same base image (adapted to use image volumes), and deploy the configuration and web assets in a separate, minimal image.
These image volumes are defined as follows:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  volumes:
  - name: oci-volume
    image:
      reference: "example.com/my-image:latest"
      pullPolicy: IfNotPresent
  containers:
  - name: my-container
    image: busybox
    volumeMounts:
    - mountPath: /data
      name: oci-volume
⚠️ However, allowing OCI images to be mounted in this manner opens the door to potential attack vectors.
✅ Before you enable this feature, strengthen your security policies: don’t allow images to be mounted from untrusted registries, nor images that contain runnable content. You may also consider blocking Pods that define image volumes.
ℹ️ Read more in Kubernetes 1.33 - What’s new?.
#3104 Separate kubectl user preferences from cluster configs
SIG group: sig-cli
Stage: Graduating to Beta
Environment Variable: KUBECTL_KUBERC Default: true
With the inclusion of a kuberc file (~/.kube/kuberc) in the Kubernetes CLI, users can cleanly separate cluster credentials and server configurations from user-specific preferences.
It is designed so that different configurations can be applied to different clients. For example, it can allow users to enforce delete confirmation in their local client, but not in their CI pipelines.
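For example, a kuberc sketch enforcing delete confirmation locally, assuming the beta overrides stanza keeps this shape:

```yaml
apiVersion: kubectl.config.k8s.io/v1beta1
kind: Preference
overrides:
- command: delete
  flags:
  - name: interactive
    default: "true"   # always ask before deleting
```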
ℹ️ Since Kubernetes 1.35, a new credentialPluginPolicy field is available to limit which credential plugins kubectl can run.
Any kubeconfig file can define a users[n].exec.command field pointing to a “credential plugin” executable that kubectl will run on your behalf. This is intended as a helper to authenticate to the cluster with external identity providers. However, it is also an attack vector, as these plugins can be any arbitrary executable that kubectl will run without you noticing.
The credentialPluginPolicy field defaults to AllowAll, which permits any executable. You can set it to DenyAll to prevent any plugin from running. This setting is also helpful for taking an inventory of plugins, as you’ll get an error message whenever kubectl tries to run one.
Finally, you can also set it to Allowlist, and then limit the permitted plugins to only those defined in credentialPluginAllowlist:
apiVersion: kubectl.config.k8s.io/v1beta1
kind: Preference
credentialPluginPolicy: Allowlist
credentialPluginAllowlist:
- name: /usr/local/bin/cloudco-login
- name: get-identity
ℹ️ Read more in our “Kubernetes 1.31 - What’s New?” article.
#5589 Remove gogo protobuf dependency for Kubernetes API types
SIG group: sig-api-machinery
Stage: Major Change to Stable
The Kubernetes API relies on gogo protobuf. However, this library was deprecated in 2021. As this poses a security risk, among other things, work is underway to remove this dependency.
This enhancement focuses on removing these dependencies from the Kubernetes API objects. Instead, the standard golang protobuf library will be used.
ℹ️ Check the implementation details in the KEP, or adventure into the PR.
#3331 Structured Authentication Config
SIG group: sig-auth
Stage: Graduating to Stable
Feature Gate: StructuredAuthenticationConfiguration Default: true
Currently, you can configure the authentication for the API server using several command flags, like --oidc-issuer-url, --oidc-client-id or --oidc-username-claim.
This enhancement adds support to perform this configuration via a config file provided with the flag --authentication-config:
apiVersion: apiserver.config.k8s.io/v1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://example.com # Same as --oidc-issuer-url
    audiences:
    - my-app # Same as --oidc-client-id.
  claimMappings:
    username:
      expression: 'claims.username + ":external-user"'
    groups:
      expression: 'claims.roles.split(",")'
    uid:
      expression: 'claims.sub'
    extra:
    - key: 'example.com/tenant'
      valueExpression: 'claims.tenant'
  userValidationRules:
  - expression: "!user.username.startsWith('system:')"
    message: 'username cannot use the reserved system: prefix'
The main goal behind this change is to gain more flexibility, allowing you to:
- Update the configuration without restarting the server.
- Define more than one audience claim.
- Use expressions instead of exact matching.
- Use more than one OIDC provider.
ℹ️ As a bonus, having this configuration in a structured file will make it easier to check for misconfigurations and security policy drift.
⚠️ Keep in mind that the --authentication-config flag is incompatible with the old --oidc-* command line arguments. The API server will report this as a misconfiguration and will exit immediately.
#859 Include kubectl command metadata in http request headers
SIG group: sig-cli
Stage: Graduating to Stable
Environment Variable: KUBECTL_COMMAND_HEADERS Default: true
For some time, kubectl has been including additional HTTP headers in the requests to the API server. Now this feature is considered stable.
$ kubectl apply -f - -o yaml
Kubectl-Command: apply
Kubectl-Session: 67b540bf-d219-4868-abd8-b08c77fefeca
ℹ️ Being able to track all the commands executed in the same session provides context when investigating security incidents originating from kubectl commands.
ℹ️ Read more in Kubernetes 1.22 - What’s new?.
#3619 Fine-grained SupplementalGroups control
SIG group: sig-node
Stage: Graduating to Stable
Feature Gate: SupplementalGroupsPolicy Default: true
Linux users may belong to several groups beyond their main one, defined in the /etc/group file:
$ id golo
uid=1000(golo) gid=1000(golo) groups=1000(golo),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),122(lpadmin),135(lxd),136(sambashare),137(vboxusers)
If a container image defines its own groups in the /etc/group file, by default, Kubernetes merges that with the groups information defined in the Pod.
ℹ️ This is dangerous, as a malicious container image could tailor the /etc/group file to escalate privileges.
This now-stable feature added a supplementalGroupsPolicy field in the Pod definition. When set to Strict, as opposed to the default Merge, Kubernetes will ignore any groups configuration not defined in the Pod config:
apiVersion: v1
kind: Pod
metadata:
  name: strict-supplementalgroups-policy
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    supplementalGroups: [4000]
    supplementalGroupsPolicy: Strict
  …
ℹ️ Read more in the Kubernetes blog.
#3983 Add support for a drop-in kubelet configuration directory
SIG group: sig-node
Stage: Graduating to Stable
Config Flag: --config-dir Default: ''
After several releases in beta, this feature is now considered stable.
Similar to other Linux tools, you can now define a drop-in directory (e.g., /etc/kubernetes/kubelet.conf.d) for the kubelet configuration.
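For example, a small fragment dropped into that directory (the file name is hypothetical) can override a single setting while the main configuration stays untouched:

```yaml
# /etc/kubernetes/kubelet.conf.d/10-image-pulls.conf (hypothetical file name)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serializeImagePulls: false
```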
You can check the effective kubelet configuration using kubectl proxy and the configz endpoint:
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
$ curl -X GET http://127.0.0.1:8001/api/v1/nodes/<node-name>/proxy/configz | jq .
{
  "kubeletconfig": {
    "enableServer": true,
    "staticPodPath": "/var/run/kubernetes/static-pods",
…
ℹ️ There’s a reason why this practice is so widespread in Linux: it enhances transparency while keeping the configuration easier to maintain and less error-prone. For Kubernetes, it also helps support the best practices for configuration management in line with the OWASP Top 10 for Kubernetes.
ℹ️ Read more in Kubernetes 1.32 - What’s new?, or in the Kubernetes Docs.
Wrapping things up
If you liked this, you might want to check out our previous “What's new in Kubernetes” editions:
- Kubernetes 1.33 - What’s new?
- Kubernetes 1.32 - What’s new?
- Older releases: 1.31, 1.30, 1.27, 1.26, 1.25, 1.24, 1.23, 1.22, 1.21, 1.20, 1.19, 1.18, 1.17, 1.16, 1.15, 1.14, 1.13, 1.12.
Get involved with the Kubernetes project:
- Visit the project homepage.
- Check out the Kubernetes project on GitHub.
- Get involved with the Kubernetes community.
- Meet the maintainers on the Kubernetes Slack.
- Follow @Kubernetes.io on Bluesky.