14/05/2021 • Peter Jans

KubeCon/CloudNativeCon 2021: our highlights & takeaways

Did you miss KubeCon/CloudNativeCon 2021 last week, or didn’t you have time to go through all the sessions? We’ve got your back! These are our highlights and takeaways from this year’s digital event on everything related to Cloud Native and Kubernetes.

Multi-Cloud and Multi-Cluster with CNCF’s Kuma on Kubernetes and VMs – Marco Palladino, Kong

This keynote was the first session our team members followed during ServiceMeshCon 2021. Even though it could be considered a sales pitch for the Kuma project, it gave a good idea of the current challenges of implementing service meshes.

A lot of companies – including us – have successfully installed service mesh add-ons on their Kubernetes clusters. Now that we know what’s possible with these service meshes, we want to be able to use them everywhere: across multiple clusters, on-premise and in the cloud, and perhaps even on virtual machines. In the end, the goal is to take a workload and schedule it somewhere on our infrastructure, without necessarily knowing in advance where it will land. Patterns like spreading the load 20% on VMs, 30% on on-premise Kubernetes clusters and 50% on AWS EKS clusters become a possibility.
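As a rough illustration of that kind of weighted split (not taken from the talk, and with field names that may differ between Kuma versions), a Kuma TrafficRoute could look something like this:

```yaml
# Rough, illustrative sketch of a Kuma TrafficRoute spreading traffic over zones.
# Zone and service names are made up; the exact schema varies per Kuma version.
type: TrafficRoute
name: backend-split
mesh: default
sources:
  - match:
      kuma.io/service: frontend
destinations:
  - match:
      kuma.io/service: backend
conf:
  split:
    - weight: 20
      destination:
        kuma.io/service: backend
        kuma.io/zone: vm-datacenter
    - weight: 30
      destination:
        kuma.io/service: backend
        kuma.io/zone: onprem-kubernetes
    - weight: 50
      destination:
        kuma.io/service: backend
        kuma.io/zone: aws-eks
```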

This keynote was a good introduction to what was explained more in-depth throughout the day.

Screenshot of presentation about Multi-Cloud and Multi-Cluster with CNCF’s Kuma on Kubernetes and VMs by Marco Palladino at KubeCon/CloudNativeCon 2021

“Extend All The Things!”: Cloud Provider Edition – Joe Betz, Google

One of the most powerful features of Kubernetes is its extensibility: the possibility to add extra functionality to the cluster, making it more powerful and adjusting it to your needs. There are many different ways to achieve this, and in this session Joe explained several of them.

You can create your own controllers and Custom Resource Definitions, use Mutating and Validating Webhooks, write an aggregated API that extends the existing API, or plug in a specific CSI (Container Storage Interface), CNI (Container Network Interface) or CRI (Container Runtime Interface). There’s also a new way to use specific registries based on different patterns. If you want to know more about any of these, I suggest watching a recording of this talk.
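To give an idea of the CRD route, here’s a minimal example; the group, kind and schema are made up purely for illustration:

```yaml
# Minimal illustrative Custom Resource Definition.
# Group, kind and schema are invented for this example.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
                retentionDays:
                  type: integer
```

Once this CRD is applied, the cluster accepts Backup objects like any built-in resource, and a custom controller can act on them.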

Apart from adding functionality, Joe also explained the shift from in-tree cloud providers to out-of-tree cloud providers. This means that add-ons for cloud providers like AWS and GCE will no longer be part of the Kubernetes core, but become separate plugins. This keeps the core smaller and easier to maintain, since a cloud provider add-on has fewer dependencies on other code within the core and gets its own release cycle. From Kubernetes 1.24 onwards (early 2022), the in-tree cloud providers will no longer be available.
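As a hedged sketch of what that looks like in practice for a kubeadm-based cluster (the API version and exact placement may vary): the kubelet is started with cloud-provider set to external, and the provider’s own cloud-controller-manager is then deployed separately.

```yaml
# Hedged sketch: register a kubeadm node for an out-of-tree cloud provider.
# The provider's cloud-controller-manager still has to be deployed separately.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
```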

Screenshot of presentation about "Extend All The Things!": Cloud Provider Edition by Joe Betz at KubeCon/CloudNativeCon 2021

How DoD Uses K8s and Flux to Achieve Compliance and Deployment Consistency – Michael Medellin & Gordon Tillman, Department of Defense

The DoD, just like any other software company, needs infrastructure to deploy their applications on. They, like many others, have been transitioning to Kubernetes these last few years. But of course, the DoD is not your ordinary software company. They are a monolithic entity in charge of the entire armed forces of the US, with their own compliance and regulatory challenges.

In this presentation, Michael and Gordon talk about how they do it and what tools and practices they use to ship secure, reliable, and resilient software to their globally distributed user base. 

First and foremost is Git, which is the single source of truth for all their infrastructure. Flux runs on top of this Git repository and makes sure no changes are made to the clusters without going through the entire deployment pipeline; only signed commits are allowed to be applied to the infrastructure. Finally, there’s Cluster API, which is used to manage all the clusters and keep an overview of them.

Then there’s the deployment of new clusters. Once again, Git is the single source of truth. First, Terraform deploys the underlying networking components, bastion instances, and endpoints to securely connect to anything outside the cluster. Next up, custom resources are generated based on the output of Terraform and committed to Git. From that point we’re back with Flux, which makes sure the commits are signed and deployed.
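As a hedged sketch of that verification step (assuming Flux v2; the repository URL and secret name are made up), a GitRepository source that only reconciles signed commits could look like this:

```yaml
# Illustrative Flux v2 source that verifies commit signatures before anything is applied.
# Repository URL and secret name are invented for this example.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example-org/infrastructure
  ref:
    branch: main
  verify:
    mode: head                # verify the signature of the commit at HEAD
    secretRef:
      name: gpg-public-keys   # secret holding the trusted GPG public keys
```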

This talk is a must for anyone interested in how you can tackle the deployment and management of secure, reliable, and resilient infrastructure in an environment filled with compliance and regulatory challenges. You can find the talk’s presentation slides here.

Screenshot of presentation about How DoD Uses K8s and Flux to Achieve Compliance and Deployment Consistency by Michael Medellin & Gordon Tillman, Department of Defense at KubeCon/CloudNativeCon 2021

Contour, a High Performance Multitenant Ingress Controller for Kubernetes – Steve Sloka, VMware

This session was about Contour (GitHub link), an open source Kubernetes ingress controller similar to NGINX. Contour provides a control plane for the Envoy edge and service proxy. It supports dynamic configuration updates, TLS termination and passthrough, and multiple load-balancing algorithms.

The session started with some new features the Contour team implemented in the latest version of Contour. The most interesting one is rate limiting, which lets you decide how much traffic is allowed to reach certain services. It’s definitely useful against some cyberattacks, like DDoS. The team also added Gateway API functionality to Contour.

After the theoretical part of the session, Steve showed us how to configure global and local rate limiting using a ConfigMap.
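To give a feel for what that looks like, here’s a simplified sketch of local rate limiting on a Contour HTTPProxy (the hostname and service are made up, and fields may differ slightly between Contour versions); global rate limiting additionally relies on an external rate limit service wired up through Contour’s configuration.

```yaml
# Simplified sketch: limit each Envoy to 100 requests per second for this virtual host.
# Hostname and backend service are invented for this example.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: echo
  namespace: default
spec:
  virtualhost:
    fqdn: echo.example.com
    rateLimitPolicy:
      local:
        requests: 100
        unit: second
        burst: 20
  routes:
    - conditions:
        - prefix: /
      services:
        - name: echo
          port: 80
```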

Screenshot of presentation about Contour, a High Performance Multitenant Ingress Controller for Kubernetes by Steve Sloka from VMware

Introduction and Deep Dive into Containerd – Kohei Tokunaga & Akihiro Suda, NTT Corporation

This talk gave an overview of Containerd and its recent updates, as well as how it’s being used by Kubernetes, Docker and other container-based systems. Basically, you learn how various products such as Docker use Containerd to provide container services. Kubernetes itself now interacts directly with Containerd, whereas older K8s setups still went through Dockerd.
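As a small, hedged example of that direct integration (the kubeadm API version and socket path reflect a typical setup, not the talk itself): pointing a kubeadm-managed node at Containerd’s CRI socket is enough to skip Dockerd entirely.

```yaml
# Hedged sketch: register a kubeadm node against Containerd's CRI socket
# instead of going through Dockerd.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
```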

The talk also took a deep dive into how to leverage Containerd by extending and customizing it for your use case with low-level plugins like remote snapshotters, as well as by implementing your own Containerd client. An interesting extension is a snapshotter plugin that allows a container to lazily pull an image and start up without waiting for the entire image contents to be available locally.

Upcoming features and recent discussions in the Containerd community were also covered. Useful updates in Containerd 1.5 include the addition of the zstd compression algorithm, which allows for faster compression and decompression, OCIcrypt decryption by default, and nerdctl (contaiNERD ctl) as a non-core subproject. Future features will include filesystem quotas and CRI support for user namespaces, so that Kubernetes pods can run as a different user than the daemon user.

You can find the presentation slides here.

Screenshot of presentation about Introduction and Deep Dive into Containerd

GitOps Con opening keynotes – Cornelia Davis, Weaveworks

GitOps Con was one of the co-located events leading up to KubeCon/CloudNativeCon Europe 2021. In the opening keynotes, Cornelia Davis talked about Git as an interface to operations rather than just a store. Git can be used to represent the desired state, whereas a K8s cluster represents the actual state.

GitOps can enable DevOps teams to release more frequently, reduce lead time and operate their applications more effectively. This is achieved by using familiar tooling (Git) and allowing platform teams to focus on security, compliance (Git log), resilience and cost management.

An interesting approach with regard to security was working with a pull model for continuous delivery (CD). By using an operator inside the cluster that updates deployments based on the desired state in Git, there’s no need for a central CD component. This in turn enhances overall security, because no single component needs access to multiple clusters.
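In Flux v2, for example, that in-cluster operator is driven by a Kustomization resource that keeps pulling and applying the desired state from a Git source; the names and path below are purely illustrative.

```yaml
# Illustrative pull-model reconciliation: an in-cluster controller applies whatever
# is committed under ./apps, so no external CD system needs access to the cluster.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 5m
  path: ./apps
  prune: true
  sourceRef:
    kind: GitRepository
    name: infrastructure
```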

In the cncf/podtato-head GitHub repo, you can find a demo project showcasing cloud-native application delivery use cases with a variety of tools.

You can find the opening keynotes and the other talks from GitOps Con on the GitOps Working Group YouTube channel.

Screenshot of GitOps Con opening keynotes by Cornelia Davis, Chief Technology Officer at Weaveworks

Hacking into Kubernetes Security – Ellen Körbes, Tilt & Tabitha Sable, Datadog

Kubernetes is a cool and fun place to run your containers. But is it safe? In this talk, Ellen and Tabitha demonstrated how important it is to secure your Kubernetes cluster by adding RBAC, admission control and network policies, and by updating and patching vulnerabilities.

This talk was more like a live action scene than just reading some PowerPoint slides. First of all, they showed how important it is to configure RBAC in your cluster. Without RBAC, or with a half-configured RBAC setup, it’s easy to access someone else’s work or namespace. And if someone inside the cluster can access other namespaces, an attacker who manages to get inside can access your namespace too. Just using RBAC is not enough though: you also need to configure network policies, so you can manage your network flow by blocking or allowing traffic between pods and/or namespaces.
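As a small, hedged sketch of those two building blocks (the namespace, resource names and verbs are made up for illustration): a namespace-scoped Role that confines a team to its own namespace, and a default-deny NetworkPolicy that blocks all ingress traffic until you explicitly allow it.

```yaml
# Illustrative namespace-scoped Role: this team can only touch pods and deployments
# in the "team-a" namespace, nothing cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-developer
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# Illustrative default-deny: blocks all ingress traffic to pods in the namespace
# until more specific NetworkPolicies allow it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

The Role still needs a RoleBinding to attach it to actual users or groups, and the default-deny policy is only a starting point for more fine-grained rules.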

One of the most important things to do is to subscribe to the Kubernetes CVE list. By subscribing, you’re always notified when a new vulnerability has been found, which helps you patch your cluster before any hacker tries to get inside.

But what do you do if, despite all efforts, you have been hacked? First of all: stay calm and inform your admin. Together, check what has been changed in your cluster by using audit logging or other tools. Make a sheet and write everything down. Then try to fix or patch any open vulnerabilities and backdoors. Finally, report the case to the police.
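Audit logging only helps if it was enabled beforehand, though. As a minimal, hedged sketch, an audit policy that records metadata for every request looks something like this (the API server also needs to be started with --audit-policy-file pointing at this file, plus a log backend):

```yaml
# Minimal illustrative audit policy: record who did what, without request/response bodies.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
```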

Curious about how Kubernetes might help your business? Check out our Kubernetes services below!