Kubernetes

Certified Kubernetes Offerings

The State of Cloud-Native Development. A report detailing data on the use of Kubernetes, serverless computing, and more

Kubernetes open-source container orchestration

Kubernetes architecture

10 most common mistakes

5 Open-source projects that make #Kubernetes even better


Templating YAML in Kubernetes with real code. YQ YAML processor

Kubernetes Limits

Kubernetes Knowledge Hubs

Extending Kubernetes

Adding Custom Resources. Extending the Kubernetes API with Custom Resource Definitions. CRD vs Aggregated API

  • Custom Resources
  • Use a custom resource (CRD or Aggregated API) if most of the following apply:
    • You want to use Kubernetes client libraries and CLIs to create and update the new resource.
    • You want top-level support from kubectl; for example, kubectl get my-object object-name.
    • You want to build new automation that watches for updates on the new object, and then CRUD other objects, or vice versa.
    • You want to write automation that handles updates to the object.
    • You want to use Kubernetes API conventions like .spec, .status, and .metadata.
    • You want the object to be an abstraction over a collection of controlled resources, or a summarization of other resources.
  • Kubernetes provides two ways to add custom resources to your cluster:
    • CRDs are simple and can be created without any programming (a minimal example follows this list).
    • API Aggregation requires programming, but allows more control over API behaviors like how data is stored and conversion between API versions.
  • Kubernetes provides these two options to meet the needs of different users, so that neither ease of use nor flexibility is compromised.
  • Aggregated APIs are subordinate API servers that sit behind the primary API server, which acts as a proxy. This arrangement is called API Aggregation (AA). To users, it simply appears that the Kubernetes API is extended.
  • CRDs allow users to create new types of resources without adding another API server. You do not need to understand API Aggregation to use CRDs.
  • Regardless of how they are installed, the new resources are referred to as Custom Resources to distinguish them from built-in Kubernetes resources (like pods).
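  • As a minimal sketch of how little is needed to register a CRD (the stable.example.com group and the CronTab kind below are illustrative placeholders, not taken from this page):
# Register a hypothetical CronTab custom resource and query it with kubectl.
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                image:
                  type: string
EOF
# Once the CRD is established, the new type gets first-class kubectl support:
kubectl get crontabs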

Crossplane, a Universal Control Plane API for Cloud Computing. Crossplane Workloads Definitions

Kubectl commands

Kubectl Cheat Sheets

List all resources and sub resources that you can constrain with RBAC

  • Kind of a handy way to see all the things you can affect with Kubernetes RBAC. This will list all resources and subresources that you can constrain with RBAC. If you want to see just the subresources of a given resource, append “| grep {name}/”:
kubectl get --raw /openapi/v2  | jq '.paths | keys[]'
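  • For example, to see just the subresources of pods (jq is already assumed by the command above):
kubectl get --raw /openapi/v2 | jq '.paths | keys[]' | grep 'pods/'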

Copy a configMap in kubernetes between namespaces

  • Copy a configMap in kubernetes between namespaces with the deprecated “--export” flag:
kubectl get configmap --namespace=<source> <configmap> --export -o yaml | sed "s/<source>/<dest>/" | kubectl apply --namespace=<dest> -f -
  • Or, without the deprecated flag, rewrite the namespace in the exported YAML:
kubectl get configmap <configmap-name> --namespace=<source-namespace> -o yaml | sed 's/namespace: <source-namespace>/namespace: <destination-namespace>/' | kubectl create -f -

Copy secrets in kubernetes between namespaces

kubectl get secret <secret-name> --namespace=<source-namespace> -o yaml | sed 's/namespace: <source-namespace>/namespace: <destination-namespace>/' | kubectl create -f -

Export resources with kubectl and python

Kubectl Alternatives

Manage Kubernetes (K8s) objects with Ansible Kubernetes Module
Jenkins Kubernetes Plugins

Client Libraries for Kubernetes

Go Client for Kubernetes

Fabric8 Java Client for Kubernetes

Helm Kubernetes Tool

Lens Kubernetes IDE

  • Lens Kubernetes IDE 🌟 Lens is the only IDE you’ll ever need to take control of your Kubernetes clusters. It’s open source and free.

lens ide

Cluster Autoscaler Kubernetes Tool

HPA and VPA

Cluster Autoscaler and Helm

Cluster Autoscaler and DockerHub

Cluster Autoscaler in GKE, EKS, AKS and DOKS

Cluster Autoscaler in OpenShift

Kubernetes Special Interest Groups (SIGs). Kubernetes Community

Kubectl Plugins


Kubectl Plugins and Tools

Kubernetes Troubleshooting

Kubernetes Tutorials

Famous Kubernetes resources of 2019

Famous Kubernetes resources of 2020

Kubernetes Patterns

Top 10 Kubernetes patterns

e-Books

Famous Kubernetes resources of 2019

Kubernetes Patterns eBooks

Kubernetes Operators

Flux. The GitOps Operator for Kubernetes

Writing Kubernetes Operators

Kubernetes Networking

Xposer Kubernetes Controller To Manage Ingresses

  • Xposer 🌟 A Kubernetes controller to manage (create/update/delete) Kubernetes Ingresses based on Services
    • Problem: we would like to watch the services running in our cluster and automatically create Ingresses for them, optionally generating TLS certificates as well
    • Solution: Xposer watches all the services running in the cluster; it creates, updates, and deletes Ingresses, and uses cert-manager to generate TLS certificates automatically based on a few annotations

CNI Container Networking Interface

Project Calico

Kubernetes Sidecars

Kubernetes Security

Security Best Practices Across Build, Deploy, and Runtime Phases

  • Kubernetes Security 101: Risks and 29 Best Practices 🌟
  • Build Phase:
    1. Use minimal base images
    2. Don’t add unnecessary components
    3. Use up-to-date images only
    4. Use an image scanner to identify known vulnerabilities
    5. Integrate security into your CI/CD pipeline
    6. Label non-fixable vulnerabilities
  • Deploy Phase:
    1. Use namespaces to isolate sensitive workloads
    2. Use Kubernetes network policies to control traffic between pods and clusters (a minimal example follows this list)
    3. Prevent overly permissive access to secrets
    4. Assess the privileges used by containers
    5. Assess image provenance, including registries
    6. Extend your image scanning to deploy phase
    7. Use labels and annotations appropriately
    8. Enable Kubernetes role-based access control (RBAC)
  • Runtime Phase:
    1. Leverage contextual information in Kubernetes
    2. Extend vulnerability scanning to running deployments
    3. Use Kubernetes built-in controls when available to tighten security
    4. Monitor network traffic to limit unnecessary or insecure communication
    5. Leverage process whitelisting
    6. Compare and analyze different runtime activity in pods of the same deployments
    7. If breached, scale suspicious pods to zero
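  • As a minimal sketch of the network-policy item in the Deploy phase above, the manifest below denies all ingress traffic to pods in a hypothetical “sensitive” namespace unless another policy explicitly allows it (the namespace name is only an example):
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: sensitive       # illustrative namespace, not from this page
spec:
  podSelector: {}            # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules are listed, so all ingress is denied
EOF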

kubernetes security controls landscape

Kubernetes Authentication and Authorization

Kubernetes Authentication Methods

Kubernetes supports several authentication methods out-of-the-box, such as X.509 client certificates, static HTTP bearer tokens, and OpenID Connect.

X.509 client certificates
Static HTTP Bearer Tokens
OpenID Connect
Implementing a custom Kubernetes authentication method
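As a rough sketch of the client-certificate method (the user name, context name, and file paths below are placeholders), a kubeconfig entry can be wired up with kubectl itself:
# Add a user backed by an X.509 client certificate and use it in a context.
kubectl config set-credentials example-user --client-certificate=/path/to/user.crt --client-key=/path/to/user.key
kubectl config set-context example-context --cluster=<cluster-name> --user=example-user
kubectl config use-context example-context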

Pod Security Policies (SCCs - Security Context Constraints in OpenShift)

EKS Security

Kubernetes Scheduling and Scheduling Profiles

Assigning Pods to Nodes. Pod Affinity and Anti-Affinity

Pod Topology Spread Constraints and PodTopologySpread Scheduling Plugin

Kubernetes Storage

Cloud Native Storage

Kubernetes Volumes Guide

ReadWriteMany PersistentVolumeClaims

Non-production Kubernetes Local Installers

Kubernetes in Public Cloud

GKE vs EKS vs AKS

AWS EKS (Hosted/Managed Kubernetes on AWS)

Tools for multi-cloud Kubernetes management

Compare tools for multi-cloud Kubernetes management 🌟

On-Premise Production Kubernetes Cluster Installers

Comparative Analysis of Kubernetes Deployment Tools

Deploying Kubernetes Cluster with Kops

  • GitHub: Kubernetes Cluster with Kops
  • Kubernetes.io: Installing Kubernetes with kops
  • Minikube and docker client are great for local setups, but not for real clusters. Kops and kubeadm are tools to spin up a production cluster. You don’t need both tools, just one of them.
  • On AWS, the best tool is kops. Now that AWS EKS (hosted Kubernetes) is available, EKS is the preferred option (you don’t need to maintain the masters).
  • For other installs, or if you can’t get kops to work, you can use kubeadm.
  • Set up kops on Windows with VirtualBox (virtualbox.org) and Vagrant (vagrantup.com). Once downloaded, to create a new Linux VM, just spin up Ubuntu via Vagrant in cmd/PowerShell and run the kops installer:
C:\ubuntu> vagrant init ubuntu/xenial64
C:\ubuntu> vagrant up
C:\ubuntu> vagrant ssh-config
C:\ubuntu> vagrant ssh
$ curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
$ chmod +x kops-linux-amd64
$ sudo mv kops-linux-amd64 /usr/local/bin/kops
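  • A quick way to sanity-check the install and spin up a cluster from the same VM (the state-store bucket, cluster name, and zone below are placeholders; kops also expects AWS credentials to be configured):
$ kops version
$ export KOPS_STATE_STORE=s3://<your-kops-state-bucket>
$ kops create cluster --name=<cluster.example.com> --zones=<aws-zone> --yes   # omit --yes to preview the changes first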

Deploying Kubernetes Cluster with Kubeadm

Deploying Kubernetes Cluster with Ansible

kube-aws Kubernetes on AWS

Kubespray

Conjure-up

WKSctl

Terraform (kubernetes the hard way)

Caravan

ClusterAPI

Microk8s

k8s-tew

  • k8s-tew Kubernetes is a fairly complex project. For a newcomer it is hard to understand and also to use. While Kelsey Hightower’s Kubernetes The Hard Way, on which this project is based, helps a lot in understanding Kubernetes, it is optimized for use with Google Cloud Platform.

Kubernetes Distributions

Red Hat OpenShift
Weave Kubernetes Platform
Ubuntu Charmed Kubernetes
VMware Kubernetes Tanzu and Project Pacific
Rancher: Enterprise management for Kubernetes

rancher architecture

Rancher 2
Rancher 2 RKE
  • Rancher 2 RKE RKE (Rancher Kubernetes Engine) is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. It solves the common frustration of installation complexity with Kubernetes by removing most host dependencies and presenting a stable path for deployment, upgrades, and rollbacks.
K3S
  • k3s Lightweight Kubernetes distribution with a basic, automated installer.
  • K8s vs k3s “K3s is designed to be a single binary of less than 40MB that completely implements the Kubernetes API. In order to achieve this, they removed a lot of extra drivers that didn’t need to be part of the core and are easily replaced with add-ons. K3s is a fully CNCF (Cloud Native Computing Foundation) certified Kubernetes offering. This means that you can write your YAML to operate against a regular “full-fat” Kubernetes and they’ll also apply against a k3s cluster. Due to its low resource requirements, it’s possible to run a cluster on anything from 512MB of RAM machines upwards. This means that we can allow pods to run on the master, as well as nodes. And of course, because it’s a tiny binary, it means we can install it in a fraction of the time it takes to launch a regular Kubernetes cluster! We generally achieve sub-two minutes to launch a k3s cluster with a handful of nodes, meaning you can be deploying apps to learn/test at the drop of a hat.”
  • k3sup (said ‘ketchup’) is a light-weight utility to get from zero to KUBECONFIG with k3s on any local or remote VM. All you need is ssh access and the k3sup binary to get kubectl access immediately.
  • Install Kubernetes with k3sup and k3s
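  • For reference, both install paths boil down to a one-liner (the IP address and user below are placeholders for your own VM):
# Local install of k3s via its official script:
curl -sfL https://get.k3s.io | sh -
# Remote install over SSH with k3sup (placeholder IP and user):
k3sup install --ip 192.168.1.10 --user ubuntu
export KUBECONFIG=$(pwd)/kubeconfig   # k3sup writes a kubeconfig file into the current directory
kubectl get nodes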
K3S Use Cases
  • K3S Use Cases:
    1. Edge computing and Embedded Systems
    2. IOT Gateway
    3. CI environments (i.e. Jenkins with Configuration as Code)
    4. Single-App Clusters
K3S in Public Clouds
K3D
  • k3d k3s that runs in docker containers.
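  • Assuming a recent k3d release (v3 or later), a throwaway cluster in Docker is one command away (the cluster name is arbitrary):
k3d cluster create demo --agents 2   # a k3s cluster named "demo" with two agent nodes, all running as Docker containers
kubectl cluster-info                 # recent k3d versions merge the kubeconfig automatically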
K3OS
  • k3OS k3OS is a Linux distribution designed to remove as much OS maintenance as possible in a Kubernetes cluster. It is specifically designed to only have what is needed to run k3s. Additionally the OS is designed to be managed by kubectl once a cluster is bootstrapped. Nodes only need to join a cluster and then all aspects of the OS can be managed from Kubernetes. Both k3OS and k3s upgrades are handled by the k3OS operator.
  • K3OS Value Add:
    • Supports multiple architectures
      • K3OS runs on x86 and ARM processors to give you maximum flexibility.
    • Runs only the minimum required services
      • Fewer services means a tiny attack surface, for greater security.
    • Doesn’t require a package manager
      • The required services are built into the distribution image.
    • Models infrastructure as code
      • Manage system configuration with version control systems.
K3C
  • K3C Lightweight local container engine for container development. K3C is a local container engine designed to fill the same gap Docker does in the Kubernetes ecosystem. Specifically k3c focuses on developing and running local containers, basically docker run/build. Currently k3s, the lightweight Kubernetes distribution, provides a great solution for Kubernetes from dev to production. While k3s satisfies the Kubernetes runtime needs, one still needs to run docker (or a docker-like tool) to actually develop and build the container images. k3c is intended to replace docker for just the functionality needed for the Kubernetes ecosystem.
Hosted Rancher
Rancher on Microsoft Azure
Rancher RKE on vSphere
Rancher Kubernetes on Oracle Cloud
Rancher Software Defined Storage with Longhorn
Rancher Fleet to manage multiple kubernetes clusters
Kontena Pharos
Mirantis Docker Enterprise with Kubernetes and Docker Swarm
  • Mirantis Docker Enterprise 3.1+ with Kubernetes
  • Docker Enterprise 3.1 announced. Features:
    • Istio is now built into Docker Enterprise 3.1!
    • Comes with Kubernetes 1.17. Kubernetes on Windows capability.
    • Enable Istio Ingress for a Kubernetes cluster with the click of a button
    • Intelligent defaults to get started quickly
    • Virtual services supported out of the box
    • Inbuilt support for GPU Orchestration
    • Launchpad CLI for Docker Enterprise deployment & upgrades

Cloud Development Kit (CDK) for Kubernetes

  • cdk8s.io 🌟 Define Kubernetes apps and components using familiar languages. cdk8s is an open-source software development framework for defining Kubernetes applications and reusable abstractions using familiar programming languages and rich object-oriented APIs. cdk8s apps synthesize into standard Kubernetes manifests which can be applied to any Kubernetes cluster.
  • github.com/awslabs/cdk8s
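  • As a rough sketch of the cdk8s workflow (the project name is arbitrary; the CLI comes from npm):
npm install -g cdk8s-cli     # install the cdk8s command-line tool
mkdir hello-cdk8s && cd hello-cdk8s
cdk8s init typescript-app    # scaffold an app in TypeScript (a python-app template is also available)
cdk8s synth                  # synthesize plain Kubernetes manifests into dist/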

AWS Cloud Development Kit (AWS CDK)

  • AWS: Introducing CDK for Kubernetes 🌟
  • Traditionally, Kubernetes applications are defined with human-readable, static YAML data files which developers write and maintain. Building new applications requires writing a good amount of boilerplate config, copying code from other projects, and applying manual tweaks and customizations. As applications evolve and teams grow, these YAML files become harder to manage. Sharing best practices or making updates involves manual changes and complex migrations.
  • YAML is an excellent format for describing the desired state of your cluster, but it does not have primitives for expressing logic and reusable abstractions. There are multiple tools in the Kubernetes ecosystem which attempt to address these gaps in various ways.
  • We realized this was exactly the same problem our customers had faced when defining their applications through CloudFormation templates, a problem solved by the AWS Cloud Development Kit (AWS CDK), and that we could apply the same design concepts from the AWS CDK to help all Kubernetes users.

SpringBoot with Docker

Docker in Docker

Serverless with OpenFaas and Knative

Serverless

Kubernetes interview questions

Container Ecosystem

Kubernetes components

Container Flowchart

Container flowchart

Videos