
OpenShift docs

OpenShift Container Platform

OpenShift

Red Hat’s approach to Kubernetes. Standardization

  • “Given the difficulty of navigating the cloud-native ecosystem, especially the one around Kubernetes, there is a high demand for easy-to-administer development platforms that deliver applications in Kubernetes-managed containers.” (OMDIA, Red Hat’s approach to Kubernetes)
  • “Industry momentum has aligned behind Kubernetes as the orchestration platform for Linux® containers. Choosing Kubernetes means you’ll be running the de facto standard regardless of which cloud environments and providers are in your future.” (CNCF Survey 2019, Red Hat’s approach to Kubernetes)
  • “It’s not just enough to do Kubernetes. You do need to do CI/CD. You need to use alerting. You need to understand how the security model of the cloud and your applications interplay.” (Clayton Coleman, Senior Distinguished Engineer, Red Hat, Red Hat’s approach to Kubernetes)
  • “Kubernetes is scalable. It helps develop applications faster. It does hybrid and multicloud. These are not just technology buzzwords, they’re real, legitimate business problems.” (Brian Gracely, Director, Product Strategy, Red Hat OpenShift, Red Hat’s approach to Kubernetes)
  • “Our job is to make it easier and easier to use, either from an ops point of view or a developer point of view—while acknowledging it is complex, because we’re solving a complex problem.” (Chris Wright, Chief Technology Officer, Red Hat, Red Hat’s approach to Kubernetes)


OpenShift Container Platform 3 (OCP 3)

OpenShift Cheat Sheets

Helm Charts and OpenShift 3

Chaos Monkey for Kubernetes/OpenShift

OpenShift GitOps

Debugging apps

Capacity Management

OpenShift High Availability

Troubleshooting Java applications on OpenShift

Red Hat Communities of Practice. Uncontained.io Project

Identity Management

Quota Management

Source-to-Image (S2I) Image Building Tools

  • Source-to-Image (S2I) Build
    • Source-to-Image (S2I) is a tool for building reproducible, Docker-formatted container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image (the builder) and the built source, and is ready to use with the docker run command. S2I supports incremental builds, which reuse previously downloaded dependencies, previously built artifacts, and so on (see the sketch below).
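
A minimal sketch, assuming the s2i CLI and a Docker-compatible runtime are installed; the builder image, repository URL, and output tag below are illustrative:

$ s2i build https://github.com/sclorg/django-ex centos/python-36-centos7 my-django-app   # inject source into the builder image and assemble a new image
$ docker run -p 8080:8080 my-django-app                                                  # the produced image is ready to run
$ s2i build --incremental https://github.com/sclorg/django-ex centos/python-36-centos7 my-django-app   # incremental build: reuse artifacts from the previous build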

OpenShift Container Platform 4 (OCP 4)

OCP 4 Architecture

OCP 4 Overview


Three New Functionalities
  1. Self-Managing Platform
  2. Application Lifecycle Management via the Operator Lifecycle Manager (OLM):
    • OLM Operator:
      • Responsible for deploying applications defined by a ClusterServiceVersion (CSV) manifest.
      • Not concerned with the creation of the required resources; users can create these resources manually using the CLI, or have the Catalog Operator create them.
    • Catalog Operator:
      • Responsible for resolving and installing CSVs and the required resources they specify. It is also responsible for watching CatalogSources for updates to packages in channels and upgrading them (optionally automatically) to the latest available versions.
      • A user that wishes to track a package in a channel creates a Subscription resource configuring the desired package, channel, and the CatalogSource from which to pull updates. When updates are found, an appropriate InstallPlan is written into the namespace on behalf of the user (a minimal Subscription manifest is sketched after this list).
  3. Automated Infrastructure Management (Over-The-Air Updates)
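
A minimal sketch of the Subscription flow described above; the package name, channel, and CatalogSource are illustrative:

$ oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator                # illustrative package name
  namespace: openshift-operators
spec:
  channel: stable                  # channel to track within the package
  name: my-operator                # package name as published in the catalog
  source: redhat-operators         # CatalogSource to pull updates from
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic   # or Manual, to approve each InstallPlan yourself
EOF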


New Technical Components
  • New Installer
  • Storage: cloud-integrated storage capability, used by default via the OCS Operator (Red Hat)
  • Operators End-To-End!: responsible for reconciling the system to the desired state
    • Cluster configuration kept as API objects that ease its maintenance (“everything-as-code” approach):
      • Every component is configured with Custom Resources (CR) that are processed by operators.
      • No more painful upgrades and synchronization among multiple nodes and no more configuration drift.
    • List of operators that configure cluster components (API objects):
      • API server
      • Nodes via Machine API
      • Ingress
      • Internal DNS
      • Logging (EFK) and Monitoring (Prometheus)
      • Sample applications
      • Networking
      • Internal Registry
      • OAuth (and authentication in general)
      • etc.
  • At the Node Level:
    • RHEL CoreOS is the result of merging CoreOS Container Linux and Red Hat Atomic Host functionality, and is currently the only supported OS to host OpenShift 4.
    • Node provisioning with Ignition, which came from CoreOS Container Linux
    • Atomic host updates with rpm-ostree
    • CRI-O as a container runtime
    • SELinux enabled by default
  • Machine API: provisioning of nodes, with an added abstraction mechanism (API objects to declaratively manage the cluster):
    • Based on Kubernetes Cluster API project
    • Provides a new set of machine resources:
      • Machine
      • MachineDeployment
      • MachineSet:
        • easily distributes your nodes across different Availability Zones
        • manages multiple node pools (e.g. a pool for testing, a pool for machine learning with GPUs attached, etc.); a scaling sketch follows this list
  • Everything “just another pod”
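
A sketch of day-2 node management with these resources; the MachineSet name is illustrative:

$ oc get machinesets -n openshift-machine-api   # MachineSets created by the installer, typically one per Availability Zone
$ oc scale machineset mycluster-worker-us-east-1a --replicas=3 -n openshift-machine-api   # the Machine API provisions or removes nodes to match
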
Installation & Cluster Autoscaler
  • New openshift-install tool, a replacement for the old Ansible scripts.
  • A full install takes about 40 minutes on AWS; the installer drives Terraform under the hood.
  • 2 installation patterns:
    1. Installer Provisioned Infrastructure (IPI)
    2. User Provisioned Infrastructure (UPI)
  • The whole process can be done in one command and requires minimal infrastructure knowledge (IPI): openshift-install create cluster



IPI & UPI
  • 2 installation patterns:
    1. Installer Provisioned Infrastructure (IPI): On supported platforms, the installer is capable of provisioning the underlying infrastructure for the cluster. The installer programmatically creates all portions of the networking, machines, and operating systems required to support the cluster. Think of it as a best-practice reference architecture implemented in code. It is recommended that most users make use of this functionality to avoid having to provision their own infrastructure. The installer will create and destroy the infrastructure components it needs to be successful over the life of the cluster.
    2. User Provisioned Infrastructure (UPI): For other platforms or in scenarios where installer provisioned infrastructure would be incompatible, the installer can stop short of creating the infrastructure, and allow the platform administrator to provision their own using the cluster assets generated by the install tool. Once the infrastructure has been created, OpenShift 4 is installed, maintaining its ability to support automated operations and over-the-air platform updates.
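
A minimal sketch of both flows with the openshift-install tool; the asset directory is illustrative:

$ openshift-install create cluster --dir=./mycluster            # IPI: provision infrastructure and install in one step
$ # UPI: stop short of provisioning and generate the cluster assets instead
$ openshift-install create install-config --dir=./mycluster
$ openshift-install create manifests --dir=./mycluster
$ openshift-install create ignition-configs --dir=./mycluster   # Ignition configs to boot your own machines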



Cluster Autoscaler Operator
  • Adjusts the size of an OpenShift Container Platform cluster to meet its current deployment needs, using declarative, Kubernetes-style arguments.
  • Increases the size of the cluster when there are pods that failed to schedule on any of the current nodes due to insufficient resources, or when another node is necessary to meet deployment needs. The ClusterAutoscaler does not increase the cluster resources beyond the limits that you specify.
  • A huge improvement over the manual, error-prone process used in previous versions of OpenShift with RHEL nodes.
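
A minimal ClusterAutoscaler resource as a sketch; the limits are illustrative:

$ oc apply -f - <<EOF
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default            # the ClusterAutoscaler resource must be named "default"
spec:
  resourceLimits:
    maxNodesTotal: 10      # never grow the cluster beyond this many nodes
    cores:
      min: 8               # keep total cluster cores within this range
      max: 64
EOF

Per-MachineSet scaling bounds are set with separate MachineAutoscaler resources.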


Operators
Introduction
  • Core of the platform
  • The hierarchy of operators, with clusterversion at the top, is the single door for configuration changes and is responsible for reconciling the system to the desired state.
  • For example, if you break a critical cluster resource directly, the system automatically recovers itself. 
  • Just as the cluster itself is maintained by operators, the Operator Framework is used for applications. As a user, you get the Operator SDK, OLM (the lifecycle manager of all Operators and their associated services running across the cluster), and an embedded OperatorHub.
  • OLM Architecture
  • Adding Operators to a Cluster (They can be added via CatalogSource)
  • The supported method of using Helm charts with OpenShift is via the Helm Operator
  • twitter.com/operatorhubio
  • View the list of Operators available to the cluster from the OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
NAME                   AGE
amq-streams            14h
packageserver          15h
couchbase-enterprise   14h
mongodb-enterprise     14h
etcd                   14h
myoperator             14h
...


Catalog
  • Developer Catalog
  • Installed Operators
  • OperatorHub (OLM)
  • Operator Management:
    • Operator Catalogs are groups of Operators you can make available on the cluster. They can be added via a CatalogSource (e.g. catalogsource.yaml; a minimal example is sketched after this list). Subscribe and grant a namespace access to use the installed Operators.
    • Operator Subscriptions keep your services up to date by tracking a channel in a package. The approval strategy determines either manual or automatic updates.
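
A minimal catalogsource.yaml sketch; the index image and names are illustrative:

$ oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc                                 # serve catalog metadata over gRPC
  image: quay.io/example/my-operator-index:latest  # image containing the Operator catalog
  displayName: My Operator Catalog
EOF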


Certified Operators, OLM Operators and Red Hat Operators
  • Certified Operators (packaged by certified partners):
    • Not provided by Red Hat
    • Supported by Red Hat
    • Deployed via “Package Server” OLM Operator
  • OLM Operators:
    • Packaged by Red Hat
    • “Package Server” OLM Operator includes a CatalogSource provided by Red Hat
  • Red Hat Operators:
    • Packaged by Red Hat
    • Deployed via “Package Server” OLM Operator
  • Community Edition Operators:
    • Deployed by any means
    • Not supported by Red Hat


Deploy and bind enterprise-grade microservices with Kubernetes Operators
OpenShift Container Storage Operator (OCS)
OCS 3 (OpenShift 3)
  • OpenShift Container Storage based on GlusterFS technology.
  • Not OpenShift 4 compliant: migration tooling will be available to facilitate the move to OCS 4.x (OpenShift Gluster App Migration Tool).
OCS 4 (OpenShift 4)
  • OCS Operator based on Rook.io with Operator LifeCycle Manager (OLM).
  • Tech Stack:
    • Rook (don’t confuse this with the non-Red Hat “Rook Ceph” -> RH ref).
      • Replaces Heketi (OpenShift 3)
      • Uses Red Hat Ceph Storage and Noobaa.
    • Red Hat Ceph Storage
    • Noobaa:
      • Red Hat Multi Cloud Gateway (AWS, Azure, GCP, etc)
      • Asynchronous replication of data between local Ceph storage and a cloud provider
      • Deduplication
      • Compression
      • Encryption
  • Backups available in OpenShift 4.2+ (Snapshots + Restore of Volumes)
  • OCS Dashboard in OCS Operator


Cluster Network Operator (CNO) & Routers
oc describe clusteroperators/ingress
oc logs --namespace=openshift-ingress-operator deployments/ingress-operator


ServiceMesh Operator



Serverless Operator (Knative)
Crossplane Operator (Universal Control Plane API for Cloud Computing)
Monitoring & Observability
Grafana
  • Integrated Grafana v5.4.3 (deployed by default):
  • Monitoring -> Dashboards
  • Project “openshift-monitoring”
  • https://grafana.com/docs/v5.4/
Prometheus
  • Integrated Prometheus v2.7.2 (deployed by default):
  • Monitoring -> metrics
  • Project “openshift-monitoring”
  • https://prometheus.io/docs/prometheus/2.7/getting_started/
Alerts & Silences
  • Integrated Alertmanager 0.16.2 (deployed by default):
    • Monitoring -> Alerts
    • Monitoring -> Silences
    • Silences temporarily mute alerts based on a set of conditions that you define. Notifications are not sent for alerts that meet the given conditions.
  • Project “openshift-monitoring”
  • https://prometheus.io/docs/alerting/alertmanager/
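
The UIs of all three monitoring components are exposed as routes in the openshift-monitoring project; a quick way to find them (route names may vary by release, and the host suffix depends on your cluster domain):

$ oc -n openshift-monitoring get routes
NAME                HOST/PORT
alertmanager-main   alertmanager-main-openshift-monitoring.apps.<cluster-domain>
grafana             grafana-openshift-monitoring.apps.<cluster-domain>
prometheus-k8s      prometheus-k8s-openshift-monitoring.apps.<cluster-domain>
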
Cluster Logging (EFK)
  • EFK: Elasticsearch + Fluentd + Kibana
  • Cluster Logging EFK not deployed by default
  • As an OpenShift Container Platform cluster administrator, you can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services.
  • The OpenShift Container Platform cluster logging solution requires that you install both the Cluster Logging Operator and the Elasticsearch Operator; there is no use case in OpenShift Container Platform for installing them individually. The Elasticsearch Operator must be installed using the CLI following the directions below, while the Cluster Logging Operator can be installed using the web console or the CLI. The deployment procedure is based on CLI + web console (a minimal ClusterLogging instance is sketched after the table below):
OCP Release      Elasticsearch   Fluentd   Kibana    EFK deployed by default
OpenShift 3.11   5.6.13.6        0.12.43   5.6.13    No
OpenShift 4.1    5.6.16          ?         5.6.16    No
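
Once both operators are installed, logging is deployed by creating a ClusterLogging instance. A minimal sketch; the node counts and redundancy policy are illustrative, and the resource must be named instance in the openshift-logging namespace:

$ oc apply -f - <<EOF
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3                       # Elasticsearch cluster size
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}
EOF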


Build Images. Next-Generation Container Image Building Tools
  • Redesign of how images are built on the platform.
  • Instead of relying on a daemon on the host to manage containers, image creation, and image pushing, we are leveraging Buildah running inside our build pods (see the sketch after this list).
  • This aligns with the general OpenShift 4 theme of making everything “just another pod”
  • A simplified set of build workflows, not dependent on the node host having a specific container runtime available. 
  • Dockerfiles that built under OpenShift 3.x will continue to build under OpenShift 4.x and S2I builds will continue to function as well.
  • The actual BuildConfig API is unchanged, so a BuildConfig from a v3.x cluster can be imported into a v4.x cluster and work without modification.
  • Podman & Buildah for docker users
  • OpenShift ImageStreams
  • OpenShift 4 image builds
  • Custom image builds with Buildah
  • Rootless podman and NFS
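
A minimal Buildah sketch of a daemonless image build and push; the image name and registry are illustrative:

$ buildah bud -t quay.io/myorg/myapp:latest .    # build from the Dockerfile in the current directory, no Docker daemon needed
$ buildah push quay.io/myorg/myapp:latest        # push the result to a registry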


Registry & Quay
  • A Docker registry is a place to store and distribute Docker images.
  • It serves as a target for your docker push and docker pull commands (a podman-based sketch follows this list).
  • OpenShift ImageStreams
  • The registry is now managed by an Operator instead of oc adm registry.
  • Quay.io is a hosted Docker registry from CoreOS:
    • Main features:
      • “Powerful build triggers”
      • “Advanced team permissions”
      • “Secure storage”
    • One of the more enterprise-friendly options out there, offering fine-grained permission controls.
    • They support any git server and let you build advanced workflows by doing things like mapping git branches to Docker tags so that when you commit code it automatically builds a corresponding image.
    • Quay offers unlimited free public repositories. Otherwise, you pay by the number of private repositories. There’s no extra charge for storage or bandwidth.
  • Quay 3.0, released in May 2019: brought support for multiple architectures, Windows containers, and a Red Hat Enterprise Linux (RHEL)-based image to this container image registry.
  • Quay 3.1, released in September 2019: the newest Quay feature is repository mirroring, which complements the existing geographic replication features. Repository mirroring reflects content between distinct registries: you can synchronize whitelisted repositories, or a subset of a source registry, into Quay. This makes it much easier to distribute images and related data through Quay.
  • Quay Community Edition operator
  • The Quay 3.1 Certified Operator is not available in OpenShift and must be purchased
  • Open Source Project Quay (projectquay.io) Container Registry
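
A minimal sketch of pushing a local image to Quay with podman; the account and repository names are illustrative:

$ podman login quay.io
$ podman tag localhost/myapp:latest quay.io/myorg/myapp:latest
$ podman push quay.io/myorg/myapp:latest
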
Local Development Environment
  • For version 3 there is the Container Development Kit (or its open-source equivalent for OKD, Minishift), which launches a single-node VM with OpenShift in a few minutes. It is also well suited for testing as part of a CI/CD pipeline.
  • OpenShift 4 on your laptop: a working single-node OpenShift cluster is provided by a new project called CodeReady Containers.
  • Procedure (a minimal sketch; the tarball name varies by release and platform):
tar -xvf crc-linux-amd64.tar.xz       # untar the downloaded release bundle
crc setup                             # prepare the host (virtualization, networking)
crc start                             # create and start the single-node cluster
eval $(crc oc-env)                    # set environment variables so oc is on the PATH
oc login -u developer https://api.crc.testing:6443   # log in with the credentials printed by crc start

OpenShift Youtube

OpenShift 4 Training

OpenShift 4 Roadmap

Kubevirt Virtual Machine Management on Kubernetes

Storage in OCP 4. OpenShift Container Storage (OCS)

Red Hat Advanced Cluster Management for Kubernetes

OpenShift Kubernetes Engine (OKE)


Red Hat CodeReady Containers. OpenShift 4 on your laptop

OpenShift Hive: Cluster-as-a-Service. Easily provision new PaaS environments for developers

OpenShift 4 Master API Protection in Public Cloud

Backup and Migrate to OpenShift 4

OKD4. OpenShift 4 without enterprise-level support

OpenShift Serverless with Knative

Helm Charts and OpenShift 4

Red Hat Marketplace

Kubestone. Benchmarking Operator for K8s and OpenShift

OpenShift Cost Management

Operators in OCP 4

Quay Container Registry

OpenShift Topology View

OpenShift.io online IDE

  • openshift.io 🌟 an online IDE for building container-based apps, built for team collaboration.

Cluster Autoscaler in OpenShift

e-Books

Kubernetes e-Books

Online Learning

Local Installers

Cloud Native Development Architecture. Architectural Diagrams

Cloud-native development

Cluster Installers

OKD 3

OpenShift 3

OpenShift 4

OpenShift 4 deployment on VMWare vSphere
Deploying OpenShift 4.4 to VMware vSphere 7


Networking (OCP 3 and OCP 4)

Security

How is OpenShift Container Platform Secured?

Security Context Constraints

Review Security Context Constraints
  • Security Context Constraints (SCCs) control what actions pods can perform and what resources they can access.
  • SCCs combine a set of security configurations into a single policy object that can be applied to pods. These security configurations include, but are not limited to, Linux Capabilities, Seccomp Profiles, User and Group ID Ranges, and types of mounts.
  • OpenShift ships with several SCCs. The most constrained is the restricted SCC, and the least constrained is the privileged SCC. The other SCCs provide intermediate levels of constraint for various use cases. The restricted SCC is granted to all authenticated users by default.
  • The default SCC for most pods should be the restricted SCC. If required, a cluster administrator may allow certain pods to run with different SCCs. Pods should be run with the most restrictive SCC possible.
  • Pods inherit their SCC from the Service Account used to run the pod. With the default project template, new projects get a Service Account named default that is used to run pods. This default service account is only granted the ability to run the restricted SCC.
  • Recommendations:
    • Use OpenShift’s Security Context Constraint feature, which has been contributed to Kubernetes as Pod Security Policies. PSPs are still beta in Kubernetes 1.10, 1.11, and 1.12.
    • Use the restricted SCC as the default
    • For pods that require additional access, use the SCC that grants the least amount of additional privileges, or create a custom SCC
    • Audit:
      • To show all available SCCs: oc describe scc
      • To audit a single pod: oc describe pod <POD> | grep openshift.io\/scc
    • Remediation: apply the SCC with the least privilege required (a sketch follows this list)
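
A minimal sketch of granting a different SCC to a pod's service account; the SCC, service account, and project names are illustrative:

$ oc adm policy add-scc-to-user anyuid -z my-sa -n my-project   # pods run by my-sa may now use the anyuid SCC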

OpenShift Network Model & Network Policy

Network Security Zones
  • stackoverflow.com: Is that possible to deploy an openshift or kubernetes in DMZ zone? 🌟
  • OpenShift and Network Security Zones: Coexistence Approaches 🌟🌟🌟
    • Introduction: Kubernetes and consequently OpenShift adopt a flat Software Defined Network (SDN) model, which means that all pods in the SDN are in the same logical network. Traditional network implementations adopt a zoning model in which different networks or zones are dedicated to specific purposes, with very strict communication rules between each zone. When implementing OpenShift in organizations that are using network security zones, the two models may clash. In this article, we will analyze a few options for coexistence. But first, let’s understand the two network models a bit more in depth.
    • Network Zones have been the widely accepted approach for building security into a network architecture. The general idea is to create separate networks, each with a specific purpose. Each network contains devices with similar security profiles. Communications between networks is highly scrutinized and controlled by firewall rules (perimeter defense).
    • Conclusion: A company’s security organization must be involved when deciding how to deploy OpenShift with regard to traditional network zones. Depending on their level of comfort with new technologies you may have different options. If physical network separation is the only acceptable choice, you will have to build a cluster per network zone. If logical network type of separations can be considered, then there are ways to stretch a single OpenShift deployment across multiple network zones. This post presented a few technical approaches.


OpenShift Route and OpenShift Ingress
OpenShift Egress

OpenShift-Compliant Docker Images

Gitlab

Atlassian Confluence 6

Sonatype Nexus 3

Rocket Chat

OpenShift on IBM Cloud. IBM Cloud Pak

OpenShift on AWS

Other Awesome Lists

Videos


Slides