What is Red Hat OpenShift?

OpenShift is a family of containerization software products developed by Red Hat. Its flagship product is the OpenShift Container Platform – a hybrid cloud platform as a service built around Linux containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux.

OpenShift aims to simplify the error-prone, tiresome tasks involved in application development, including application deployment and day-to-day operational management. It achieves this by providing an integrated application development platform that lets developers focus on writing high-quality code. OpenShift also empowers operations teams to maintain high levels of visibility and control.

OpenShift offers a web console with a responsive UI that is accessible from all modern web browsers and mobile devices. OpenShift also offers several powerful command-line tools, which run on Windows, Linux, and macOS.

OpenShift architecture and components

The following diagram shows some of OpenShift’s main architecture layers and components.

[Diagram: OpenShift architecture layers and components. Image source: OpenShift]

Infrastructure layer

This tier allows applications to be hosted on virtual or physical servers and on private or public cloud infrastructure.

Service layer

The service layer defines pods and their access policies. It provides persistent IP addresses and hostnames to pods, and allows applications to connect to each other. It also supports simple internal load balancing to distribute work across application components.
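
For illustration, a minimal Kubernetes Service manifest of the kind OpenShift uses at this layer might look like the following sketch (the `my-app` name and port values are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical service name; other pods reach it via this stable hostname
spec:
  selector:
    app: my-app           # pods carrying this label receive the traffic
  ports:
    - protocol: TCP
      port: 8080          # port exposed on the service's stable cluster IP
      targetPort: 8080    # port the application containers listen on
```

Requests to the service's cluster IP or DNS name are distributed across all matching pods, which is the simple internal load balancing described above.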

There are two main types of nodes in an OpenShift cluster: master nodes and worker nodes. Applications and their supporting services run on worker nodes, which can be virtual or physical machines; a cluster can have multiple worker nodes.

Master node

The master node is responsible for managing the cluster and its worker nodes. It performs the following key tasks:

  • API and authentication—receiving all management requests, which go through the API. These requests are encrypted and authenticated using SSL to ensure the security of the cluster.
  • Datastore—storing state and information related to the environment and applications.
  • Scheduler—making pod placement decisions, taking into account current memory, CPU, and other environmental utilization.
  • Health checks and scaling—monitoring pod health and scaling pods up or down based on CPU utilization or other metrics. If a pod fails, the master node automatically restarts it; if it fails repeatedly, it is marked as unhealthy and is not restarted for a period of time. A minimal autoscaling sketch follows this list.
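
As a sketch of the scaling behavior described above, a HorizontalPodAutoscaler can be declared against a deployment so that pods are added or removed based on CPU utilization (the names and thresholds below are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa            # hypothetical autoscaler name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75   # add pods when average CPU use exceeds 75%
```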

Worker nodes

A worker node runs pods, which are the smallest unit that can be defined, deployed, and managed. A pod contains one or more containers, which hold applications and their dependencies. For example, a container could include a database, a front-end component, or a search engine.

By default, data stored in containers is lost when the container shuts down, because containers are temporary entities. To avoid this, you can use persistent storage for databases or stateful services.

All containers in a single pod share the same IP address and can share the same data volumes.
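
As an illustrative sketch (the pod name, images, and paths are hypothetical), a pod with two containers that share the pod's IP address and a data volume could be defined like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar          # hypothetical pod name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}                # ephemeral volume; its contents are lost when the pod is deleted
  containers:
    - name: web
      image: registry.access.redhat.com/ubi8/httpd-24       # example front-end container
      volumeMounts:
        - name: shared-data
          mountPath: /var/www/html
    - name: content-generator
      image: registry.access.redhat.com/ubi8/ubi-minimal    # example helper container
      command: ["sh", "-c", "echo 'hello' > /data/index.html && sleep infinity"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Because both containers run in the same pod, they can also reach each other over localhost.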

Persistent storage

Persistent storage ensures that data in a container is kept in a persistent storage volume attached to the container. This means that if you restart or delete the container, the stored data will not be lost. Persistent storage is a basis for running stateful applications in OpenShift.
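
A sketch of how this looks in practice, assuming a storage class is available in the cluster and using hypothetical names: a PersistentVolumeClaim is created and then mounted into the pod, so the data survives container restarts.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                  # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi               # requested size, fulfilled by an available storage class
---
# Excerpt of a pod spec that mounts the claim
apiVersion: v1
kind: Pod
metadata:
  name: database
spec:
  containers:
    - name: postgresql
      image: registry.redhat.io/rhel8/postgresql-13   # example database image
      volumeMounts:
        - name: data
          mountPath: /var/lib/pgsql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data       # binds the pod to the persistent volume behind the claim
```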

OpenShift Ingress

When you create an OpenShift Container Platform cluster, each pod and service running in the cluster is assigned a unique IP address. This IP address is accessible to other pods and services running in the cluster, but not to external clients.

The Ingress Operator is a component that implements the IngressController API and enables external access to services running in the OpenShift Container Platform cluster.

It deploys one or more HAProxy-based ingress controllers to handle routing, making services accessible to external clients.
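
One common way to expose a service through these routers is a Route object; the following is a minimal sketch for a hypothetical my-app service:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app                 # hypothetical route name
spec:
  to:
    kind: Service
    name: my-app               # the service to expose outside the cluster
  port:
    targetPort: 8080           # service port the route sends traffic to
  tls:
    termination: edge          # terminate TLS at the HAProxy router
```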

OpenShift services

OpenShift service mesh

Red Hat OpenShift Service Mesh provides a platform for behavioral insight and operational control of networked microservices in a service mesh. It lets you connect, secure, and monitor microservices within an OpenShift Container Platform environment.

Red Hat OpenShift Service Mesh adds communication capabilities to existing distributed applications without changing service code. You can use the service mesh control plane features to configure and manage your service mesh.

Red Hat OpenShift Service Mesh provides capabilities such as service discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring for your existing services.
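
As a sketch, once the Service Mesh control plane is installed, application projects are typically enrolled in the mesh through a ServiceMeshMemberRoll resource (the `bookinfo` project below is hypothetical):

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default                # the member roll must be named "default"
  namespace: istio-system      # namespace where the Service Mesh control plane runs
spec:
  members:
    - bookinfo                 # hypothetical application project to add to the mesh
```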

OpenShift Pipelines

Red Hat OpenShift Pipelines is a cloud-native continuous integration and continuous delivery (CI/CD) solution powered by Kubernetes resources. It is designed for distributed teams working on microservices-based architectures.

OpenShift Pipelines lets you automate deployments across multiple platforms, abstracting low-level implementation details using Tekton building blocks. Tekton provides several standard custom resource definitions (CRDs) for defining portable CI/CD pipelines in Kubernetes deployments.

Key capabilities of OpenShift Pipelines include:

  • A serverless CI/CD system that runs pipelines with all necessary dependencies in isolated containers.
  • Pipelines defined using standard CI/CD concepts, which are easily extensible and integrate with existing Kubernetes tools.
  • Ability to build images using Kubernetes tools such as Source-to-Image (S2I), Buildah, Buildpacks, and Kaniko, which are portable to any Kubernetes platform.
  • Developer console for creating Tekton resources, viewing pipeline execution logs, and managing pipelines in the OpenShift Container Platform namespace.
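
As a sketch of these Tekton building blocks, a minimal pipeline with a single inline task might look like the following (the names, image, and message parameter are hypothetical):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline               # hypothetical pipeline name
spec:
  params:
    - name: message
      type: string
      default: "Hello from OpenShift Pipelines"
  tasks:
    - name: say-hello
      params:
        - name: message
          value: $(params.message)
      taskSpec:                      # inline task definition with a single step
        params:
          - name: message
            type: string
        steps:
          - name: echo
            image: registry.access.redhat.com/ubi8/ubi-minimal   # example step image
            script: |
              echo "$(params.message)"
```

A PipelineRun (created directly, via the tkn CLI, or from the developer console) then executes the pipeline, with each step running in its own container.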

Cluster Manager

Red Hat OpenShift Cluster Manager is a managed service for installing, modifying, operating, and upgrading Red Hat OpenShift clusters. It lets you work with all of your organization's clusters from a single dashboard.

OpenShift Cluster Manager guides you through the installation of OpenShift Container Platform, Red Hat OpenShift Service on AWS (ROSA), and OpenShift Dedicated clusters. It also manages self-installed OpenShift Container Platform clusters as well as ROSA and OpenShift Dedicated clusters.

OpenShift vs. Kubernetes

Kubernetes is an open source container orchestration platform originally developed by Google. It makes containerized workloads and services portable and easier to manage by automating the deployment, scaling, and day-to-day operation of containerized applications. Developers use Kubernetes to automate processes, balance load across containers, and orchestrate storage.

Both OpenShift and Kubernetes have a scalable architecture, allowing fast, large-scale development, management, and deployment. They have the same license (Apache License 2.0).

However, there are several differences between OpenShift and Kubernetes.

Support:

  • Kubernetes has a large developer community and supports multiple languages and frameworks.
  • OpenShift has a smaller community, largely centered on Red Hat.

Updates:

  • Kubernetes has four releases per year on average and supports concurrent updates.
  • OpenShift usually has three releases per year and doesn’t support concurrent updates.

Networking:

  • Kubernetes has no native networking solution, but supports network plug-ins.
  • OpenShift ships with a built-in networking solution, OpenShift SDN (based on Open vSwitch), which offers three plug-in modes.

Templates:

  • Kubernetes works with flexible, intuitive Helm charts (templates).
  • OpenShift templates are less flexible and less user-friendly.

Security:

  • Kubernetes provides only basic authentication and authorization primitives, leaving much of the security configuration to be implemented manually.
  • OpenShift uses strict security policies and enables security by default.

Enabling Gloo Mesh and Gloo Gateway with Red Hat OpenShift

Solo.io Gloo Mesh is the leading Istio service mesh for enterprise deployments. Gloo Mesh can run on on-premises (private cloud) Red Hat OpenShift, on public cloud Red Hat OpenShift, or on any of the managed cloud services such as Red Hat OpenShift Service on AWS (ROSA) and Azure Red Hat OpenShift.

Gloo Platform adds advanced features to Red Hat OpenShift across both the Istio service mesh (Gloo Mesh) and a Kubernetes-native API gateway (Gloo Edge). Many OpenShift customers choose the Solo Gloo products in place of the default OpenShift technologies because of their advanced capabilities for routing, security, and observability.

Learn more about how Gloo Mesh can enable Istio service mesh on OpenShift.
