Microsoft Azure Red Hat OpenShift

master-20200831.2 - c837ae3


Azure Red Hat OpenShift Workshop

Azure Red Hat OpenShift is a fully managed Red Hat OpenShift service in Azure that is jointly engineered and supported by Microsoft and Red Hat. In this lab, you’ll go through a set of tasks that will help you understand some of the concepts of deploying and securing container-based applications on top of Azure Red Hat OpenShift.

You can use this guide as an OpenShift tutorial and as study material to help you get started to learn OpenShift.

Some of the things you’ll be going through:

  • Creating a project on the Azure Red Hat OpenShift Web Console
  • Deploying a MongoDB container that uses Azure Disks for persistent storage
  • Restoring data into the MongoDB container by executing commands on the Pod
  • Deploying a Node.js API and frontend app from GitHub using Source-To-Image (S2I)
  • Exposing the web application frontend using Routes
  • Creating a network policy to control communication between the different tiers in the application

You’ll be doing the majority of the labs using the OpenShift CLI, but you can also accomplish them using the Azure Red Hat OpenShift web console.


Azure subscription and Azure Red Hat OpenShift environment

If you have been provided with a Microsoft Hands-on Labs environment for this workshop through a registration link and an activation code, please continue to registration and activate the lab.


After you complete the registration, click Launch Lab

Launch lab

The Azure subscription and associated lab credentials will be provisioned. This will take a few moments. This process will also provision an Azure Red Hat OpenShift cluster.

Preparing lab

Once the environment is provisioned, a screen with all the appropriate lab credentials will be presented. Additionally, you’ll have your Azure Red Hat OpenShift cluster endpoint. The credentials will also be emailed to the email address entered at registration.


You can now skip the Create cluster section and jump to create project.


Azure Cloud Shell

You can use the Azure Cloud Shell, accessible at https://shell.azure.com, once you log in with an Azure subscription.

Head over to https://shell.azure.com and sign in with your Azure subscription details.

Select Bash as your shell.

Select Bash

Select Show advanced settings

Select show advanced settings

Set the Storage account and File share names to your resource group name (all lowercase, without any special characters). Leave other settings unchanged, then hit Create storage

Azure Cloud Shell

You should now have access to the Azure Cloud Shell

Set the storage account and fileshare names

OpenShift CLI (oc)

You’ll need to download the latest OpenShift CLI (oc) client tools for OpenShift 4.3. You can follow the steps below on the Azure Cloud Shell.

Note You’ll need to change the link below to the latest link you get from the page. GitHub release links

Please run the following commands on the Azure Cloud Shell to download and set up the OpenShift client. The download URL below is a placeholder; use the latest 4.3 client link from the releases page noted above.

cd ~

mkdir openshift

wget <latest openshift-client-linux.tar.gz release link> -O openshift-client-linux.tar.gz

tar -zxvf openshift-client-linux.tar.gz -C openshift

echo 'export PATH=$PATH:~/openshift' >> ~/.bashrc && source ~/.bashrc

The OpenShift CLI (oc) is now installed.

In case you want to work from your own operating system, here are the links to the different versions of the CLI:


GitHub Account

You’ll need a personal GitHub account. You can sign up for free at github.com.

Basic concepts

Source-To-Image (S2I)

Source-to-Image (S2I) is a toolkit and workflow for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and letting the container prepare that source code for execution. By creating self-assembling builder images, you can version and control your build environments exactly like you use container images to version your runtime environments.

How it works

For a dynamic language like Ruby, the build-time and run-time environments are typically the same. Starting with a builder image that describes this environment - with Ruby, Bundler, Rake, Apache, GCC, and other packages needed to set up and run a Ruby application installed - source-to-image performs the following steps:

  1. Start a container from the builder image with the application source injected into a known directory

  2. The container process transforms that source code into the appropriate runnable setup - in this case, by installing dependencies with Bundler and moving the source code into a directory where Apache has been preconfigured to look for the Ruby file.

  3. Commit the new container and set the image entrypoint to be a script (provided by the builder image) that will start Apache to host the Ruby application.

For compiled languages like C, C++, Go, or Java, the dependencies necessary for compilation might dramatically outweigh the size of the actual runtime artifacts. To keep runtime images slim, S2I enables a multiple-step build process, where a binary artifact such as an executable or Java WAR file is created in the first builder image, extracted, and injected into a second runtime image that simply places the executable in the correct location for execution.

For example, to create a reproducible build pipeline for Tomcat (the popular Java webserver) and Maven:

  1. Create a builder image containing OpenJDK and Tomcat that expects to have a WAR file injected

  2. Create a second image that layers on top of the first image Maven and any other standard dependencies, and expects to have a Maven project injected

  3. Invoke source-to-image using the Java application source and the Maven image to create the desired application WAR

  4. Invoke source-to-image a second time using the WAR file from the previous step and the initial Tomcat image to create the runtime image

By placing our build logic inside of images, and by combining the images into multiple steps, we can keep our runtime environment close to our build environment (same JDK, same Tomcat JARs) without requiring build tools to be deployed to production.

Goals and benefits


Reproducibility

Allow build environments to be tightly versioned by encapsulating them within a container image and defining a simple interface (injected source code) for callers. Reproducible builds are a key requirement to enabling security updates and continuous integration in containerized infrastructure, and builder images help ensure repeatability as well as the ability to swap runtimes.


Flexibility

Any existing build system that can run on Linux can be run inside of a container, and each individual builder can also be part of a larger pipeline. In addition, the scripts that process the application source code can be injected into the builder image, allowing authors to adapt existing images to enable source handling.


Speed

Instead of building multiple layers in a single Dockerfile, S2I encourages authors to represent an application in a single image layer. This saves time during creation and deployment, and allows for better control over the output of the final image.


Security

Dockerfiles are run without many of the normal operational controls of containers, usually running as root and having access to the container network. S2I can be used to control what permissions and privileges are available to the builder image since the build is launched in a single container. In concert with platforms like OpenShift, source-to-image can enable admins to tightly control what privileges developers have at build time.


Routes

An OpenShift Route exposes a service at a host name, such as www.example.com, so that external clients can reach it by name. When a Route object is created on OpenShift, it gets picked up by the built-in HAProxy load balancer in order to expose the requested service and make it externally available with the given configuration. You might be familiar with the Kubernetes Ingress object and might already be asking “what’s the difference?”. Red Hat created the concept of Route in order to fill this need and then contributed the design principles behind it to the community, which heavily influenced the Ingress design. A Route does, however, have some additional features, as can be seen in the chart below.

routes vs ingress

NOTE: DNS resolution for a host name is handled separately from routing; your administrator may have configured a cloud domain that will always correctly resolve to the router, or if using an unrelated host name you may need to modify its DNS records independently to resolve to the router.

Also of note is that an individual route can override some defaults by providing specific configurations in its annotations. See the OpenShift documentation on route-specific annotations for more details.


ImageStreams

An ImageStream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a Docker image repository on a registry.

What are the benefits?

Using an ImageStream makes it easy to change a tag for a container image. Otherwise, to change a tag you would need to download the whole image, change it locally, then push it all back. Promoting applications by changing the tag that way and then updating the deployment object also entails many steps. With ImageStreams, you upload a container image once and then manage its virtual tags internally in OpenShift. In one project you may use the dev tag and only change its reference internally; in prod you may use a prod tag and also manage it internally. You don’t really have to deal with the registry!

You can also use ImageStreams in conjunction with DeploymentConfigs to set a trigger that will start a deployment as soon as a new image appears or a tag changes its reference.

See the OpenShift docs on ImageStreams and builds for more details.


Builds

A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process.

OpenShift Container Platform leverages Kubernetes by creating Docker-formatted containers from build images and pushing them to a container image registry.

Build objects share common characteristics: inputs for a build, the need to complete a build process, logging the build process, publishing resources from successful builds, and publishing the final status of the build. Builds take advantage of resource restrictions, specifying limitations on resources such as CPU usage, memory usage, and build or pod execution time.

See the OpenShift documentation on builds for more details.

Lab 1 - Go Microservices

Now that you have your environment provisioned and the prerequisites fulfilled, it is time to start working on the labs.

Application Overview

You will be deploying a ratings application on Azure Red Hat OpenShift.

Application diagram

The application consists of 3 components:

Component                      | Link
A public facing API            | rating-api GitHub repo
A public facing web frontend   | rating-web GitHub repo
A MongoDB with pre-loaded data | Data

Once you’re done, you’ll have an experience similar to the below.

Application

Create cluster

Create, access, and manage an Azure Red Hat OpenShift 4.3 Cluster

We will now create our own ARO cluster.

Before you begin

If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.0.75 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.

Install the az aro extension

The az aro extension allows you to create, access, and delete Azure Red Hat OpenShift clusters directly from the command line using the Azure CLI.

Run the following command to install the az aro extension.

az extension add -n aro --index <ARO extension index URL>

If you already have the extension installed, you can update by running the following command.

az extension update -n aro --index <ARO extension index URL>

Register the resource provider

Next, you need to register the Microsoft.RedHatOpenShift resource provider in your subscription.

az provider register -n Microsoft.RedHatOpenShift --wait

Verify the aro extension is installed.

az -v

You should get an output similar to the below.

aro                                1.0.0

Get a Red Hat pull secret (optional)

A Red Hat pull secret enables your cluster to access Red Hat container registries along with additional content. This step is optional but recommended.

Obtain your pull secret by navigating to the Red Hat OpenShift Cluster Manager and clicking Download pull secret.

You will need to log in to your Red Hat account or create a new Red Hat account with your business email and accept the terms and conditions.

Keep the saved pull-secret.txt file somewhere safe - it will be used in each cluster creation.

Create a virtual network containing two empty subnets

Next, you will create a virtual network containing two empty subnets.

  1. Set the following variables.

    LOCATION=eastus                 # the location of your cluster
    RESOURCEGROUP=aro-rg            # the name of the resource group where you want to create your cluster
    CLUSTER=cluster                 # the name of your cluster
  2. Create a resource group

    An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are asked to specify a location. This location is where resource group metadata is stored; it is also where your resources run in Azure if you don’t specify another region during resource creation. Create a resource group using the az group create command.

     az group create --name $RESOURCEGROUP --location $LOCATION

    The following example output shows the resource group created successfully:

     "id": "/subscriptions/<guid>/resourceGroups/aro-rg",
     "location": "eastus",
     "managedBy": null,
     "name": "aro-rg",
     "properties": {
         "provisioningState": "Succeeded"
     "tags": null
  3. Create a virtual network.

    Azure Red Hat OpenShift clusters running OpenShift 4 require a virtual network with two empty subnets, for the master and worker nodes.

    Create a new virtual network in the same resource group you created earlier.

     az network vnet create \
     --resource-group $RESOURCEGROUP \
     --name aro-vnet \
     --address-prefixes <vnet address space>

    The following example output shows the virtual network created successfully:

     "newVNet": {
         "addressSpace": {
         "addressPrefixes": [
         "id": "/subscriptions/<guid>/resourceGroups/aro-rg/providers/Microsoft.Network/virtualNetworks/aro-vnet",
         "location": "eastus",
         "name": "aro-vnet",
         "provisioningState": "Succeeded",
         "resourceGroup": "aro-rg",
         "type": "Microsoft.Network/virtualNetworks"
  4. Add an empty subnet for the master nodes.

     az network vnet subnet create \
     --resource-group $RESOURCEGROUP \
     --vnet-name aro-vnet \
     --name master-subnet \
     --address-prefixes <master subnet CIDR> \
     --service-endpoints Microsoft.ContainerRegistry
  5. Add an empty subnet for the worker nodes.

     az network vnet subnet create \
     --resource-group $RESOURCEGROUP \
     --vnet-name aro-vnet \
     --name worker-subnet \
     --address-prefixes <worker subnet CIDR> \
     --service-endpoints Microsoft.ContainerRegistry
  6. Disable subnet private endpoint policies on the master subnet. This is required to be able to connect and manage the cluster.

     az network vnet subnet update \
     --name master-subnet \
     --resource-group $RESOURCEGROUP \
     --vnet-name aro-vnet \
     --disable-private-link-service-network-policies true

Create the cluster

Run the following command to create a cluster. Optionally, you can pass a pull secret which enables your cluster to access Red Hat container registries along with additional content. Access your pull secret by navigating to the Red Hat OpenShift Cluster Manager and clicking Copy Pull Secret.

az aro create \
  --resource-group $RESOURCEGROUP \
  --name $CLUSTER \
  --vnet aro-vnet \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet
  # --domain <custom domain> # [OPTIONAL] custom domain
  # --pull-secret "$(< pull-secret.txt)" # [OPTIONAL] double quotes so the file contents expand
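The `--pull-secret` option relies on bash's `$(< file)` construct, which expands to the file's contents; note it must be double-quoted (single quotes would pass the literal string to az instead of the file contents). A minimal sketch with a stand-in file:

```shell
# create a stand-in for pull-secret.txt (hypothetical content, not a real pull secret)
printf '{"auths":{}}' > /tmp/pull-secret.txt

# $(< file) expands to the file contents; double quotes keep it as one argument
SECRET="$(< /tmp/pull-secret.txt)"
echo "$SECRET"
```

You would then pass `--pull-secret "$SECRET"` (or inline the expansion) to az aro create.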

[!NOTE] It normally takes about 35 minutes to create a cluster.

[!IMPORTANT] If you choose to specify a custom domain, the OpenShift console will be available at a URL on that domain, such as https://console-openshift-console.apps.example.com, instead of the built-in domain https://console-openshift-console.apps.<random>.<location>

By default, OpenShift uses self-signed certificates for all of the routes created on *.apps.<random>.<location>. If you choose to use custom DNS after connecting to the cluster, you will need to follow the OpenShift documentation to configure a custom CA for your ingress controller and a custom CA for your API server.

Connect to the cluster

You can log into the cluster using the kubeadmin user.

Run the following command to find the password for the kubeadmin user.

az aro list-credentials \
  --name $CLUSTER \
  --resource-group $RESOURCEGROUP

The following example output shows the password will be in kubeadminPassword.

  "kubeadminPassword": "<generated password>",
  "kubeadminUsername": "kubeadmin"

Save these secrets; you are going to use them to connect to the Web Portal.
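In scripts, az can return just the password directly with `--query kubeadminPassword -o tsv`. Alternatively, here is a sketch of pulling the field out of JSON output shaped like the above with standard shell tools (the JSON below is a stand-in, not real credentials):

```shell
# example output shape from `az aro list-credentials` (values are stand-ins)
CREDS='{
  "kubeadminPassword": "<generated password>",
  "kubeadminUsername": "kubeadmin"
}'

# extract the kubeadminPassword value with sed
PASSWORD=$(printf '%s\n' "$CREDS" | sed -n 's/.*"kubeadminPassword": "\(.*\)",/\1/p')
echo "$PASSWORD"
```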

Create Project

Login to the web console

Each Azure Red Hat OpenShift cluster has a public hostname that hosts the OpenShift Web Console.

You can use the az aro list command to list the clusters in your current Azure subscription.

az aro list -o table

Retrieve your cluster specific hostname. Replace <cluster name> and <resource group> by those specific to your environment.

az aro show -n <cluster name> -g <resource group> --query "consoleProfile" -o tsv

You should get back your cluster’s console hostname. Add https:// to the beginning of that hostname and open that link in your browser. You’ll be asked to login with Azure Active Directory. Use the username and password provided to you in the lab.

After logging in, you should be able to see the Azure Red Hat OpenShift Web Console.

Azure Red Hat OpenShift Web Console

Retrieve the login command and token

Note Make sure you complete the prerequisites to install the OpenShift CLI on the Azure Cloud Shell.

Once you’re logged into the Web Console, click on the username on the top right, then click Copy login command.

Copy login command

Open the Azure Cloud Shell and paste the login command. You should be able to connect to the cluster.

Login through the cloud shell

Create a project

A project allows a community of users to organize and manage their content in isolation from other communities.

oc new-project workshop

Create new project


Deploy MongoDB

Create mongoDB from template

Azure Red Hat OpenShift provides a container image and template to make creating a new MongoDB database service easy. The template provides parameter fields to define all the mandatory environment variables (user, password, database name, etc) with predefined defaults including auto-generation of password values. It will also define both a deployment configuration and a service.

There are two templates available:

  • mongodb-ephemeral is for development/testing purposes only because it uses ephemeral storage for the database content. This means that if the database pod is restarted for any reason, such as the pod being moved to another node or the deployment configuration being updated and triggering a redeploy, all data will be lost.

  • mongodb-persistent uses a persistent volume store for the database data which means the data will survive a pod restart. Using persistent volumes requires a persistent volume pool be defined in the Azure Red Hat OpenShift deployment.

Hint You can retrieve a list of templates using the command below. The templates are preinstalled in the openshift namespace.

oc get templates -n openshift

Create a mongoDB deployment using the mongodb-persistent template. You’re passing in the values to be replaced (username, password and database) which generates a YAML/JSON file. You then pipe it to the oc create command.

oc process openshift//mongodb-persistent \
    -p MONGODB_USER=ratingsuser \
    -p MONGODB_PASSWORD=ratingspassword \
    -p MONGODB_DATABASE=ratingsdb \
    -p MONGODB_ADMIN_PASSWORD=ratingspassword | oc create -f -

If you now head back to the web console, you should see a new deployment for mongoDB.

MongoDB deployment

Verify that the mongoDB pod was created successfully

Run the oc status command to view the status of the new application and verify that the deployment of the mongoDB template was successful.

oc status

oc status

Retrieve mongoDB service hostname

Find the mongoDB service.

oc get svc mongodb

oc get svc

The service will be accessible at the following DNS name: mongodb.workshop.svc.cluster.local which is formed of [service name].[project name].svc.cluster.local. This resolves only within the cluster.
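That naming pattern can be composed in the shell for any service; a sketch:

```shell
SERVICE=mongodb
PROJECT=workshop

# [service name].[project name].svc.cluster.local
MONGODB_HOST="${SERVICE}.${PROJECT}.svc.cluster.local"
echo "$MONGODB_HOST"
```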

You can also retrieve this from the web console. You’ll need this hostname to configure the rating-api.

MongoDB service in the Web Console


Deploy Ratings API

The rating-api is a Node.js application that connects to mongoDB to retrieve and rate items. Below are some of the details that you’ll need to deploy this.

Fork the application to your own GitHub repository

To be able to setup CI/CD webhooks, you’ll need to fork the application into your personal GitHub repository.


Use the OpenShift CLI to deploy the rating-api

Note You’re going to be using source-to-image (S2I) as a build strategy.

oc new-app https://github.com/<your GitHub username>/rating-api --strategy=source

Create rating-api using oc cli

Configure the required environment variables

Create the MONGODB_URI environment variable. This URI should look like mongodb://[username]:[password]@[endpoint]:27017/ratingsdb. You’ll need to replace the [username] and [password] with the ones you used when creating the database. You’ll also need to replace the [endpoint] with the hostname acquired in the previous step.

Hit Save when done.

Create a MONGODB_URI environment variable

It can also be done with CLI

oc set env deploy/rating-api MONGODB_URI=mongodb://ratingsuser:ratingspassword@mongodb.workshop.svc.cluster.local:27017/ratingsdb
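The connection string used above can be assembled from its parts; a sketch using the values from earlier in the lab:

```shell
MONGODB_USER=ratingsuser
MONGODB_PASSWORD=ratingspassword
MONGODB_HOST=mongodb.workshop.svc.cluster.local
MONGODB_DATABASE=ratingsdb

# mongodb://[username]:[password]@[endpoint]:27017/[database]
MONGODB_URI="mongodb://${MONGODB_USER}:${MONGODB_PASSWORD}@${MONGODB_HOST}:27017/${MONGODB_DATABASE}"
echo "$MONGODB_URI"
```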

Verify that the service is running

If you navigate to the logs of the rating-api deployment, you should see a log message confirming the code can successfully connect to the mongoDB. For that, in the deployment’s details screen, click on the Pods tab, then on one of the pods.

Verify mongoDB connection

Retrieve rating-api service hostname

Find the rating-api service.

oc get svc rating-api

The service will be accessible at the following DNS name over port 8080: rating-api.workshop.svc.cluster.local:8080 which is formed of [service name].[project name].svc.cluster.local. This resolves only within the cluster.

Setup GitHub webhook

To trigger S2I builds when you push code into your GitHub repo, you’ll need to set up the GitHub webhook.

Retrieve the GitHub webhook trigger secret. You’ll need to use this secret in the GitHub webhook URL.

oc get bc/rating-api -o=jsonpath='{.spec.triggers..github.secret}'

You’ll get back something similar to the below. Make a note of the secret key in the red box as you’ll need it in a few steps.

Rating API GitHub trigger secret

Retrieve the GitHub webhook trigger URL from the build configuration.

oc describe bc/rating-api

Rating API GitHub trigger url

Replace the <secret> placeholder in that webhook URL with the secret you retrieved in the previous step. You’ll use this URL to set up the webhook on your GitHub repository.
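As a sketch of that substitution (the URL below assumes the standard OpenShift GitHub webhook shape, with a hypothetical API hostname and secret; your values will differ):

```shell
# hypothetical webhook URL as printed by `oc describe bc/rating-api`
TRIGGER_URL='https://api.example.com:6443/apis/build.openshift.io/v1/namespaces/workshop/buildconfigs/rating-api/webhooks/<secret>/github'
SECRET=ee4htZ8e   # stand-in for the value from `oc get bc/rating-api`

# swap the <secret> placeholder for the real trigger secret
WEBHOOK_URL="${TRIGGER_URL/<secret>/$SECRET}"
echo "$WEBHOOK_URL"
```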

In your GitHub repository, select Add Webhook from Settings → Webhooks.

Paste the URL output (similar to above) into the Payload URL field.

Change the Content Type from GitHub’s default application/x-www-form-urlencoded to application/json.

Click Add webhook.

GitHub add webhook

You should see a message from GitHub stating that your webhook was successfully configured.

Now, whenever you push a change to your GitHub repository, a new build will automatically start, and upon a successful build a new deployment will start.


Deploy Ratings frontend

The rating-web is a Node.js application that connects to the rating-api. Below are some of the details that you’ll need to deploy this.

  • rating-web on GitHub:
  • The container exposes port 8080
  • The web app connects to the API over the internal cluster DNS, using a proxy through an environment variable named API

Fork the application to your own GitHub repository

To be able to setup CI/CD webhooks, you’ll need to fork the application into your personal GitHub repository.


Use the OpenShift CLI to deploy the rating-web

Note You’re going to be using source-to-image (S2I) as a build strategy.

oc new-app https://github.com/<your GitHub username>/rating-web --strategy=source

Create rating-web using oc cli

Configure the required environment variables

Create the API environment variable for rating-web Deployment Config. The value of this variable is going to be the hostname/port of the rating-api service.

Instead of setting the environment variable through the Azure Red Hat OpenShift Web Console, you can set it through the OpenShift CLI.

oc set env deploy rating-web API=http://rating-api:8080

Expose the rating-web service using a Route

Expose the service.

oc expose svc/rating-web

Find out the created route hostname

oc get route rating-web

You should get a response similar to the below.

Retrieve the created route

Notice the fully qualified domain name (FQDN) is comprised of the application name and project name by default. The remainder of the FQDN, the subdomain, is your Azure Red Hat OpenShift cluster specific apps subdomain.
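That default hostname composition can be sketched as follows (the apps subdomain below is a hypothetical stand-in for your cluster’s actual one):

```shell
APP=rating-web
PROJECT=workshop
APPS_DOMAIN=apps.abcd1234.eastus.aroapp.io   # hypothetical cluster apps subdomain

# [app name]-[project name].[cluster apps subdomain]
ROUTE_HOST="${APP}-${PROJECT}.${APPS_DOMAIN}"
echo "$ROUTE_HOST"
```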

Try the service

Open the hostname in your browser, you should see the rating app page. Play around, submit a few votes and check the leaderboard.

rating-web homepage

Setup GitHub webhook

To trigger S2I builds when you push code into your GitHub repo, you’ll need to set up the GitHub webhook.

Retrieve the GitHub webhook trigger secret. You’ll need to use this secret in the GitHub webhook URL.

oc get bc/rating-web -o=jsonpath='{.spec.triggers..github.secret}'

You’ll get back something similar to the below. Make a note of the secret key in the red box as you’ll need it in a few steps.

Rating Web GitHub trigger secret

Retrieve the GitHub webhook trigger URL from the build configuration.

oc describe bc/rating-web

Rating Web GitHub trigger url

Replace the <secret> placeholder in that webhook URL with the secret you retrieved in the previous step. You’ll use this URL to set up the webhook on your GitHub repository.

In your GitHub repository, select Add Webhook from Settings → Webhooks.

Paste the URL output (similar to above) into the Payload URL field.

Change the Content Type from GitHub’s default application/x-www-form-urlencoded to application/json.

Click Add webhook.

GitHub add webhook

You should see a message from GitHub stating that your webhook was successfully configured.

Now, whenever you push a change to your GitHub repository, a new build will automatically start, and upon a successful build a new deployment will start.

Make a change to the website app and see the rolling update

Go to the<your GitHub username>/rating-web/blob/master/src/App.vue file in your repository on GitHub.

Edit the file, and change the background-color: #999; line to be background-color: #0071c5.

Commit the changes to the file into the master branch.

GitHub edit app

Immediately, go to the Builds tab in the OpenShift Web Console. You’ll see a new build queued up which was triggered by the push. Once this is done, it will trigger a new deployment and you should see the new website color updated.

Webhook build

New rating website


Create Network Policy

Now that you have the application working, it is time to apply some security hardening. You’ll use network policies to restrict communication to the rating-api.

Switch to the Cluster Console

Switch to the Cluster Console page, then switch to the workshop project and click Create Network Policy. Cluster console page

Create network policy

You will create a policy that applies to any pod matching the app=rating-api label. The policy will allow ingress only from pods matching the app=rating-web label.

Use the YAML below in the editor, and make sure you’re targeting the workshop project.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-from-web
  namespace: workshop
spec:
  podSelector:
    matchLabels:
      app: rating-api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: rating-web

Create network policy

Click Create.


Lab 2 - ARO Internals

Application Overview


Note In order to simplify the deployment of the app (which you will do next) we have included all the objects needed in the above YAMLs as “all-in-one” YAMLs. In reality, though, an enterprise would most likely want to have a separate YAML file for each Kubernetes object.
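The “all-in-one” pattern simply concatenates the object definitions in one file, separated by `---` document markers; a minimal sketch (the names and images below are illustrative, not the actual lab objects):

```yaml
# all-in-one.yaml: multiple Kubernetes objects in a single file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-frontend
  template:
    metadata:
      labels:
        app: example-frontend
    spec:
      containers:
        - name: frontend
          image: example/frontend:latest
---
apiVersion: v1
kind: Service
metadata:
  name: example-frontend-svc
spec:
  selector:
    app: example-frontend
  ports:
    - port: 8080
```

A single `oc apply -f all-in-one.yaml` then creates every object in the file.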

About OSToy

OSToy is a simple Node.js application that we will deploy to Azure Red Hat OpenShift. It is used to help us explore the functionality of Kubernetes. This application has a user interface from which you can:

  • write messages to the log (stdout / stderr)
  • intentionally crash the application to view self-healing
  • toggle a liveness probe and monitor OpenShift behavior
  • read config maps, secrets, and env variables
  • if connected to shared storage, read and write files
  • check network connectivity, intra-cluster DNS, and communication with an included microservice

OSToy Application Diagram

OSToy Diagram

Familiarization with the Application UI

  1. Shows the pod name that served your browser the page.
  2. Home: The main page of the application where you can perform some of the functions listed which we will explore.
  3. Persistent Storage: Allows us to write data to the persistent volume bound to this application.
  4. Config Maps: Shows the contents of configmaps available to the application and the key:value pairs.
  5. Secrets: Shows the contents of secrets available to the application and the key:value pairs.
  6. ENV Variables: Shows the environment variables available to the application.
  7. Networking: Tools to illustrate networking within the application.
  8. Shows some more information about the application.

Home Page

Learn more about the application

To learn more, click on the “About” menu item on the left once we deploy the app.

ostoy About

Application Deployment

Retrieve login command

If not logged in via the CLI, click on the dropdown arrow next to your name in the top-right and select Copy Login Command.

CLI Login

Then go to your terminal and paste that command and press enter. You will see a similar confirmation message if you successfully logged in.

$ oc login --token=hUXXXXXX
Logged into "" as "okashi" using the token provided.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * aro-shifty

Create new project

Create a new project called “OSToy” in your cluster.

Use the following command

oc new-project ostoy

You should receive the following response

$ oc new-project ostoy
Now using project "ostoy" on server "".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~

to build a new example application in Ruby.

Equivalently, you can also create this new project using the web UI by selecting “Application Console” at the top and then clicking the “+Create Project” button on the right.

UI Create Project

Download YAML configuration

Download the Kubernetes deployment object YAML files from the following locations to your Azure Cloud Shell, in a directory of your choosing (just remember where you placed them for the next step).

Feel free to open them up and take a look at what we will be deploying. For simplicity, this lab places all the Kubernetes objects we are deploying in one “all-in-one” YAML file, though in reality there are benefits to separating them into individual YAML files.



Deploy backend microservice

The microservice application serves internal web requests and returns a JSON object containing the current hostname and a randomly generated color string.

In your command line deploy the microservice using the following command:

oc apply -f ostoy-microservice-deployment.yaml

You should see the following response:

$ oc apply -f ostoy-microservice-deployment.yaml
deployment.apps/ostoy-microservice created
service/ostoy-microservice-svc created

Deploy the front-end service

The frontend deployment contains the Node.js frontend for our application, along with a few other Kubernetes objects to illustrate examples.

If you open the ostoy-fe-deployment.yaml you will see we are defining:

  • Persistent Volume Claim
  • Deployment Object
  • Service
  • Route
  • Configmaps
  • Secrets

In your command line deploy the frontend along with creating all objects mentioned above by entering:

oc apply -f ostoy-fe-deployment.yaml

You should see all objects created successfully

$ oc apply -f ostoy-fe-deployment.yaml
persistentvolumeclaim/ostoy-pvc created
deployment.apps/ostoy-frontend created
service/ostoy-frontend-svc created
configmap/ostoy-configmap-env created
secret/ostoy-secret-env created
configmap/ostoy-configmap-files created
secret/ostoy-secret created

Get route

Get the route so that we can access the application by running oc get route.

You should see the following response:

NAME           HOST/PORT                                                      PATH      SERVICES              PORT      TERMINATION   WILDCARD
ostoy-route             ostoy-frontend-svc   <all>                   None

Copy the host from the output above, paste it into your browser, and press enter. You should see the homepage of our application.
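If you want just the hostname (for scripting, or to paste without the table clutter), a jsonpath template works well. This is a sketch: the fallback value is a placeholder used only when no cluster is reachable from the current shell.

```shell
# Grab only the hostname of the route via jsonpath (requires a logged-in oc
# session; the fallback below is a placeholder, not a real route).
ROUTE_HOST=$(oc get route ostoy-route -o 'jsonpath={.spec.host}' 2>/dev/null) || true
echo "http://${ROUTE_HOST:-ostoy-route.apps.example.com}"
```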

Home Page

Logging and Metrics

Assuming you can access the application via the route provided and are still logged in to the CLI (go back to part 2 if you need to do either of those), we’ll start to use this application. As stated earlier, this application will allow you to “push the buttons” of OpenShift and see how it works. We will begin by testing the logs.

Click on the Home menu item and then click in the message box for “Log Message (stdout)” and write any message you want to output to the stdout stream. You can try “All is well!”. Then click “Send Message”.

Logging stdout

Click in the message box for “Log Message (stderr)” and write any message you want to output to the stderr stream. You can try “Oh no! Error!”. Then click “Send Message”.

Logging stderr

View logs directly from the pod

Go to the CLI and enter the following command to retrieve the name of your frontend pod which we will use to view the pod logs:

$ oc get pods -o name
pod/ostoy-frontend-679cb85695-5cn7x
pod/ostoy-microservice-86b4c6f559-p594d

So the pod name in this case is ostoy-frontend-679cb85695-5cn7x. Then run oc logs ostoy-frontend-679cb85695-5cn7x and you should see your messages:

$ oc logs ostoy-frontend-679cb85695-5cn7x
ostoy-frontend-679cb85695-5cn7x: server starting on port 8080
Redirecting to /home
stdout: All is well!
stderr: Oh no! Error!

You should see both the stdout and stderr messages.
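As an optional shortcut, `oc logs` can also target the deployment directly, letting it pick a running pod for you instead of looking up the generated pod name. A sketch (it requires a logged-in oc session and prints a note otherwise):

```shell
# Let oc choose a pod from the ostoy-frontend deployment; --tail limits the
# output to the most recent lines.
oc logs deployment/ostoy-frontend --tail=20 2>/dev/null || echo "oc session not available"
```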

View logs using Azure Monitor Integration

One can use the native Azure service, Azure Monitor, to view and keep application logs along with metrics. This lab assumes that the cluster was already configured to use Azure Monitor for application logs at cluster creation. If you want more information on how to connect this for a new or existing cluster, see the Azure Monitor for containers docs.

Access the Azure portal at https://portal.azure.com.

Click on “Monitor”.


Click Logs in the left menu.

Note: if you are asked to select a scope select the Log Analytics scope for your cluster

container logs

Expand “ContainerInsights”.

Double click “ContainerLog”.

Then click the “Run” button at the top.

container logs

In the bottom pane you will see the results of the application logs returned. You might need to sort, but you should see the two lines we output to stdout and stderr.

container logs

View Metrics using Azure Monitor Integration

Click on “Containers” in the left menu under Insights.


Click on your cluster that is integrated with Azure Monitor.


You will see metrics for your cluster such as resource consumption over time and pod counts. Feel free to explore the metrics here.


For example, if you want to see how much resources our OSTOY pods are using click on the “Containers” tab.

Enter “ostoy” into the search box near the top left.

You will see the 2 pods we have, one for the frontend and one for the microservice, with the relevant metric. Feel free to select other options to see min, max, or other percentile usages of the pods. You can also switch the view to memory consumption.

container metrics

Exploring Health Checks

In this section we will intentionally crash our pods as well as make a pod non-responsive to the liveness probes and see how Kubernetes behaves. We will first intentionally crash our pod and see that Kubernetes will self-heal by immediately spinning it back up. Then we will trigger the health check by stopping the response on the /health endpoint in our app. After three consecutive failures, Kubernetes should kill the pod and then recreate it.
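The “three consecutive failures” behavior comes from a liveness probe on the frontend container. The actual definition lives in ostoy-fe-deployment.yaml; a sketch of what such a probe looks like (the /health path and the 3-failure threshold come from the text above, while the delay and period values here are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /health          # endpoint OSToy stops answering when health is toggled off
    port: 8080
  initialDelaySeconds: 10  # illustrative value
  periodSeconds: 5         # illustrative value
  failureThreshold: 3      # three consecutive failures -> kubelet restarts the container
```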

It would be best to prepare by splitting your screen between the OpenShift Web UI and the OSToy application so that you can see the results of our actions immediately.


But if your screen is too small or that just won’t work, then open the OSToy application in another tab so you can quickly switch to the OpenShift Web Console once you click the button. To get to this deployment in the OpenShift Web Console go to:

Applications > Deployments > click the number in the “Last Version” column for the “ostoy-frontend” row

Deploy Num

Go to the OSToy app, click on Home in the left menu, and enter a message in the “Crash Pod” tile (ie: “This is goodbye!”) and press the “Crash Pod” button. This will cause the pod to crash and Kubernetes should restart the pod. After you press the button you will see:

Crash Message

Quickly switch to the Deployment screen. You will see that the pod is red, meaning it is down but should quickly come back up and show blue.

Pod Crash

You can also check in the pod events and further verify that the container has crashed and been restarted.

Pod Events

Keep the page from the pod events still open from the previous step. Then in the OSToy app click on the “Toggle Health” button, in the “Toggle Health Status” tile. You will see the “Current Health” switch to “I’m not feeling all that well”.

Pod Events

This will cause the app to stop responding with an HTTP 200 code. After 3 consecutive failures (“A”), Kubernetes will kill the pod (“B”) and restart it (“C”). Quickly switch back to the pod events tab and you will see that the liveness probe failed and the pod was restarted.

Pod Events2

Persistent Storage

In this section we will execute a simple example of using persistent storage by creating a file that will be stored on a persistent volume in our cluster and then confirm that it will “persist” across pod failures and recreation.

Inside the OpenShift web UI click on Storage in the left menu. You will then see a list of all persistent volume claims that our application has made. In this case there is just one called “ostoy-pvc”. You will also see other pertinent information such as whether it is bound or not, size, access mode and age.

In this case the mode is RWO (ReadWriteOnce), which means that the volume can only be mounted to one node, but the pod(s) can both read and write to that volume. The default in ARO is for persistent volumes to be backed by Azure Disk, but it is possible to choose Azure Files so that you can use the RWX (ReadWriteMany) access mode. (See here for more info on access modes)
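For reference, a claim like the one listed above is declared roughly as follows. This is a sketch: the name matches the claim we saw, but the storage size is illustrative, with the real values coming from ostoy-fe-deployment.yaml:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ostoy-pvc
spec:
  accessModes:
    - ReadWriteOnce   # RWO: volume mountable by a single node
  resources:
    requests:
      storage: 1Gi    # illustrative size
```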

In the OSToy app click on Persistent Storage in the left menu. In the “Filename” area enter a filename for the file you will create (e.g. “test-pv.txt”).

Underneath that, in the “File Contents” box, enter text to be stored in the file (e.g. “Azure Red Hat OpenShift is the greatest thing since sliced bread!” or simply “test”). Then click “Create file”.

Create File

You will then see the file you created appear above under “Existing files”. Click on the file and you will see the filename and the contents you entered.

View File

We now want to kill the pod and ensure that the new pod that spins up will be able to see the file we created, exactly as we did in the previous section. Click on Home in the left menu.

Click on the “Crash pod” button. (You can enter a message if you’d like).

Click on Persistent Storage in the left menu

You will see the file you created is still there and you can open it to view its contents to confirm.

Crash Message

Now let’s confirm that the file is actually there by using the CLI to check that it is available to the container. If you remember, we mounted the directory /var/demo_files to our PVC. Get the name of your frontend pod:

oc get pods

then get an SSH session into the container

oc rsh <podname>

then cd /var/demo_files

if you enter ls you can see all the files you created. Next, let’s open the file we created and see the contents

cat test-pv.txt

You should see the text you entered in the UI.

$ oc get pods
NAME                                  READY     STATUS    RESTARTS   AGE
ostoy-frontend-5fc8d486dc-wsw24       1/1       Running   0          18m
ostoy-microservice-6cf764974f-hx4qm   1/1       Running   0          18m

$ oc rsh ostoy-frontend-5fc8d486dc-wsw24
/ $ cd /var/demo_files/

/var/demo_files $ ls
lost+found   test-pv.txt

/var/demo_files $ cat test-pv.txt 
Azure Red Hat OpenShift is the greatest thing since sliced bread!

Then exit the SSH session by typing exit. You will then be in your CLI.


Configuration

In this section we’ll take a look at how OSToy can be configured using ConfigMaps, Secrets, and Environment Variables. This section won’t go into detail explaining each concept, but will show you how they are exposed to the application.

Configuration using ConfigMaps

ConfigMaps allow you to decouple configuration artifacts from container image content to keep containerized applications portable.

Click on Config Maps in the left menu.

This will display the contents of the configmap available to the OSToy application. We defined this in the ostoy-fe-deployment.yaml here:

kind: ConfigMap
apiVersion: v1
metadata:
  name: ostoy-configmap-files
data:
  config.json:  '{ "default": "123" }'

Configuration using Secrets

Kubernetes Secret objects allow you to store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it, verbatim, into a Pod definition or a container image.

Click on Secrets in the left menu.

This will display the contents of the secrets available to the OSToy application. We defined this in the ostoy-fe-deployment.yaml here:

apiVersion: v1
kind: Secret
metadata:
  name: ostoy-secret
type: Opaque

Configuration using Environment Variables

Using environment variables is an easy way to change application behavior without requiring code changes. It allows different deployments of the same application to potentially behave differently based on the environment variables, and OpenShift makes it simple to set, view, and update environment variables for Pods/Deployments.

Click on ENV Variables in the left menu.

This will display the environment variables available to the OSToy application. We added three as defined in the deployment spec of ostoy-fe-deployment.yaml here:

  env:
  - name: ENV_TOY_CONFIGMAP
    valueFrom:
      configMapKeyRef:
        name: ostoy-configmap-env
        key: ENV_TOY_CONFIGMAP
  - name: ENV_TOY_SECRET
    valueFrom:
      secretKeyRef:
        name: ostoy-secret-env
        key: ENV_TOY_SECRET

The third variable, MICROSERVICE_NAME (not shown in the excerpt above), is used for intra-cluster communication between pods for this application. The application looks for this environment variable to know how to access the microservice in order to get the colors.

Networking and Scaling

In this section we’ll see how OSToy uses intra-cluster networking to separate functions by using microservices and visualize the scaling of pods.

Let’s review how this application is set up…

OSToy Diagram

As can be seen in the image above, we have defined at least two separate pods, each with its own service. One is the frontend web application (with a service and a publicly accessible route) and the other is the backend microservice, with a service object created so that the frontend pod can communicate with the microservice (across pods if there is more than one). Therefore this microservice is not accessible from outside the cluster, nor from other namespaces/projects (due to ARO’s network policy, ovs-networkpolicy). The sole purpose of this microservice is to serve internal web requests and return a JSON object containing the current hostname and a randomly generated color string. This color string is used to display a box of that color in the tile titled “Intra-cluster Communication”.


Click on Networking in the left menu. Review the networking configuration.

The right tile, titled “Hostname Lookup”, illustrates how the service name created for a pod can be translated into an internal ClusterIP address. Enter the name of the microservice using the format defined in our ostoy-microservice.yaml, which can be seen here:

apiVersion: v1
kind: Service
metadata:
  name: ostoy-microservice-svc
  labels:
    app: ostoy-microservice
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: ostoy-microservice

In this case we will enter: ostoy-microservice-svc.ostoy.svc.cluster.local
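That address follows the standard Kubernetes service DNS pattern, &lt;service&gt;.&lt;namespace&gt;.svc.cluster.local, which you can compose for any service in the cluster:

```shell
# Compose the cluster-internal DNS name for our microservice's service
SERVICE=ostoy-microservice-svc
NAMESPACE=ostoy
echo "${SERVICE}.${NAMESPACE}.svc.cluster.local"
# -> ostoy-microservice-svc.ostoy.svc.cluster.local
```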

We will see an IP address returned. This is the intra-cluster IP address, which is only accessible from within the cluster.

ostoy DNS


Scaling

OpenShift allows one to scale the number of pods for each part of an application up or down as needed. This can be accomplished by changing our replicaset/deployment definition (declarative), via the command line (imperative), or via the web UI (imperative). In our deployment definition (part of ostoy-microservice-deployment.yaml) we stated that we only want one pod for our microservice to start with. This means that the Kubernetes replication controller will always strive to keep one pod alive.

If we look at the tile on the left we should see one box randomly changing colors. This box displays the randomly generated color sent to the frontend by our microservice along with the pod name that sent it. Since we see only one box that means there is only one microservice pod. We will now scale up our microservice pods and will see the number of boxes change.

To confirm that we only have one pod running for our microservice, run the following command, or use the web UI.

[okashi@ok-vm ostoy]# oc get pods
NAME                                   READY     STATUS    RESTARTS   AGE
ostoy-frontend-679cb85695-5cn7x       1/1       Running   0          1h
ostoy-microservice-86b4c6f559-p594d   1/1       Running   0          1h

Let’s change our microservice definition yaml to reflect that we want 3 pods instead of the one we see. Download the ostoy-microservice-deployment.yaml and save it on your local machine.

Open the file using your favorite editor. Ex: vi ostoy-microservice-deployment.yaml.

Find the line that states replicas: 1 and change that to replicas: 3. Then save and quit.

It will look like this

spec:
  selector:
    matchLabels:
      app: ostoy-microservice
  replicas: 3

Assuming you are still logged in via the CLI, execute the following command:

oc apply -f ostoy-microservice-deployment.yaml

Confirm that there are now 3 pods via the CLI (oc get pods) or the web UI (Overview > expand “ostoy-microservice”).

See this visually by visiting the OSToy app and seeing how many boxes you now see. It should be three.

UI Scale

Now we will scale the pods down using the command line. Execute the following command from the CLI:

oc scale deployment ostoy-microservice --replicas=2

Confirm that there are indeed 2 pods, via the CLI (oc get pods) or the web UI.

See this visually by visiting the OSToy App and seeing how many boxes you now see. It should be two.

Lastly let’s use the web UI to scale back down to one pod. In the project you created for this app (ie: “ostoy”) in the left menu click Overview > expand “ostoy-microservice”. On the right you will see a blue circle with the number 2 in the middle. Click on the down arrow to the right of that to scale the number of pods down to 1.

UI Scale

See this visually by visiting the OSToy app and seeing how many boxes you now see. It should be one. You can also confirm this via the CLI or the web UI



Pod Autoscaling

In this section we will explore how the Horizontal Pod Autoscaler (HPA) can be used within Kubernetes/OpenShift.

As defined in the Kubernetes documentation:

Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization.

We will create an HPA and then use OSToy to generate CPU intensive workloads. We will then observe how the HPA will scale up the number of pods in order to handle the increased workloads.

1. Create the Horizontal Pod Autoscaler

Run the following command to create the autoscaler. This will create an HPA that maintains between 1 and 10 replicas of the pods controlled by the ostoy-microservice deployment. Roughly speaking, the HPA will increase and decrease the number of replicas (via the deployment) to maintain an average CPU utilization across all pods of 80%. Since each pod requests 50 millicores, this means an average CPU usage of 40 millicores.

oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10
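The 40-millicore figure follows directly from the numbers above, since the HPA target is expressed as a percentage of each pod’s CPU request:

```shell
# Scaling trigger point: 80% of the 50-millicore CPU request per pod
REQUEST_MILLICORES=50
TARGET_PERCENT=80
echo "$(( REQUEST_MILLICORES * TARGET_PERCENT / 100 ))m"   # prints: 40m
```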

2. View the current number of pods

In the OSToy app in the left menu click on “Autoscaling” to access this portion of the workshop.

HPA Menu

As in the networking section, you will see the total number of pods available for the microservice by counting the number of colored boxes. In this case we have only one. This can be verified through the web UI or from the CLI.

You can use the following command to see the running microservice pods only: oc get pods --field-selector=status.phase=Running | grep microservice

HPA Main

3. Increase the load

Now that we know that we only have one pod let’s increase the workload that the pod needs to perform. Click the link in the center of the card that says “increase the load”. Please click only ONCE!

This will generate some CPU intensive calculations. (If you are curious about what it is doing you can click here).

Note: The page may become slightly unresponsive. This is normal; so be patient while the new pods spin up.

4. See the pods scale up

After about a minute the new pods will show up on the page (represented by the colored rectangles). Confirm that the pods did indeed scale up through the OpenShift Web Console or the CLI (you can use the command above).

Note: The page may still lag a bit which is normal.

5. Review resources in Azure Monitor

After confirming that the autoscaler did spin up new pods, revisit Azure Monitor as we did in the logging section. By clicking on the Containers tab we can see the resource consumption of the pods and see that three pods were created to handle the load.

HPA Metrics

Lab 3 - CI/CD

CI/CD Demo Overview


With Continuous Integration and Continuous Delivery (CI/CD), you can accelerate the development and delivery of your microservices. Azure Red Hat OpenShift provides seamless support for open source DevOps and CI/CD tools, like Jenkins, Nexus, GitHub and SonarQube.

The OpenShift CI/CD Demo

In this lab, you’ll deploy a complete CI/CD demo on Azure Red Hat OpenShift.

The following components will be deployed onto your Azure Red Hat OpenShift cluster.

  • A sample Java application, the source code is available at
  • A sample OpenShift Pipeline, a pre-defined CI/CD pipeline on ARO.
  • Jenkins, an open source CI/CD engine and automation server.
  • Nexus, a software artifact repository.
  • SonarQube, a code quality platform.
  • Gogs, a self-hosted Git service.
  • PostgreSQL, an open source relational database.
  • Eclipse Che, a web-based Eclipse IDE.

The following diagram shows the steps included in the deployment pipeline:

CI/CD Demo Diagram

On every pipeline execution, the code goes through the following steps:

  • Code is cloned from Gogs, built, tested and analyzed for bugs and bad patterns
  • The WAR artifact is pushed to Nexus Repository manager
  • A container image (tasks:latest) is built based on the Tasks application WAR artifact deployed on WildFly
  • The Tasks app container image is pushed to the internal image registry
  • The Tasks container image is deployed in a fresh new container in the DEV project
  • If tests are successful, the pipeline is paused for the release manager to approve the release to STAGE
  • If approved, the DEV image is tagged in the STAGE project
  • The staged image is deployed in a fresh new container in the STAGE project

To learn more about this CI/CD demo, you can visit this GitHub Repo.

Deploy CI/CD Demo

Clone the GitHub Repo

To deploy the CI/CD demo, you need to download the required deployment files from the GitHub repository.

On your Azure Cloud Shell, clone the OpenShift CI/CD demo repository. Currently ARO supports OpenShift 4.3, so you’ll need to check out the ocp-4.3 branch.

git clone -b ocp-4.3 cicd

Deploy the demo

Now you can proceed to deploy the demo by running the deployment script, which is under the scripts folder. In the following command, we specify installing Eclipse Che, using ephemeral storage, and setting the project name suffix to aro.

./cicd/scripts/ deploy --enable-che --ephemeral --project-suffix aro

Verify the deployment

After the deployment is completed, you can verify the newly created resources.

Run the following command to review the list of projects.

oc get project

You’ll see the following projects created by the demo provision script.

$ oc get project
NAME        DISPLAY NAME    STATUS
cicd-aro    CI/CD           Active
dev-aro     Tasks - Dev     Active
openshift                   Active
stage-aro   Tasks - Stage   Active

Confirm the status of all related containers is Running or Completed.

$ oc get pod -n cicd-aro

If the deployment finished successfully, you will see something similar to the following output.

NAME                        READY     STATUS      RESTARTS   AGE
che-2-l4q5k                 1/1       Running     0          2m
cicd-demo-installer-w46dp   0/1       Completed   0          1m
gogs-1-krr7x                1/1       Running     2          2m
gogs-postgresql-1-fftjn     1/1       Running     0          2m
jenkins-2-mqpwv             1/1       Running     0          3m
nexus-2-8wvwx               0/1       Running     0          2m
nexus-2-deploy              1/1       Running     0          2m
sonardb-1-4b6kg             1/1       Running     0          2m
sonarqube-1-844q7           1/1       Running     0          2m

From the Azure Red Hat OpenShift web console, you can see the newly created project and resources as well.

Run CI/CD Pipeline

Run the CI/CD Pipeline

A CI/CD pipeline is created by the demo provision script. Please review the pipeline and trigger an execution.

On the OpenShift Web Console, navigate to the project cicd-aro. From the side navigation menu, select the menu item Builds, then select Build Configs.

You will see there is a pipeline, named tasks-pipeline. Click on the name of the pipeline, then you will see the pipeline overview page. In the tab Configuration, you will see the definition of the pipeline.

Pipeline Definition

To run the pipeline, click on the button Actions and select Start Build from the dropdown. Then Azure Red Hat OpenShift will start a new execution instance for that pipeline.

Start Build

Monitor the pipeline

After triggering a pipeline execution, please monitor the execution on the web console.

You will see the execution result of each stage of the pipeline.

Pipeline Execution

For detailed information, you can click on the View Log link of an execution instance to review the real-time log output. After clicking View Log, you will be navigated to the Jenkins login page. Log in with your OpenShift credentials and grant the required permissions to Jenkins; you will then see the log output in Jenkins.

Pipeline Logs

Approve pipeline task

You may define steps which require user intervention. You can approve the request either in the Azure Red Hat OpenShift web console or in Jenkins. Please allow the pipeline to promote the deployment from the project Dev to Stage.

When the pipeline execution reaches the stage ‘Promote to STAGE?’, you will see that the pipeline is paused and asking for your input.

Approve Pipeline Task

Click on the link Input Required, and you will be navigated to Jenkins. Click the Back to Project button and then click on the Promote button in Jenkins to resume the pipeline build.

Approve Pipeline Task


Verify the results

After the pipeline execution is completed, please review the execution result on the web console. You can also log in to the Jenkins console to check out the details.

If everything works fine, you will be able to see all pipeline stages completed successfully, as shown below. The sample application was built and deployed into the projects Dev and Stage.

CiCD Result

You can check the status of the application pods in the projects Dev and Stage. The following are the commands and sample output.

$ oc get pod -n dev-aro
NAME            READY     STATUS      RESTARTS   AGE
tasks-1-build   0/1       Completed   0          9m
tasks-3-59x64   1/1       Running     0          8m
$ oc get pod -n stage-aro
NAME            READY     STATUS    RESTARTS   AGE
tasks-4-v8fzt   1/1       Running   0          7m

You can also get the Route of your deployment in the project Dev or Stage, then access the deployment with that URL in a web browser.

$ oc get route -n dev-aro
NAME      HOST/PORT                                                  PATH      SERVICES   PORT      TERMINATION   WILDCARD
tasks             tasks      8080                    None

Cloud Native IDE

To provide your development team a cloud native experience, you can leverage the power of a cloud native integrated development environment (IDE). Eclipse Che is an open source web-based IDE. It supports multiple users and a wide range of programming languages and frameworks.

Launch Eclipse Che

Please launch Eclipse Che and create a workspace for the demo application.

Get the Route for Eclipse Che with the following command.

oc get route -n cicd-aro che

You will see output similar to the following.

NAME      HOST/PORT                                                 PATH      SERVICES   PORT      TERMINATION   WILDCARD
che             che-host   http                    None                                                                                                                                              

To access Eclipse Che web console, open the URL which is indicated by HOST/PORT in a web browser.

On the Eclipse Che dashboard, select the Blank stack, and click the button CREATE & OPEN to create a blank workspace.

Eclipse Che UI

It will take some time for Eclipse Che to set up the workspace. Wait until the workspace is ready, then you can proceed to the next task.

Check Out Source Code

After the Eclipse Che workspace is ready, please check out the source code of the demo application on Eclipse Che.

Check out the Route URL of the Gogs server. Gogs is a lightweight self-hosted Git service.

$ oc get route -n cicd-aro gogs

Please log in to Gogs as user gogs with password gogs. Copy the address of the repository openshift-tasks.

Gogs Repo

Back in Eclipse Che, click Import Project from the Workspace menu to import the source code from the remote repository. Make sure to select GIT as the version control system, and paste the repository URL you copied from Gogs into the URL field. Check the Branch checkbox, and put eap-7 in the branch textbox. Click the Import button; on the next screen, select Maven for the project configuration.

Check out source code

Change and Deploy

Please make some changes to the demo application, and commit the changes to the remote Git repository.

Open the file src > main > webapp > index.jsp. Change line 7 and line 45 from OpenShift Tasks Demo to Azure Red Hat OpenShift Tasks Demo. Then save the changes.

Before you can commit your changes to Git, you need to set up your Git profile. From the top menu Profile, select the menu item Preferences. In the Preferences dialog, click Committer, which is under the Git section. Put in the user name gogs and an email address. Click the Save button, then the Close button, to save the changes and close the dialog.
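If you prefer the terminal, the same committer identity can be set with git itself. A sketch (the email below is a placeholder, since the workshop text leaves the real address unspecified):

```shell
# Terminal equivalent of the Che "Committer" preference.
# The email is a placeholder -- substitute your own.
git config --global user.name  "gogs"
git config --global user.email "gogs@example.local"
git config --global user.name    # prints: gogs
```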

Code change

Next, you need to commit the changes. Click the menu item Commit from the Git menu at the top. Put ‘Update title’ in the comment text area. Click the Commit button to commit the changes.

In the Terminal window, at the bottom of the Eclipse Che window, run the following commands to push the changes to the remote Gogs Git repository. You will be asked to enter the user name and password for the repository, which are both gogs.

$ cd openshift-tasks/
$ git push origin

Verify the update

After pushing the commit to the remote repository, the CI/CD pipeline will be triggered automatically. You will see that a new pipeline execution has been launched for the new commit. After the execution finishes successfully, you will be able to see your changes on the web page of the application.

As this example shows, it’s easy to build a CI/CD pipeline with open source tool chains and Azure Red Hat OpenShift. With container and cloud based CI/CD, application development and delivery are significantly accelerated.


The following people have contributed to this workshop, thanks!