M4
Warning: This is little more than a collection of notes at this point. Do not consider anything here to be complete or accurate.
How to build an offline/air-gapped, minimal, highly available OpenShift cluster.
Overview
This page explains what OpenShift is and defines its key components.
Architecture
Term | Description |
---|---|
bootstrap | A temporary machine that runs minimal Kubernetes and deploys the OpenShift Container Platform control plane. |
Cluster Version Operator (CVO) | An Operator that checks with the OpenShift Container Platform Update Service to see the valid updates and update paths based on current component versions and information in the graph. |
compute nodes | Nodes that are responsible for executing workloads for cluster users. Compute nodes are also known as worker nodes. |
containers | Lightweight and executable images that consist of software and all of its dependencies. Because containers virtualize the operating system, you can run containers anywhere, such as data centers, public or private clouds, and local hosts. |
container orchestration engine | Software that automates the deployment, management, scaling, and networking of containers. |
control groups (cgroups) | A Linux kernel feature that partitions sets of processes into groups to manage and limit the resources those processes consume. |
control plane | A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the life cycle of containers. Control planes are also known as control plane machines. |
deployment | A Kubernetes resource object that maintains the life cycle of an application. |
Dockerfile | A text file that contains the commands a user would run on a terminal to assemble the image. |
Ignition | A utility that RHCOS uses to manipulate disks during initial configuration. It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users. |
mirror registry | A registry that holds the mirror of OpenShift Container Platform images. |
namespaces | A namespace isolates specific system resources that would otherwise be visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. |
node | A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. |
OpenShift CLI (oc) | A command-line tool to run OpenShift Container Platform commands on the terminal. |
OpenShift Update Service (OSUS) | For clusters with internet access, Red Hat provides over-the-air updates through the OpenShift Update Service, a hosted service located behind public APIs. |
OpenShift image registry | A registry provided by OpenShift Container Platform to manage images. |
pod | One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. |
private registry | OpenShift Container Platform can use any server that implements the container image registry API as a source of images, which allows developers to push and pull their private container images. |
public registry | OpenShift Container Platform can use any server that implements the container image registry API as a source of images, which allows developers to push and pull their public container images. |
Source-to-Image (S2I) image | An image created based on the programming language of the application source code in OpenShift Container Platform to deploy applications. |
worker node | Nodes that are responsible for executing workloads for cluster users. Worker nodes are also known as compute nodes. |
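To see a few of these terms in practice, here are some read-only oc commands. This is a minimal sketch assuming you already have a kubeconfig for a running cluster; nothing here is specific to the build below.

# List the nodes (control plane and worker machines) in the cluster.
oc get nodes

# List pods across all namespaces.
oc get pods --all-namespaces

# Ask the Cluster Version Operator for the current version and update status.
oc get clusterversion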
Setting Up A Test Environment
This is a guide to setting up a bare-iron machine to run OpenShift and kcli.
Install
whoami
# digimer
sudo dnf -y install libvirt libvirt-daemon-driver-qemu qemu-kvm tar
sudo usermod -aG qemu,libvirt $(id -un)
newgrp libvirt
sudo systemctl enable --now libvirtd
sudo dnf -y copr enable karmab/kcli
sudo dnf -y install kcli
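Before configuring anything, a quick sanity check that libvirt and kcli are functional doesn't hurt;

# The libvirt daemon should be active.
sudo systemctl status libvirtd --no-pager

# kcli should reach the local hypervisor; an empty VM list is expected at this point.
kcli list vm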
Configure
sudo kcli create pool -p /var/lib/libvirt/images default
Creating pool default...
sudo setfacl -m u:$(id -un):rwx /var/lib/libvirt/images
sudo virsh net-destroy default
Network default destroyed
sudo virsh net-undefine default
Network default has been undefined
kcli create network -c 192.168.0.0/16 default
Network default deployed
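To confirm the stock libvirt network really got replaced by the kcli-managed one;

# 'default' should be listed as active again, now using the 192.168.0.0/16 range.
sudo virsh net-list --all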
Create the config;
kcli create host kvm -H 127.0.0.1 local
Using local as hostname
Host local created
Note: Use a 'pull-secret' file, not RHN credentials.
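The pull secret is a JSON file downloaded from the Red Hat Hybrid Cloud Console (https://console.redhat.com/openshift/install/pull-secret). A quick way to confirm the saved file is intact JSON;

# Pretty-prints the pull secret if it parses, errors out if it's mangled.
python3 -m json.tool ~/pull-secret.txt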
cat ~/openshift-config.yml
info: Madi's Test Plan on os-02
cluster: mk-anvil-02
domain: 'digimer.ca'
version: stable
tag: 4.19
ctlplanes: 3
workers: 3
memory: 16384
numcpus: 16
pull_secret: /home/digimer/pull-secret.txt
Make sure there's a key for root and for the user running 'kcli';
if [ -e ~/.ssh/id_ed25519.pub ]; then echo key exists; else echo key needed; ssh-keygen -f ~/.ssh/id_ed25519 -P ""; fi
sudo kcli create kube openshift --paramfile ~/openshift-config.yml mk-anvil-02
Deploying on client local
Deploying cluster mk-anvil-02
Using stable version
Using 192.168.255.253 as api_ip
Downloading openshift-install from https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable-4.19
Move downloaded openshift-install somewhere in your PATH if you want to reuse it
Using installer version 4.19.7
Grabbing image rhcos-9.6.20250523-0-openstack.x86_64.qcow2 from url https://rhcos.mirror.openshift.com/art/storage/prod/streams/rhel-9.6/builds/9.6.20250523-0/x86_64/rhcos-9.6.20250523-0-openstack.x86_64.qcow2.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
40 1196M 40 480M 0 0 25.6M 0 0:00:46 0:00:18 0:00:28 25.7M
....
Go have a coffee or a nap, this will take a while...
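While the installer runs, the VMs can be watched from another terminal. The kubeconfig path below is where kcli normally drops cluster credentials; adjust it if your layout differs (for example, if the deploy ran as root);

# Watch the bootstrap, control plane, and worker VMs appear.
kcli list vm

# Once the deploy finishes, point oc at the new cluster and check the nodes.
export KUBECONFIG=$HOME/.kcli/clusters/mk-anvil-02/auth/kubeconfig
oc get nodes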
More info;
https://kcli.readthedocs.io/en/latest/#openshift-cluster-creation
vim mytest.yml
kcli create kube openshift --paramfile mytest.yml mk-anvil-01
Notes
- The bare-iron OS hardly matters; it gets rebuilt during deployment.
defaultimg=""
ctlplanes="1"
workers="1"
extregparam=""
kcli create plan --inputfile "$(dirname $0)/deployers/kcli-plan.yml" --threaded --param image=$defaultimg --param ctlplanes=$ctlplanes --param workers=$workers $extregparam "$1"
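Expanded with literal values, that call is equivalent to something like the following (the plan name and parameter values are illustrative only);

kcli create plan --inputfile deployers/kcli-plan.yml --threaded --param image=fedora40 --param ctlplanes=1 --param workers=1 my-test-plan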
parameters:
  info: kubesan kcli test plan
  cluster: kubesan-test
  ctlplanes: 3
  workers: 3
  image: fedora40

kubesan-test: # replace with 'an-anvil-01'
  type: kube
  ctlplanes: {{ ctlplanes }}
  workers: {{ workers }}
  image: {{ image }} # remove this
  domain: ''
  <add pull-secret file>
mycluster:
  type: cluster
  kubetype: openshift
  okd: true
  ctlplanes: 3
  workers: 3
Test mk-anvil cluster:
parameters:
  info: kubesan kcli test plan
  cluster: mk-anvil
  ctlplanes: 3
  workers: 3
  image: fedora40

mk-anvil:
  type: kube
  ctlplanes: {{ ctlplanes }}
  workers: {{ workers }}
  image: {{ image }}
  domain: ''
  registry: true
  cmds:
  - yum -y install podman lvm2-lockd sanlock
  - sed -i "s|# use_watchdog = 1|use_watchdog = 0|" /etc/sanlock/sanlock.conf
  - >-
    sed -i "
    s|# validate_metadata = \"full\"|validate_metadata = \"none\"|;
    s|# multipath_component_detection = 1|multipath_component_detection = 0|;
    s|# md_component_detection = 1|md_component_detection = 0|;
    s|# backup = 1|backup = 0|;
    s|# archive = 1|archive = 0|;
    s|# use_lvmlockd = 0|use_lvmlockd = 1|;
    s|# thin_check_options = \[.*\]|thin_check_options = \[ \"-q\", \"--clear-needs-check-flag\", \"--skip-mappings\" \]|;
    s|# io_memory_size = 8192|io_memory_size = 65536|;
    s|# reserved_memory = 8192|reserved_memory = 0|
    " /etc/lvm/lvm.conf
{%for node in cluster|kubenodes(ctlplanes, workers) %}
  - if [ "$(hostname)" == "{{ node }}" ]; then sed -i "s|# host_id = 0|host_id = {{ loop.index }}|" /etc/lvm/lvmlocal.conf; fi
{%endfor%}
  - systemctl enable --now podman lvmlockd sanlock

# TODO: parameterize shared storage
kubesan-test-shared-1.img:
  type: disk
  thin: false
  size: 5
  pool: default
  vms: {{ cluster|kubenodes(ctlplanes, workers) }}
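Once that plan is up, one way to check that the shared-locking stack actually works is to create a shared VG on the shared disk from one of the nodes. The VG name and device name here are assumptions (the 5 GiB disk typically shows up as a virtio device like /dev/vdb);

# Create a shared volume group on the shared disk, then start its lockspace.
vgcreate --shared test_vg /dev/vdb
vgchange --lockstart test_vg

# Dump lvmlockd's view of lockspaces and locks.
lvmlockctl --info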
References
- RHN OpenShift
- kcli os deploy
- Deploying Red Hat OpenShift Operators in a disconnected environment
- kcli
- example
- Aman's link