M4


Warning: This is little more than a collection of notes at this point. Do not consider anything here to be complete or accurate.

How to build an offline/air-gapped, minimal, highly available OpenShift cluster.

Setting Up A Test Environment

This is a guide to setting up a bare iron machine to run OpenShift and kcli.

Install

whoami
# digimer
sudo dnf -y install libvirt libvirt-daemon-driver-qemu qemu-kvm tar
sudo usermod -aG qemu,libvirt $(id -un)
sudo newgrp libvirt
sudo systemctl enable --now libvirtd
sudo dnf -y copr enable karmab/kcli

sudo dnf -y install kcli
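
A quick check, not in the original notes, to confirm libvirt is actually up before configuring kcli:

systemctl is-active libvirtd
# active
sudo virsh list --all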

Configure

sudo kcli create pool -p /var/lib/libvirt/images default
Creating pool default...
sudo setfacl -m u:$(id -un):rwx /var/lib/libvirt/images
sudo virsh net-destroy default
Network default destroyed
sudo virsh net-undefine default
Network default has been undefined
kcli create network  -c 192.168.0.0/16 default
Network default deployed
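
Optional sanity checks (not part of the original notes): the new pool and network should be visible to virsh, and kcli should be able to reach the local hypervisor (the VM list will be empty at this point).

virsh pool-list --all
virsh net-list --all
kcli list vm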

Create the config:

kcli create host kvm -H 127.0.0.1 local
Using local as hostname
Host local created
Note: Use a 'pull-secret' file, not the RHN subscription settings; see the sketch after the config below.
cat ~/.kcli/config.yml
default:
  autostart: false
  client: local
  cloudinit: true
  cpuhotplug: false
  cpumodel: host-model
  diskinterface: virtio
  disks:
  - default: true
    size: 20
  disksize: 20
  diskthin: true
  enableroot: true
  guestagent: true
  guestid: guestrhel764
  host: 127.0.0.1
  insecure: true
  keep_networks: false
  memory: 2048
  memoryhotplug: false
  nested: true
  nets:
  - default
  networkwait: 0
  notify: false
  notifymethods:
  - pushbullet
  numcpus: 4
  pool: default
  privatekey: false
  protocol: ssh
  reservedns: false
  reservehost: false
  reserveip: false
  rhnregister: true
  rhnserver: https://subscription.rhsm.redhat.com
  rhnunregister: false
  rng: false
  sharedkey: false
  start: true
  storemetadata: false
  tempkey: false
  tpm: false
  tunnel: false
  tunneldir: /var/www/html
  tunnelport: 22
  tunneluser: root
  type: kvm
  user: root
  vmrules_strict: false
  vnc: true
  wait: false
  waittimeout: 0
local:
  host: 127.0.0.1
  pool: default
  protocol: ssh
  type: kvm
  user: root
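
On the pull-secret note above: OpenShift/OKD deployments need the pull secret downloaded from https://console.redhat.com/openshift/install/pull-secret, not RHN credentials. A minimal sketch with assumed file paths; kcli's OpenShift support takes a pull_secret parameter pointing at this file (verify the parameter name against your kcli version):

# Keep the downloaded pull secret somewhere kcli can read it, and pass its path
# via the pull_secret parameter when deploying the cluster.
cp ~/Downloads/pull-secret.txt ~/openshift_pull.json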


Notes

  • The bare iron OS is hardly relevant; it gets rebuilt.
# Fragment from the deployer wrapper script; $workers, $extregparam and "$1"
# (the plan name) are set elsewhere in the script.
defaultimg=""
ctlplanes="1"

kcli create plan --inputfile "$(dirname $0)/deployers/kcli-plan.yml" --threaded --param image=$defaultimg --param ctlplanes=$ctlplanes --param workers=$workers $extregparam "$1"
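
The plan expects the image named in its parameters to already exist in the default pool; kcli can fetch it first (image name taken from the plan parameters below):

kcli download image fedora40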

parameters:
  info: kubesan kcli test plan
  cluster: kubesan-test
  ctlplanes: 3
  workers: 3
  image: fedora40


kubesan-test:   # replace with 'an-anvil-01'
  type: kube
  ctlplanes: {{ ctlplanes }}
  workers: {{ workers }}
  image: {{ image }}     # remove this
  domain: ''             
  <add pull-secret file>

mycluster:
  type: cluster
  kubetype: openshift
  okd: true
  ctlplanes: 3
  workers: 3
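
A hedged example of deploying the OKD cluster definition above, using the same --inputfile pattern as the wrapper script (the file name mycluster-plan.yml is made up for illustration, and the pull secret must be in place first, as noted earlier):

kcli create plan --inputfile mycluster-plan.yml mycluster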

Test mk-anvil cluster:

parameters:
  info: kubesan kcli test plan
  cluster: mk-anvil
  ctlplanes: 3
  workers: 3
  image: fedora40

mk-anvil:
  type: kube
  ctlplanes: {{ ctlplanes }}
  workers: {{ workers }}
  image: {{ image }}
  domain: ''
  registry: true
  cmds:
    - yum -y install podman lvm2-lockd sanlock
    - sed -i "s|# use_watchdog = 1|use_watchdog = 0|" /etc/sanlock/sanlock.conf
    - >-
      sed -i "
      s|# validate_metadata = \"full\"|validate_metadata = \"none\"|;
      s|# multipath_component_detection = 1|multipath_component_detection = 0|;
      s|# md_component_detection = 1|md_component_detection = 0|;
      s|# backup = 1|backup = 0|;
      s|# archive = 1|archive = 0|;
      s|# use_lvmlockd = 0|use_lvmlockd = 1|;
      s|# thin_check_options = \[.*\]|thin_check_options = \[ \"-q\", \"--clear-needs-check-flag\", \"--skip-mappings\" \]|;
      s|# io_memory_size = 8192|io_memory_size = 65536|;
      s|# reserved_memory = 8192|reserved_memory = 0|
      " /etc/lvm/lvm.conf
{%for node in cluster|kubenodes(ctlplanes, workers) %}
    - if [ "$(hostname)" == "{{ node }}" ]; then sed -i "s|# host_id = 0|host_id = {{ loop.index }}|" /etc/lvm/lvmlocal.conf; fi
{%endfor%}
    - systemctl enable --now podman lvmlockd sanlock
# TODO: parameterize shared storage
kubesan-test-shared-1.img:
  type: disk
  thin: false
  size: 5
  pool: default
  vms: {{ cluster|kubenodes(ctlplanes, workers) }}
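
Deploying the test plan follows the same pattern; the plan file name below is assumed, and the exact node names for kcli ssh depend on how kcli names the plan's VMs:

kcli create plan --inputfile mk-anvil-plan.yml mk-anvil
kcli list vm
# Spot-check that the lock managers came up on one of the nodes:
kcli ssh <node name> "systemctl is-active lvmlockd sanlock"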

