Build an M3 Anvil! Cluster
Warning: This is a work in progress document. While this header is here, please do not consider this article complete or accurate. |
First off, what is an Anvil!?
In short, it's a system designed to keep servers running through an array of failures, without needing an internet connection.
Think about ship-board computer systems, remote research facilities, factories without dedicated IT staff, un-staffed branch offices and so forth. Where most hosted solutions expect technical staff to be available in short order, an Anvil! is designed to continue functioning properly for weeks or months with faulty components.
In these cases, the Anvil! system will predict component failure and mitigate automatically. It will adapt to changing threat conditions, like cooling or power loss, including automatic recovery from full power loss. It is designed around the understanding that a fault condition may not be repaired for weeks or months, and can do automated risk analysis and mitigation.
That's an Anvil! cluster!
An Anvil! cluster is designed so that any component in the cluster can fail, be removed and a replacement installed without needing a maintenance window. This includes power, network, compute and management systems.
Components
The minimum configuration needed to host servers on an Anvil! is this;
Management Layer | |
---|---|
Striker Dashboard 1 | Striker Dashboard 2 |
Anvil! Node 1 | |
---|---|
Subnode 1 | Subnode 2 |
Foundation Pack 1 | |
---|---|
Ethernet Switch 1 | Ethernet Switch 2 |
Switched PDU 1 | Switched PDU 2 |
UPS 1 | UPS 2 |
With this configuration, you can host as many servers as you would like, limited only by the resources of Node 1 (itself made of a pair of physical subnodes with your choice of processing, RAM and storage resources).
Scaling
To add capacity for hosted servers, individual nodes can be upgraded (online!), and/or additional nodes can be added. There is no hard limit on how many nodes can be in a given cluster.
Each 'Foundation Pack' can handle as many nodes as you'd like, though for reasons we'll explain in more detail later, it is recommended to run two to four nodes per foundation pack.
Management Layer; Striker Dashboards
The management layer, the Striker dashboards, has no hard limit on how many nodes it can manage. All nodes record their data to the Strikers (to offload processing and storage loads). There is a practical limit to how many nodes can use a given pair of Strikers, but this can be accounted for in the hardware selected for the dashboards.
Nodes
An Anvil! cluster uses one or more nodes, with each node being a pair of matched physical subnodes configured as a single logical unit. The power of a given node is set by you, based on the loads you expect to place on it.
There is no hard limit on how many nodes can exist in an Anvil! cluster. Your servers will be deployed across the nodes and, when you want to add more servers than you currently have resources for, you simply add another node.
Foundation Packs
A foundation pack is the power and ethernet layer that feeds one or more nodes. At its most basic, it consists of three pairs of equipment;
- Two stacked (or VLT-domain'ed) ethernet switches.
- Two switched PDUs (network-switched power bars).
- Two UPSes.
Each UPS feeds one PDU, forming two separate "power rails". The ethernet switches and all subnodes are equipped with redundant PSUs, with one PSU fed from each power rail.
In this way, any component in the foundation pack can fault, and all equipment will continue to have power and ethernet resources available. How many Anvil! node-pairs can be run on a given foundation pack is limited only by the sizing of the selected foundation pack equipment.
Configuration
Note: This is SUPER basic and minimal at this stage. |
Striker Dashboards
Striker dashboards are often described as "cheap and cheerful", generally being a fairly small and inexpensive device, like a Dell Optiplex 3090, Intel NUC, or similar.
You can choose any vendor you wish, but when selecting hardware, be mindful that all Scancore data is stored in PostgreSQL databases running on each dashboard. As such, we recommend an Intel Core i5 or AMD Ryzen 5 class CPU, 8 GiB or more of RAM, a ~250 GiB SSD (mixed use, decent IOPS), and two ethernet ports.
Striker Dashboards host the Striker web interface, and act as a bridge between your IFN network and the Anvil! cluster's BCN management network. As such, they must have a minimum of two ethernet ports.
Node Pairs
An Anvil! Node Pair is made up of two identical physical machines. These two machines act as a single logical unit, providing fault tolerance and automated live migrations of hosted servers to mitigate against predicted hardware faults.
Each sub-node (a single hardware node) must have;
- Redundant PSUs
- Six ethernet ports (eight recommended). If six, use 3x dual-port. If eight, 2x quad port will do.
- Redundant storage; RAID level 1 (mirroring), or level 5 or 6 (striping with parity), with sufficient capacity and IOPS to host the servers that will run on the pair.
- IPMI (out-of-band) management. A dedicated network interface for the BMC is strongly recommended.
- Sufficient CPU core count and core speed for expected hosted servers.
- Sufficient RAM for the expected servers (note that the Anvil! reserves 8 GiB).
Disaster Recovery (DR) Host
Optionally, a "third node" of a sort can be added to a node-pair. This is called a DR Host, and it should be (but doesn't have to be) identical to the node pair hardware it is extending.
A DR (disaster recovery) Host acts as a remotely hosted "third node" that can be manually pressed into service in a situation where both nodes in a node pair are destroyed. A common example would be a DR Host being in another building on a campus installation, or on the far side of the building / vessel.
A DR host can, in theory, be in another city, but storage replication speeds and latency need to be considered. Storage replication within a node pair is synchronous, where replication to DR can be asynchronous. Even so, consideration of storage loads is required to ensure that replication can keep up with the rate of data change.
Foundation Pack Equipment
The Anvil! is, fundamentally, hardware agnostic. That said, the hardware you select must be configured to meet the Anvil! requirements.
As we are hardware agnostic, we've created three linked pages. As we validate hardware ourselves, we will expand hardware-specific configuration guides. If you've configured foundation pack equipment not in the pages below, and you are willing, we would love to add your configuration instructions to our list.
Striker, Node and DR Host Configuration
In UEFI (BIOS), configure;
- Striker Dashboards to power on after power loss in all cases.
- Subnodes to stay powered off after power loss in all cases.
- Any machine with redundant PSUs to balance the load across PSUs (don't use "hot spare" modes where only one PSU is active, carrying the full load).
If using RAID
- If you have two drives, configure RAID level 1 (mirroring)
- If using 3 to 8 drives, configure RAID level 5 (striping with N-1 parity)
- If using 9+ drives, configure RAID level 6 (striping with N-2 parity)
Note that a server on a given node-pair will have its data mirrored, effectively creating a sort of RAID level 11 (mirror of mirrors), 15 (mirror of N-1 stripes) or 16 (mirror of N-2 stripes). This is why we're comfortable pushing RAID level 5 to 8 disks.
Installation of Base OS
For all three machine types (Striker dashboards, node subnodes and DR hosts), begin with a minimal RHEL 9 or AlmaLinux 9 install.
Note: This tutorial assumes an existing understanding of installing RHEL 9. If you are new to RHEL, you can set up a free Red Hat account, and then follow their installation guide. |
Base OS Install
Note: Every effort has been made in the development of the Anvil! to ensure it will work with localisations. However, parsing of command output has been tested with Canadian and American English. As such, it is recommended that you install using one of these localisations. If you use a different localisation, and run into any problems, please let us know and we will try to add support. |
Localisation
Choose your localisation;
Main Install Menu
Once the localisation is selected, you will see the main installation screen.
The order in which things are configured generally doesn't matter, though it's a good habit to configure the network (and host name) before configuring storage. Doing it in this order means that the volume group name will be based on the host name, making it unique within the cluster. This is a minor thing, but in the case of a future data recovery, it can be helpful for identifying the source of the data.
Disable kdump
Disable kdump; this prevents kernel dumps from being collected if the OS crashes, but it means the host will recover faster. If you want to leave kdump enabled, that is fine, but be aware of the slower recovery times. Note that a subnode being fenced will be forced off, and so kernel dumps won't be collected regardless of this configuration.
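If kdump was left enabled during the install, it can also be turned off afterwards; a minimal sketch, assuming the stock 'kdump' service on RHEL 9 / AlmaLinux 9:
# Stop and disable the kdump crash-dump service (skip this if you want crash dumps collected)
systemctl disable --now kdump.service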
Network & Host Name
Note: Networking in the Anvil! cluster can be a little complex at first. If you haven't already, please review Anvil! Networking. |
Set the host name for the machine. It's useful to do this before configuring storage, so that the volume group name includes the host's short host name. This doesn't affect the operation of the Anvil! system, but it can assist with debugging down the road.
This configuration is to make the machine accessible. The network will be reconfigured later during the Striker, Node or DR configuration stage. As such, you can configure only one interface if you prefer. The key is to have a way to access the system to complete the configuration later.
Note: If you want to configure the BCN IP as well, be sure to click on Routes and click to check Use this connection only for resources on its network. |
Time & Date
Note: If your site restricts access to NTP time servers, please be sure to configure 'chrony' to sync with your time servers! It is very important that all machines in the Anvil! cluster have the same concept of time! |
Setting the timezone is very much specific to you and your install. The most important part is that the time zone is set consistently across all machines in the Anvil! cluster.
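For example, a minimal sketch of pointing chrony at local time servers after the OS is installed (ntp1.example.com and ntp2.example.com are placeholders for your site's servers):
# Replace the default 'pool' line in /etc/chrony.conf with your own servers, for example:
#   server ntp1.example.com iburst
#   server ntp2.example.com iburst
# then restart chrony and confirm the sources are reachable and in use:
systemctl restart chronyd
chronyc sources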
Software Selection
All machines can start with a Minimal Install. On Strikers, if you'd prefer to use Server With GUI, that is fine, but it is not needed at this step. The anvil-striker RPM will pull in the graphical interface.
Note: If you select a graphical install on a Striker Dashboard, create a user called admin and set a password for that user. |
Installation Destination
Note: It is strongly suggested to set the host name before configuring storage. |
Note: This is where the installation of a Striker dashboard will differ from an Anvil! Node's sub-node or DR host |
In this example, there is a single hard drive that will be configured. It's entirely valid to have a dedicated OS drive and to use a second drive for hosting servers. If you're planning to use a different storage plan, then you can ignore this stage. The key requirement is that there is unused space sufficiently large to host the servers you plan to run on a given node or DR host.
Striker Dashboards | Anvil! Subnodes and DR Hosts |
---|---|
Click on "Click here to create them automatically". This will create the base storage configuration, which we will adapt.
Striker Dashboards | Anvil! Subnodes and DR Hosts |
---|---|
In all cases, the auto-created /home logical volume will be deleted.
- For Striker dashboards, after deleting /home, assign the freed space to the / partition. To do this, select the / partition, and set the Desired Capacity to some much larger size than is available (like 1TiB), and click on Update Setting. The size will change to the largest valid value.
- For Anvil! subnodes and DR hosts, simply delete the /home partition, and do not give the free space to /. The space freed up by deleting /home will be used later for hosting servers.
Note: For nodes and DR hosts, the default "/" partition will be 50 or 70 GiB. You might want to increase this, especially if you plan to have a lot of different install ISOs for various hosted OSes. You might want to consider 100 GiB or more, depending on your free space and the expected server requirements. |
Striker Dashboards | Anvil! Subnodes and DR Hosts |
---|---|
Optionally;
If you plan to have two or more Storage Groups, for example if your node will have a high-speed storage array and a bulk storage array, you might want to rename the volume group. This can help you keep track of the storage and its purpose.
- For this example, we'll rename the Striker VG from rhel_an-striker01 to an-striker01_vg0. Strikers rarely ever have a second array, so it could have been left alone, but personally I prefer adding the generic _vg0 suffix.
- For the node, we'll rename the VG from rhel_an-a01n01 to an-a01n01_hs0, with the _hs0 being meant to indicate that it's the first high-speed array. An alternative could be _bs0 if the OS is installed on a cheap, bulk storage array and the node will host a NAS.
In the end, the naming convention, if you bother to use one at all, is entirely up to you and your convenience.
Striker Dashboards | Anvil! Subnodes and DR Hosts |
---|---|
From this point forward, the rest of the OS install is the same for all systems.
Optional; Connect to Red Hat
If you are installing RHEL 9, as opposed to Alma Linux 9, you can register the server during installation. If you don't do this, the Anvil! will give you a chance to register the server during the installation process also.
Note: If you already selected the Software Selection, you will need to select it again after registering with Red Hat. |
Root Password
Set the root user password. Be sure to also check Allow root SSH login with password.
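If that box was missed, root SSH password logins can also be enabled after the install; a minimal sketch, assuming the stock OpenSSH configuration on RHEL 9 (the drop-in file name below is just an example):
# Allow root to log in over SSH with a password (drop-in file name is arbitrary)
echo "PermitRootLogin yes" > /etc/ssh/sshd_config.d/90-permit-root.conf
systemctl restart sshd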
Begin Installation
With everything selected, click on Begin Installation. When the install has completed, reboot into the minimal install.
Post OS Install Configuration
Setting up the Alteeve repos is the same for all machine types, but after that, the steps start to diverge depending on which machine type we're setting up in the Anvil! cluster.
Installing the Alteeve Repo
Note: Our repo pulls in a bunch of other packages that will be needed shortly. |
There are two Alteeve repositories that you can install; Community and Enterprise. Which is used is selected after the repository RPM is installed. Let's install the repo RPM, and then we will discuss the differences before we select one.
dnf install https://alteeve.com/an-repo/m3/alteeve-release-latest.noarch.rpm
Updating Subscription Management repositories.
Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) 18 MB/s | 26 MB 00:01
Last metadata expiration check: 0:00:02 ago on Tue 16 Jul 2024 08:05:55 PM.
alteeve-release-latest.noarch.rpm 30 kB/s | 13 kB 00:00
Dependencies resolved.
==================================================================================================================================
Package Architecture Version Repository Size
==================================================================================================================================
Installing:
alteeve-release noarch 0.1-5 @commandline 13 k
Installing dependencies:
annobin x86_64 12.31-2.el9 rhel-9-for-x86_64-appstream-rpms 1.0 M
cpp x86_64 11.4.1-3.el9 rhel-9-for-x86_64-appstream-rpms 11 M
dwz x86_64 0.14-3.el9 rhel-9-for-x86_64-appstream-rpms 130 k
efi-srpm-macros noarch 6-2.el9_0 rhel-9-for-x86_64-appstream-rpms 24 k
fonts-srpm-macros noarch 1:2.0.5-7.el9.1 rhel-9-for-x86_64-appstream-rpms 29 k
gcc x86_64 11.4.1-3.el9 rhel-9-for-x86_64-appstream-rpms 32 M
<...snip...>
rust-srpm-macros noarch 17-4.el9 rhel-9-for-x86_64-appstream-rpms 11 k
sombok x86_64 2.4.0-16.el9 rhel-9-for-x86_64-appstream-rpms 51 k
systemtap-sdt-devel x86_64 5.0-4.el9 rhel-9-for-x86_64-appstream-rpms 77 k
unzip x86_64 6.0-56.el9 rhel-9-for-x86_64-baseos-rpms 186 k
zip x86_64 3.0-35.el9 rhel-9-for-x86_64-baseos-rpms 270 k
Installing weak dependencies:
perl-CPAN-DistnameInfo noarch 0.12-23.el9 rhel-9-for-x86_64-appstream-rpms 17 k
perl-Encode-Locale noarch 1.05-21.el9 rhel-9-for-x86_64-appstream-rpms 21 k
perl-Term-Size-Any noarch 0.002-35.el9 rhel-9-for-x86_64-appstream-rpms 16 k
perl-TermReadKey x86_64 2.38-11.el9 rhel-9-for-x86_64-appstream-rpms 40 k
perl-Unicode-LineBreak x86_64 2019.001-11.el9 rhel-9-for-x86_64-appstream-rpms 129 k
Transaction Summary
==================================================================================================================================
Install 267 Packages
Total size: 116 M
Total download size: 116 M
Installed size: 344 M
Is this ok [y/N]:
Downloading Packages:
(1/266): ghc-srpm-macros-1.5.0-6.el9.noarch.rpm 19 kB/s | 9.0 kB 00:00
(2/266): lua-srpm-macros-1-6.el9.noarch.rpm 22 kB/s | 10 kB 00:00
(3/266): libthai-0.1.28-8.el9.x86_64.rpm 383 kB/s | 211 kB 00:00
(4/266): perl-Algorithm-Diff-1.2010-4.el9.noarch.rpm 473 kB/s | 51 kB 00:00
(5/266): perl-Archive-Zip-1.68-6.el9.noarch.rpm 832 kB/s | 116 kB 00:00
<...snip...>
(263/266): pkgconf-pkg-config-1.7.3-10.el9.x86_64.rpm 156 kB/s | 12 kB 00:00
(264/266): make-4.3-8.el9.x86_64.rpm 4.0 MB/s | 541 kB 00:00
(265/266): zip-3.0-35.el9.x86_64.rpm 1.4 MB/s | 270 kB 00:00
(266/266): gcc-11.4.1-3.el9.x86_64.rpm 13 MB/s | 32 MB 00:02
----------------------------------------------------------------------------------------------------------------------------------
Total 8.2 MB/s | 116 MB 00:14
Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) 3.5 MB/s | 3.6 kB 00:00
Importing GPG key 0xFD431D51:
Userid : "Red Hat, Inc. (release key 2) <security@redhat.com>"
Fingerprint: 567E 347A D004 4ADE 55BA 8A5F 199E 2F91 FD43 1D51
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Is this ok [y/N]:
Key imported successfully
Importing GPG key 0x5A6340B3:
Userid : "Red Hat, Inc. (auxiliary key 3) <security@redhat.com>"
Fingerprint: 7E46 2425 8C40 6535 D56D 6F13 5054 E4A4 5A63 40B3
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Is this ok [y/N]:
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : perl-Digest-1.19-4.el9.noarch 1/267
Installing : perl-Digest-MD5-2.58-4.el9.x86_64 2/267
Installing : perl-FileHandle-2.03-481.el9.noarch 3/267
<...snip...>
rust-srpm-macros-17-4.el9.noarch sombok-2.4.0-16.el9.x86_64
systemtap-sdt-devel-5.0-4.el9.x86_64 unzip-6.0-56.el9.x86_64
zip-3.0-35.el9.x86_64
Complete!
Selecting a Repository
There are two released versions of the Anvil! software, and there are pros and cons to both options;
Community Repo
The Community repository is the free repo that anyone can use. As new builds pass our CI/CD test infrastructure, the versions in this repository are automatically built.
This repository always has the latest and greatest from Alteeve. We use Jenkins and a proprietary test suite to ensure that the quality of the releases is excellent. Of course, Alteeve is a company of humans, and there's always a small chance that a bug could get through. Our free community repository is community supported, and it's our wonderful users who help us improve and refine the Anvil! platform.
Enterprise Repo
The Enterprise repository is the paid-access repository. The releases in the enterprise repo are "cherry picked" by Alteeve, and subjected to more extensive testing and QA. This repo is designed for businesses who want the most stable releases.
Using this repo opens up the option of active monitoring of your Anvil! cluster by Alteeve, also!
If you choose to get the Enterprise repo, please contact us and we will provide you with a custom repository key.
Configuring the Alteeve Repo
To configure the repo, we will use the alteeve-repo-setup program that was just installed.
You can see a full list of options, including the use of --key <uuid> to enable the Enterprise repo. For this tutorial, we will configure the community repo.
alteeve-repo-setup
You have not specified an Enterprise repo key. This will enable the community
repository. We work quite hard to make it as stable as we possibly can, but it
does lead Enterprise.
Proceed? [y/N]:
Writing: [/etc/yum.repos.d/alteeve-anvil.repo]...
Repo: [rhel-9] created successfuly.
RHEL 9 Additional Repos
If you are using RHEL 9 proper, you will now need to enable additional repositories.
On all systems;
subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms
Repository 'codeready-builder-for-rhel-9-x86_64-rpms' is enabled for this system.
This is needed for fencing to work, which Striker uses to reboot subnodes after a power or thermal event.
subscription-manager repos --enable rhel-9-for-x86_64-highavailability-rpms
Repository 'rhel-9-for-x86_64-highavailability-rpms' is enabled for this system.
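If you're using AlmaLinux 9 instead of RHEL 9, there is no subscription-manager; a rough equivalent, assuming the stock AlmaLinux repo IDs 'crb' and 'highavailability', would be:
# Enable the CodeReady-Builder equivalent and the High Availability repos on AlmaLinux 9
# ('dnf config-manager' comes from dnf-plugins-core; install that first if it's missing)
dnf config-manager --set-enabled crb
dnf config-manager --set-enabled highavailability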
Installing Anvil! Packages
This is the step where, from a software perspective, Anvil! cluster systems differentiate to become Striker Dashboards, Anvil! subnodes, and DR hosts. Which a given machine becomes depends on which RPM is installed. The three RPMs that set a machine's role are;
Striker Dashboards: | anvil-striker |
---|---|
Anvil! Subnodes: | anvil-node |
DR Hosts: | anvil-dr |
Striker Dashboards; Installing anvil-striker
Now we're ready to install!
Note: Given the install of the OS was minimal, these RPMs pull in a lot of RPMs. The output below is truncated. |
Now, let's install the RPMs on our systems.
dnf install anvil-striker
Updating Subscription Management repositories.
Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) 16 MB/s | 38 MB 00:02
Red Hat CodeReady Linux Builder for RHEL 9 x86_64 (RPMs) 7.0 MB/s | 8.3 MB 00:01
Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) 2.8 MB/s | 2.2 MB 00:00
Dependencies resolved.
==================================================================================================================================
Package Arch Version Repository Size
==================================================================================================================================
Installing:
anvil-striker noarch 2.92-1.115.66ee.el9 anvil-community-rhel-9 3.6 M
Installing dependencies:
ModemManager-glib x86_64 1.20.2-1.el9 rhel-9-for-x86_64-baseos-rpms 337 k
NetworkManager-initscripts-updown noarch 1:1.46.0-8.el9_4 rhel-9-for-x86_64-baseos-rpms 23 k
SDL2 x86_64 2.26.0-1.el9 rhel-9-for-x86_64-appstream-rpms 683 k
<...snip...>
redhat-backgrounds noarch 90.4-2.el9 rhel-9-for-x86_64-appstream-rpms 5.2 M
telnet x86_64 1:0.17-85.el9 rhel-9-for-x86_64-appstream-rpms 66 k
tracker-miners x86_64 3.1.2-4.el9_3 rhel-9-for-x86_64-appstream-rpms 942 k
Transaction Summary
==================================================================================================================================
Install 705 Packages
Total download size: 536 M
Installed size: 2.0 G
Is this ok [y/N]:
Downloading Packages:
(1/705): bpg-dejavu-sans-fonts-2017.2.005-20.el9.noarch.rpm 575 kB/s | 180 kB 00:00
(2/705): bpg-fonts-common-20120413-20.el9.noarch.rpm 410 kB/s | 20 kB 00:00
(3/705): anvil-core-2.92-1.115.66ee.el9.noarch.rpm 2.2 MB/s | 956 kB 00:00
(4/705): htop-3.2.2-2.el9.x86_64.rpm 1.5 MB/s | 185 kB 00:00
(5/705): anvil-striker-2.92-1.115.66ee.el9.noarch.rpm 7.2 MB/s | 3.6 MB 00:00
<...snip...>
(702/705): fence-agents-vmware-rest-4.10.0-62.el9_4.4.noarch.rpm 246 kB/s | 17 kB 00:00
(703/705): fence-agents-scsi-4.10.0-62.el9_4.4.noarch.rpm 166 kB/s | 22 kB 00:00
(704/705): fence-agents-vmware-soap-4.10.0-62.el9_4.4.noarch.rpm 280 kB/s | 18 kB 00:00
(705/705): fence-agents-wti-4.10.0-62.el9_4.4.noarch.rpm 204 kB/s | 17 kB 00:00
----------------------------------------------------------------------------------------------------------------------------------
Total 12 MB/s | 536 MB 00:43
Anvil Community Repository (rhel-9) 1.6 MB/s | 1.6 kB 00:00
Importing GPG key 0xD548C925:
Userid : "Alteeve's Niche! Inc. repository <support@alteeve.ca>"
Fingerprint: 3082 E979 518A 78DD 9569 CD2E 9D42 AA76 D548 C925
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-Alteeve-Official
Is this ok [y/N]: y
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Running scriptlet: npm-1:8.19.4-1.16.20.2.8.el9_4.x86_64 1/1
Preparing : 1/1
Installing : atk-2.36.0-5.el9.x86_64 1/705
Installing : libtirpc-1.3.3-8.el9_4.x86_64 2/705
Installing : libwayland-client-1.21.0-1.el9.x86_64 3/705
Installing : libpng-2:1.6.37-12.el9.x86_64 4/705
Installing : libjpeg-turbo-2.0.90-7.el9.x86_64 5/705
<...snip...>
xorg-x11-xauth-1:1.1-10.el9.x86_64 xorg-x11-xinit-1.4.0-11.el9.x86_64
xorriso-1.5.4-4.el9.x86_64 yajl-2.1.0-22.el9.x86_64
yum-utils-4.3.0-13.el9.noarch zenity-3.32.0-8.el9.x86_64
zlib-devel-1.2.11-40.el9.x86_64
Complete!
Done!
Anvil! Subnode; Installing anvil-node
dnf install anvil-node
Updating Subscription Management repositories.
Red Hat CodeReady Linux Builder for RHEL 9 x86_64 (RPMs) 6.8 MB/s | 8.3 MB 00:01
Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) 17 MB/s | 38 MB 00:02
Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) 2.1 MB/s | 2.2 MB 00:01
Dependencies resolved.
==================================================================================================================================
Package Arch Version Repository Size
==================================================================================================================================
Installing:
anvil-node noarch 2.92-1.115.66ee.el9 anvil-community-rhel-9 25 k
Installing dependencies:
NetworkManager-initscripts-updown noarch 1:1.46.0-8.el9_4 rhel-9-for-x86_64-baseos-rpms 23 k
SDL2 x86_64 2.26.0-1.el9 rhel-9-for-x86_64-appstream-rpms 683 k
adobe-source-code-pro-fonts noarch 2.030.1.050-12.el9.1 rhel-9-for-x86_64-baseos-rpms 836 k
akmod-drbd x86_64 9.2.8-1.el9 anvil-community-rhel-9 1.7 M
akmods noarch 0.5.7-9.el9 anvil-community-rhel-9 29 k
anvil-core noarch 2.92-1.115.66ee.el9 anvil-community-rhel-9 956 k
<...snip...>
xorriso-1.5.4-4.el9.x86_64
yajl-2.1.0-22.el9.x86_64
yum-utils-4.3.0-13.el9.noarch
zlib-devel-1.2.11-40.el9.x86_64
zstd-1.5.1-2.el9.x86_64
Complete!
Disaster Recovery Host; Installing anvil-dr
dnf install anvil-dr
Updating Subscription Management repositories.
Red Hat CodeReady Linux Builder for RHEL 9 x86_64 (RPMs) 5.1 MB/s | 8.3 MB 00:01
Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) 17 MB/s | 38 MB 00:02
Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) 3.1 MB/s | 2.2 MB 00:00
Dependencies resolved.
==================================================================================================================================
Package Arch Version Repository Size
==================================================================================================================================
Installing:
anvil-dr noarch 2.92-1.115.66ee.el9 anvil-community-rhel-9 8.5 k
Installing dependencies:
NetworkManager-initscripts-updown noarch 1:1.46.0-8.el9_4 rhel-9-for-x86_64-baseos-rpms 23 k
SDL2 x86_64 2.26.0-1.el9 rhel-9-for-x86_64-appstream-rpms 683 k
adobe-source-code-pro-fonts noarch 2.030.1.050-12.el9.1 rhel-9-for-x86_64-appstream-rpms 836 k
akmod-drbd x86_64 9.2.8-1.el9 anvil-community-rhel-9 1.7 M
<...snip...>
xorriso-1.5.4-4.el9.x86_64
yajl-2.1.0-22.el9.x86_64
yum-utils-4.3.0-13.el9.noarch
zlib-devel-1.2.11-40.el9.x86_64
zstd-1.5.1-2.el9.x86_64
Complete!
Configuring the Striker Dashboards
Note: The admin user will automatically be created by anvil-daemon after anvil-striker is installed. You don't need to create this account manually. |
There are no default passwords on Anvil! systems. So the first step is to set the password for the admin user.
passwd admin
Changing password for user admin.
Enter the password you want to use twice.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Now we can switch to the graphical interface!
Striker Dashboard
Note: You will not be able to log into the Striker dashboard until after doing initial configuration via the browser as the admin user has no default password. If you want to use the browser on the Striker itself, please set the admin password first. |
Using a machine connected to the same network as the Striker dashboard, enter the URL http://<striker_ip_address>. This will load the initial Striker configuration page where we'll start configuring the dashboard.
Note: At this time, https:// is not yet supported, so please use http:// from a machine on the local network. |
Fields;
Organization name | This is a descriptive name for the given Anvil! cluster. Generally this is a company, organization or site name. |
---|---|
Domain name | This is the domain name that will be used when setting host names in the cluster. Generally this is the domain of the organization who use the Anvil! cluster. |
Prefix | This is a short, generally 2~5 characters, descriptive prefix for a given Anvil! cluster. It is used as the leading part of a machine's short host name. |
Striker # | Most Anvil! clusters have two Striker dashboards. This sets the sequence number, so the first is '1' and the second is '2'. If you have multiple Anvil! clusters, the first Striker on the second Anvil! cluster would be '3', etc. |
Host name | This is the host name for this striker. Generally it's in the format of <prefix>-striker0<sequence>.<domain>. |
Admin Password | This is the cluster's password, and should generally be the same password set when you set the password for the 'admin' user. |
Confirm Password | Re-enter the password to confirm the password you entered doesn't have a typo. |
Note: It is strongly recommended that you keep the "Organization Name", "Domain Name", "Prefix" and "Admin Password" the same on Strikers that will be peered together. |
Network Configuration
One of the "trickier" parts of configuring an Anvil! machine is figuring out which physical network interfaces are to be used for which job.
There are four networks used in Anvil! clusters. If you're not familiar with the networks, you may want to read this first;
The middle section shows the existing network interfaces, enp1s0 and enp2s0 in the example above. In the screenshot above, both interfaces show their State as "On". This shows that both interfaces are physically connected to a network switch.
Which physical interface you want to use for a given task will be up to you, but above is an article explaining how Alteeve does it, and why.
In this example, the Striker has two interfaces, one plugged into the BCN, and one plugged into the IFN. To find which one is which, we'll unplug the cable going to the interface connected to the BCN, and we'll watch to see which interface state changes to "Down".
When we unplug the network cable going to the BCN interface, we can see that the enp2s0 interface has changed to show that it is "Down". Now that we know which interface is used to connect to the BCN, we can reconnect the network cable.
To configure the enp2s0 interface for use as the "Back-Channel Network 1" interface, click or press on the lines to the left of the interface name, and drag it into the Link 1 box.
This striker dashboard only has two network interfaces, so we know that obviously enp1s0 must be the IFN interface.
Note: If you want redundancy on the Striker dashboard, you can use four interfaces, and you will see two links per network. Unplug the cables connected to the first switch, and they become Link 1. The interface for the second switch will go to Link 2. Configuring bonds this way will be covered when we configure subnodes. |
Once you're sure that the IPs you want are set, click on Initialize. You will be shown the values about to be applied; if you're happy with them, click Initialize again and the new configuration will be sent.
Note: Please be patient, the system will take a minute to pick up the job, reconfigure the network and reboot to apply the new config. |
After a moment, the Striker will reboot!
First Login
After the Striker dashboard reboots, you will now be able to log into the Striker dashboard's desktop if you wish. There is no need to use the desktop though, as almost all tasks can be done using the web interface or the command line tools via an ssh session.
Once the Striker dashboard has rebooted, reload http://<striker_ip_address> to get the login page.
Note: There is no default password on the Anvil! cluster! If you lose a password, please see the Red Hat article on password recovery. |
Login with the user name admin and use the password you set earlier.
The main page looks really spartan at first, but later this is where you'll find all the servers hosted on your Anvil! node(s).
Note: Now repeat this process for the other Striker dashboard before proceeding. |
Peering Striker Dashboards
Note: For redundancy, we recommend having two Striker dashboards. Repeat the steps above to configure the other dashboard. Once both are configured, we will peer them together. Once peered, you can use either dashboard, they will be kept in sync and one will always be redundant for the other. |
With two configured Strikers ready, let's peer them. For this example, we'll be working on an-striker01, and peering with an-striker02, which has the BCN IP address 10.201.4.2.
Click on the Striker logo at the top left, which opens the Striker menu.
Click on the Configure menu item.
Here we see the menu for configuring (or reconfiguring) the Striker dashboard.
Click on the title of Configure Striker Peers to expand the peer menu.
Click on the "+" icon on the top-right to open the new peer menu.
In almost all cases, the Striker being peered is on the same BCN as the other. As such, we'll connect to the peer striker's BCN IP address, 10.201.4.2 in this example.
Once filled out, click Add and the job to peer will be saved. Click on the red 'X' at the top-right to return to the main form.
Give it a minute or two, and the peering should be complete.
After a couple minutes, you will see the peer striker appear in the list. At this point, the two Strikers now operate as one. Should one ever fail, the other can be used in its place.
Configuring the First Node
An Anvil! node is a pair of subnodes acting as one machine, providing full redundancy to hosted servers.
The process of configuring a node is;
- Initialize each subnode
- Configure backing UPSes/PDUs
- Create an "Install Manifest"
- Run the Manifest
Initializing Subnodes
To initialize a subnode (or DR host), we need to enter its current IP address and current root password. This is used to allow Striker to log into it and update the subnode's configuration to use the databases on the Striker dashboards. Once this is done, all further interaction with the subnodes will be via the database.
Click on the Striker logo to open the menu, and then click on Anvil.
Enter the IP address you set (or that was set by DHCP) when installing an-a01n01, and enter the root user password you set. There is no default password. Then click on "Test access".
Note: If you're not sure what the IP of the subnode is, you can run the command "ip -c --brief addr" to get a list of the current IPs. |
The IP address is the one that you assigned during the initial install of the OS on the subnode. If you left it as DHCP, you can check to see what the IP address is using the ip addr list command on the target subnode. In the example above, we see the IP is set to 192.168.10.1.
Note: There are two options below, Subnode and Disaster Recovery (DR) Host. We're working on creating an Anvil! node, which is a matched pair of subnodes, so we'll select that. We'll look at DR hosts later. |
The current host name of the target is shown, to help you confirm that you connected to the host you expected. You can change it here if you want, but generally that's not recommended unless you're correcting a bad host name from the initial OS install.
Lastly, if you want to setup an Alteeve Enterprise key and didn't do so during the OS install, you can set it here.
We're happy with this, so click on Prepare Host.
The plan to initialize the host is presented; if you're happy, click on Prepare. The job will be saved, and you can then repeat this until your subnodes are initialized.
You'll see that the job has been saved! Click on the "X" to close this and return to the Hosts menu.
Note: It could take a minute or three for the subnode to connect to the database. Please be patient. |
After a few moments, the subnode will appear.
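If you're curious, you can also watch the initialization from the subnode itself over SSH; this assumes the daemon's systemd unit is named 'anvil-daemon', so adjust if it differs on your install:
# Follow anvil-daemon's journal while the subnode connects to the Striker databases
journalctl -u anvil-daemon -f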
Repeat this process to initialize the remaining subnode(s) and DR host(s).
Once you've got the subnodes (and DR host, if applicable) initialized, we can move on to mapping the network.
Network Mapping
With the two subnodes and a DR host initialized, we can now configure their networks!
Full Anvil! subnodes must use redundant networks, specifically active-backup bonds. They must have network connections in the Back-Channel, Internet-Facing and Storage Networks. In our example here, we're also going to create a connection in the Migration Network, as we've got 8 interfaces to work with.
Note: The network mapping function provides a way to map physical network ports to MAC addresses. This works by unplugging a cable, and seeing the corresponding network interface name go offline. This is done over the network, and can sometimes fail when the interface being unplugged is itself carrying the connection in use. Another way to do this is to run 'anvil-monitor-network' at the console of the node while unplugging and plugging in network interfaces. |
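If you prefer working at the subnode's console, in addition to 'anvil-monitor-network' mentioned above, a simple way to identify which interface a cable belongs to is to watch the link states change as you unplug and replug cables:
# Watch link states at the console; the interface that flips to DOWN is the one just unplugged
watch -n1 "ip -br link"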
With our subnodes initialized, we can now click on Prepare Network. This opens up a list of subnodes and DR hosts. In our case, there are three unconfigured hosts; an-a01n01, an-a01n02 and the DR host an-a01dr01.
We'll start with the subnode an-a01n01. Click on its name on the left to select it for configuration.
This works the same way as how we configured the Striker, except for two things. First, and most obvious, there are a lot more interfaces to work with.
Second, and more importantly, when we unplug the cable that the Striker dashboard is using to talk to the node, we will lose communication for a while. If that happens, you will not see the interface go down. When this happens, plug the interface back in, and move on to the next interface. When you're done mapping the network, only one interface and one slot will be left.
Note: If you're having trouble with this step, it's possible that one interface is being used to connect to the Striker and another is used between the Striker and the subnode. If so, then two interfaces will not show as down when unplugged. If this is happening to you, try mapping from the subnode's console using 'anvil-monitor-network', as described in the note above. |
Let's start! Here, the network interface we want to make "BCN 1 - Link 1" is unplugged. With the cable unplugged, we can see that the enp3s0 interface is down. So we know that is the link we want. Click and drag it to the "Back-Channel Network 1" -> "Link 1" slot.
Next we see that enp4s0 is unplugged. So now we can drag that to the "Back-Channel Network 1" -> "Link 2" slot.
Now, we ran into the issue mentioned above. When we unplugged the cable to the interface we want to make "Internet-Facing Network, Link 1", we lost connection to the subnode, and could not see the interface go offline. For now, we'll skip it, plug it back in, and unplug the interface we want to be "Internet-Facing Network, Link 2".
Next we see that enp2s0 is unplugged. So now we can drag that to the "Internet-Facing Network 1" -> "Link 2" slot.
Next, we'll unplug the cable going to the network interface we want to use as "Storage Network, Link 1". We see that enp5s0 is down, so we'll drag that to the "Storage Network 1" -> "Link 1" slot.
Next, we'll unplug the cable going to the network interface we want to use as "Storage Network, Link 2". We see that enp6s0 is down, so we'll drag that to the "Storage Network 1" -> "Link 2" slot.
Note: In the near future, the Migration Network panel will auto-display when 8+ interfaces exist. |
We need to add the "Migration Network" panel; to do that, click on the "+" icon at the lower left. Once the panel is added, click on "Network Name" and select "Migration Network 1".
Now we can get back to mapping, and here we've unplugged the network cable going to the "Migration Network, Link 1" and we see enp7s0 is down. So we can drag that into the "Migration Network 1" -> "Link 1" slot.
Lastly, we unplugged the network cable going to the "Migration Network, Link 2" and we see enp8s0 is down. So we can drag that into the "Migration Network 1" -> "Link 2" slot.
With that, the only interface still not mapped is enp1s0, so we can confirm that it is the interface we couldn't map earlier, and it can be dragged into the "Internet-Facing Network 1" -> "Link 1" slot.
That's it, all our network interfaces have now been identified!
Assigning IP Addresses
The last step is to assign IP addresses. The subnet prefixes for all but the IFN are set, so you can add the IPs you want. Given this is Anvil! node 1, the IPs we want to set are:
host | Back-Channel Network 1 | Internet-Facing Network 1 | Storage Network 1 | Migration Network 1 |
---|---|---|---|---|
an-a01n01 | 10.201.10.1/16 | 192.168.10.1/16 | 10.101.10.1/16 | 10.199.10.1/16 |
an-a01n02 | 10.201.10.2/16 | 192.168.10.2/16 | 10.101.10.2/16 | 10.199.10.2/16 |
an-a01dr01 | 10.201.10.3/16 | 192.168.10.3/16 | 10.101.10.3/16 | n/a |
Note: You don't need to specify which network is the default gateway. The IP you enter will be matched against the IP and subnet masks automatically to determine which network the gateway belongs to. |
Note: You can specify multiple DNS servers using a comma to separate them, like 8.8.8.8,8.8.4.4. The order they're written will become the order they're searched. |
Enter all the IP addresses and their subnet masks, the gateway and the DNS servers to use.
Double-check everything, and when you're happy, click on Prepare Network. You will be given a summary of what will be done.
After a few moments, you should see the mapped machine, an-a01n01 in this case, reboot.
Warning: With so many interfaces, Network Manager can sometimes take a couple of minutes after booting to bring up all of the bonds and bridges. If you don't see all of the interfaces (or they're not all up), please wait. The 'anvil-daemon' also checks for networks that didn't start at boot and tries to bring them up. |
You do not need to log into it or do anything else at this point. If you're curious or want to confirm though, you can log into an-a01n01 and run nmcli connection show and / or ip -c --brief addr to see the new configuration.
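For example, over SSH to the freshly configured subnode:
# Show the bonds, bridges and IP addresses created by the network configuration job
nmcli connection show
ip -c --brief addr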
Note: Map the network on the other subnode(s) and DR host(s). Once all are mapped, we're ready to move on! |
Adding Fence Devices
Before we begin, lets review why fence devices are so important. Here's an article talking about fencing and why it's important;
If you've worked with High-Availability clusters before, you may have heard that you need a third "quorum" node for vote tie-breaking. We humbly argue this is wrong, and explain why here;
In our example Anvil! cluster, the subnodes have IPMI BMCs, and those will be the primary fence method. IPMI BMCs are special in that the Anvil! cluster can auto-detect and auto-configure them. So even though we're going to configure fence devices here, the IPMI devices do not need to be configured manually.
Our secondary method of fencing uses APC-branded PDUs. These are basically smart power bars which, via a network connection, can be logged into to have individual outlets (ports) powered down or up on command. So, if for some reason the IPMI fence fails, the backup is to log into a pair of PDUs (one per PSU on the target subnode) and cut the power. This way, we can sever all power to the target subnode, forcing it off.
Warning: PDUs do not have a mechanism for confirming that down-stream devices are off. So it's critical that the subnodes are actually plugged into the given PDU and outlet that we configure them to be on! |
At this point, we're not going to configure nodes and ports. The purpose of this stage of the configuration is simply to define which PDUs exist. So let's add a pair.
We're going to be configuring a pair of APC AP7900B PDUs.
Clicking on the Manage Fence Devices section of the top bar will open the Fence menu.
Click on the + icon on the right to open the new fence device menu.
Note: There are two fence agents for APC PDUs. Be sure to select 'fence_apc_snmp'! |
Every fence agent has a series of parameters that are required, and some that are optional. The required fields are displayed first, and the optional fields can be displayed by clicking on Optional Parameters to expand the list of those parameters.
In our example, we'll first add an-pdu01 as the name, we will leave action as reboot, and we'll set the IP address to 10.201.2.1.
We do not need to set any of the optional parameters, but above we show what the menu looks like. Note how mousing over a field will display its help (as provided by the fence agent itself).
Click on Add and a summary of what you entered will be shown. Check it over, and if you're happy, click on Add again to save it.
The new device is saved!
Go back up and change the name and IP, and we can save the second PDU.
As before, review and confirm the new values are correct.
And saved! Now click Cancel to close the menu.
Now we can see the two new fence devices that we added!
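If you'd like to confirm a PDU is reachable before it's ever needed, the fence agent can be run by hand from a subnode. A minimal sketch (the outlet number is a placeholder, and any SNMP community or login options your PDU requires must be added):
# Ask an-pdu01 for the state of outlet 3 (adjust the IP, outlet and SNMP options to your site)
fence_apc_snmp --ip 10.201.2.1 --plug 3 --action status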
Adding UPSes
The UPSes provide emergency power in case mains power is lost. Just as importantly, Scancore uses the power status to monitor for power problems before they turn into power outages.
We're going to be configuring a pair of APC SMT1500RM2U UPSes.
Clicking on the Manage UPSes tab brings up the UPSes menu.
Click on + to add a new UPS.
Note: At this time, only APC brand UPSes are supported. If you would like a certain brand to be supported, please contact us and we'll try to add support. |
Select the APC and then enter the name and IP address. For our first UPS, we'll use the name an-ups01 and the IP address 10.201.3.1.
Click on Add and confirm that what you entered is accurate.
Click on Add again to save the new UPS.
Edit the name and IP for the second UPS.
Confirm the data for the second UPS is accurate, then click Add again to save.
The second UPS has been saved!
Press Cancel to close the form and you will see your new UPSes!
Creating an 'Install Manifest'
Before we start, we need to discuss what an "Install Manifest" is.
In the simplest form, an Install Manifest is a "recipe" that defines how a node is to be configured. It links specific machines to be subnodes, sets their host names and IPs, and links subnodes to fence devices and UPSes.
So let's start.
Click on the "Install Manifest" tab to open the manifest menu.
The manifest menu has several fields. Let's look at them;
Fields;
Prefix | This is a prefix, 1 to 5 characters long, used to identify where the node will be used. It's a short, descriptive string attached to the start of short host names. |
---|---|
Domain Name | This is the domain name used to describe the owner / operator of the node. |
Sequence | Starting at '1', this indicates the node's sequence number. This will be used to determine the IP addresses assigned to the subnodes. See Anvil! Networking for how this works in detail. |
Networks Group | The first three network fields define the subnets, and are not where actual IPs are assigned. Below the default three networks is a + button to add additional networks. |
DNS | This is where the DNS IP(s) can be assigned. You can specify multiple by using a comma-separated list, with no spaces (e.g. 8.8.8.8,8.8.4.4). |
NTP | If you run your own NTP server, particularly if you restrict access to outside time servers, define your NTP servers here, as a comma-separated list if there are multiple. |
MTU | If you are certain that your network supports MTUs over 1500, you can set the desired MTU here. NOTE: This is not well tested yet, and its use is not yet advised. It is planned to add per-network MTU support later. See Issue #435. |
Subnode 1 and 2 | This section is where you can set the IP address, a selection of which UPSes power the subnodes, and where needed, fence device ports used to isolate the subnode in an emergency. |
Below this are the networks being used on this Anvil! node. The BCN, SN and IFN are always defined. In our case, we also have a MN, so we'll click on the + to add it.
Note: In our example, we're using APC PDUs. It's critical that the ports entered into the fence device port correspond accurately to the numbered outlets on the PDUs! The ports specified must map to the outlets powering the subnode. |
Click Add, and you'll see a summary of the manifest. Review the summary carefully.
If you're happy, click Add. You'll see the saved message, and then you can click on the red X to close the menu.
Now we see the new an-anvil-01 install manifest!
Running an Install Manifest
Now we're ready to create our first Anvil! node!
To build an Anvil! node, we use the manifest to take the config we want, and apply them to the physical subnodes we prepared earlier. This will take those two unassigned subnodes and tie them together into an Anvil! node.
Click on the play icon to the left of the manifest you want to run. In our case, there's only an-anvil-01. This will open the manifest menu.
Fields;
Description | This is a free-form field where you can describe this node. |
---|---|
(Confirm) Password | Enter the password you want to set for this node. In general, you should use the same password on all nodes. |
Subnode 1/2 | There are two drop-down boxes where you can choose which subnodes will be in this node. All machines initialized as subnodes (that is, have anvil-node installed) will be available. If you are building a new node, as we are here, select the two newly initialized subnodes. If you're rebuilding a node, select the two subnodes already in the node to rebuild. If you're replacing a subnode, select the surviving subnode and the replacement subnode. |
Note: When running a manifest to rebuild a node after a subnode failure, the surviving subnode with servers will not be modified, save for updating access information for the replacement subnode. This is designed to be safe to run on a production node. That said, understanding nothing is perfect, when a maintenance window is available, rebuilding during that window is recommended. |
This form is pretty simple, because most of our work was done in previous steps. Simply enter a description, the password to set, and select the two subnodes we prepared earlier.
Verify that you're about to run the right manifest against the right subnodes. When done, click Run.
The job to run the manifest and assemble the two subnodes into a node is now saved. It will take a few minutes for the job to complete, but it should now be underway.
Something we've not looked at yet, but that is useful here, is the job progress icon. At the top-right, there's a clipboard icon with a blue dot. Clicking on it shows the progress of running jobs. This shows all jobs that are running, or that finished in the last five minutes.
In the example above, the join anvil node 1 and join anvil node 2 are already complete!
We're now ready to provision servers!
The New Anvil! Node Tile
When the manifest completes the new node build, it will show up as a tile on the dashboard. Return to the Dashboard and within a few minutes, the new node tile will appear.
You only need one node to start provisioning servers. If you add more nodes in the future, they will appear as additional tiles. Generally you do not need to interact with the nodes, as they should simply add resources to those available for new or existing servers.
A Note on Server Allocation
If you have two or more nodes, where a new server will run depends on a few criteria;
- If the requested RAM or disk space for the new server is more than what is available on any given node, it won't be run there.
- If your new server could run on two or more nodes, you can manually choose which it will run on.
- If you do not choose which node to run a server on, the node with the least existing servers will be chosen to spread the load out.
Reclaiming Free Space on Subnodes and DR Hosts
Note: This function will be moved into the web interface. For now, this requires using the command line tool. |
In the node tile, we see that Storage shows Total free 0 B.
If you recall, during the OS installation stage we deleted the /home logical volume. When we did this, the installer shrank the volume group to be just big enough for the requested partitions. Now we need to reclaim this space.
Note: Manipulating the partition table of a server always includes some risk. By default, the Anvil! doesn't automatically reclaim the space for this reason. |
Given these subnodes are brand new and there is no data on the machines yet, we can safely grow the backing disk. If, however, your subnodes do have data, be sure you've got a good backup before proceeding. This is a low-risk step, but not a zero-risk step.
We will need to repeat this step on both subnodes and on the DR host. For this tutorial, we'll show the process on an-a01n01.
an-a01n01 |
---|
anvil-manage-host --auto-grow-pv
Searching for free space to grow PVs into.
[ Warning ] - Auto-growing the LVM physical volumes could, in some case, leave the system unbootable.
The steps that will taken are;
- LVM Physical volumes will be found.
- For each found, 'parted' is used to see if there is > 1GiB of free space available.
- If so, and if no other partitions are after it, it will be grown to use the free space.
- The PV itself will then be resized to use the new space
This is generally used just after initializing a new subnode or DR host. If this host has real data
on it, please proceed with caution.
The partition table will be backed up, and if the partition resize fails, the partition table will be
reloaded automatically. If this host has real data, ensure a complete backup is available before
proceeding.
Proceed? [y/N]
Confirm; y
Thank you, proceeding.
Enabling maintenance mode.
Found: [162.67 GiB] free space after the PV partition: [/dev/vda:3]! Will grow the partition to use the free space.
- [ Note ] - The original partition table for: [/dev/vda] has been saved to: [/tmp/vda.partition_table_backup]
If anything goes wrong, we will attempt to recover automatically. If needed, you can try
recovering with: [/usr/sbin/sfdisk /dev/vda < /tmp/vda.partition_table_backup --force]
The partition: [/dev/vda3] appears to have been grown successfully. The new partition scheme is:
====
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 268435456000B
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
17408B 1048575B 1031168B Free Space
1 1048576B 630194175B 629145600B fat32 EFI System Partition boot, esp
2 630194176B 1703935999B 1073741824B xfs
3 1703936000B 268435439103B 266731503104B lvm
====
The resize appears to have been successful. The physical volume: [/dev/vda3] details are now:
====
--- Physical volume ---
PV Name /dev/vda3
VG Name an-a01n01_hs0
PV Size 248.41 GiB / not usable 1.98 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 63593
Free PE 41645
Allocated PE 21948
PV UUID tGRZxV-i1e1-k7M4-0gRl-Qb6Q-QP9g-cFvL9J
====
The physical volume: [/dev/vda3] has been resized!
Disabling maintenance mode.
|
Repeat this on the other subnode(s) and DR host(s). When done, after about a minute, Scancore will update the available free space. This will cause the tile to update showing the space now available for servers.
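To confirm the space was reclaimed, the standard LVM tools can be checked on each machine; for example:
# Confirm the physical volume and volume group now show the reclaimed free space
pvs
vgs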
Provision Servers
The fundamental goal has been to build servers, and now with our first node built, we're ready to do so!
Uploading Install Media
Before we can build our first server, we need to upload the install media the OS will install from.
The vast majority of Anvil! users run Windows, so we're going to install a Windows Server 2022 machine. For this, we'll also need the latest virtio drivers.
Getting the Windows Server 2022 ISO (DVD image) will require purchasing it, or downloading an evaluation version, from Microsoft.
The latest stable virtio drivers can be downloaded from the virtio-win project; version 0.1.240 was the latest at the time of writing.
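As an example, the stable virtio driver ISO is usually available directly from the Fedora virtio-win project; the URL below is assumed, so check the project page for the current location:
# Download the stable virtio-win driver ISO (URL assumed; verify against the virtio-win project page)
curl -LO https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso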
With these two files downloaded on your local computer, we're ready to upload them to the cluster.
Click on the Striker logo to open the menu, and then click on the Files option.
Click on the + on the right to upload the first file.
We're also going to want the latest virtio drivers. To improve performance of Windows servers, the hypervisor will emulate hardware that is designed for the best performance in a virtual environment. Specifically, it emulates a storage controller (like a SCSI or RAID controller) called virtio-block, and a network controller called virtio-net.
Given that the storage controller is emulated, and that the virtio drivers are not on the Windows install ISO, we'll need to provide the driver during the OS install. So we'll also upload the virtio-win-0.1.240.iso file at the same time.
Clicking on 'Browse' opens your operating system's file browser. Use it to find the Windows ISO you will upload. Click on the first (or only) file you want to upload, then press and hold the 'ctrl' key and click to select any additional files. Once all are selected, click on 'Open' (or your OS's selection button) to begin the upload.
Once you select the files to upload, you will see what's going to be uploaded. If you're happy, click Upload.
As the file uploads, you'll see the progress. How long this takes depends on the file size and the bandwidth between your computer and the Striker dashboard.
The progress bar changes colour when the file is finished uploading. When the files are done uploading, click on the 'X' to close the menu.
Note: After the upload completes, it can take a few minutes for the file to appear. |
This is because the md5sum of each uploaded file is calculated first. How long this takes depends on the size of the file and the speed of the dashboard's processor. Once calculated, the file is added to the database and starts syncing out to the other machines in the Anvil! cluster.
So please be patient.
Once the sums are calculated, you will see the files appear in the file list. Repeat if needed to upload the ISOs of other OSes you plan to install.
Provisioning Servers
Now, finally, we're at the most fun part!
Building 'srv01-primary-dc'
Let's build our first server. It won't actually do anything, but let's pretend it's going to be an Active Directory server.
Click on the Striker logo and select Dashboard.
At this point, there's nothing to see yet, so the dashboard is pretty bare. Click on the + on the right of the search bar to start provisioning (building) the first server.
This is where you get to decide what resources are allocated to your new server.
Fields;
Server name | This is the name of the server. It can be any string of letters, numbers, hyphens and underscore characters. The naming format is entirely up to you, but we generally use 'srvXX-<description>', and we will use that here. This works for us because the srvXX- prefix is incremented with each new server, so if a new server replaces an old one, the descriptive part can stay the same without conflict. |
---|---|
CPU Cores | This is how many CPU cores you want to allocate to this server. Please see the note below! |
Memory | This is the amount of RAM to allocate to this server |
Disk Size | This is the size of the (first) drive on the server. If you want a second drive, you can add it after the server has been created. |
Storage Group | This allows you to choose which storage group to use when provisioning this new server. In most cases, there is only one storage group. In some cases, a node might have two storage arrays; for example, one that is smaller but higher-performance, and another that is larger but slower. |
Install ISO | This is the ISO uploaded earlier to use for the OS installation. |
Driver ISO | This is the optional driver disk needed to complete the OS install. Generally this is not needed for Linux servers, and is required for the virtio drivers for Windows servers. |
Anvil Node | If your Anvil! cluster has multiple nodes, this allows you to manually choose which node will host the new server. |
Optimize for OS | Important: This gives you the chance to optimize the hypervisor, that is the component that emulates the hardware the server will run on, for the OS you plan to install. Selecting this properly can significantly help (or hurt) the performance of your server. |
Warning: The Anvil! does not allow over-provisioning of resources, except for the CPU core count. This means that it is up to you to be mindful of how much CPU resources servers can consume. If too many servers place too high a load on the CPU cores, it's possible (though unlikely) for the back-end cluster stack to start to time out, leading to subnode ejection. We recommend allocating the fewest number of cores you think a server will need, and increasing the number as servers prove the need for them. Note also that the number of available cores is the sum of real cores plus hyperthreaded cores. It is strongly advised NOT to allocate more than N-2 of the host node's real cores to a single server. That is to say, if your nodes have 16 real cores, the recommended core limit for a single VM is 14. This way, if something causes that server to consume all available CPU power, some CPU will be available for the host and other servers. |
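If you're not sure how many real cores a node has, a quick check on either subnode shows the socket, core and thread counts (this is plain lscpu, nothing Anvil!-specific). For example, 2 sockets x 8 cores per socket gives 16 real cores, so the suggested per-server limit would be 14.
====
# On a subnode, show socket, core and thread counts.
lscpu | grep -E '^(Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core|CPU\(s\))'
====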
For our first server, we're going to create a server called srv01-primary-dc. We'll give it 4 cores, 16 GiB of RAM and 100 GiB of disk space. Once you're happy with your selection, press Provision.
You will see a summary of the server that is about to be created. Confirm that all is well, and then click on Provision to confirm.
The job has been saved.
Note: It can take a minute or two before the server appears on the dashboard. Please be patient while the server is provisioned. |
It will take a minute or two for the node to prepare and provision the server, after which it will show up on the main dashboard page.
"Ding!"
The server is ready!
Click on the server and it will open in-browser console access to the new server.
This access is just like having a monitor plugged into a real physical machine. You are essentially using a keyboard and mouse plugged into the server. No network connection is required on the guest (as there can't be at this stage of the OS install), and you will be able to watch the full boot up and shutdown sequence of the server, without any client being installed.
It's not needed yet, but it's important to show that you can click on the keyboard icon on the top-right to send special key combinations, like "<ctrl> + <alt> + <del>", to the guest. This will be needed later to log in after the OS is installed.
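If you prefer the command line, the same key combination can also be sent with standard libvirt tools from the subnode currently hosting the server. This is a sketch only, and it assumes the libvirt domain name matches the server name shown on the dashboard.
====
# On the hosting subnode, find the domain, then send <ctrl>+<alt>+<del> to it.
virsh list
virsh send-key srv01-primary-dc KEY_LEFTCTRL KEY_LEFTALT KEY_DELETE
====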
The button bar has five options. From left to right, they are;
Full Screen | This makes the server's display full screen. Press <esc> to exit full screen. | |
---|---|---|
Keyboard | Send special key commands. | |
Power Control | This provides the ability to power off, power on and force power off a server. | |
Dashboard | Return to the dashboard | |
Server Manager | Not implemented yet; this will take you to the server manager. In a future release, this will be the default menu shown when selecting a server from the dashboard. It will also be where you can manipulate a server's name, system resource allocation and storage configuration. For now, these features are available via the anvil-rename-server, anvil-manage-server-system, and anvil-manage-server-storage tools. Please see their man pages for usage. |
Note: From this point on, we're doing a standard Windows Server OS install. We're not showing all the steps, but we will show key steps. |
Recall that we included the "virtio" driver disk? This is the step where that comes in. For optimal performance, the hypervisor creates a "virtio-block" storage controller that is optimized for performance in a virtual environment. Microsoft doesn't include the driver for this natively, so when we get to the storage section, it initially sees no hard drive.
Click on the "Load Driver" button on the lower left.
Click on "Browse".
Click to expand "CD Drive (E:) virtio-win-0.1.240" drive.
Note: This example is for Windows Server 2022; if you're installing a different version, choose the directory that best applies. Note that "amd64" really just means "64-bit". Use that regardless of the make of your physical CPU if you're installing a 64-bit Windows OS, as most are these days. |
Select the "E:\amd64\2k22". and then click "OK".
The "Red Hat VirtIO SCSI controller (E:\amd64\2k22\viostor.inf)" option will appear, and this is the storage controller driver we want. Click on "Next" on the lower-right corner.
The driver will load, and after a moment, the "Drive 0 Unallocated Space" will appear. At this point, you can proceed with the OS install as you normally would.
This screenshot isn't particularly important to the OS install process, but it shows how you're able to see the server even during the boot process. It helps show that we're talking to the (virtual) video card, not any software on the guest.
Now we're at the login screen! Click on the keyboard icon we mentioned earlier, and choose "<ctrl> + <alt> + <del>" to send that key combination and bring up the login prompt.
Warning: This step is very important! Windows will ignore power button events when it thinks it has powered down the display. It does this to prevent someone accidentally powering off a computer that the user thinks is already off. This blocks Scancore's ability to gracefully shut down the server when needed! |
Windows, by default, turns off the power to the display (monitor) after ten minutes. This is a problem as Windows will then ignore power button events. The Anvil! has no agents that run inside the guest (we treat your servers as black boxes), so the only way we can request a power off is by (virtually) pressing the power button.
It is very important that Windows shuts down when the power button is pressed!
Imagine the scenario where power has been lost, and the UPSes powering the node the server is on are about to die. Scancore will press the power button to gracefully shut the server down. If Windows ignores this, there's nothing more the Anvil! can do, and the node will stay up and running until the power runs out, resulting in a hard shutdown.
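The display timeout and power button behaviour can be changed from the Control Panel's Power Options, or from an elevated command prompt inside the guest. The commands below are a minimal sketch using Windows' standard powercfg aliases; adjust them to your own policies.
====
:: Never turn off the display while on AC power (the value is in minutes; 0 = never).
powercfg /change monitor-timeout-ac 0
:: Make the power button perform a shut down (3 = Shut down), then re-apply the scheme.
powercfg /setacvalueindex SCHEME_CURRENT SUB_BUTTONS PBUTTONACTION 3
powercfg /setactive SCHEME_CURRENT
====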
Optional Windows Install Steps
From here on, the steps shown are optional. Depending on your needs and policies, you may well choose to only do some of the things here. Consider this section a list of suggestions.
Depending on your needs, you may wish to select the High Performance power configuration. Of course, this will consume more electricity and generate more heat, but it's good to know this is an option for servers that are more performance-oriented.
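If you do want the High Performance plan, it can also be selected from the same elevated prompt; SCHEME_MIN is Windows' built-in alias for that plan.
====
:: Switch to the built-in High Performance power plan.
powercfg /setactive SCHEME_MIN
====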
If you browse to E:\virtio-win-gt-x64 and run it, you will be able to install additional drivers. In particular, this will allow you to change the desktop to a higher resolution.
Installing the additional virtio driver pack.
If you browse to E:\virtio-win-guest-tools and run it, you will be able to install the windows guest tools.
And that's it! From this point on, proceed with your Windows server setup as you normally would.
Administrative Tasks
This section covers common tasks.
Configuring Alerts
The Anvil! is designed to run without humans, and as such, Scancore's primary purpose is to make its own decisions. Secondarily though, it is also an alert system. The way alerts are delivered is by email (local delivery/relay for offline systems works fine).
Alerts are configured in three steps;
- Configure the mail server to send emails to.
- Configure alert recipients
- Optional; Configure "Alert Overrides"
Configure Mail Servers
The first step is to configure where alert emails should be delivered. If you configure multiple mail servers, they will be cycled through as needed; if the active mail server doesn't respond, the Anvil! will reconfigure to use the next one, and when all have been tried, it loops back to the first in the list. If no mail server can be reached, alerts will sit in a queue and be sent once one starts working again.
Click on the Striker logo on the top left to open the menu, then click on 'Mail'.
There are two sections, 'Manage mail servers' and 'Manage mail recipients'. We'll start with 'Manage mail servers', so click on the '+' to add the first mail server.
The details to enter here will depend on the mail server you plan to deliver email to. In our case, we'll set up our alert.alteeve.com mail server.
When you click 'Add', it will ask you to confirm that the mail server is configured correctly.
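Before saving, it can be worth confirming that the Striker dashboards can actually reach the mail server. A quick check from a Striker's command line is sketched below; the host name comes from our example, and the port (587, submission with STARTTLS) is an assumption, so substitute your own server's values.
====
# From a Striker, confirm the mail server answers and offers STARTTLS (ctrl+c to exit).
openssl s_client -starttls smtp -connect alert.alteeve.com:587 -quiet
====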
Advanced Functions
Command Line Tools
The Anvil! cluster is built around a large collection of command line tools. Most of the Striker UI functions work by creating jobs, which in turn run these command line tools for you behind the scenes.
This was a conscious decision to allow experienced administrators the ability to work on their Anvil! clusters, even over high latency, low bandwidth connections.
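The tools all share the anvil- prefix, so they are easy to find, and the tools referenced earlier each have a man page. The path below assumes the standard package install location; adjust it if your install differs.
====
# List the available Anvil! command line tools, and read a tool's man page.
ls /usr/sbin/anvil-*
man anvil-manage-server-storage
====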