2x5 Scalable Cluster Tutorial

Overview

This paper has one goal;

  • Creating a 2-node, high-availability cluster hosting Xen virtual machines.

Technologies We Will Use

We will introduce and use the following technologies:

  • RHCS, Red Hat Cluster Services version 3, aka "Cluster 3", running on Red Hat Enterprise Linux 6.0 x86_64.
    • RHCS implements:
      • cman; The cluster manager.
      • corosync; The cluster engine, implementing the totem protocol and cpg and other core cluster services.
      • rgmanager; The resource group manager handles restoring and failing over services in the cluster, including our Xen VMs.
  • Fencing devices needed to keep a cluster safe.
    • Two fencing types are discussed;
      • IPMI; The most common fence method used in servers.
      • Node Assassin; A home-brew fence device ideal for learning or as a backup to IPMI.
  • Xen; The virtual server hypervisor.
    • Converting the host OS into the special access dom0 virtual machine.
    • Provisioning domU VMs.
  • Putting all cluster-related daemons under the control of rgmanager.
    • Making the VMs highly available.

Prerequisites

It is expected that you are already comfortable with the Linux command line, specifically bash, and that you are familiar with general administrative tasks in Red Hat based distributions. You will also need to be comfortable using editors like vim, nano, gedit, kate or similar. This paper uses vim in examples; simply substitute your favourite editor in its place.

You are also expected to be comfortable with networking concepts. You will be expected to understand TCP/IP, multicast, broadcast, subnets and netmasks, routing and other relatively basic networking concepts. Please take the time to become familiar with these concepts before proceeding.

Where feasible, as much detail as is possible will be provided. For example, all configuration file locations will be shown and functioning sample files will be provided.

Platform

Red Hat Cluster Service version 3, also known as "Cluster Stable 3" or "RHCS 3", entered the server distribution world with the release of RHEL 6. It is used by downstream distributions like CentOS and Scientific Linux. This tutorial should be easily adapted to any Red Hat derivative distribution. It is expected that most users will have 64-bit CPUs, thus, we will use the x86_64 distribution and packages.

If you are on a 32-bit system, you should be able to follow along fine. Simply replace x86_64 with i386 or i686 in package names. Be aware, though, that issues arising from the need for PAE will not be discussed.

If you do not have a Red Hat Network account, you can download CentOS or another derivative of the same release, currently 6.0.

Note: When last checked, down-stream distributions have not yet been released. It is expected that they will be available around mid to late December.

Focus and Goal

Clusters can serve to solve three problems; Reliability, Performance and Scalability.

This paper will build a cluster designed to be more reliable, also known as a High-Availability cluster or simply HA Cluster. At the end of this paper, you should have a fully functioning two-node cluster capable of hosting "floating" virtual servers; that is, VMs that run on one node but can be easily moved to the other node with minimal or no down time.

Base System Setup

This paper is admittedly long-winded. There is a "cheat-sheet" version planned, but it will be written only after this main tutorial is complete. Please be patient! Clustering is not inherently difficult, but there are a lot of pieces that need to work together for anything to work. Grab a coffee or tea and settle in.

Hardware

We will need two physical servers each with the following hardware:

  • One or more multi-core CPUs with Virtualization support (a quick way to check this is shown after this list).
  • Three network cards; At least one should be gigabit or faster.
  • One or more hard drives.
  • You will need some form of a fence device. This can be an IPMI-enabled server, a Node Assassin, a fenceable PDU or similar.
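
Once you have the hardware in hand, you can confirm that hardware virtualization support is present and enabled in the BIOS by checking the CPU flags. This is a generic check, not specific to any particular board:

egrep --color '(vmx|svm)' /proc/cpuinfo

If nothing is returned, either the CPU lacks virtualization support or it has been disabled in the BIOS.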

This paper uses the following hardware, and would suggest these be "minimum specifications":

  • ASUS M4A78L-M
  • AMD Athlon II x2 250
  • 2GB Kingston DDR2 KVR800D2N6K2/4G (4GB kit split between the two nodes)
  • 1x Intel 82540 PCI NIC
  • 1x D-Link DGE-560T
  • Node Assassin

This is not an endorsement of the above hardware. I bought what was within my budget that would serve the purposes of creating this document. What you purchase shouldn't matter, so long as the minimum requirements are met.

Note: I use three physical NICs, but you can get away with fewer by using VLANs or by simply re-using a given interface. Neither appealed to me given the minimal cost of add-in network cards and the relative complexity of VLANs. If you wish to alter your network setup, please do so.

Pre-Assembly Information

With multiple NICs, it is quite likely that the mapping of physical devices to logical ethX devices may not be ideal. This is a particular issue if you decide to network boot your install media.

There is no requirement, from a clustering point of view, that any given network card be mapped to any given ethX device. However, you will be jumping between servers fairly often and having various setups adds one more level of complexity. For this reason, I strongly recommend you follow this section.

Before you assemble your servers, record their network cards' MAC addresses. I like to keep simple text files like these:

cat an-node01.mac
90:E6:BA:71:82:EA	eth0	# Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller
00:21:91:19:96:53	eth1	# D-Link System Inc DGE-560T PCI Express Gigabit Ethernet Adapter
00:0E:0C:59:46:E4	eth2	# Intel Corporation 82540EM Gigabit Ethernet Controller
cat an-node02.mac
90:E6:BA:71:82:D8	eth0	# Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller
00:21:91:19:96:5A	eth1	# D-Link System Inc DGE-560T PCI Express Gigabit Ethernet Adapter
00:0E:0C:59:45:78	eth2	# Intel Corporation 82540EM Gigabit Ethernet Controller

This will prove very handy later.
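
If a node is already booted into a live environment or rescue shell, you can pull the MAC addresses straight from the running kernel rather than from the stickers; a quick sketch:

# List each interface along with its hardware (MAC) address.
ifconfig -a | grep HWaddr
# The same information via iproute2.
ip -o link show | grep link/ether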

OS Install

There is no hard and fast rule on how you install the host operating systems. Ultimately, it's a question of what you prefer. There are some things that you should keep in mind though.

  • Balance the desire for tools against the reality that all programs have bugs.
    • Bugs could be exploited to gain access to your server. If the host is compromised, all of the virtual servers are compromised.
  • The host operating system, known as dom0 in Xen, should do nothing but run the hypervisor.
  • If you install a graphical interface, like Xorg and Gnome, consider disabling it.
    • This paper takes this approach and will cover disabling the graphical interface.

Below is the kickstart script used by the nodes for this paper. You should be able to adapt it easily to suit your needs. All options are documented.

Post OS Install

There are a handful of changes we will want to make now that the install is complete. Some of these are optional and you may skip them if you prefer. However, the remainder of this paper assumes these changes have been made. If you used the kickstart script, then some of these steps will have already been completed.

Disable selinux

Given the complexity of clustering, we will disable selinux so that it does not add yet another layer of complexity. Obviously, this introduces security issues that you may not be comfortable with.

To disable selinux, edit /etc/selinux/config and change SELINUX=enforcing to SELINUX=permissive. You will need to reboot in order for the changes to take effect, but don't do it yet as some changes to come may also need a reboot.
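
If you prefer to script the change, something like the following should work; verify the file afterwards:

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config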

Change the Default Run-Level

This is an optional step intended to improve performance.

If you don't plan to work on your nodes directly, it makes sense to switch the default run level from 5 to 3. This prevents the window manager, like Gnome or KDE, from starting at boot. This frees up a fair bit of memory and system resources and reduces the possible attack vectors.

To do this, edit /etc/inittab, change the id:5:initdefault: line to id:3:initdefault: and then switch to run level 3:

vim /etc/inittab
id:3:initdefault:
init 3
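
To confirm the change took, check the previous and current run levels; after the init 3 call above, the output should look something like this:

runlevel
5 3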

Make Boot Messages Visible

This is another optional step that disables the rhgb (Red Hat Graphical Boot) and quiet kernel arguments. These options provide the nice boot-time splash screen. I like to turn them off though as they also hide a lot of boot messages that can be helpful.

To make this change, edit the grub menu and remove the rhgb quiet arguments from the kernel /vmlinuz... line.

vim /boot/grub/menu.lst

Change:

title Red Hat Enterprise Linux (2.6.32-71.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=UUID=ef8ebd1b-8c5f-4bc8-b683-ead5f4603fec rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto rhgb quiet
        initrd /initramfs-2.6.32-71.el6.x86_64.img

To:

title Red Hat Enterprise Linux (2.6.32-71.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=UUID=ef8ebd1b-8c5f-4bc8-b683-ead5f4603fec rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto
        initrd /initramfs-2.6.32-71.el6.x86_64.img
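
If you would rather script this edit, a sed call along these lines should do it. Note that on EL6, /boot/grub/menu.lst is normally a symlink to grub.conf, so work on the real file and double-check the result:

cp /boot/grub/grub.conf /boot/grub/grub.conf.orig
sed -i 's/ rhgb quiet//' /boot/grub/grub.conf
grep 'kernel /vmlinuz' /boot/grub/grub.conf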

Setup Inter-Node Networking

This is the first stage of our network setup. Here we will walk through setting up the three networks between our two nodes. Later we will revisit networking to tie the virtual machines together.

Warning About Managed Switches

WARNING: Please pay attention to this warning! The vast majority of cluster problems end up being network related. The hardest ones to diagnose are usually multicast issues.

If you use a managed switch, be careful about enabling Multicast IGMP Snooping or Spanning Tree Protocol. They have been known to cause problems by not allowing multicast packets to reach all nodes. This can cause somewhat random break-downs in communication between your nodes, leading to seemingly random fences and DLM lock timeouts. If your switches support PIM Routing, be sure to use it!

If you have problems with your cluster not forming, or seemingly random fencing, try using a cheap unmanaged switch. If the problem goes away, you are most likely dealing with a managed switch configuration problem.
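
If you want to rule multicast in or out before blaming the switch, the omping utility, where available in your repositories, gives a quick answer. Run the same command on both nodes at the same time and watch for lost multicast responses:

omping an-node01 an-node02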

Network Layout

This setup expects you to have three physical network cards connected to three independent networks. Each network serves a purpose:

  • An internet-facing network that carries untrusted traffic.
  • A storage network used to keep data between the nodes in sync.
  • A back-channel network used for secure inter-node communication.

These are the networks and names that will be used in this tutorial. Please note that, inside VMs, device names will not match the list below. This table is valid for the operating systems running the hypervisors, known as dom0 in Xen or as the host in other virtualized environments.

  • Back-Channel Network (BCN); Device: eth0; Suggested subnet: 192.168.1.0/24
    • NICs with IPMI piggy-back must be used here.
    • Second-fastest NIC should be used here.
    • If using a PXE server, this should be a bootable NIC.
  • Storage Network (SN); Device: eth1; Suggested subnet: 192.168.2.0/24
    • Fastest NIC should be used here.
  • Internet-Facing Network (IFN); Device: eth2; Suggested subnet: 192.168.3.0/24
    • Remaining NIC should be used here.

Take note of these concerns when planning which NIC to use on each subnet. These issues are presented in the order in which they must be addressed:

  1. If your nodes have IPMI piggy-backing on a normal NIC, that NIC must be used on the BCN subnet. Having your fence device accessible on a subnet that can be remotely accessed can pose a major security risk.
  2. The fastest NIC should be used for your SN subnet. Be sure to know which NICs support the largest jumbo frames when considering this (a quick way to check link speed is shown after this list).
  3. If you still have two NICs to choose from, use the fastest remaining NIC for your BCN subnet. This will minimize the time it takes to perform tasks like hot-migration of live virtual machines.
  4. The final NIC should be used for the IFN subnet.
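
When working through the list above, ethtool will report the link speed of a given interface if you are not sure which card is which; for example:

ethtool eth0 | grep -i speed
	Speed: 1000Mb/s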

Node IP Addresses

Obviously, the IP addresses you give to your nodes should be ones that suit you best. In this example, the following IP addresses are used:

            Back-Channel Network (BCN)  Storage Network (SN)  Internet-Facing Network (IFN)
an-node01   192.168.1.71                192.168.2.71          192.168.3.71
an-node02   192.168.1.72                192.168.2.72          192.168.3.72

Disable The NetworkManager Daemon

Some cluster software will not start with NetworkManager running! This is because NetworkManager is designed to be a highly-adaptive, easy to use network configuration system that can adapt to frequent changes in a network. For workstations and laptops, this is wonderful. For clustering, this can be disastrous. We need to ensure that, once set, the network will not change.

Disable NetworkManager from starting with the system.

chkconfig NetworkManager off
chkconfig --list NetworkManager
NetworkManager 	0:off	1:off	2:off	3:off	4:off	5:off	6:off

The second command shows us that NetworkManager is now disabled in all run-levels.
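
If the daemon is currently running, stop it now as well rather than waiting for a reboot:

/etc/init.d/NetworkManager stop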

Enable the network Daemon

The first step is to map your physical interfaces to the desired ethX name. There is an existing tutorial that will show you how to do this.

There are a few ways to configure the network in Red Hat Enterprise Linux 6:

  • system-config-network (graphical)
  • system-config-network-tui (ncurses)
  • Directly editing the /etc/sysconfig/network-scripts/ifcfg-eth* files.

If you decide that you want to hand-craft your network interfaces, take a look at the tutorial above. In it are example configuration files that are compatible with this tutorial. There are also links to documentation on what options are available in the network configuration files.
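
As a point of reference, a minimal static configuration for an-node01's back-channel interface might look like the following. Treat it as a sketch; adapt the DEVICE, HWADDR and IPADDR values to your own hardware and subnets:

vim /etc/sysconfig/network-scripts/ifcfg-eth0
# Back-Channel Network (BCN) interface on an-node01; example values only.
DEVICE=eth0
HWADDR=90:E6:BA:71:82:EA
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.71
NETMASK=255.255.255.0

Once all of the interfaces are configured, make sure the classic network init script is enabled and restart it so that it takes over from NetworkManager:

chkconfig network on
/etc/init.d/network restart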

WARNING: Do not proceed until your node's networking is fully configured! This may be a small sub-section, but it is critical that you have everything setup properly before going any further!

Update the Hosts File

Some applications expect to be able to call nodes by their name. To accommodate this, and to ensure that inter-node communication takes place on the back-channel subnet, we remove any existing hostname entries and then add the following to the /etc/hosts file:

Note: Any pre-existing entries matching the name returned by uname -n must be removed from /etc/hosts. There is a good chance there will be an entry that resolves to 127.0.0.1 which would cause problems later.

Obviously, adapt the names and IPs to match your nodes and subnets. The only critical thing is to make sure that the name returned by uname -n is resolvable to the back-channel subnet. I like to add entries for all networks, but this is optional.

The updated /etc/hosts file should look something like this:

vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# Back Channel Network
192.168.1.71    an-node01 an-node01.alteeve.com an-node01.bcn
192.168.1.72    an-node02 an-node02.alteeve.com an-node02.bcn

# Storage Network
192.168.2.71    an-node01.sn
192.168.2.72    an-node02.sn

# Internet Facing Network
192.168.3.71    an-node01.ifn
192.168.3.72    an-node02.ifn

# Node Assassins
192.168.3.61    batou batou.alteeve.com
192.168.3.62    motoko motoko.alteeve.com
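
Before testing with ping, you can quickly confirm that the name returned by uname -n resolves to the back-channel address:

getent hosts $(uname -n)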

Now to test this, ping both nodes by their name, as returned by uname -n, and make sure the ping packets are sent on the back channel network (192.168.1.0/24).

ping -c 5 an-node01.alteeve.com
PING an-node01 (192.168.1.71) 56(84) bytes of data.
64 bytes from an-node01 (192.168.1.71): icmp_seq=1 ttl=64 time=0.399 ms
64 bytes from an-node01 (192.168.1.71): icmp_seq=2 ttl=64 time=0.403 ms
64 bytes from an-node01 (192.168.1.71): icmp_seq=3 ttl=64 time=0.413 ms
64 bytes from an-node01 (192.168.1.71): icmp_seq=4 ttl=64 time=0.365 ms
64 bytes from an-node01 (192.168.1.71): icmp_seq=5 ttl=64 time=0.428 ms

--- an-node01 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
rtt min/avg/max/mdev = 0.365/0.401/0.428/0.030 ms
ping -c 5 an-node02.alteeve.com
PING an-node02 (192.168.1.72) 56(84) bytes of data.
64 bytes from an-node02 (192.168.1.72): icmp_seq=1 ttl=64 time=0.419 ms
64 bytes from an-node02 (192.168.1.72): icmp_seq=2 ttl=64 time=0.405 ms
64 bytes from an-node02 (192.168.1.72): icmp_seq=3 ttl=64 time=0.416 ms
64 bytes from an-node02 (192.168.1.72): icmp_seq=4 ttl=64 time=0.373 ms
64 bytes from an-node02 (192.168.1.72): icmp_seq=5 ttl=64 time=0.396 ms

--- an-node02 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
rtt min/avg/max/mdev = 0.373/0.401/0.419/0.030 ms

If you did name your other nodes in /etc/hosts, now is a good time to make sure that everything is working by pinging each interface by name and also pinging the fence devices.

From an-node01

ping -c 5 an-node02
ping -c 5 an-node02.ifn
ping -c 5 an-node02.sn
ping -c 5 an-node02.bcn
ping -c 5 batou
ping -c 5 batou.alteeve.com
ping -c 5 motoko
ping -c 5 motoko.alteeve.com

Then repeat the set of pings from an-node02 to the an-node01 networks and the fence devices.

From an-node02

ping -c 5 an-node01
ping -c 5 an-node01.ifn
ping -c 5 an-node01.sn
ping -c 5 an-node01.bcn
ping -c 5 batou
ping -c 5 batou.alteeve.com
ping -c 5 motoko
ping -c 5 motoko.alteeve.com

If your fence device is referenced by name, be sure to include entries to resolve it as well. You can see how I've done this with the two Node Assassin devices I use. The same applies to IPMI or other devices, if you plan to reference them by name.

Fencing will be discussed in more detail later on in this HowTo.

Disable Firewalls

In the spirit of keeping things simple, and understanding that this is a test cluster, we will flush netfilter tables and disable iptables and ip6tables from starting on our nodes.

chkconfig --level 2345 iptables off
/etc/init.d/iptables stop
chkconfig --level 2345 ip6tables off
/etc/init.d/ip6tables stop
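
To confirm that nothing is left loaded, list the tables; the built-in chains should now be empty with a policy of ACCEPT:

iptables -L -n
ip6tables -L -n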

What I like to do in production clusters is to leave the internet-facing interface on the dom0 machines without an IP address. The only real connection to that interface is inside a VM designed to act as a firewall, running Shorewall. That VM has two virtual interfaces, connected to eth0 and eth2. With that VM in place, and with all other VMs having only a virtual interface connected to eth0, all Internet traffic is forced through the one firewall VM.

When you are finished building your cluster, you may want to check out the Shorewall tutorial below. It was written on Fedora 13, but it applies nearly perfectly to Red Hat Enterprise Linux 6.

Setup SSH Shared Keys

This is an optional step. Setting up shared keys will allow your nodes to pass files between one another and execute commands remotely without needing to enter a password. This is obviously somewhat risky from a security point of view. As such, it is up to you whether you do this or not. This is not meant to be a security-focused How-To, so please independently study the risks.

If you're a little new to SSH, it can be a bit confusing keeping connections straight in your head. When you connect to a remote machine, you start the connection on your machine as the user you are logged in as. This is the source user. When you call the remote machine, you tell it what user you want to log in as. This is the remote user.

You will need to create an SSH key for each source user, and then you will need to copy the newly generated public key to each remote machine's user directory that you want to connect to. In this example, we want to connect to either node, from either node, as the root user. So we will create a key for each node's root user and then copy the generated public key to the other node's root user's directory.

For each user, on each machine you want to connect from, run:

# The '2047' is just to screw with brute-forces a bit. :)
ssh-keygen -t rsa -N "" -b 2047 -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
08:d8:ed:72:38:61:c5:0e:cf:bf:dc:28:e5:3c:a7:88 root@an-node01.alteeve.com
The key's randomart image is:
+--[ RSA 2047]----+
|     ..          |
|   o.o.          |
|  . ==.          |
|   . =+.         |
|    + +.S        |
|     +  o        |
|       = +       |
|     ...B o      |
|    E ...+       |
+-----------------+

This will create two files: the private key called ~/.ssh/id_rsa and the public key called ~/.ssh/id_rsa.pub. The private key must never be group or world readable! That is, it should be set to mode 0600.

The two files should look like:

Private key:

cat ~/.ssh/id_rsa
-----BEGIN RSA PRIVATE KEY-----
MIIEoQIBAAKCAQBlL42DC+NJVpJ0rdrWQ1rxGEbPrDoe8j8+RQx3QYiB014R7jY5
EaTenThxG/cudgbLluFxq6Merfl9Tq2It3k9Koq9nV9ZC/vXBcl4MC7pGSQaUw2h
DVwI7OCSWtnS+awR/1d93tANXRwy7K5ic1pcviJeN66dPuuPqJEF/SKE7yEBapMq
sN28G4IiLdimsV+UYXPQLOiMy5stmyGhFhQGH3kxYzJPOgiwZEFPZyXinGVoV+qa
9ERSjSKAL+g21zbYB/XFK9jLNSJqDIPa//wz0T+73agZ0zNlxygmXcJvapEsFGDG
O6tcy/3XlatSxjEZvvfdOnC310gJVp0bcyWDAgMBAAECggEAMZd0y91vr+n2Laln
r8ujLravPekzMyeXR3Wf/nLn7HkjibYubRnwrApyNz11kBfYjL+ODqAIemjZ9kgx
VOhXS1smVHhk2se8zk3PyFAVLblcsGo0K9LYYKd4CULtrzEe3FNBFje10FbqEytc
7HOMvheR0IuJ0Reda/M54K2H1Y6VemtMbT+aTcgxOSOgflkjCTAeeOajqP5r0TRg
1tY6/k46hLiBka9Oaj+QHHoWp+aQkb+ReHUBcUihnz3jcw2u8HYrQIO4+v4Ud2kr
C9QHPW907ykQTMAzhMvZ3DIOcqTzA0r857ps6FANTM87tqpse5h2KfdIjc0Ok/AY
eKgYAQKBgQDm/P0RygIJl6szVhOb5EsQU0sBUoMT3oZKmPcjHSsyVFPuEDoq1FG7
uZYMESkVVSYKvv5hTkRuVOqNE/EKtk5bwu4mM0S3qJo99cLREKB6zNdBp9z2ACDn
0XIIFIalXAPwYpoFYi1YfG8tFfSDvinLI6JLDT003N47qW1cC5rmgQKBgHAkbfX9
8u3LiT8JqCf1I+xoBTwH64grq/7HQ+PmwRqId+HyyDCm9Y/mkAW1hYQB+cL4y3OO
kGL60CZJ4eFiTYrSfmVa0lTbAlEfcORK/HXZkLRRW03iuwdAbZ7DIMzTvY2HgFlU
L1CfemtmzEC4E6t5/nA4Ytk9kPSlzbzxfXIDAoGAY/WtaqpZ0V7iRpgEal0UIt94
wPy9HrcYtGWX5Yk07VXS8F3zXh99s1hv148BkWrEyLe4i9F8CacTzbOIh1M3e7xS
pRNgtH3xKckV4rVoTVwh9xa2p3qMwuU/jMGdNygnyDpTXusKppVK417x7qU3nuIv
1HzJNPwz6+u5GLEo+oECgYAs++AEKj81dkzytXv3s1UasstOvlqTv/j5dZNdKyZQ
72cvgsUdBwxAEhu5vov1XRmERWrPSuPOYI/4m/B5CYbTZgZ/v8PZeBTg17zgRtgo
qgJq4qu+fXHKweR3KAzTPSivSiiJLMTiEWb5CD5sw6pYQdJ3z5aPUCwChzQVU8Wf
YwKBgQCvoYG7gwx/KGn5zm5tDpeWb3GBJdCeZDaj1ulcnHR0wcuBlxkw/TcIadZ3
kqIHlkjll5qk5EiNGNlnpHjEU9X67OKk211QDiNkg3KAIDMKBltE2AHe8DhFsV8a
Mc/t6vHYZ632hZ7b0WNuudB4GHJShOumXD+NfJgzxqKJyfGkpQ==
-----END RSA PRIVATE KEY-----

Public key:

cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAGUvjYML40lWknSt2tZDWvEYRs+sOh7yPz5FDHdBiIHTXhHuNjkRpN6dOHEb9y52BsuW4XGrox6t+X1OrYi3eT0qir2dX1kL+9cFyXgwLukZJBpTDaENXAjs4JJa2dL5rBH/V33e0A1dHDLsrmJzWly+Il43rp0+64+okQX9IoTvIQFqkyqw3bwbgiIt2KaxX5Rhc9As6IzLmy2bIaEWFAYfeTFjMk86CLBkQU9nJeKcZWhX6pr0RFKNIoAv6DbXNtgH9cUr2Ms1ImoMg9r//DPRP7vdqBnTM2XHKCZdwm9qkSwUYMY7q1zL/deVq1LGMRm+9906cLfXSAlWnRtzJYM= root@an-node01.alteeve.com

Copy the public key and then ssh normally into the remote machine as the root user. Create a file called ~/.ssh/authorized_keys and paste in the key.

From an-node01, type:

ssh root@an-node02
The authenticity of host 'an-node02 (192.168.1.72)' can't be established.
RSA key fingerprint is d4:1b:68:5f:fa:ef:0f:0c:16:e7:f9:0a:8d:69:3e:c5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'an-node02,192.168.1.72' (RSA) to the list of known hosts.
root@an-node02's password: 
Last login: Fri Oct  1 20:07:01 2010 from 192.168.1.102

You will now be logged into an-node02 as the root user. Create the ~/.ssh/authorized_keys file and paste into it the public key from an-node01. If the remote machine's user hasn't used ssh yet, their ~/.ssh directory will not exist.

cat ~/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAGUvjYML40lWknSt2tZDWvEYRs+sOh7yPz5FDHdBiIHTXhHuNjkRpN6dOHEb9y52BsuW4XGrox6t+X1OrYi3eT0qir2dX1kL+9cFyXgwLukZJBpTDaENXAjs4JJa2dL5rBH/V33e0A1dHDLsrmJzWly+Il43rp0+64+okQX9IoTvIQFqkyqw3bwbgiIt2KaxX5Rhc9As6IzLmy2bIaEWFAYfeTFjMk86CLBkQU9nJeKcZWhX6pr0RFKNIoAv6DbXNtgH9cUr2Ms1ImoMg9r//DPRP7vdqBnTM2XHKCZdwm9qkSwUYMY7q1zL/deVq1LGMRm+9906cLfXSAlWnRtzJYM= root@an-node01.alteeve.com
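
Alternatively, the key can be pushed from an-node01 in a single step. This is just a sketch of the same procedure; the ssh-copy-id helper, where installed, does much the same thing:

cat ~/.ssh/id_rsa.pub | ssh root@an-node02 "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"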

Now log out and then log back into the remote machine. This time, the connection should succeed without having entered a password!


 
