RHCS v3 cluster.conf

NOTICE: Do not trust this document until all "Q." are answered and removed.
NOTICE: This is a work in progress and likely contains errors and omissions.

In RHCS, the /etc/cluster/cluster.conf is the "main" configuration file for setting up the cluster and its nodes and resources.

In cluster version 3, you can technically load cluster configurations from many places. Most options are available in cluster.conf though, so it's a logical place to set most values.

Format

The cluster.conf file is an XML-formatted file that must validate against either cluster.rng (cluster 3) or cluster.ng (RHEL 5.x and older). If it fails to validate, the cluster will not use your file. Once you finish editing your cluster.conf file, test it via xmllint:

xmllint --relaxng /usr/share/cluster/cluster.rng /etc/cluster/cluster.conf

Change the path to and name of your cluster.[r]ng file above if needed. Do not try to use your new configuration until it validates.
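
If the file is valid, xmllint will print the parsed document followed by a line stating that it validates; if it is not, xmllint will print the validation errors and report that the file "fails to validate".

/etc/cluster/cluster.conf validates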

The cluster.conf file should be in the format:

<?xml version="1.0"?>
<cluster name="an-cluster" config_version="14">
	<...>
</cluster>

Tags may or may not have child elements. If a tag does not, then put all of its attributes in one self-closing tag.

	<foo a="x" b="y" c="z" />

If the tag does accept child elements, then use a start and end tag with the child elements inside. The opening tag may or may not have attributes of its own. This example shows a parent element containing one child element.

	<section foo="x" bar="y">
		<baz a="x" b="y" c="z" />
	</section>

Sections

There are multiple sections, most of which are optional and can be omitted if not used.

cluster; The Parent Element

All tags and elements must be inside the parent cluster tag.

It has only two attributes: name and config_version

Please see man 5 cluster.conf for more details.

name

This attribute names the cluster. The name you choose will be important, as you will use it elsewhere in your cluster. An example would be when creating a GFS2 partition.

  • No default.

config_version

This is the current version of the cluster.conf file. Every time you make a change, you must increment this value by one. The cluster software refers to this value when determining which configuration file to use and to push to other nodes.
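
For example, after changing config_version="7" to config_version="8", validate the file and then tell the running cluster to pick up and distribute the new version. This workflow is a sketch (it assumes the cluster 3 cman_tool); check your release's documentation:

xmllint --relaxng /usr/share/cluster/cluster.rng /etc/cluster/cluster.conf
cman_tool version -r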

Example

This names the cluster an-cluster and sets the version to 1. All other cluster configurations must be contained inside this start and end tag.

<?xml version="1.0"?>
<cluster name="an-cluster" config_version="1">
	<!-- All cluster configuration options go here. -->
</cluster>

cman; The Cluster Manager

The cman tag is used to define general cluster configuration options. For example, it sets the number of expected votes, whether the cluster is running in the special two-node state and so forth.

If you have no need for cman attributes, use the self-closing tag.

	<cman />

two_node

This allows you to configure a cluster with only two nodes. Normally, the loss of quorum after one of two nodes fails prevents the remaining node from continuing (assuming both nodes have one vote). The default is '0'. To enable a two-node cluster, set this to '1'. If this is enabled, you must also set 'expected_votes' to '1'.

  • Default is 0 (disabled)
  • Must be set to 0 or 1

expected_votes

This is used by cman to determine quorum. The cluster is "quorate" if the sum of votes of members is over half of the expected votes value. By default, cman sets the expected votes value to the sum of votes of all nodes listed in cluster.conf. This can be overridden by setting an explicit expected_votes value. When setting two_node to 1, this must be set to 1 as well. Please see clusternode in the cluster section for more info. If you are using a quorum disk, please see the quorumd section as well.

Q. Does the automatic sum also calculate the votes assigned to the quorum disk?

upgrading

Set this to yes when you are performing a rolling upgrade of the cluster between major releases.

Q. Does this mean cman version, distro version, ...?

  • Default is no
  • Must be set to yes or no

disallowed

This option controls cman's "Disallowed" mode. Setting this to 1 may improve backwards compatibility.

Q. How and where exactly?

  • The default is 0, disabled.
  • Must be set to 0 or 1

quorum_dev_poll

This is the number of milliseconds cman will wait for a qdisk poll before the quorum disk is considered dead.

The quorum disk daemon, qdisk, periodically sends "hello" messages to cman and ais, indicating that qdisk is present. If cman doesn't receive a "hello" message in the time set here, cman will declare qdisk dead and generate error messages indicating that the connection to the quorum device has been lost.

Please see the quorumd section for more information on using quorum disks.

Q. Is the default really 50 seconds or is that just the example used?
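
As an illustration only (50000 here is the 50 seconds questioned above, not a confirmed default):

	<cman quorum_dev_poll="50000" />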

shutdown_timeout

This is the number of milliseconds to wait for a service to respond during a shutdown.

Q. What happens after this time?
Q. Does this refer to crm/pacemaker controlled services or any service?

ccsd_poll

No info yet.

debug_mask

No info yet.

  • Unknown Default
  • Unknown value restrictions

port

No info yet.

Q. Is this for the primary totem ring?

  • Unknown Default
  • Unknown value restrictions

cluster_id

No info yet.

  • Unknown Default
  • Unknown value restrictions

hash_cluster_id

Enable stronger hashing of cluster ID to avoid collisions.

Q. How? What is an example value?

  • Unknown Default
  • Unknown value restrictions

nodename

Local node name; this is set internally by cman-preconfig and should never be set unless you understand the repercussions of doing so. It is here for completeness only.

  • Unknown Default
  • Unknown value restrictions

broadcast

Enable cman broadcast. To enable, set this to yes.

Q. Under what conditions would this be enabled?

  • Default is no, disabled.
  • Must be yes or no
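
If you did want broadcast instead of multicast, the configuration would presumably look like this (an untested sketch):

	<cman broadcast="yes" />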

keyfile

No info yet.

  • Unknown Default
  • Unknown value restrictions

disable_openais

No info yet.

  • Unknown Default
  • Unknown value restrictions

multicast

This provides the ability for a user to specify a multicast address instead of using the multicast address generated by cman.

By default, cman forms the upper 16 bits of the multicast address with 239.192 and forms the lower 16 bits based on the cluster ID.

Q. Does this have to do with the totem ring?
Q. What generates the cluster ID when it's not specified by the user?

  • See above for the default
  • Must be a valid IPv4 style multicast address

Madi: Test this, is 'addr' an attribute of 'multicast' or of 'cman'?

This element has one attribute: addr

addr

This is where you can define a multicast address. If you specify a multicast address, ensure that it is in the 239.192.0.0/16 network which cman uses. Using a multicast address outside this range is untested.

Q. Is this for the first totem ring?

  • Unknown Default
  • Unknown value restrictions
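
Pending the answer to the note above, here is a sketch that assumes addr belongs to a multicast child element of cman, the form suggested by man 5 cluster.conf. The address itself is made up, inside the expected 239.192.0.0/16 range:

	<cman>
		<multicast addr="239.192.100.1" />
	</cman>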

Example

This is a common scenario used in two-node clusters.

	<cman two_node="1" expected_votes="1" />

totem; Totem Ring and the Redundant Ring Protocol

This controls the OpenAIS message transport protocol.

Q. Does this also control corosync?
Q. Are there specific arguments for either?

consensus

When the cluster tries to form, totem will wait this many milliseconds for consensus. If this timeout is reached, the cluster will give up and attempt to form a new cluster configuration. If you set this too low, your cluster may fail to form when it otherwise could have. If you set it too high, error detection and recovery will be delayed.

join

This tells the totem protocol how long to wait, in milliseconds, for JOIN messages to come from each node. This must be lower than the consensus time. Setting this too low could cause a healthy node to fail to join the cluster. Setting it higher will slow down the assembly of the cluster when a node has failed.

Q. Is this really in milliseconds?

token

This sets the maximum amount of time, in milliseconds, the totem protocol will wait for a token. If this time elapses, the cluster will be reformed, which takes approximately 50 milliseconds. The total reconfiguration time is, then, the sum of this value plus the reconfigure time.

  • The default value is 10000 (10 seconds).
  • This must be a natural number
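
For illustration, an untested sketch with arbitrary values, chosen only to show join set well below consensus as described above:

	<totem token="20000" consensus="24000" join="60" />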

fail_recv_const

No info

  • The default is 2500
  • Unknown values

token_retransmits_before_loss_const

This controls how many times the totem protocol will attempt to retransmit a token before giving up and forming a new configuration. If this is set, retransmit and hold will be calculated automatically using retransmits_before_loss and token.

  • Unknown default, appears to have none (no entry in objctl)
  • ? This must be a natural number

rrp_mode

This attribute specifies the redundant ring protocol mode. It can be set to active, passive, or none. Active replication offers slightly lower latency from transmit to delivery in faulty network environments but with less performance. Passive replication may nearly double the speed of the totem protocol if the protocol doesn't become cpu bound. The final option is 'none', in which case only one network interface is used to operate the totem protocol.

If only one interface directive is specified, none is automatically chosen. If multiple interface directives are specified, only active or passive may be chosen.

NOTE: Be sure to set this if you are using redundant rings! If you wish to use a redundant ring, it must be configured in each node's clusternode entry. See below for an example.

If a ring fails and then is restored, you must manually run the following to re-enable the ring.

corosync-cfgtool -r

Verify the state by then running:

corosync-cfgtool -s

  • Default is none
  • Valid options are active, passive, or none

secauth

This attribute specifies whether HMAC/SHA1 authentication should be used to authenticate all messages or not. It further specifies that all data should be encrypted with the sober128 encryption algorithm to protect data from eavesdropping.

If the totem ring is on a private, secure network, disabling this can improve performance. Please test to see if the extra performance is worth the reduced security.

Q. Is the default actually 'on'?

  • The default is on
  • Valid values are on and off

keyfile

No info

Q. In objctl, there is a value called 'totem.key=<cluster_name>'. Is this related?

  • Unknown default
  • Unknown valid values

The interface Child Element

The totem tag supports zero, one or two interface child tags. If you use these child tags, be sure to use start and end tags.

	<totem ...>
		<interface ... />
	</totem>

ringnumber

This sets the ring number, with 0 being the primary ring and 1 being the secondary ring. Currently, only two rings are supported.

  • No default value
  • Valid values are 0 and 1

bindnetaddr

This tells totem which network interface to use; it must match the subnet of your chosen interface. The final octet must be 0.

This can be an IPv6 address, however, you will be required to set the nodeid value above. Further, there will be no automatic interface selection within a specified subnet as there is with IPv4.

Q. With IPv6, how then is the given interface chosen?

  • No default value
  • See description for valid values

mcastaddr

This sets the multicast address used by the totem protocol on this ring. Avoid the 224.0.0.0/8 range as that is used for configuration. If you use an IPv6 address, be sure to specify a nodeid value above.

Q. Is there a default? Is it automatically calculated like in cman?

  • No default
  • Must be a valid IPv4 or IPv6 IP address

mcastport

This sets the UDP port used with the multicast address above.

Q. Can the port be below 1024?

broadcast

No info

  • Unknown default
  • Must be a valid broadcast address

Example

This is a simple example showing secauth disabled and a redundant ring running in passive mode.

        <totem secauth="off" rrp_mode="passive">
                <interface ringnumber="0" bindnetaddr="10.0.1.0" mcastaddr="239.192.122.47" mcastport="5405" />
                <interface ringnumber="1" bindnetaddr="10.0.0.0" mcastaddr="239.192.122.48" mcastport="5405" />
        </totem>

Note: Please see bugs 624289 and 624312.

quorumd; Quorum Daemon

In older versions of RHCS, a quorum partition was used to maintain quorum with the network acting as a fallback. This eventually faded out of fashion and quorum disk partitions were rarely used. Today, quorum partitions are still not required, but they are coming back into fashion as a way to improve the reliability of a cluster in multiple-failure states and to provide more intelligent quorum.

Let's look at a couple of examples:

  1. If you have a four-node cluster and two nodes fail, the surviving two nodes will not have quorum because normal quorum requires a majority (n/2+1). In this case, your cluster would shut down when it could have kept going. Adding a quorum disk would have allowed the surviving two nodes to maintain quorum.
  2. If you have a four-node cluster and a network event occurred where only one node retained access to a critical network, you would want that one node to proceed and you would rather fence the three nodes that lost access. Under normal IP quorum, the opposite would happen because, by simple majority, the one good node would be fenced by the three other nodes. The quorumd daemon can have heuristics added. In this case, we would configure each node's quorumd to first check that critical network connection. The three nodes would see that they'd lost the link and remove themselves from the cluster. In this way, only the one good node would remain up and win quorum thanks to the votes assigned to the quorum disk.

In short, the quorum disk allows a much more fine grained control of quorum in corner-case failure states.

This section is not required and can be left out when you aren't using a quorum disk partition.

A quorum partition cannot be used in clusters of more than 16 nodes. This is because the latency in clusters larger than 16 nodes makes quorum disks unreliable. With 17 or more nodes, you must use IP-based (totem protocol) quorum only.

A quorum disk must be a raw partition of 10 MB or larger (11 MB recommended) on an iSCSI or SAN device. It is recommended that your nodes use multipath to access the quorum disk. You cannot use a CLVM partition.

Q. On a 2-node DRBD partition, can a raw 10MB partition be used? This is probably irrelevant as there is the 'two_node' cman option, but might be useful for the heuristics in a split brain.

Reference: Red Hat article from 2007.

interval

This controls how often, in seconds, the quorum daemon on a node will attempt to write its status to the quorum disk and read the status of the other nodes. The higher this value, the less chance that a transient error will dissolve quorum, but the longer it will take to detect and recover from a failure.

Please see the heuristic element below for heuristics intervals.

Q. Is this accurate?
Q. Does this control the heuristics or disk poll?

tko

If a node fails the heuristics checks and/or fails to contact the quorum disk for this many intervals, it will be declared dead and will be fenced (a "Technical Knock Out"). To determine how long this will actually take, multiply interval by tko; the result is the time in seconds.

If you are using Oracle RAC, be sure that this and the interval values are high enough to give the RAC a chance to react to a failure first. So if your RAC timeout is set to 60 seconds, and you are using the default interval of 2, it is recommended to set this to at least 35 (70 seconds).

Q. Is there a modern variant on the 'cman_deadnode_timeout' and, if so, does interval*tko still need to be lower?
Q. There seems to be no default in objctl.

votes

This is the number of votes assigned to the quorum disk. This value should be the total number of votes of your cluster minus the minimum number of nodes your cluster can operate with. For example, if you have a four-node cluster that can operate with just one node, you would set this to 3 (4-1). This value must be set when using a quorum disk as there is no default.

Q. Is this true, or would the votes be calculated?

min_score

The minimum score for a node to be considered alive. If omitted or set to 0, the default function, floor((n+1)/2), is used, where n is the sum of the heuristics scores. The minimum score value must never exceed the sum of the heuristic scores; if set higher, it will be impossible for the heuristics tests to pass. If the resulting score is below this value, the node will reboot to try and return in a better state.

Q. Does it reboot after one failure?

device

The storage device the quorum daemon uses. The device must be the same on all nodes. It has no default and must be set unless you set label below. For example, if you created your quorum disk with the call:

mkqdisk -c /dev/sdi1 -l rac_qdisk

This would be set to /dev/sdi1. When possible, set the label option below instead, as it is more robust. If you use 'label' instead of this, then the device does *not* need to be the same among nodes. In short, don't set this unless you have a good reason to.

Q. Is this true?

  • No default
  • Must be a valid device path

label

Specifies the quorum disk label created by the mkqdisk utility. Looking at the example given in the device argument above, this would be rac_qdisk. Setting this instead of device is preferable. If you set this, then device is in fact ignored.

If this field is used, the quorum daemon reads /proc/partitions and checks for qdisk signatures on every block device found, comparing each label against the value set here. This is useful in configurations where the quorum device name differs among nodes.

  • No default
  • Must be a valid mkqdisk label

Example

No verified example yet.
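
The sketch below assembles the attributes described above. It is untested, and the label, heuristic program, IP address and all values are hypothetical; adapt and test them for your cluster:

	<quorumd interval="2" tko="10" votes="3" min_score="1" label="rac_qdisk">
		<heuristic program="ping -c1 -w1 10.0.0.254" score="1" interval="2" tko="3" />
	</quorumd>

Here a node must be able to ping 10.0.0.254 (a made-up address for a critical gateway) to be considered healthy, a node that misses ten two-second intervals (20 seconds) is declared dead, and the quorum disk contributes the 3 votes from the four-node example above.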

dlm; The Distributed Lock Manager

The distributed lock manager is used to protect shared resources from corruption by ensuring that the nodes in a cluster work together in an organized fashion. This is particularly critical with clustered file systems like gfs2.

  • See man dlm_controld

protocol

This tells DLM to automatically determine whether to use TCP or SCTP, depending on the rrp_mode. You can force one protocol or the other by setting this to tcp or sctp. If rrp_mode is none, then tcp is used.

  • Default is detect.
  • Valid values are detect, tcp and sctp

timewarn

This specifies how many 100ths of a second (centiseconds) to wait before dlm emits a warning via netlink. This value is used for deadlock detection and only applies to lockspaces created with the DLM_LSFL_TIMEWARN flag.

Q. This should be explained better. It relies too heavily on assumed knowledge.
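
As the value is in centiseconds, a warning threshold of five seconds would look like this (the value is illustrative only):

	<dlm timewarn="500" />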

log_debug

Setting this to 1 will enable DLM debug messages.

Q. Do these messages go to /var/log/messages?

  • Default is 0 (disabled)
  • Valid values are 0 and 1

enable_fencing

This controls fencing recovery dependency. Set this to '0' to disable fencing dependency.

Q. Does this allow cman to start when no fence device is configured?
Q. Why would a user ever disable this?

  • Default is 1 (enabled)
  • Valid values are 0 and 1

enable_quorum

This controls quorum recovery dependency. Set this to 0 to disable quorum dependency.

Q. Does this mean that a non-quorum partition will attempt to continue functioning?

  • Default is 1 (enabled)
  • Value must be 0 or 1

enable_deadlk

This controls the deadlock detection code. To enable deadlock detection, set this to 1.

Q. Is this primarily a debugging tool?

  • Default is 0 (disabled)
  • Value must be 0 or 1

enable_plock

This controls the posix lock code for clustered file systems. This is required by cluster-aware file systems like GFS2, OCFS2 and similar. In some cases though, like Oracle RAC, plock is implemented internally and thus this needs to be disabled in the cluster. Also, plock can be expensive in terms of latency and bandwidth. Disabling this may help improve performance but should only be done if you are sure you do not need posix locking in your cluster. To disable it, set this to 0.

Unlike flock (file lock), which locks an entire file, plock allows for locking parts of a file. When a plock is set, the file system must know the start and length of the lock. In clustering, this information is sent between the nodes via cpg (the cluster process group), which is a small process layer on top of the totem protocol in corosync.

Messages are of the form "take lock (pid, inode, start, length)". Delivery of these messages is kept in the same order on all nodes (total order), which is a property of 'virtual synchrony'. For example, if you have three nodes; A, B and C, and each node sends two messages, cpg ensures that the messages all arrive in the same order across all nodes. The messages may arrive as c1,a1,a2,b1,b2,c2, for instance. The actual order doesn't matter, just that it's consistent across nodes.

For more information on posix locks, see the fcntl man page and read the sections on F_SETLK and F_GETLK.

man fcntl

For more information on cpg, install the corosync development libraries (corosynclib-devel) and then read the cpg_overview man page.

yum install corosynclib-devel
man cpg_overview

  • Default is 1 (enabled)
  • Value must be 0 or 1

plock_rate_limit

This controls the rate of plock operations per second. Set a natural number to impose a limit. This might be needed if excessive plock messages are causing network load issues.

plock_ownership

This controls the plock ownership function. When enabled, performance gains may be seen where a given node repeatedly issues the same lock. This can affect backward compatibility with older versions of dlm. To disable it, set this to 0.

Q. Is this right? This should be explained better.

  • Default is 1 (enabled)
  • Value must be 0 or 1

drop_resources_time

This is the number of milliseconds to wait before dropping the cache of lock information. The lower this value, the better the performance but the more memory will be used.

NOTE: This value is ignored when plock_ownership is disabled.

Q. Is this true?

drop_resources_count

This is the number of cached items to attempt to drop each drop_resources_time milliseconds. The higher this number, the better the potential performance, but the more memory will be used.

NOTE: This value is ignored when plock_ownership is disabled.

Q. Is this right?

drop_resources_age

This is the number of milliseconds that a cached item is allowed to go unused before it is set to be dropped. The lower this value, the better the performance but the more memory will be used.

NOTE: This value is ignored when plock_ownership is disabled.

Q. Is this right?

Example

This example increases memory use to try and gain performance.

	<dlm protocol="detect" drop_resources_time="5000" drop_resources_count="20" drop_resources_age="5000" />

gfs_controld; GFS Control Daemon

There are several <gfs_controld...> tags that are still supported, but they have been deprecated in favour of the <dlm_controld...> tags.

If you wish to use these deprecated tags, please see the gfs_controld man page.

man 8 gfs_controld

enable_withdraw

The one remaining argument that is still current is enable_withdraw. When set to 1, the default, GFS will respond to a withdraw. To disable the response, set this to 0.

Q. What does the response actually do?

  • Default is 1 (enabled)
  • Value must be 0 or 1
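
For example, to disable the withdraw response (an untested sketch):

	<gfs_controld enable_withdraw="0" />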

clusternodes; Defining Cluster Nodes

To process

	<!-- Cluster Nodes -->
	<clusternodes>
		<!-- AN!Cluster Node 1 -->
		<!-- 
		The clusternode 'name' value must match the name returned by
		`uname -n`. The network interface with the IP address mapped to
		this name will be the network used by the totem ring. The totem
		ring is used for cluster communication and reconfiguration, so
		all nodes must use network interfaces on the same network for
		the cluster to form. For the same reason, this name must not
		resolve to the localhost IP address (127.0.0.1/::1).
		
		Optional <clusternode ...> arguments:
		- weight="#"; This sets the DLM lock directory weight. This is
		              a DLM kernel option.
		  Q. This needs better explaining.
		-->
		<clusternode name="an-node01.alteeve.com" nodeid="1">
			<!-- 
			By default, an initial totem ring will be created on
			the interface that maps to the name above. Under
			Corosync, this would have been "ring 0". 
			
			To set up a second totem ring, the 'name' must be
			resolvable to an IP address on the network card you
			want your second ring on. Further, all other nodes must
			be set up to use the same network as their second ring
			as well.
			NOTE: Currently broken, do not use until this warning
			NOTE: has been removed.
			-->
			<!--
			<altname name="an-node01-sn" port="6899" 
							mcast="239.94.1.1" />
			-->
			<!-- Fence Devices attached to this node. -->
			<fence>
				<!-- 
				The entries here reference devices defined
				below in the <fencedevices/> section. The
				options passed control how the device is
				called. When multiple devices are listed, they
				are tried in the order that they are listed
				here.
 
				The 'name' argument must match a 'name'
				argument in the '<fencedevice>' section below.
 
				The details must define how 'fenced' will fence
				*this* node.
 
				The 'method' name seems not to be passed to
				the fence agent and is useful to the human
				reader only?
 
				All options here are passed as 'var=val' to the
				fence agent, one per line.
 
				Note that 'action' was formerly known as
				'option'. In the 'fence_na' agent, 'option'
				will be converted to 'action' if used.
				--> 
				<method name="node_assassin">
					<device name="batou" port="01"
							 action="reboot"/>
				</method>
			</fence>
		</clusternode>
 
		<!-- AN!Cluster Node 2 -->
		<clusternode name="an-node02.alteeve.com" nodeid="2">
			<altname name="an-node02-sn" port="6899"
							 mcast="239.94.1.1" />
			<fence>
				<method name="node_assassin">
					<device name="batou" port="02"
							 action="reboot"/>
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<!--
	The fence device is mandatory and it defines how the cluster will
	handle nodes that have dropped out of communication. In our case,
	we will use the Node Assassin fence device.
	-->
	<fencedevices>
		<!--
		This names the device, the agent (script) that controls it,
		where to find it and how to access it.
		-->
		<fencedevice name="batou" agent="fence_na" 
			ipaddr="batou.alteeve.com"  login="section9" 
			passwd="project2501" quiet="1"></fencedevice>
		<fencedevice name="motoko" agent="fence_na" 
			ipaddr="motoko.alteeve.com" login="section9" 
			passwd="project2501" quiet="1"></fencedevice>
		<!--
		If you have two or more fence devices, you can add the extra
		one(s) below. The cluster will attempt to fence a bad node
		using these devices in the order that they appear.
		-->
	</fencedevices>
 
	<!-- When the cluster starts, any nodes not yet in the cluster may be
	fenced. By default, there is a 6 second buffer, but this isn't very
	much time. The following argument increases the time window where other
	nodes can join before being fenced. I like to give up to one minute but
	the Red Hat man page suggests 20 seconds. Please do your own testing to
	determine what time is needed for your environment.
	-->
	<fence_daemon post_join_delay="60">
	</fence_daemon>

Examples

Examples of Fedora 13 cluster.conf configurations.

Examples of CentOS 5 cluster.conf configurations.

 
