CIB


The CIB, or Cluster Information Base, is the XML file used by Pacemaker to store its configuration and state. This document is designed to explain its parts.

Most of this document's content comes from Pacemaker Explained.

Overview

Before diving into the CIB XML, let's look at how it works at a high level.

How decisions are made

When a state changes, like a node joining the cluster or a service failing, the following steps happen;

  1. The node elected as the DC will review its CIB.
  2. It will compare the new state against the state it wishes to be in, given the changes.
  3. Next, it decides what action or actions are needed to get to that state.
  4. With this plan in place, the DC determines which steps it can perform itself and which steps other nodes need to perform.
    1. Anything it can perform itself is passed down to the lrmd via its local crmd.
    2. Anything that needs to be acted on by other nodes is sent over the cluster messaging layer and picked up by each node's crmd. These messages are then passed down to each node's lrmd.
    3. The lrmd initiates the actual actions requested.
    4. All nodes report the results of their actions back to the DC.
    5. The DC then determines, based on the results of the actions taken on the node(s), whether any further action is required.

All actions that are taken, be they local to the DC or on a remote node, are executed in the same order across all nodes. This ordering is ensured thanks to a function of the cluster communication layer called "virtual synchrony", which is provided by corosync's use of the totem protocol.
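
If you want to confirm which node currently holds the DC role, the crmadmin tool can report it. A minimal sketch follows; the exact output wording varies between Pacemaker versions, and the node name shown is an example;

# Ask the cluster which node is the currently elected DC.
crmadmin -D
# Prints something like: Designated Co-ordinator is: an-node01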

Manual Control of Resource Location

To force a resource to a node in the cluster, temporarily assign it a preference value of INFINITY. To allow the cluster to again control that resource, remove or reset the preference value. Note that, depending on the stickiness of the resource, the resource may remain on the node that you pushed it to.
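
To sketch what that looks like in the CIB, the temporary preference is a location constraint with an INFINITY score (the resource and node names here are examples only);

<rsc_location id="force-my_resource" rsc="my_resource" node="an-node01" score="INFINITY"/>

Such a fragment can be loaded with cibadmin -C -o constraints -X '...', and deleting the constraint afterward returns control of the resource's placement to the cluster.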

Base Configuration

An initial, empty CIB will look like this;

<cib generated="true" admin_epoch="0" epoch="0" num_updates="0" have-quorum="false">
	<configuration>
		<!-- Cluster-wide configuration options -->
		<crm_config/>
		<!-- The nodes in the cluster -->
		<nodes/>
		<!-- The resources managed by the cluster -->
		<resources/>
		<!-- Location, ordering and colocation constraints -->
		<constraints/>
	</configuration>
	<!-- The cluster's dynamic state, maintained by the cluster itself -->
	<status/>
</cib>


Editing the CIB when Pacemaker Is Not Running

Pacemaker stores its CIB in /var/lib/heartbeat/crm. The CIB itself is cib.xml.

When Pacemaker starts, it checks that the cib.xml file is sane by checking its checksum against cib.xml.sig. You can manually generate the checksum for comparison using cibadmin -5 -x cib.xml. If you edit cib.xml, you will need to either delete the cib.xml.sig file completely, or replace its contents via cibadmin -5 -x cib.xml > cib.xml.sig.
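
Putting those commands together, a minimal sketch of the offline edit workflow (with Pacemaker stopped; the editor is your choice);

# Edit the on-disk CIB while pacemaker is not running.
vim /var/lib/heartbeat/crm/cib.xml

# Regenerate the signature so pacemaker will accept the edited file.
cibadmin -5 -x /var/lib/heartbeat/crm/cib.xml > /var/lib/heartbeat/crm/cib.xml.sig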

Pacemaker keeps a cib-XX.raw and cib-XX.raw.sig, where XX is an integer, as backups of the cluster configuration.

The cib.last file contains the current CIB version (one value higher than the last backup).

CIB XML Reference

This covers each element and attribute used in Pacemaker's CIB.

Element: cib

cib Attribute: generated

cib Attribute: admin_epoch

cib Attribute: epoch

cib Attribute: num_updates

cib Attribute: have-quorum

cib Child Element: configuration

configuration Child Element: crm_config

configuration Child Element: nodes

configuration Child Element: resources

configuration Child Element: constraints

cib Child Element: status

Glossary

Anti-Colocation

CIB - Cluster Information Base

The "cluster information base", crm for short, is a combination of the cluster's configuration and the current state of the cluster. It is stored internally in XML format and is automatically kept in sync amoung the nodes in the cluster.

Clones

Cluster Glue

Cluster Type - Active/Active

In this type of cluster, both (or all) nodes provide cluster services. Should one node fail, the remaining node(s) will continue providing services.

Cluster Type - Active/Passive

In this type of cluster, one node provides cluster services and the other node acts as a stand-by. Should the Active node fail, the Passive node will take over lost services.

Cluster Type - N + 1

In this type of cluster, one node acts as a spare to 2 or more nodes. If you are familiar with RAID level 5, this type of cluster is a similar concept. The cluster is designed to continue providing services should one node fail. However, if a second node fails, some or all cluster services may stop.

Cluster Type - N + M

This type of cluster is similar to N + 1, but can support M failed nodes before services start to fail. For example, if you require five nodes to provide all of your clustered services and have a total of seven nodes in your cluster, then your "N + M" is "5 + 2".

Cluster Type - N to 1

http://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations

Cluster Type - N to N

http://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations

clvmd - Clustered LVM

Colocation

When two or more resources must be together, such as a virtual IP address and a webserver, they can be grouped together using colocation.
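
As a sketch, the virtual IP and webserver example above could be written as (the resource names are examples only);

<rsc_colocation id="web-with-ip" rsc="webserver" with-rsc="virtual_ip" score="INFINITY"/>

This tells the cluster that webserver must run on whichever node virtual_ip is running on.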

Corosync

crmd - Cluster Resource Manager Daemon

DC - Designated Co-ordinator

The "designated co-ordinator", dc for short, is a node in the cluster that has been elected as the "master" node. This node is responsible foe making decisions and initiating actions in the cluster. Should the DC fail, a new one is immediately elected.

Directional

This is used to describe how resources move, particularly in colocation constraints. For example, you may say that a web resource must exist along with a virtual IP resource. In this case, the "direction" of the web resource's migration depends on where the IP address went.

DLM - Distributed Lock Manager

Fence Device

fenced

Is this used in pacemaker/rhel?

Fencing

gfs2 - Global File System v2

INFINITY

When calculating the optimal configuration of a cluster, the policy engine looks at the cost of an action. Normally this is an integer value, but can also be "INFINITY". This is simply a way of telling the policy engine that the given cost of an action is an absolute, rather than a relative value. This is often seen when defining colocation and anti-colocation constraints.
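
For example, an anti-colocation constraint keeping two resources on different nodes would use a score of -INFINITY (the resource names are examples only);

<rsc_colocation id="keep-apart" rsc="resource_c" with-rsc="resource_d" score="-INFINITY"/>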

Location Constraints

This is where the node on which a given resource runs is restricted in some manner. This can take the form of "Resource A prefers node 1", or be relative to other resources; "Resource B requires resource A" (colocation), "Resource C must not be on the same node as Resource D" (anti-colocation) and so on.
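
A sketch of a simple location constraint, expressing a non-absolute preference of 200 for one node (the names are examples only);

<rsc_location id="prefer-an-node01" rsc="my_resource" node="an-node01" score="200"/>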

lrmd - Local Resource Management Daemon

When a node needs to take an action, like starting or stopping a service, the cluster resource manager daemon (crmd) passes the request down to the local resource management daemon (lrmd), which performs the action and reports the result back.

Multi-State

Node

ntp - Network Time Protocol

In clustering, it is important to keep all nodes using the same time. This used to be absolutely critical, but changes have made it less so. Just the same, it is still strongly advised to keep the nodes' clocks synchronized to ensure that future log analysis is sane.

Ordering

Some resources, like apache, examine the system when they start and base their behaviour on what they find at that time, such as which IP addresses to listen on for incoming connections. When using a virtual IP address, then, it is important to ensure that the IP is available before the cluster starts the web server resource. This control is provided via "ordering constraints" on colocated resources.
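
A sketch of such an ordering constraint, ensuring the virtual IP is started before the web server (the resource names are examples only);

<rsc_order id="ip-before-web" first="virtual_ip" then="webserver"/>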

Pacemaker

pcmk

This is simply a short form of "pacemaker".

pcs

This is the "Pacemaker Configuration System", the official command line tool used to configure and manipulate the cluster. Other management tools exist, such as ...

PE - PEngine - Policy Engine

Quorum

Quorate

Request Bucket

Resource Agent

Resource Driven

Resource Preference

The "preference" of a resource for a given node or nodes in the cluster is used when calculating the optimal condition of a cluster. This value is compared against the cost to relocate a running resource. If the preference for a node is greater than the cost to move it, the resource will relocate to the other node.

Resource Stickiness

Normally, pacemaker assumes that moving a resource incurs no downtime. Thus, by default, pacemaker will always move a resource to another node if the policy engine decides that doing so is optimal. In reality, though, many resources go through a period of downtime during migration. To account for this, you can assign a numerical "cost" to moving the resource. This is known as the resource's "stickiness".
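
As a sketch, stickiness can be set per-resource via the resource-stickiness meta attribute (the resource definition here is an example only);

<primitive id="my_resource" class="ocf" provider="heartbeat" type="IPaddr2">
	<meta_attributes id="my_resource-meta_attributes">
		<nvpair id="my_resource-stickiness" name="resource-stickiness" value="100"/>
	</meta_attributes>
</primitive>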

Resources

Service Types

Services

ssh - Secure Shell

Score (?)

Each resource has a cost to migrate (default 0) and a value indicating its preference for one node over another (also default 0). These values are tallied up to create a "score"; a numerical representation used to determine whether the resource is in an ideal state.

For example, if a node (re)joins a cluster, and a given resource has a "stickiness" value of 100 and a "preference" value of 50 for that node, the score is "-50" and the resource will stay where it is. However, if the preference was instead "110", the score would be "10" and the resource would relocate to the newly joined node.

Standby Mode

When a node is placed in "standby mode", any resources running on it will be relocated off of it. The node will also be prevented from hosting resources should another node be lost. It is generally used when you want to upgrade a node's software without taking the node completely offline, for example. When a node is taken out of standby mode, the resource stickiness versus node preference values will be analyzed to determine whether each resource should migrate back or stay where it is.
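
Under the hood, standby mode is just a node attribute in the CIB, which the management tools set on your behalf. A sketch (the node name and IDs are examples only);

<node id="1" uname="an-node01">
	<instance_attributes id="nodes-1">
		<nvpair id="nodes-1-standby" name="standby" value="on"/>
	</instance_attributes>
</node>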

q. - Does the DRBD ocf keep a node in standby in Secondary?

STONITH - Shoot The Other Node In The Head

STONITH devices act like resources and are configured in the CIB, but the stonithd daemon handles sorting out how to go about fencing a target node.

stonithd - STONITH Daemon

totem protocol

XML

 
