
Installing & Configuring a RabbitMQ Cluster in CentOS 6

RabbitMQ (www.rabbitmq.com) is a robust messaging platform that runs on multiple operating systems, with clients available for many development platforms. It can be clustered to provide high availability for your messaging services.

This post describes the installation & configuration of a basic 2-node cluster built on top of CentOS 6. It assumes that internet access is available from the OS.

Installing & Configuring RabbitMQ (per node)

RabbitMQ requires Erlang (http://www.erlang.org) to run. This is available from the EPEL repository (http://fedoraproject.org/wiki/EPEL), so we need to add that to the list of repository locations and install Erlang before RabbitMQ:

cd /home
curl http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm > epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm
yum -y install erlang
rm -f epel-release-6-8.noarch.rpm
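
As a quick sanity check, you can ask the Erlang runtime for its version (the exact version string will depend on what EPEL ships at the time):

erl -version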

Now we download the RabbitMQ server package, import the RabbitMQ public signing key, and install:

cd /home
curl http://www.rabbitmq.com/releases/rabbitmq-server/v3.0.4/rabbitmq-server-3.0.4-1.noarch.rpm > rabbitmq-server-3.0.4-1.noarch.rpm
rpm --import http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
yum -y install rabbitmq-server-3.0.4-1.noarch.rpm
rm -f rabbitmq-server-3.0.4-1.noarch.rpm

Finally, set the service to start on boot, and start RabbitMQ:

chkconfig rabbitmq-server on
service rabbitmq-server start
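
If you want to confirm the broker came up cleanly before going any further, rabbitmqctl can report the node's status:

rabbitmqctl status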

You can manage RabbitMQ through the command-line interface, but I much prefer the web-based management GUI. It ships as a plugin, so let's enable it and restart the service:

rabbitmq-plugins enable rabbitmq_management
service rabbitmq-server restart

Now browse to http://[address_of_rabbitmq_node]:15672/ and log in with the default guest/guest credentials.
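
The management plugin also exposes an HTTP API on the same port, so you can verify it from the command line too (this sketch assumes the default guest/guest account is still in place):

curl -u guest:guest http://[address_of_rabbitmq_node]:15672/api/overview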

Now repeat all of these steps on the second server in the cluster (or 'clone' it if you're using virtualisation, changing the hostname accordingly).

Cluster Configuration

Once the two nodes are working individually, we need to configure the cluster. We're going to rely on automatic cluster configuration, rather than joining each node to the cluster manually (a sketch of the manual approach is shown below for reference).
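
For reference, the manual alternative joins a running node from the command line. A minimal sketch, run on node two once both brokers are installed and the Erlang cookie has been shared (see below):

rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@[hostname_of_rabbitmq_node1]
rabbitmqctl start_app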

One thing that is vitally important (and which I found out through sweat and tears) is that you must configure the RabbitMQ cluster using hostnames, not the IP address of each node. To ensure that name resolution works consistently, I find it best to create a single hosts file (/etc/hosts) and copy it across all the other nodes. If your machines have multiple network adapters, set each node's entry to the address on the network that you want the cluster to communicate across.
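
For example, a minimal /etc/hosts might look like the following (the addresses here are placeholders; substitute the ones on your cluster network):

192.168.0.11    [hostname_of_rabbitmq_node1]
192.168.0.12    [hostname_of_rabbitmq_node2]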

First, we need to ensure that the Erlang cookie is identical on both machines in the cluster; it doesn't matter whether you copy it from the first node to the second or vice versa, as long as they match. This ensures that only authorised nodes can act as part of the cluster. Below, I'm going to copy the cookie from the first node over to the second. On node one:

scp /var/lib/rabbitmq/.erlang.cookie root@[hostname_of_rabbitmq_node2]:/var/lib/rabbitmq/
chmod 600 /var/lib/rabbitmq/.erlang.cookie

You'll need to run the chmod command above on the second node as well, so that the permissions match. Also check that the copied file is owned by the rabbitmq user and group (ls -l /var/lib/rabbitmq/.erlang.cookie); if not, fix it with chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie on the second node.
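
In other words, on node two:

chmod 600 /var/lib/rabbitmq/.erlang.cookie
ls -l /var/lib/rabbitmq/.erlang.cookie
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie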

Now create an environment configuration file on node one (/etc/rabbitmq/rabbitmq-env.conf). Copy this to the second node, changing the hostname to that of the second node:

NODENAME=rabbit@[hostname_of_rabbit_mq_node1]
CONFIG_FILE=/etc/rabbitmq/rabbitmq
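
You can copy it across in the same way as the cookie (just remember to edit NODENAME on the second node afterwards):

scp /etc/rabbitmq/rabbitmq-env.conf root@[hostname_of_rabbitmq_node2]:/etc/rabbitmq/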

Now we need to create our rabbitmq.config file (assuming it does not already exist). It will be located at /etc/rabbitmq/rabbitmq.config:

[{rabbit,
  [{cluster_nodes, {['rabbit@{hostname_of_rabbitmq_node1}',
                     'rabbit@{hostname_of_rabbitmq_node2}'], disc}}]}].

Now copy the file to the second node:

scp /etc/rabbitmq/rabbitmq.config root@[hostname_of_rabbitmq_node2]:/etc/rabbitmq/

Now start both server nodes:

service rabbitmq-server start

Verify that the cluster is now running by executing the following on either node:
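
rabbitmqctl cluster_status

The output will be similar to: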

Cluster status of node 'rabbit@{hostname_of_rabbitmq_node1}' ...
  [{nodes,[{disc,['rabbit@{hostname_of_rabbitmq_node1}',
    'rabbit@{hostname_of_rabbitmq_node2}']}]},
   {running_nodes,['rabbit@{hostname_of_rabbitmq_node2}',
    'rabbit@{hostname_of_rabbitmq_node1}']},
   {partitions,[]}]
...done.