How to replicate databases between two Percona XtraDB Clusters - asynchronous MySQL replication with Juju


I have recently been looking for a solution to replicate Percona XtraDB Cluster databases across multiple geographically-distributed sites. My first approach was to use Galera synchronous replication, but I found that the official recommendation is to use MySQL asynchronous replication [1]. I followed one of the many available tutorials for setting up replication between two MySQL nodes, but quickly ran into issues: setting up replication between two Percona clusters is not the same as setting it up between two MySQL nodes. Therefore, I have decided to document all of those traps, so that you can avoid them during your deployment. Moreover, I have created a patch for the percona-cluster charm [2], so that you can fully automate your multi-site Percona deployment!

Deployment - manual

Assuming you have two Percona clusters (pxc1 and pxc2) already deployed, follow these steps to set up master-slave replication from pxc1 to pxc2:

1) Update the my.cnf file on each unit in each cluster to contain the following settings:

  • pxc1:

bind-address =
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
log_slave_updates = 1
server_id = 1
binlog_do_db = example1
binlog_do_db = example2

  • pxc2:

bind-address =
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
log_slave_updates = 1
server_id = 2
binlog_do_db = example1
binlog_do_db = example2

Note that the value of the server_id setting must be different for each cluster, but it must be the same within a cluster [3]. Also note that you have to create a separate binlog_do_db entry for each database you want to replicate.
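As a quick sanity check, the server_id rule above can be verified with a few lines of shell. The list of values below is a stand-in; in a real deployment you would collect the value from every unit of a cluster, e.g. with mysql -u root -p -N -e "SELECT @@server_id":

```shell
# Sanity check: server_id must be identical within a cluster (and differ
# between clusters). The list below is a stand-in for the values reported
# by each pxc1 unit.
ids="1 1 1"
unique=$(echo "$ids" | tr ' ' '\n' | sort -u | wc -l)
if [ "$unique" -eq 1 ]; then
  echo "server_id consistent within cluster"
else
  echo "server_id mismatch: $ids"
fi
```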

Once done, restart the mysql service:

# systemctl restart mysql

2) Create a replication user account on the pxc1 cluster side by executing the following commands from any unit:

$ mysql -u root -p

mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'' IDENTIFIED BY 'password';

mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'' IDENTIFIED BY 'password';

mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'' IDENTIFIED BY 'password';

Note that you have to execute the GRANT command once for each of the pxc2 cluster members, substituting the member's address in the host part.
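The three GRANTs only differ in the host part, so they can be generated in a loop. A small sketch, with made-up member addresses -- substitute the real IPs of your pxc2 units:

```shell
# Generate one GRANT statement per pxc2 member; the addresses are placeholders.
PXC2_MEMBERS="10.0.8.11 10.0.8.12 10.0.8.13"
sql=$(for host in $PXC2_MEMBERS; do
  echo "GRANT REPLICATION SLAVE ON *.* TO 'replication'@'$host' IDENTIFIED BY 'password';"
done)
echo "$sql"
# Feed the result to mysql on any pxc1 unit:  echo "$sql" | mysql -u root -p
```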

Finally, check the master status and note down the output, as it will be used in point 3:

mysql> SHOW MASTER STATUS;

+------------------+----------+-------------------+------------------+
| File             | Position | Binlog_Do_DB      | Binlog_Ignore_DB |
+------------------+----------+-------------------+------------------+
| mysql-bin.000001 |      618 | example1,example2 |                  |
+------------------+----------+-------------------+------------------+
1 row in set (0.00 sec)
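The file and position can also be captured in shell variables so they don't have to be copied by hand. In the sketch below the status line is a stand-in for the real query (mysql -u root -p -N -B -e "SHOW MASTER STATUS"):

```shell
# Stand-in for: mysql -u root -p -N -B -e "SHOW MASTER STATUS"
status="mysql-bin.000001 618 example1,example2"
file=$(echo "$status" | awk '{print $1}')
pos=$(echo "$status" | awk '{print $2}')
# These two values go straight into the CHANGE MASTER TO statement in point 3:
echo "master_log_file='$file', master_log_pos=$pos"
```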

3) Set up replication on the pxc2 cluster side by executing the following commands from any unit:

$ mysql -u root -p

mysql> STOP SLAVE;

mysql> CHANGE MASTER TO master_host='', master_port=3306, master_user='replication', master_password='password', master_log_file='mysql-bin.000001', master_log_pos=618;

mysql> START SLAVE;

Note that you have to use the file and position noted down in point 2.
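Once the slave has been started (START SLAVE;), it is worth verifying that both replication threads are running. A minimal check, using a stand-in for the real SHOW SLAVE STATUS\G output:

```shell
# Stand-in for: mysql -u root -p -e "SHOW SLAVE STATUS\G"
slave_status="Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0"
if echo "$slave_status" | grep -q "Slave_IO_Running: Yes" && \
   echo "$slave_status" | grep -q "Slave_SQL_Running: Yes"; then
  result="replication healthy"
else
  result="replication broken -- inspect Last_IO_Error / Last_SQL_Error"
fi
echo "$result"
```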

4) At this point you have master-slave replication working from pxc1 to pxc2. You can check it by logging in to any unit of pxc1 and executing the following command:

mysql> CREATE DATABASE example1;

The database is now visible on all units of both the pxc1 and pxc2 clusters:

mysql> SHOW DATABASES;

+--------------------+
| Database           |
+--------------------+
| example1           |
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.01 sec)

If you want to set up master-master replication, follow steps 2 and 3 again, but execute the commands on the opposite sides of the replication.
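For reference, the mirrored steps look like this as plain SQL. The hosts, binlog file and position are placeholders -- read the real values from your own deployment, this time running SHOW MASTER STATUS on pxc2:

```shell
# Placeholder SQL for the reverse (pxc2 -> pxc1) direction.
sql=$(cat <<'SQL'
-- on pxc2 (the new master), one GRANT per pxc1 member:
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'<pxc1-member>' IDENTIFIED BY 'password';
SHOW MASTER STATUS;
-- on pxc1 (the new slave), using the file/position reported above:
STOP SLAVE;
CHANGE MASTER TO master_host='<pxc2-host>', master_port=3306,
  master_user='replication', master_password='password',
  master_log_file='<binlog file>', master_log_pos=<position>;
START SLAVE;
SQL
)
echo "$sql"
```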

Deployment - with Juju

If you haven't heard about Juju yet, now is a perfect time. Juju is an application modelling tool based on so-called charms, which encapsulate the entire logic needed to install, configure and maintain applications. You can find more information about it at [4].

Let's start by bootstrapping a Juju controller on the local LXD [5] provider for testing purposes (if you can't install LXD on your computer, you can set up a VM with Ubuntu LTS and test from there):

$ juju bootstrap localhost lxd-controller

This will take a while. In the meantime you can read about the other providers (clouds) supported by Juju [6]. Once the controller is bootstrapped, you can start modelling. First, download the branch I created (it hasn't been merged into the upstream code yet):

$ cd /tmp

$ git clone

$ cd /tmp/charm-percona-cluster

$ git checkout 1776171

Then deploy two different clusters:

$ juju deploy --num-units 3 /tmp/charm-percona-cluster pxc1

$ juju deploy --num-units 3 /tmp/charm-percona-cluster pxc2

configure them (the VIPs should belong to the same subnet as the lxdbr0 interface):

$ juju config pxc1 root-password="changeme"

$ juju config pxc2 root-password="changeme"

$ juju config pxc1 vip=""

$ juju config pxc2 vip=""
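When picking the VIP values, any two free addresses from the lxdbr0 subnet will do. On the host, the subnet can be read with ip -4 -br addr show lxdbr0; the one below is made up:

```shell
# Hypothetical lxdbr0 subnet -- check yours with: ip -4 -br addr show lxdbr0
subnet="10.155.92"
vip1="$subnet.100"
vip2="$subnet.101"
echo "juju config pxc1 vip=\"$vip1\""
echo "juju config pxc2 vip=\"$vip2\""
```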

and deploy hacluster subordinate application for Pacemaker / Corosync management:

$ juju deploy hacluster hacluster-pxc1

$ juju deploy hacluster hacluster-pxc2

$ juju relate pxc1 hacluster-pxc1

$ juju relate pxc2 hacluster-pxc2

That's almost it! At this point you should have two different Percona clusters deployed. You can check the status by executing the following command:

$ juju status

Assuming that all units have reached the active status, you can enable asynchronous MySQL replication between pxc1 and pxc2. First, configure the clusters with the mandatory settings:

$ juju config pxc1 databases-to-replicate="example1,example2"

$ juju config pxc2 databases-to-replicate="example1,example2"

$ juju config pxc1 cluster-id=1

$ juju config pxc2 cluster-id=2

Then enable replication from pxc1 to pxc2 by executing the following command:

$ juju relate pxc1:master pxc2:slave

Voila! Wanna enable master-master replication? Here you go:

$ juju relate pxc2:master pxc1:slave

Don't you believe it's working? Just check it! It works like a charm. This is why I like Juju so much.

P.S.: Don't hesitate to visit my next post about setting up circular asynchronous MySQL replication between multiple geographically-distributed Percona XtraDB Clusters.







