MySQL NDB Cluster Replication Topologies (Part – IV)

In this blog, we will discuss how to set up MySQL NDB Cluster replication through the backup and restore method. This is a bit tricky but interesting. The scenario is this: a user has a standalone cluster up and running and later needs to set up replication with another, empty cluster without shutting down the existing cluster, i.e. with zero downtime. That is where the backup and restore method comes in handy.

Let’s create two MySQL NDB Clusters with the following environment. One will be termed the ‘source’ cluster and the other the ‘replica’ cluster.
  • MySQL NDB Cluster version (Latest GA version)
  • 1 Management Node
  • 4 Data Nodes
  • 1 MySQL Server (mysqld)
  • Configuration slots for up to 4 additional API nodes
Replication schema diagram:

CLUSTER 'A'                                 CLUSTER 'B'

MySQL (test01)  ----- replication ----->    MySQL (test05)
   (source)                                    (replica)

test01: MySQL server host name of source cluster, IP: 100.103.21.66
test05: MySQL server host name of replica cluster, IP: 100.103.21.65

Step 1:

A) Let’s start both Cluster 'A' and Cluster 'B'

Here we will not call them the ‘source’ and ‘replica’ clusters yet, as we have not planned any replication setup so far. So let’s just start both clusters normally:
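Below is a minimal sketch of the start-up commands, assuming the binaries live under '/export/home/saroj/mysql-tree/8.0.25/mysql' (the install path used later for ndb_restore) and that the management node of Cluster 'A' runs on cluster-test01; the config.ini and my.cnf locations are placeholders, not taken from the original setup:

shell> cd /export/home/saroj/mysql-tree/8.0.25/mysql
shell> bin/ndb_mgmd -f /path/to/config.ini --configdir=/path/to/mgmd-configdir    # management node
shell> bin/ndbmtd --ndb-connectstring=cluster-test01:1186                         # run once on each data node host
shell> bin/mysqld --defaults-file=/path/to/my.cnf &                               # SQL node (mysqld)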


Start the second cluster (Cluster 'B'):
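Cluster 'B' is started the same way on its own hosts; the only difference in the sketch above is the connect string, which for Cluster 'B' points at its management node, cluster-test05:1186 (the same address ndb_restore will use later).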


Now both clusters are up and running. I am assuming that the user knows how to enable the configuration variables that let a MySQL server act as a ‘source’. If not, please refer to my previous blogs on replication.
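For reference, here is a minimal sketch of the relevant [mysqld] options; the server-id values are arbitrary examples, while port 13010 and the 'test01-binlog' base name match what we will see later in this post:

# my.cnf on the Cluster 'A' SQL node (test01)
[mysqld]
ndbcluster
port      = 13010
server-id = 1                  # any unique, non-zero id
log-bin   = test01-binlog      # binary logging must be enabled on the source

# my.cnf on the Cluster 'B' SQL node (test05)
[mysqld]
ndbcluster
port      = 13010
server-id = 2                  # must differ from the source's server-id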

B) Log in as the ‘root’ user on the MySQL server of Cluster 'A'. Then update the root account so that the ‘root’ user can connect to this source MySQL server from other hosts.

From Cluster 'A' : (cluster-test01)

shell> bin/mysql -u root -P13010
mysql> UPDATE mysql.user SET host='%' WHERE user='root' AND host='localhost';
mysql> FLUSH PRIVILEGES;


C) Now let’s create a database ‘test1’ and a table ‘t1’, and insert rows into ‘t1’ continuously on Cluster 'A'.
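The exact table definition is not important for the replication setup, so the schema below is just an illustrative sketch; any NDB table will do:

mysql> CREATE DATABASE test1;
mysql> CREATE TABLE test1.t1 (
           id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
           data VARCHAR(64)
       ) ENGINE=NDBCLUSTER;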


Let’s insert rows into table ‘t1’. I am using a tool to insert rows; this can also be done through any script.
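For example, a simple shell loop like the one below (purely illustrative, and assuming the t1 sketch above) keeps inserting rows while we carry on with the next steps:

shell> while true; do
           bin/mysql -u root -P13010 -e "INSERT INTO test1.t1 (data) VALUES (UUID())"
       done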


D) Take a backup from Cluster 'A' while the row insertion is going on. If you are not aware of how to take a backup and restore it in NDB Cluster, please refer to my previous blog.

ndb_mgm> START BACKUP


The management client reports that the backup (backup id 1) has completed successfully.

E) Let’s create one more table ‘t2’ and insert rows into it on Cluster 'A'. The idea is that the backup image contains only part of the cluster data; after the full restore, the remaining data will arrive through replication.
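Again, the exact definition does not matter; an illustrative sketch:

mysql> CREATE TABLE test1.t2 (
           id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
           data VARCHAR(64)
       ) ENGINE=NDBCLUSTER;
mysql> INSERT INTO test1.t2 (data) VALUES (UUID());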


F) Let’s copy backup files from Cluster 'A' to Cluster 'B'

Since we have taken the backup of Cluster 'A', let’s scp (secure copy) the backup folder from Cluster 'A' to Cluster 'B'. I have created a backup path, ‘/export/home/saroj/mysql-tree/8.0.25/ndbd/node1/data4/’, on both Cluster 'A' and Cluster 'B'.

From Cluster 'A':

From the first host (cluster-test01):

The backup files have been created under the backup path ‘/export/home/saroj/mysql-tree/8.0.25/ndbd/node1/data4/’.


Let’s start the scp of the BACKUP folder to Cluster 'B':

shell> cd /export/home/saroj/mysql-tree/8.0.25/ndbd/node1/data4/
shell> scp -r BACKUP username@cluster-test05.dbad3iad.mysql2iad.oraclevcn.com:/export/home/saroj/mysql-tree/8.0.25/ndbd/node1/data4/


Repeat the above for the remaining BACKUP folders so that all backup files from Cluster 'A' reach Cluster 'B'.

G) Now let’s start NDB restore on Cluster 'B':

Since Cluster 'B' now has all the backup files from Cluster 'A', let’s start the restore.

Note: The restores can be run on Cluster 'B' either one at a time or all at once (in parallel).

From first host of Cluster 'B' (cluster-test05), let’s run the restore command:

shell> cd /export/home/saroj/mysql-tree/8.0.25/mysql
shell> bin/ndb_restore -n 33 -b 1 -m -r --disable-indexes --ndb-connectstring=cluster-test05:1186 --backup_path=/export/home/saroj/mysql-tree/8.0.25/ndbd/node1/data4/BACKUP/BACKUP-1


Start the restore on the second host:
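The command is the same as on the first host, minus the metadata options (-m --disable-indexes), which must be run only once. Node id 34 below is an assumption (only node ids 33 and 36 appear explicitly in this post), and the backup path assumes this host's BACKUP files were copied to the same location:

shell> bin/ndb_restore -n 34 -b 1 -r --ndb-connectstring=cluster-test05:1186 --backup_path=/export/home/saroj/mysql-tree/8.0.25/ndbd/node1/data4/BACKUP/BACKUP-1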


Start the restore on the third host:
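Similarly on the third host (node id 35 assumed):

shell> bin/ndb_restore -n 35 -b 1 -r --ndb-connectstring=cluster-test05:1186 --backup_path=/export/home/saroj/mysql-tree/8.0.25/ndbd/node1/data4/BACKUP/BACKUP-1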


Start the restore on the last host:
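And on the last host (node id 36, the same id used for the epoch restore in step H):

shell> bin/ndb_restore -n 36 -b 1 -r --ndb-connectstring=cluster-test05:1186 --backup_path=/export/home/saroj/mysql-tree/8.0.25/ndbd/node1/data4/BACKUP/BACKUP-1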


Then finish by rebuilding the indexes:
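Since the restore was started with --disable-indexes, the indexes now have to be rebuilt once, against any one set of backup files; a sketch:

shell> bin/ndb_restore -n 33 -b 1 --rebuild-indexes --ndb-connectstring=cluster-test05:1186 --backup_path=/export/home/saroj/mysql-tree/8.0.25/ndbd/node1/data4/BACKUP/BACKUP-1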


H) On Cluster 'B', let’s update the epoch info in the status table ‘ndb_apply_status’.

After the restore has finished, we need to know the epoch up to which this backup image contains data, so that we can ask Cluster 'A' for the corresponding binary log position. It is therefore necessary to record this info in the ‘ndb_apply_status’ table, which is done by passing the --restore-epoch (or -e) option to ndb_restore.

shell> bin/ndb_restore -n 36 -b 1 -e --ndb-connectstring=cluster-test05:1186 --backup_path=/export/home/saroj/mysql-tree/8.0.25/ndbd/node1/data4/BACKUP/BACKUP-1


I) From Cluster 'B', get the epoch details from the ‘ndb_apply_status’ table in the mysql database.

shell> bin/mysql -u root -P13010
mysql> use mysql;
mysql> select * from ndb_apply_status;

The epoch number from the backup has now been recorded in the status table; we will use it in the next step.


J) From Cluster 'A', get the binary log file name and position corresponding to the epoch number obtained from Cluster 'B' in the previous step (I), using the table 'ndb_binlog_index':

shell> bin/mysql -u root -P13010
mysql> use mysql;
mysql> select * from ndb_binlog_index where epoch > 3745211482111 order by epoch limit 1\G


The query returns the binlog file name 'test01-binlog.000002' and the log position '175479530' from the ndb_binlog_index table.

K) From Cluster 'B', let’s connect to Cluster 'A' using the binlog file name and log position obtained in the previous step (J).

shell> bin/mysql -u root -P13010
mysql> CHANGE REPLICATION SOURCE TO SOURCE_HOST='100.103.21.66', SOURCE_USER='root', SOURCE_PORT=13010, SOURCE_LOG_FILE='test01-binlog.000002', SOURCE_LOG_POS=175479530;
mysql> start replica;


L) Let’s check the state of the replication:

mysql> show replica status\G



Both the replica I/O and SQL threads are running (Replica_IO_Running and Replica_SQL_Running show 'Yes'), so replication has been successfully established from Cluster 'A' (source) to Cluster 'B' (replica).

M) Let’s check the row counts on both the source (Cluster 'A') and the replica (Cluster 'B') clusters:

From Source cluster (Cluster 'A'):
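For example, with simple count queries:

mysql> select count(*) from test1.t1;
mysql> select count(*) from test1.t2;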


On the source, table ‘t1’ has 1605888 rows while table ‘t2’ has 761780 rows.

From Replica cluster (Cluster 'B'):


On the replica, tables ‘t1’ and ‘t2’ show the same row counts as on the source cluster. This means the replica cluster is now successfully in sync with the source cluster.

N) Let’s check the status of the replica cluster once again:

mysql> show replica status\G


The replica has applied everything received so far and is waiting for more events from the source cluster.

So, at no point did we bring down the source cluster (Cluster 'A') while establishing replication with the replica cluster (Cluster 'B'); all the operations were performed while the source cluster was online.

This concludes our discussion of the fourth replication topology in MySQL NDB Cluster.
