MySQL NDB Cluster Replication Topologies (Part – II)

In the previous blog, we set up MySQL NDB Cluster replication between one source and one replica cluster. In this blog, we will discuss replication between one source and three replica clusters.

Note: Starting with MySQL version 8.0.21, the term “master” is being changed to “source” and the term “slave” to “replica”. So in this blog we will use the terms ‘source’ and ‘replica’ wherever applicable.

The main advantage of this type of topology is that it can serve ‘local’ reads in geographically distant areas, while also providing increased redundancy in case of issues.

Let’s create four MySQL NDB Clusters with the following environment, of which one will act as the ‘source’ cluster while the rest will be ‘replica’ clusters.

  • MySQL NDB Cluster version (Latest GA version)
  • 1 Management node
  • 4 Data nodes
  • 1 MySQLD
  • Configuration slots for up to 4 additional API nodes

Replication schema diagram:

MySQL (test01) ------+------> MySQL (test05)
(CLUSTER 1, Source1) |        (CLUSTER 2, Replica1)
                     |
                     +------> MySQL (test09)
                     |        (CLUSTER 3, Replica2)
                     |
                     +------> MySQL (test13)
                              (CLUSTER 4, Replica3)


test01: MySQL server host name of source cluster,   IP: 100.103.21.66
test05: MySQL server host name of replica1 cluster, IP: 100.103.21.65
test09: MySQL server host name of replica2 cluster, IP: 100.103.21.70
test13: MySQL server host name of replica3 cluster, IP: 100.103.21.61

Step 1: Let’s start the ‘source’ cluster:
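A minimal sketch of the startup sequence, assuming default binaries and placeholder paths (the config.ini and my.cnf locations, and the management node host ‘mgmt_host’, are assumptions); the same sequence applies to each replica cluster:

$ # on the management node host
$ ndb_mgmd -f /path/to/config.ini
$ # on each of the four data node hosts
$ ndbd -c mgmt_host:1186
$ # on the MySQL server host, with ndbcluster enabled in my.cnf
$ bin/mysqld --defaults-file=/path/to/my.cnf &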



Then start all the ‘replica’ clusters one by one. Start the first replica cluster:


Start the second replica cluster:


Start the third replica cluster:


Now all the clusters are up and running.
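Each cluster’s state can be verified from its management client, for example:

$ ndb_mgm -e show

The output should list the management node, the four data nodes, and the MySQL server (plus the free API node slots) as connected.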

Step 2:

a) Let’s clean up the old binary log files, if any exist:

From the source MySQL server, delete all existing binary log files (if any) with the 'reset master' command. The binary log status can be checked with the ‘show master status’ command.

mysql> reset master;
mysql> show master status;


b) Log in as the ‘root’ user on the source MySQL server. Then update the root user so that ‘root’ connecting from other hosts can reach this source MySQL server.

$ bin/mysql -u root -P 13010
mysql> UPDATE mysql.user SET host='%' where user='root' and host='localhost';
mysql> flush privileges;
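Before configuring replication, it is worth confirming that the source is reachable from a replica host; a quick check (using the source IP and port from this setup) could look like this:

$ bin/mysql -h 100.103.21.66 -u root -P 13010 -e "SELECT 1"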


On the first replica’s MySQL server, point replication at the source’s MySQL server:

$ bin/mysql -u root -P 13010
mysql> stop replica;
mysql> reset replica all;
mysql> CHANGE MASTER TO MASTER_HOST='100.103.21.66', MASTER_USER='root', MASTER_PORT=13010;
mysql> start replica;


Now check the state of the replication: 
mysql> show replica status\G



From the above output, we can see that replication has been successfully established between the source and the first replica NDB Cluster.
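Besides ‘show replica status’, the health of the replication channel can also be checked through the Performance Schema tables in MySQL 8.0, for example:

mysql> SELECT CHANNEL_NAME, SERVICE_STATE FROM performance_schema.replication_connection_status;
mysql> SELECT CHANNEL_NAME, SERVICE_STATE FROM performance_schema.replication_applier_status;

Both should report SERVICE_STATE as ‘ON’ when the IO (receiver) and SQL (applier) threads are running.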

Step 3:

Now let’s create a database ‘test1’ and a table ‘t1’, and run transactions from the source cluster. We will then check the same from the replicas’ MySQL servers.

Here, we will run transactions on table ‘t1’ continuously, inserting a large number of rows. The idea is to add the other two replicas at intervals while the transactions are ongoing, and then check whether all replicas end up with the same row count. I am inserting rows through an internal tool, but this can also be done with any script (a sketch of such a workload is shown below).

From the source’s MySQL server:
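A minimal sketch of such a workload (the table definition and the insert loop are assumptions; in this demo the rows were inserted with an internal tool):

mysql> CREATE DATABASE test1;
mysql> CREATE TABLE test1.t1 (id INT AUTO_INCREMENT PRIMARY KEY, payload VARCHAR(100)) ENGINE=NDBCLUSTER;
mysql> INSERT INTO test1.t1 (payload) VALUES (REPEAT('x', 100));

$ # keep growing the table in batches until interrupted
$ while true; do bin/mysql -u root -P 13010 -e "INSERT INTO test1.t1 (payload) SELECT payload FROM test1.t1 LIMIT 100000"; sleep 1; done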


Let’s check from the first replica’s MySQL server whether all the database objects and rows are being populated:
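For example, on the replica (the table definition above is an assumption):

mysql> SHOW DATABASES;
mysql> SELECT COUNT(*) FROM test1.t1;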


From the above output, we can see the database ‘test1’ and table ‘t1’ with rows being populated from the source cluster. Let’s check the replication status.


From the above output, we can see that rows are being applied in batches to the table ‘t1’.

Let’s start replication on the second replica’s MySQL server. Here, we need to log in as root and issue the change master command.

$ bin/mysql -u root -P 13010
mysql> CHANGE MASTER TO MASTER_HOST='100.103.21.66', MASTER_USER='root', MASTER_PORT=13010;
mysql> start replica;


Check the status of the second replica server:
mysql> show replica status\G



Let’s check whether the database, table, and rows are being populated on the second replica’s MySQL server.


From the above output, we can see that rows are being populated in table ‘t1’.

Let’s start replication on the third replica’s MySQL server. Again, we need to log in as root and issue the change master command.

$ bin/mysql -u root -P 13010
mysql> CHANGE MASTER TO MASTER_HOST='100.103.21.66', MASTER_USER='root', MASTER_PORT=13010;
mysql> start replica;


Let’s check the replication status:




Since replication has started, let’s check the database, table, and row status.


From the above output, we can see that rows have started populating into table ‘t1’ on the third replica’s MySQL server. So replication has been established successfully between the source MySQL server and all three replica MySQL servers.

Finally, let’s check the final row count on the source MySQL server and then on all three replica MySQL servers.

From the source MySQL server:
The output below shows that the final row count for table ‘t1’ on the source is ‘3458910’.
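The counts on the source and on each replica below can be verified with a plain count query, for example:

mysql> SELECT COUNT(*) FROM test1.t1;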


Let’s check the row count on the first replica MySQL server:


From the above output, we can see that the replica has read all the relay logs and the final row count is ‘3458910’, the same as the source’s row count. Let’s check the row count on the second replica MySQL server:


From the above output, we can see that the second replica MySQL server also has the same row count, i.e. ‘3458910’. Let’s check the row count on the third replica MySQL server.


From the above output, we can see that the third replica MySQL server also has the same row count, i.e. ‘3458910’. So all rows of table 't1' from the source have been replicated successfully to all three replica MySQL servers.

In this demo, I have shown row insertion as the only operation, but we can run all kinds of transactions (update, delete, DDL operations, etc.) from the source MySQL server, and the same changes will be available on all the replica MySQL servers.
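For illustration (using the hypothetical ‘t1’ definition sketched earlier), any of the following executed on the source should be reflected on all three replicas:

mysql> UPDATE test1.t1 SET payload = REPEAT('y', 100) WHERE id < 1000;
mysql> DELETE FROM test1.t1 WHERE id < 100;
mysql> ALTER TABLE test1.t1 ADD COLUMN note VARCHAR(32);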

This concludes our discussion on second replication topology in NDB Cluster.
