MySQL NDB Cluster Replication Topologies (Part – I)

In this blog series, we will discuss the various MySQL NDB Cluster replication topologies, each with a demonstration. We will start with the simplest case: one source (formerly called "master") and one replica (formerly called "slave").

Note: Starting with MySQL 8.0.21, the term "master" is being changed to "source" and the term "slave" to "replica". So in this blog we will use the terms 'source' and 'replica' wherever applicable.

Let's create two MySQL NDB Clusters with the following environment, of which one will be termed the 'source' cluster and the other the 'replica' cluster. For now, let's keep the environment identical for both clusters. Later in this blog series, we will vary the environments and run replication between them.

  • MySQL NDB Cluster version (Latest GA version)
  • 1 Management node
  • 4 Data nodes
  • 1 mysqld (SQL node)
  • Configuration slots for up to 4 additional API nodes

Replication schema diagram:

   CLUSTER A                                    CLUSTER B

MySQL (test01)  ------------------------->  MySQL (test05)
    (source)                                   (replica)

test01: MySQL server host name of source cluster, IP:
test05: MySQL server host name of replica cluster, IP:

Step 1: Let’s start the ‘source’ cluster:

And, then start the ‘replica’ cluster:
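As a sketch, starting each cluster from the shell involves bringing up the management node, the data nodes, and the mysqld. The paths, hostnames, and config file locations below are illustrative assumptions, not the demo's actual values:

```shell
# On the management node host: start ndb_mgmd with the cluster config
ndb_mgmd -f /path/to/config.ini --configdir=/path/to/configdir

# On each of the 4 data node hosts: start the multi-threaded data node daemon
ndbmtd --ndb-connectstring=mgmt_host:1186

# On the SQL node host: start mysqld pointing at its config file
mysqld --defaults-file=/path/to/my.cnf &

# Verify from the management client that all nodes are connected
ndb_mgm -e "show"
```

The same sequence is repeated for the replica cluster with its own config files and hosts.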

Now both the clusters are up and running.

Step 2: Configure MySQL servers for source cluster

I assume the reader is familiar with replication fundamentals, or can refer to the MySQL documentation to understand the basics.

Let's discuss the two MySQL configuration parameters required to start replication: log_bin and ndb_log_bin.

log_bin (binary logging):

When binary logging is enabled, the server logs DDL and DML changes to the binary log. Before MySQL 8.0, binary logging was disabled by default, which means a user had to enable this variable (log_bin) before starting replication. From MySQL 8.0 onwards, binary logging is enabled by default in the MySQL Server.


ndb_log_bin (NDB binary logging):

When this configuration variable is enabled, any changes a user makes to NDB tables are written to the binary log. The setting has no effect if binary logging is not already enabled on the server via 'log_bin'. From NDB Cluster 8.0.16 onwards, ndb_log_bin defaults to 0 (false).

If log_bin=1 and ndb_log_bin=0, then each mysqld logs only what it is locally aware of, i.e. other mysqlds won't see the changes even if they are part of the same cluster. For NDB tables, only DDL statements are logged in the binlog of the mysqld where they are executed, while for non-NDB tables (e.g. InnoDB), both DDL and DML statements are logged in the binlog of the mysqld where the tables live.

If log_bin=1 and ndb_log_bin=1, then DDL statements executed on other mysqlds attached to the same cluster and affecting NDB tables are also logged, as are DML statements on NDB tables originating from any mysqld or API node.

In short, if the user is running MySQL NDB Cluster 8.0.16 or later, only 'ndb_log_bin' needs to be enabled, as binary logging is already on by default in the MySQL server. For NDB Cluster versions older than 8.0, the log_bin configuration variable must be enabled explicitly, as it is disabled by default.

The easiest way to set these variables is in the MySQL configuration file, i.e. the 'my.cnf' file, followed by a start/restart of the MySQL server.

As we are using the latest MySQL NDB Cluster (8.0.25), we only need to set 'ndb_log_bin' on the source's MySQL server.


In our demo, I have assigned a path for the binary log files through the configuration variable '--log-bin'; by default, all binary log files are created in the MySQL server's data directory. I have set all the required configuration variables in the 'my.cnf' file of the source MySQL server.

Both variables can either be set before the MySQL server starts (i.e. before the cluster starts) or be added later, in which case a MySQL server restart is needed for them to take effect, as these variables are not dynamic. In our demo, I have already added them to the 'my.cnf' file, so we don't have to restart the MySQL server.

Below is the source's MySQL server config file, i.e. 'my.cnf', where we have added the two config variables ndb-log-bin and log-bin.
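For reference, the relevant fragment of such a my.cnf might look like the following. The log-bin path and the server-id value are illustrative assumptions (a unique server-id per server is required for replication):

```ini
[mysqld]
ndbcluster                        # enable the NDB storage engine
ndb-log-bin=ON                    # write NDB table changes to the binary log
log-bin=/path/to/binlogs/binlog   # enable binary logging at a custom location
server-id=1                       # must differ between source and replica
```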

Step 3:

a) Let's clean up any old binary log files on the source:

From the source MySQL server, let's delete all existing binary log files, if any. This can be done with the 'reset master' command; after it executes, a new empty binary log file is created so that binary logging can restart. The current binary log file and position can be checked with the 'show master status' command. Users should run 'reset master' only when it is absolutely necessary, as it discards all binary log history.

mysql> reset master;
mysql> show master status;

b) Log in as the 'root' user on the source MySQL server. Then update the root account so that the 'root' user can connect to this source MySQL server from other hosts.

shell> bin/mysql -u root -P"13010"
mysql> UPDATE mysql.user SET host='%' WHERE user='root' AND host='localhost';
mysql> FLUSH PRIVILEGES;

In an upcoming blog in this series, we will see how to connect from the replica server using a new user with limited privileges rather than the 'root' user. For simplicity, I have used the 'root' account for this demonstration.
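For reference, such a dedicated replication user would be created along these lines (the user name and password here are placeholders, not values from the demo):

```sql
mysql> CREATE USER 'repl'@'%' IDENTIFIED BY '<password>';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
```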

Now, from the replica's MySQL server, let's connect and point replication at the source's MySQL server.

shell> bin/mysql -u root -P"13010"
mysql> stop replica;
mysql> reset replica all;
mysql> start replica;

Stop replica: stops the replication threads, if they were running.
Reset replica all: ensures that we start replication from a clean state. It clears the replication metadata repositories, deletes all relay log files, and starts a new relay log file.
Change master: ensures that the replica server connects to the source server with the required parameters such as source IP, user name, password, and port number, and reads data from the source server.
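In our setup, the CHANGE MASTER statement, issued between 'reset replica all' and 'start replica', would look something like this (the host and port follow the demo's source server; the password is a placeholder):

```sql
mysql> CHANGE MASTER TO
    ->   MASTER_HOST='test01',        -- source MySQL server host
    ->   MASTER_PORT=13010,           -- source MySQL server port
    ->   MASTER_USER='root',
    ->   MASTER_PASSWORD='<password>';
```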

Now check the state of the replication:
mysql> show replica status\G

From the output above, we can see that we have successfully created a replication channel between the two MySQL NDB Clusters. Both the Slave_IO and Slave_SQL threads are running.

Step 4:

Now let's verify the replication by creating a database, a table, and a few transactions on the source cluster. We will then check for the same from the replica's MySQL server.

From the source’s MySQL server:
Let's create a database 'test1' and a table 't1', and insert 14 rows into it.
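The statements on the source would look something like the sketch below. The column definitions are illustrative assumptions; note that the table must be created with ENGINE=NDB so that it is stored in the cluster and replicated through the NDB binlog:

```sql
mysql> CREATE DATABASE test1;
mysql> USE test1;
mysql> CREATE TABLE t1 (id INT PRIMARY KEY, name VARCHAR(50)) ENGINE=NDB;
mysql> INSERT INTO t1 VALUES (1,'row1'), (2,'row2'), (3,'row3');
       -- ... and so on, 14 rows in total
```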

Let’s check from the replica’s MySQL server:

Let's see whether all the database objects and rows are available on the replica cluster. Log in as root and check the row count:
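On the replica, the check is simply (database and table names follow the source; the port is the demo's replica mysqld port):

```sql
shell> bin/mysql -u root -P"13010"
mysql> SELECT COUNT(*) FROM test1.t1;
```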

From the output, we can see that the database 'test1' and table 't1' with 14 rows have been created on the replica cluster. Let's check the replication status once again:

The status output shows the message that the replica has read all the relay logs and is waiting for more events from the source MySQL server. That means all data from the source has been replicated successfully to the replica cluster.

This verifies that data created on the source cluster is seamlessly replicated to the replica cluster. This is a very simple way to set up replication between two NDB Clusters.

This concludes our discussion of the first replication topology in NDB Cluster.

