Setting Up a MySQL Cluster Using Docker

Dipto Chakrabarty
6 min read · Feb 8, 2023


In this blog I will take you through setting up a three-node MySQL InnoDB cluster using only Docker containers, an approach that can be implemented anywhere.

MySQL is one of the most prominent and popular database solutions adopted by companies for their applications and business needs.

A cluster is a collection of resources, in this case servers, that provides scalability and high availability. MySQL clusters can serve millions of users, handle high volumes of data, and provide real-time responses and agility.

All files required to follow this blog can be found at this link.

Architecture of the MySQL Cluster

The cluster has three nodes, or containers in this case:

A single primary node which has read-write privileges.

Two secondary nodes to which writes are replicated.

Setting up the code

To provision the MySQL cluster we first require three standalone MySQL servers, which in our case we will run as Docker containers.

To begin, we define a Dockerfile which will be used as our base image:

FROM mysql/mysql-server:8.0

COPY ./setup.sql /docker-entrypoint-initdb.d

EXPOSE 3306

The mysql/mysql-server:8.0 image is used as the base, and a SQL file is used to perform some initial setup. The file is copied into /docker-entrypoint-initdb.d, which ensures that its contents are executed when the container is first initialized.

CREATE USER 'clusteradmin'@'%' IDENTIFIED BY 'cladmin';

GRANT ALL PRIVILEGES ON *.* TO 'clusteradmin'@'%' WITH GRANT OPTION;

RESET MASTER;

To set up the cluster we create a user clusteradmin which has all privileges. This user is created on every node when the container is initialized.

Finally, a docker-compose file is used to create the Docker containers:

version: "3.3"
services:
  mysql-dev1:
    build: .
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - db-data1:/var/lib/mysql
    ports:
      - "3306:3306"

  mysql-dev2:
    build: .
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: password
    ports:
      - "3307:3306"
    volumes:
      - db-data2:/var/lib/mysql

  mysql-dev3:
    build: .
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: password
    ports:
      - "3308:3306"
    volumes:
      - db-data3:/var/lib/mysql

volumes:
  db-data1:
    driver: local
  db-data2:
    driver: local
  db-data3:
    driver: local

Three Docker containers have been set up, each with the root password set to password, a different port mapping to the host system, and mysql_native_password as the default authentication plugin.

Make sure to set a volume for each container, as this is where MySQL stores its data.

Performing the pre-checks

Start the three servers by running the docker compose command:

docker-compose up
# or, to run in the background:
docker-compose up -d

This starts up three Docker containers which will be our MySQL servers, while also building three images in parallel, one for each container.

To avoid creating three separate images, build the Docker image beforehand and reference it in the compose file instead of using build.
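A minimal sketch of that approach, using an example image tag (mysql-cluster-node is just an assumed name):

docker build -t mysql-cluster-node .   # build the image once from the Dockerfile

Then each service in docker-compose.yml references the prebuilt image:

services:
  mysql-dev1:
    image: mysql-cluster-node   # instead of build: .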

Exec into one of the nodes as the root user and view the user and host columns of mysql.user to ensure the clusteradmin user is present:

docker exec -it {container name} /bin/bash   # exec into the container

mysql -uroot -p'password'   # log in as the root user

SELECT user,host FROM mysql.user;   -- list all users and their hosts

Log in as the clusteradmin user to ensure the user has been set up with the correct credentials.
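For example, still inside the container:

mysql -uclusteradmin -p'cladmin' -e "SELECT CURRENT_USER();"   # should report clusteradmin@%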

Considering one of the nodes as the primary, use mysqlsh to initiate the cluster creation process.

Log in to the MySQL shell using the clusteradmin user:

mysqlsh -uclusteradmin -p'cladmin'

The first step is to check whether each node's configuration meets the requirements for an InnoDB cluster.

On the same shell, run the below command, replacing {container name} with the name of each Docker container; this works because the containers have network connectivity to each other.

dba.checkInstanceConfiguration("clusteradmin@{container name}:port")

From this single mysqlsh session the other nodes can also be checked by substituting their respective container names.
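For example, assuming the compose service names (mysql-dev1, mysql-dev2, mysql-dev3) are reachable as hostnames on the compose network, and using the in-container port 3306:

dba.checkInstanceConfiguration("clusteradmin@mysql-dev1:3306")   // prompts for the clusteradmin password
dba.checkInstanceConfiguration("clusteradmin@mysql-dev2:3306")
dba.checkInstanceConfiguration("clusteradmin@mysql-dev3:3306")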

Each container is then configured so that it can act as part of an InnoDB cluster.

On the same shell, run the below command, replacing {container name} with the name of each Docker container.

dba.configureInstance("clusteradmin@{container name}:port")

This will set up the needed server settings so that each container can act as an InnoDB cluster node.

The configureInstance command must be run once for each of the three containers with the respective container name; all three runs can be done from the same MySQL shell session.
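A sketch of those three runs, again assuming the compose service names resolve as hostnames:

dba.configureInstance("clusteradmin@mysql-dev1:3306")   // answer the prompts to persist the required settings
dba.configureInstance("clusteradmin@mysql-dev2:3306")
dba.configureInstance("clusteradmin@mysql-dev3:3306")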

Running checkInstanceConfiguration again now reports success.

Creating the cluster

On the MySQL shell, initiate cluster creation:

var cluster = dba.createCluster("{cluster name}")

Check cluster status

cluster.status()

We now have a single-node cluster, to which we need to add the other two containers.

Add another instance/container using the addInstance command:

cluster.addInstance("clusteradmin@{container name}:{port}")

This command has to be run for each of the other two containers to add them to the cluster alongside the primary node.
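For instance, if mysql-dev1 is the primary and the other service names resolve as hostnames (an assumption based on the compose file above), the two calls would look like this; MySQL Shell may prompt for a recovery method, and clone is the usual choice:

cluster.addInstance("clusteradmin@mysql-dev2:3306")
cluster.addInstance("clusteradmin@mysql-dev3:3306")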

Running cluster.status() after each step allows us to see the current state of the cluster.

The cluster.status() output shows a 2-node cluster after the first addition and a 3-node cluster after the second.

We now have a three-node MySQL setup on Docker with one primary and two secondaries. The primary is used for all write operations, which are replicated to the secondaries.
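One quick, optional way to confirm a node's role from a SQL prompt: in single-primary mode the secondaries are kept read-only.

SELECT @@super_read_only;   -- returns 1 on a secondary, 0 on the primary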

Features of the MySQL cluster

  1. Group Replication

As per the architecture, the primary acts as the node responsible for writes to the database, which are replicated to the secondaries.

Log in to the MySQL primary node and enter data, which will be replicated to the secondaries.

Create a new database on the primary node:

CREATE DATABASE developer;

Create a table to insert data into:

CREATE TABLE dev(name VARCHAR(20) NOT NULL, SLNO INT NOT NULL,
PRIMARY KEY(SLNO));

DESCRIBE dev;

Insert dummy data into the table
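A minimal example using the columns defined above (the values here are arbitrary):

INSERT INTO dev (name, SLNO) VALUES ('alice', 1), ('bob', 2);
SELECT * FROM dev;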

Now log in to the other two containers; the database and table, along with the data, will have been replicated to them.
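A quick way to verify, keeping the same {container name} placeholder used earlier, is to query one of the secondaries directly:

docker exec -it {container name} mysql -uclusteradmin -p'cladmin' -e "SELECT * FROM developer.dev;"   # rows inserted on the primary should appear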

2. Automatic failover

If one of the secondary nodes fails, the cluster stays up and running; if the primary node fails, a new primary is elected and operations continue.

The initial cluster status, from cluster.status(), shows the original primary and two secondaries online.

We go ahead and stop, and later restart, the primary node's container.
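Using the same {container name} placeholder as earlier, this amounts to:

docker stop {container name}    # simulate a primary failure
docker start {container name}   # bring the old primary back up later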

In this situation the stopped node moves to the MISSING state and another node is elected as the new primary.

Once the node comes back up, it rejoins the cluster as a secondary under the new primary.

Errors or debugging

In some cases it may be necessary to perform a cluster rescan after adding a node or after a failover.

Run the command

var cluster = dba.getCluster()  // if cluster var is not set

cluster.rescan()


Written by Dipto Chakrabarty

MS @CMU, Site Reliability Engineer, I talk about Cloud Distributed Systems. Tech Doctor making sure to diagnose your apps and keep them running smoothly in production.
