I have successfully installed and integrated MySQL Cluster with HAProxy and Keepalived to provide a scalable MySQL service to a cPanel server running on CentOS 6.3 64bit. As you may know, cPanel has a feature called “Setup Remote MySQL server” which we can use to access and control a remote MySQL server from cPanel.
This brings a big advantage: the cPanel server load is reduced tremendously, because the mysqld service and its resources are served from a cluster of servers. The following picture shows my architecture:
I will be using the following variables:
OS: CentOS 6.3 64bit
WHM/cPanel version: 11.34.0 (build 11)
MySQL root password: MhGGs4wYs
The Tricks
We will need to use the same MySQL root password on all servers, including the cPanel server
The cPanel server’s SSH key needs to be installed on all database servers to allow passwordless SSH login
All servers must have the same /etc/hosts entries
At least 4 servers are needed for MySQL Cluster (2 SQL/management/load balancer nodes and 2 data nodes)
mysql1 (active) and mysql2 (passive) will share a virtual IP managed by Keepalived
mysql1 and mysql2 will also act as the load balancers, redirecting MySQL traffic from the cPanel server to mysql1 and mysql2
MySQL Cluster only supports the ndbcluster storage engine. Databases will be created with ndbcluster by default
On mysql1 and mysql2, MySQL will listen on port 3307 because port 3306 will be used by HAProxy for load balancing
All Servers
1. In this post, I am going to turn off the firewall and SELinux on all servers:
$ service iptables stop
$ chkconfig iptables off
$ setenforce 0
$ sed -i.bak 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
2. Install ntp using yum to make sure all servers’ time is in sync:
$ yum install ntp -y
$ ntpdate -u my.pool.ntp.org
Prepare the cPanel Server
1. Let’s start by declaring all host names in /etc/hosts:
192.168.10.100   cpanel cpanel.mydomain.com
192.168.10.101   mysql1 mysql1.mydomain.com
192.168.10.102   mysql2 mysql2.mydomain.com
192.168.10.103   mysql-data1 mysql-data1.mydomain.com
192.168.10.104   mysql-data2 mysql-data2.mydomain.com
192.168.10.110   mysql mysql.mydomain.com   #Virtual IP for mysql service
2. Copy the /etc/hosts file to the other servers:
$ scp /etc/hosts mysql1:/etc
$ scp /etc/hosts mysql2:/etc
$ scp /etc/hosts mysql-data1:/etc
$ scp /etc/hosts mysql-data2:/etc
3. Set up the SSH key on the cPanel server. This will allow passwordless SSH between the cPanel server and the MySQL servers. Just press ‘Enter’ for all prompts.
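Since the next step uses ~/.ssh/id_dsa, I assume a DSA key pair is generated here:
$ ssh-keygen -t dsa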
4. Copy the SSH key to other servers:
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql1
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql2
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql-data1
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql-data2
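To quickly confirm that passwordless login works from the cPanel server, you can run something like:
$ for h in mysql1 mysql2 mysql-data1 mysql-data2; do ssh $h hostname; done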
5. Set up the MySQL root password in WHM. Log in to WHM > SQL Services > MySQL Root Password, enter the MySQL root password and click “Change Password”.
6. Add additional hosts in WHM > SQL Services > Additional MySQL Access Hosts so that the required hosts are allowed to access the MySQL cluster, as below:
Data Nodes (mysql-data1 and mysql-data2)
1. Download and install the MySQL Cluster storage package from this page:
$ cd /usr/local/src
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-storage-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-storage-7.1.25-1.el6.x86_64.rpm
2. Create a MySQL configuration file at /etc/my.cnf and add the following lines. This configuration tells the storage node to use mysql1 and mysql2 as the management nodes:
[mysqld]
ndbcluster
ndb-connectstring=mysql1,mysql2

[mysql_cluster]
ndb-connectstring=mysql1,mysql2
SQL Nodes (mysql1 and mysql2)
1. Install the required packages using yum:
$ yum install perl libaio* pcre* popt* openssl openssl-devel gcc make -y
2. Download all required packages for Keepalived, HAProxy and MySQL Cluster (management, tools, shared, client, server) from this site:
$ cd /usr/local/src
$ wget http://www.keepalived.org/software/keepalived-1.2.7.tar.gz
$ wget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.22.tar.gz
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-management-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-tools-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-shared-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-client-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-server-7.1.25-1.el6.x86_64.rpm
3. Extract and compile Keepalived:
$ tar -xzf keepalived-1.2.7.tar.gz
$ cd keepalived-*
$ ./configure
$ make
$ make install
4. Extract and compile HAProxy:
$ tar -xzf haproxy-1.4.22.tar.gz
$ cd haproxy-*
$ make TARGET=linux26 ARCH=x86_64 USE_PCRE=1
$ make install
5. Install the MySQL packages in the following order (management > tools > shared > client > server):
$ cd /usr/local/src
$ rpm -Uhv MySQL-Cluster-gpl-management-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-tools-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-shared-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-client-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-server-7.1.25-1.el6.x86_64.rpm
6. Create a new directory for MySQL Cluster. We also need to create the cluster configuration file config.ini underneath it:
$ mkdir /var/lib/mysql-cluster
$ vim /var/lib/mysql-cluster/config.ini
And add the following lines:
[ndb_mgmd default]
DataDir=/var/lib/mysql-cluster

[ndb_mgmd]
NodeId=1
HostName=mysql1

[ndb_mgmd]
NodeId=2
HostName=mysql2

[ndbd default]
NoOfReplicas=2
DataMemory=256M
IndexMemory=128M
DataDir=/var/lib/mysql-cluster

[ndbd]
NodeId=3
HostName=mysql-data1

[ndbd]
NodeId=4
HostName=mysql-data2

[mysqld]
NodeId=5
HostName=mysql1

[mysqld]
NodeId=6
HostName=mysql2
7. Create the MySQL configuration file at /etc/my.cnf and add the following lines:
[mysqld]
ndbcluster
port=3307
ndb-connectstring=mysql1,mysql2
default_storage_engine=ndbcluster

[mysql_cluster]
ndb-connectstring=mysql1,mysql2
Starting the Cluster
1. Start the MySQL Cluster management service:
For mysql1:
$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini --ndb-nodeid=1
For mysql2:
$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini --ndb-nodeid=2
2. Start the MySQL Cluster storage service on both data nodes (mysql-data1 & mysql-data2):
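The storage RPM provides the ndbd daemon; a minimal start, assuming the /etc/my.cnf created earlier points it at the management nodes, is:
$ ndbd   # add --initial only for the very first start of a fresh data node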
3. Start the MySQL service (mysql1 & mysql2):
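The server RPM installs an init script named mysql, so the service should start with:
$ service mysql start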
4. Log in to the mysql console and run the following commands (mysql1 & mysql2):
mysql> use mysql;
mysql> ALTER TABLE user ENGINE=ndbcluster;
mysql> ALTER TABLE db ENGINE=ndbcluster;
5. Check the db and user tables in the mysql database and make sure the output appears as below:
mysql> SELECT table_name, engine FROM INFORMATION_SCHEMA.TABLES WHERE table_schema = DATABASE();
+---------------------------+------------+
| table_name                | engine     |
+---------------------------+------------+
| user                      | ndbcluster |
| columns_priv              | MyISAM     |
| db                        | ndbcluster |
| event                     | MyISAM     |
| func                      | MyISAM     |
| general_log               | CSV        |
| help_category             | MyISAM     |
| help_keyword              | MyISAM     |
| help_relation             | MyISAM     |
| help_topic                | MyISAM     |
| host                      | MyISAM     |
| ndb_apply_status          | ndbcluster |
| ndb_binlog_index          | MyISAM     |
| plugin                    | MyISAM     |
| proc                      | MyISAM     |
| procs_priv                | MyISAM     |
| servers                   | MyISAM     |
| slow_log                  | CSV        |
| tables_priv               | MyISAM     |
| time_zone                 | MyISAM     |
| time_zone_leap_second     | MyISAM     |
| time_zone_name            | MyISAM     |
| time_zone_transition      | MyISAM     |
| time_zone_transition_type | MyISAM     |
+---------------------------+------------+
24 rows in set (0.00 sec)
6. Check the management status on mysql1. You should see output similar to below:
$ ndb_mgm -e show
Connected to Management Server at: mysql1:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @192.168.10.103  (mysql-5.1.66 ndb-7.1.25, Nodegroup: 0, Master)
id=4    @192.168.10.104  (mysql-5.1.66 ndb-7.1.25, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=1    @192.168.10.101  (mysql-5.1.66 ndb-7.1.25)
id=2    @192.168.10.102  (mysql-5.1.66 ndb-7.1.25)

[mysqld(API)]   2 node(s)
id=5    @192.168.10.101  (mysql-5.1.66 ndb-7.1.25)
id=6    @192.168.10.102  (mysql-5.1.66 ndb-7.1.25)
7. Change the MySQL root password to match the MySQL root password on the cPanel server (mysql1):
$ mysqladmin -u root password 'MhGGs4wYs'
8. Store the MySQL root password in root’s MySQL client options file, /root/.my.cnf, so we do not need to specify the password to access the mysql console (mysql1 & mysql2). Add the following lines:
[client]
user="root"
password="MhGGs4wYs"
9. Add a haproxy user without a password, to be used by HAProxy to check the availability of the real servers (mysql1):
mysql> GRANT USAGE ON *.* TO haproxy@'%';
10. Add the root user from any host so the cPanel server can access and control the MySQL cluster (mysql1):
mysql> GRANT USAGE ON *.* TO root@'%' IDENTIFIED BY 'MhGGs4wYs';
mysql> GRANT USAGE ON *.* TO root@'mysql1' IDENTIFIED BY 'MhGGs4wYs';
mysql> GRANT USAGE ON *.* TO root@'mysql2' IDENTIFIED BY 'MhGGs4wYs';
mysql> GRANT ALL PRIVILEGES ON *.* TO root@'%';
mysql> GRANT ALL PRIVILEGES ON *.* TO root@'mysql1';
mysql> GRANT ALL PRIVILEGES ON *.* TO root@'mysql2';
11. As the last step, we need to allow GRANT privileges for root@’%’ by running the following command in the mysql console (mysql1):
mysql> UPDATE mysql.user SET `Grant_priv` = 'Y' WHERE `User` = 'root';
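Because this modifies the grant table directly instead of using a GRANT statement, reload the privileges so the change takes effect immediately:
mysql> FLUSH PRIVILEGES;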
Configuring Virtual IP and Load Balancer (mysql1 & mysql2)
1. Configure HAProxy by creating a configuration file at /etc/haproxy.cfg and adding the following lines:
defaults
    log global
    mode http
    retries 2
    option redispatch
    maxconn 4096
    contimeout 50000
    clitimeout 50000
    srvtimeout 50000

listen mysql_proxy 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    option mysql-check user haproxy
    server mysql1 192.168.10.101:3307 weight 1
    server mysql2 192.168.10.102:3307 weight 1
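Before starting HAProxy later on, you can optionally verify the configuration syntax:
$ haproxy -c -f /etc/haproxy.cfg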
2. Next, we need to configure the virtual IP. Open /etc/sysctl.conf and add the following line to allow binding of non-local IPs:
net.ipv4.ip_nonlocal_bind = 1
And run the following command to apply the changes:
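$ sysctl -p   # reloads /etc/sysctl.conf so ip_nonlocal_bind takes effect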
3. Create the Keepalived configuration file at /etc/keepalived.conf and add the following lines:
For mysql1:
vrrp_script chk_haproxy {
script "killall -0 haproxy" # verify the pid is exist or not
interval 2 # check every 2 seconds
weight 2 # add 2 points of prio if OK
}
vrrp_instance VI_1 {
interface eth0 # interface to monitor
state MASTER
virtual_router_id 51 # Assign one ID for this route
priority 101 # 101 on master, 100 on backup
virtual_ipaddress {
192.168.10.110 # the virtual IP
}
track_script {
chk_haproxy
}
}
For mysql2:
vrrp_script chk_haproxy {
script "killall -0 haproxy" # verify the pid is exist or not
interval 2 # check every 2 seconds
weight 2 # add 2 points of prio if OK
}
vrrp_instance VI_1 {
interface eth0 # interface to monitor
state MASTER
virtual_router_id 51 # Assign one ID for this route
priority 100 # 101 on master, 100 on backup
virtual_ipaddress {
192.168.10.110 # the virtual IP
}
track_script {
chk_haproxy
}
}
4. Start HAProxy:
$ haproxy -D -f /etc/haproxy.cfg
5. Start Keepalived:
$ keepalived -f /etc/keepalived.conf
6. Add the following lines into /etc/rc.local to make sure Keepalived and HAProxy start on boot:
/usr/local/sbin/haproxy -D -f /etc/haproxy.cfg
/usr/local/sbin/keepalived -f /etc/keepalived.conf
7. Check that the virtual IP is up on mysql1:
$ ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.10.101/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.110/32 scope global eth0
8. Verify on mysql2 that Keepalived is running in backup mode:
$ tail /var/log/messages
Dec 14 12:08:56 mysql2 Keepalived_vrrp[3707]: VRRP_Instance(VI_1) Entering BACKUP STATE
9. Check that HAProxy is running on port 3306 and mysqld is running on port 3307:
$ netstat -tulpn | grep -e mysql -e haproxy
tcp    0    0 0.0.0.0:3306    0.0.0.0:*    LISTEN    3587/haproxy
tcp    0    0 0.0.0.0:3307    0.0.0.0:*    LISTEN    3215/mysqld
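As an optional sanity check from the cPanel server (using the hostnames defined in /etc/hosts earlier), you can connect through the virtual IP and confirm the request is proxied to one of the SQL nodes:
$ mysql -h mysql -P 3306 -u root -p'MhGGs4wYs' -e "SHOW DATABASES;"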
Setup Remote MySQL Server in cPanel Server
1. Go to WHM > SQL Services > Setup Remote MySQL server and enter the following details. Make sure the remote server address resolves to the virtual IP address set up in Keepalived on mysql1:
Remote server address (IP address or FQDN): mysql.mydomain.com
Remote SSH Port: 22
Select authentication method: Public Key (default)
Select an installed SSH Key: id_dsa
2. Wait for a while until the setup completes.
3. MySQL Cluster is now integrated with WHM/cPanel. You can verify this by opening phpMyAdmin in WHM at WHM > SQL Services > phpMyAdmin; you should see that you are connected to the MySQL Cluster, as in the screenshot below:
Testing
We can test our MySQL high-availability architecture by turning off the power completely on mysql1 or mysql2 and on mysql-data1 or mysql-data2 at the same time. You will notice that the MySQL service is still available from the cPanel point of view.
Here is phpMyAdmin for my test blog running on WordPress. You can see that the database was created with the ndbcluster engine:
I have not tested this architecture on any production server yet, so I cannot guarantee that all WHM/cPanel SQL functionalities work as expected. The following cPanel features have been tried and work well:
phpMyAdmin
cPanel MySQL features (MySQL Database and MySQL Database Wizard)
. . . . . . . .