Recently, we had an edge case where a MySQL slave went out-of-sync but could not be rebuilt from scratch. The slave was acting as a master server to some applications, and data was being written to it directly. This was a design error and is not recommended, but it happened. So how do you synchronize the data in this circumstance? This blog post describes the steps taken to recover from this situation. The tools used to recover the slave were pt-slave-restart, pt-table-checksum, pt-table-sync and mysqldiff.
Scenario
To illustrate this situation, a master/slave configuration was built, with sysbench running on the master server to simulate a general application workload. The environment used Percona Server 5.7.24-26 and sysbench 1.0.16.
Below are the sysbench commands to prepare and simulate the workload:
# Create Data
sysbench --db-driver=mysql --mysql-user=root --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox45008.sock --mysql-db=test --range_size=100 --table_size=5000 --tables=100 --threads=1 --events=0 --time=60 --rand-type=uniform /usr/share/sysbench/oltp_read_only.lua prepare

# Simulate Workload
sysbench --db-driver=mysql --mysql-user=root --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox45008.sock --mysql-db=test --range_size=100 --table_size=5000 --tables=100 --threads=10 --events=0 --time=6000 --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua --report-interval=1 run
With the environment set up, the slave server was stopped, and some operations were performed to desynchronize the slave and reproduce the problem.
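For illustration, statements along the lines below can produce a similar state. This is a hypothetical sketch, not the exact operations that were performed, and it assumes the slave accepts direct writes while replication is stopped:

-- Hypothetical statements executed directly on the slave while replication was stopped:
DELETE FROM test.sbtest8 WHERE id BETWEEN 1 AND 100;   -- rows go missing, leading to Error 1032 later
ALTER TABLE test.sbtest98 ADD COLUMN x INT, DROP INDEX k_98, ADD INDEX k_98 (k, x);   -- table structure diverges

-- Hypothetical statements executed on the master:
FLUSH BINARY LOGS;
PURGE BINARY LOGS TO 'mysql-bin.000007';   -- the slave's old position no longer exists, leading to Error 1236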
Fixing the issue
With the slave desynchronized, replication was restarted. Immediately, the error below appeared in SHOW SLAVE STATUS:
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
To recover the slave from this error, we had to point the slave to an existing binary log with a valid binary log position. To get a valid binary log position, the command shown below had to be executed on the master:
master [localhost] {msandbox} ((none)) > show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000007
         Position: 218443612
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.01 sec)
Then, a CHANGE MASTER command was run on the slave:
slave1 [localhost] {msandbox} (test) > change master to master_log_file='mysql-bin.000007', MASTER_LOG_POS=218443612;
Query OK, 0 rows affected (0.00 sec)

slave1 [localhost] {msandbox} (test) > start slave;
Query OK, 0 rows affected (0.00 sec)
Now the slave had a valid binary log file to read from, but since its data was inconsistent, it hit another error:
Last_SQL_Errno: 1032
Last_SQL_Error: Could not execute Delete_rows event on table test.sbtest8; Can't find record in 'sbtest8', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000005, end_log_pos 326822861
Working past the errors
Before fixing the inconsistencies, it was necessary to keep replication running and skip the errors. For this, the pt-slave-restart tool was used. The tool needs to be run on the slave server:
pt-slave-restart --user root --socket=/tmp/mysql_sandbox45008.sock --ask-pass
The tool skips errors and restarts the replication threads. Below is an example of the output of pt-slave-restart:
$ pt-slave-restart --user root --socket=/tmp/mysql_sandbox45009.sock --ask-pass
Enter password:
2019-02-22T14:18:01 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007  1996 1146
2019-02-22T14:18:02 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007  8698 1146
2019-02-22T14:18:02 S=/tmp/mysql_sandbox45009.sock,p=...,u=root mysql-relay.000007 38861 1146
Finding the inconsistencies
With the tool running in one terminal, the phase of checking for inconsistencies began. First things first, an object definition check was performed using the mysqldiff utility, which is part of MySQL Utilities. To execute the tool:
$ mysqldiff --server1=root:msandbox@localhost:48008 --server2=root:msandbox@localhost:48009 test:test --difftype=sql --changes-for=server2
And below are the differences found between the master and the slave:
1-) A table that doesn't exist on the slave
# WARNING: Objects in server1.test but not in server2.test:
#        TABLE: joinit
2-) A wrong table structure
# Comparing `test`.`sbtest98` to `test`.`sbtest98`   [FAIL]
# Transformation for --changes-for=server2:
ALTER TABLE `test`.`sbtest98` DROP INDEX k_98, DROP COLUMN x, ADD INDEX k_98 (k);
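A sketch of applying those recommendations directly on the slave follows. The joinit definition is not shown here; it would be copied from SHOW CREATE TABLE on the master:

-- On the master: get the definition of the missing table
SHOW CREATE TABLE test.joinit\G

-- On the slave: recreate the table with that definition, then apply the transformation suggested by mysqldiff
CREATE TABLE test.joinit ( ... );   -- definition copied from the master
ALTER TABLE `test`.`sbtest98` DROP INDEX k_98, DROP COLUMN x, ADD INDEX k_98 (k);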
After applying the recommendations on the slave (creating the missing table and altering the table structure), the object definitions were equal on both servers. The next step was to check data consistency. For this, pt-table-checksum was used to identify which tables were out-of-sync. The command was run on the master server:
$ pt-table-checksum -uroot -pmsandbox --socket=/tmp/mysql_sandbox48008.sock --replicate=percona.checksums --create-replicate-table --empty-replicate-table --no-check-binlog-format --recursion-method=hosts
And an output example:
01 master]$ pt-table-checksum --recursion-method dsn=D=percona,t=dsns --no-check-binlog-format --nocheck-replication-filter --host 127.0.0.1 --user root --port 48008 --password=msandbox
Checking if all tables can be checksummed ...
Starting checksum ...
 at /usr/bin/pt-table-checksum line 332.
Replica lag is 66 seconds on bm-support01.bm.int.percona.com.  Waiting.
Replica lag is 46 seconds on bm-support01.bm.int.percona.com.  Waiting.
Replica lag is 33 seconds on bm-support01.bm.int.percona.com.  Waiting.
            TS ERRORS  DIFFS     ROWS  DIFF_ROWS  CHUNKS SKIPPED    TIME TABLE
02-26T16:27:59      0      0     5000          0       1       0   0.037 test.sbtest1
02-26T16:27:59      0      0     5000          0       1       0   0.039 test.sbtest10
02-26T16:27:59      0      1     5000          0       1       0   0.033 test.sbtest100
02-26T16:27:59      0      1     5000          0       1       0   0.034 test.sbtest11
02-26T16:27:59      0      1     5000          0       1       0   0.040 test.sbtest12
02-26T16:27:59      0      1     5000          0       1       0   0.034 test.sbtest13
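Besides reading the DIFFS column, the out-of-sync tables can also be listed by querying the percona.checksums table on the slave, using the query suggested in the pt-table-checksum documentation:

SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (
 master_cnt <> this_cnt
 OR master_crc <> this_crc
 OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;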
Fixing the data inconsistencies
Analyzing the DIFFS column, it is possible to identify which tables were compromised. With this information, the pt-table-sync tool was used to fix these inconsistencies. The tool synchronizes MySQL table data efficiently, performing no-op changes on the master so that they are replicated and applied on the slave. The tool needs to be run on the slave server. Below is an example of the tool running:
$ pt-table-sync --execute --sync-to-master h=localhost,u=root,p=msandbox,D=test,t=sbtest100,S=/tmp/mysql_sandbox48009.sock
It is possible to perform a dry run before executing the changes, to check what the tool will apply:
$ pt-table-sync --print --sync-to-master h=localhost,u=root,p=msandbox,D=test,t=sbtest100,S=/tmp/mysql_sandbox48009.sock

REPLACE INTO `test`.`sbtest100`(`id`, `k`, `c`, `pad`) VALUES ('1', '1654', '97484653464-60074971666-42998564849-40530823048-27591234964-93988623123-02188386693-94155746040-59705759910-14095637891', '15000678573-85832916990-95201670192-53956490549-57402857633') /*percona-toolkit src_db:test src_tbl:sbtest100 src_dsn:D=test,P=48008,S=/tmp/mysql_sandbox48009.sock,h=127.0.0.1,p=...,t=sbtest100,u=root dst_db:test dst_tbl:sbtest100 dst_dsn:D=test,S=/tmp/mysql_sandbox48009.sock,h=localhost,p=...,t=sbtest100,u=root lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:17806 user:vinicius.grippa host:bm-support01.bm.int.percona.com*/;
REPLACE INTO `test`.`sbtest100`(`id`, `k`, `c`, `pad`) VALUES ('2', '3007', '31679133794-00154186785-50053859647-19493043469-34585653717-64321870163-33743380797-12939513287-31354198555-82828841987', '30122503210-11153873086-87146161761-60299188705-59630949292') /*percona-toolkit src_db:test src_tbl:sbtest100 src_dsn:D=test,P=48008,S=/tmp/mysql_sandbox48009.sock,h=127.0.0.1,p=...,t=sbtest100,u=root dst_db:test dst_tbl:sbtest100 dst_dsn:D=test,S=/tmp/mysql_sandbox48009.sock,h=localhost,p=...,t=sbtest100,u=root lock:1 transaction:1 changing_src:1 replicate:0 bidirectional:0 pid:17806 user:vinicius.grippa host:bm-support01.bm.int.percona.com*/;
After executing pt-table-sync, we recommend running pt-table-checksum again and checking that the DIFFS column shows 0 for all tables.
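As a sketch, the re-check can be limited to the tables that showed differences in the first run by adding the --tables option to the earlier command (here assuming test.sbtest100 was one of them):

$ pt-table-checksum -uroot -pmsandbox --socket=/tmp/mysql_sandbox48008.sock --replicate=percona.checksums --no-check-binlog-format --recursion-method=hosts --tables=test.sbtest100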
Conclusion
This blog post covered the kinds of issues that can occur when a slave goes out-of-sync, such as DDL operations, binary log purges, and DML operations. The process involves many steps and can take a long time to finish, especially on large databases. Note that it might take longer than a backup/restore. However, in situations like the one described above, it might be the only way to recover a slave.
—
Image based on Photo by Randy Fath on Unsplash