Featured Talk: The Future of Replication is Today: New Features in Practice

In the past few years, MySQL 5.6, MySQL 5.7 and MariaDB 10 have all been successful in implementing new features. For many DBAs, the “old way” of replicating data is comfortable, so taking action to implement these new features seems like a momentous leap rather than a simple step. But perhaps it isn’t that complicated…

Giuseppe Maxia, a Quality Assurance Architect at VMware and loyal member of the Percona Live Conference Committee, will be presenting “The Future of Replication is Today: New Features in Practice” at the Percona Live Data Performance Conference this September in Amsterdam.
Percona’s Community Manager, Tom Diederich, had an opportunity to catch up with Giuseppe last week and get an in-depth look at some of the items Giuseppe will be covering in his talk, in addition to getting his take on some of the hot sessions to hit while at the conference. This is how it went:

(Hint: Read to the end to find a special discount code) 

 

Tom: Your talk is titled, “The Future of Replication is today: new features in practice.” What are the top 3 areas in which replication options have improved in MySQL 5.6, MySQL 5.7, and MariaDB 10?
Giuseppe: Replication has been stagnant for over 10 years. Before MySQL 5.6, the only important change in the technology was the introduction of row-based replication in 2008. After that, we had to wait until 2013 to see global transaction identifiers (GTID) in MySQL 5.6, followed by the same feature, with a different implementation, in MariaDB 10 in 2014. GTID has been complemented, in both flavors, with crash-safe replication tables, a feature that guarantees a reliable resume of replication after a server failure. There is also the parallel applier, a minor feature that has been implemented in both MySQL 5.6 and MariaDB, and improved in the latest versions, although it seems to lack proper support for monitoring. The last feature introduced in MySQL 5.6 and MariaDB 10 is multi-source replication, i.e. the ability to replicate from multiple masters to a single slave. In both editions the implementation is quite simple, and not so different from what DBAs are used to doing for regular replication.
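
To give a flavor of that similarity (a minimal sketch with placeholder hosts and credentials, not taken from the talk), MariaDB 10 names each connection, while MySQL 5.7 attaches a channel to the familiar CHANGE MASTER statement:

-- MariaDB 10: one named connection per master
CHANGE MASTER 'master1' TO MASTER_HOST='host1', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_USE_GTID=slave_pos;
CHANGE MASTER 'master2' TO MASTER_HOST='host2', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_USE_GTID=slave_pos;
START ALL SLAVES;

-- MySQL 5.7: one channel per master (requires master_info_repository=TABLE
-- and relay_log_info_repository=TABLE)
CHANGE MASTER TO MASTER_HOST='host1', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_AUTO_POSITION=1 FOR CHANNEL 'master1';
START SLAVE FOR CHANNEL 'master1';
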
Tom: For DBAs, how difficult will it be to make the change from the “old way” of replicating data — to stop using the same comfortable features that have been around for several years — and put into practice some of the latest features?
Giuseppe: The adoption of new features can be deceptively simple. For example, GTID in MariaDB comes out of the box and its adoption could be as easy as running a backup followed by a restore, but it can produce unpleasant results if you try to combine this feature with multi-source replication without planning ahead. That said, the transition could be simpler than its counterpart in MySQL.
MySQL 5.6 and 5.7 require some reconfiguration to run GTID, and users can face unpleasant failures due to the complexity of the rules applying to this feature. They will need to read the manual thoroughly and test the deployment extensively before trusting an upgrade in production.
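
As a rough illustration of that reconfiguration (a minimal my.cnf sketch, not an exhaustive checklist), enabling GTID in MySQL 5.6 means setting all of the following together, on every server, with a restart:

[mysqld]
server-id = 1
log-bin = mysql-bin
log-slave-updates
gtid-mode = ON
enforce-gtid-consistency

# 5.6 refuses to start with gtid-mode=ON unless log-bin, log-slave-updates
# and enforce-gtid-consistency are all set; MySQL 5.7.6+ relaxes this and
# allows changing gtid_mode online.
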
For multi-source replication, the difficulties are, in my experience, hidden in the users’ expectations. When speaking about multi-source (or multi-master, as it is commonly called), many users have the mistaken expectation that they can easily insert anything into multiple masters as if they were doing it in a single server. However, the nature of asynchronous replication and the current implementation of multi-source topologies do not handle conflicts, and this fact will probably surprise and anger the early adopters.
Tom: What is still missing in replication technology? How can MySQL improve?
Giuseppe: There are two areas where the current implementation is lacking. The first one is monitoring data: while new features have been added to replication, not enough effort has been made to cover the monitoring needs. The current way of monitoring replication is hard-wired around the original replication feature, and little has been done to give the users a deeper view of what is going on. With the latest releases at our disposal, we can run parallel replication using multiple masters, and yet we have very little visibility into what goes on inside the dozens of threads that the new features can unchain inside a single slave. It’s like driving an F1 racing car with the dashboard of a Ford Model T. MySQL 5.7 has moved a few steps in that direction, with the new replication tables in performance_schema, but it is still a drop in the ocean compared to what we need.
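
For instance (a sketch against the MySQL 5.7 performance_schema; the column list is abbreviated), the new tables finally expose per-worker applier state:

-- MySQL 5.7: per-worker applier status, useful with a parallel applier
SELECT channel_name, worker_id, service_state, last_error_message
FROM performance_schema.replication_applier_status_by_worker;
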
The second area where replication is still too tied to its past is heterogeneous replication. While relational databases still dominate the front end of the web economy, its back end is largely run by different structures, such as Hadoop, MongoDB and Cassandra. Moving data back and forth between the relational storage and its growing siblings has become an urgent need. There have been a few sparks of change in this direction, but nothing that qualifies as a promising change yet.
Tom: Which other session(s) are you most looking forward to besides your own?
Giuseppe: I am always interested in the sessions that explain and discuss new features. I am most interested in the talks by Oracle engineers, who have been piling up features in recent years, and I am sure they have something more up their sleeve that will appear at the conference. I also eagerly attend sessions about complementary tools, which are usually highly educational and often give me new ideas.

Want to read more on the topic? Visit Giuseppe’s blog:

 MySQL Replication Monitoring 101

The Percona Live Data Performance Conference is the premier event for the rich and diverse MySQL, NoSQL and data in the cloud ecosystems in Europe. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and IoT (Internet of Things) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

This year’s conference will feature one day of tutorials and two days of keynote talks and breakout sessions related to MySQL, NoSQL and Data in the Cloud. Attendees will get briefed on the hottest topics, learn about building and maintaining high-performing deployments and hear from top industry leaders.

The Percona Live Europe Data Performance Conference will be September 21-23 at the Mövenpick Hotel Amsterdam City Centre.

Register using code “FeaturedTalk” and save 20 euros off of registration!

Hope to see you in Amsterdam!


Monitoring MySQL flow control in Percona XtraDB Cluster 5.6

Monitoring flow control in a Galera cluster is very important: if you do not, you will not understand why writes may sometimes be stalled. Percona XtraDB Cluster 5.6 provides two status variables for such monitoring: wsrep_flow_control_paused and wsrep_flow_control_paused_ns. Which one should you use?

What is flow control?

Flow control does not exist in regular MySQL replication; it is specific to Galera replication. It is simply the mechanism nodes use when they are not able to keep up with the write load: to keep replication synchronous, the node that is starting to lag instructs the other nodes to pause writes for some time so that it does not fall too far behind.

If you are not familiar with this notion, you should read this blog post.
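
A quick way to check both the trigger threshold and the counters involved (a sketch; gcs.fc_limit is embedded in the wsrep_provider_options string):

pxc3> show global variables like 'wsrep_provider_options'\G
pxc3> show global status like 'wsrep_flow_control%';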

Triggering flow control and graphing it

For this test, we’ll use a 3-node Percona XtraDB Cluster 5.6 setup. On node 3, we will adjust gcs.fc_limit so that flow control is triggered very quickly, and then we will lock the node:

pxc3> set global wsrep_provider_options="gcs.fc_limit=1";
pxc3> flush tables with read lock;

Now we will use sysbench to insert rows on node 1:

$ sysbench --test=oltp --oltp-table-size=50000 --mysql-user=root --mysql-socket=/tmp/pxc1.sock prepare

Because of flow control, writes will be stalled and sysbench will hang. So after some time, we will release the lock on node 3:

pxc3> unlock tables;

During the whole process, wsrep_flow_control_paused and wsrep_flow_control_paused_ns are recorded every second with mysqladmin ext -i1. We can then build a graph of the evolution of both variables:
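
The recording step can be as simple as this (a sketch; adjust the user and socket to your environment):

$ mysqladmin --socket=/tmp/pxc3.sock -u root ext -i1 \
    | grep --line-buffered wsrep_flow_control_paused \
    | while read line; do echo "$(date +%T) $line"; done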

[Graph: wsrep_flow_control_paused and wsrep_flow_control_paused_ns over time on node 3]

While we can clearly see when flow control was triggered on both graphs, it is much easier to tell when flow control stopped by looking at wsrep_flow_control_paused_ns. It would be even more obvious if we had had several timeframes when flow control was in effect.

Conclusion

Monitoring a server is obviously necessary if you want to be able to catch issues, but you need to look at the right metrics. So don’t be scared if you see that wsrep_flow_control_paused is not 0: it simply means that flow control has been triggered at some point since the server started up. If you want to know what is happening right now, prefer wsrep_flow_control_paused_ns.
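
Because wsrep_flow_control_paused_ns is a nanosecond counter that only ever grows, a “right now” figure is simply the delta between two readings divided by the length of the interval. A minimal sketch (credentials and socket omitted; assumes a local mysql client and bc):

$ before=$(mysql -BN -e "show global status like 'wsrep_flow_control_paused_ns'" | awk '{print $2}')
$ sleep 10
$ after=$(mysql -BN -e "show global status like 'wsrep_flow_control_paused_ns'" | awk '{print $2}')
$ echo "scale=4; ($after - $before) / (10 * 1000000000)" | bc   # fraction of the last 10s spent paused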


How to log slow queries on Slave in MySQL 5.0 with pt-query-digest

Working as Percona Support Engineers, we see lots of issues related to MySQL replication every day. One very common issue is slave lag. There are many reasons for slave lag, but a common one is that queries take more time on the slave than on the master. How do you check and log those long-running queries? In MySQL 5.1, the log-slow-slave-statements option was introduced, which you can enable on the slave to log slow queries coming through replication (a rough configuration sketch follows below). But what if you want to log slow queries on a slave running an earlier version, like MySQL 5.0? There is a good solution/workaround: pt-query-digest. How? Let’s take a look…
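
For reference, the MySQL 5.1+ server-side route mentioned above looks roughly like this in my.cnf (a sketch; in 5.1 log-slow-slave-statements is a start-up option only, so a restart is required):

[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow.log
long_query_time     = 1
log-slow-slave-statements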

If you want to log all queries that are running on the slave (including those run by the SQL thread), you can use pt-query-digest with the --processlist and --print (in pt-query-digest 2.1.9) or --output (in pt-query-digest 2.2.7) options and log all queries to a specific file. I have tested this in my local environment and it works.

You can start pt-query-digest like this on the slave:

$ /percona-toolkit-2.1.9/bin/pt-query-digest --processlist u=msandbox,p=msandbox,S=/tmp/mysql_sandbox34498.sock --print --no-report
OR
$ /percona-toolkit-2.2.7/bin/pt-query-digest --processlist u=msandbox,p=msandbox,S=/tmp/mysql_sandbox34498.sock --no-report --output=slowlog

Run some long-running queries on the master:

$ mysql -umsandbox -p --socket=/tmp/mysql_sandbox34497.sock
Enter password:
mysql> use percona
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> delete from test limit 5000000;
Query OK, 5000000 rows affected (1 min 54.33 sec)
mysql> delete from test limit 5000000;
Query OK, 5000000 rows affected (1 min 56.42 sec)

mysql>

and you’ll see output like this on the slave:

~/Downloads/percona-toolkit-2.1.9/bin$ ./pt-query-digest --processlist u=msandbox,p=msandbox,S=/tmp/mysql_sandbox34498.sock --print --no-report
# Time: 2014-03-18T12:10:57
# User@Host: system user[system user] @ []
# Query_time: 114.000000 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
use percona;
delete from test limit 5000000;
~/Downloads/percona-toolkit-2.2.7/bin$ pt-query-digest --processlist u=msandbox,p=msandbox,S=/tmp/mysql_sandbox34498.sock --no-report --output=slowlog
# Time: 2014-03-18T12:21:05
# User@Host: system user[system user] @ []
# Query_time: 117.000000 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 0
use percona;
delete from test limit 5000000;

You can also run pt-query-digest in the background, like a daemon, sending the output to a specific file such as slow.log for later review:

$ /percona-toolkit-2.1.9/bin/pt-query-digest --processlist u=msandbox,p=msandbox,S=/tmp/mysql_sandbox34498.sock --print --no-report > slow.log 2>&1 &

OR

$ /percona-toolkit-2.2.7/bin/pt-query-digest --processlist u=msandbox,p=msandbox,S=/tmp/mysql_sandbox34498.sock --no-report --output=slowlog > slow.log 2>&1 &

Here, the default output will look just like the slow query log. If we have master-master replication, where every master is also a slave, and we want to log only those statements that are executed by the SQL thread, then the --filter option can be used like this:

pt-query-digest --filter '$event->{user} eq "system user"' --no-report --output=slowlog

Since pt-query-digest --processlist polls 10 times per second (the default for the --interval option), it's not reliable to use for collecting complete query logs, because quick queries could fall between the polling intervals. And in any case, it won't measure query time with precision any better than 1/10th of a second. But if the goal is to identify queries that are very long-running, it should be adequate.
