Percona Server 5.5.47-37.7 is now available

Percona is glad to announce the release of Percona Server 5.5.47-37.7 on January 12, 2016. Based on MySQL 5.5.47, including all the bug fixes in it, Percona Server 5.5.47-37.7 is now the current stable release in the 5.5 series.

Percona Server is open-source and free. Details of the release can be found in the 5.5.47-37.7 milestone on Launchpad. Downloads are available here and from the Percona Software Repositories.

Bugs Fixed:

  • Running OPTIMIZE TABLE or ALTER TABLE without the ENGINE clause would silently change table engine if enforce_storage_engine variable was active. This could also result in system tables being changed to incompatible storage engines, breaking server operation. Bug fixed #1488055.

Other bugs fixed: #1179451, #1524763, and #1530102.

Release notes for Percona Server 5.5.47-37.7 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

Read more at: http://www.mysqlperformanceblog.com/

Percona Server 5.6.28-76.1 is now available

Percona is glad to announce the release of Percona Server 5.6.28-76.1 on January 12, 2016. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.28, including all the bug fixes in it, Percona Server 5.6.28-76.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open-source and free – and this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release can be found in the 5.6.28-76.1 milestone on Launchpad.

Bugs Fixed:

  • Clustering secondary index could not be created on a partitioned TokuDB table. Bug fixed #1527730 (DB-720).
  • When enabled, super-read-only option could break statement-based replication while executing a multi-table update statement on a slave. Bug fixed #1441259.
  • Running OPTIMIZE TABLE or ALTER TABLE without the ENGINE clause would silently change table engine if enforce_storage_engine variable was active. This could also result in system tables being changed to incompatible storage engines, breaking server operation. Bug fixed #1488055.
  • Setting the innodb_sched_priority_purge variable (available only in debug builds) while purge threads were stopped would cause a server crash. Bug fixed #1368552.
  • A small buffer pool size could cause the XtraDB buffer flush thread to spin at 100% CPU. Bug fixed #1433432.
  • Enabling TokuDB with the ps_tokudb_admin script inside a Docker container would cause an error due to insufficient privileges, even when running as root. In order for this script to be usable inside Docker containers, this error has been changed to a warning that such a check is impossible. Bug fixed #1520890.
  • InnoDB status would start printing negative values for spin rounds per wait if the wait count, even though accounted as a signed 64-bit integer, did not fit into a signed 32-bit integer. Bug fixed #1527160 (upstream #79703).

Other bugs fixed: #1384595 (upstream #74579), #1384658 (upstream #74619), #1471141 (upstream #77705), #1179451, #1524763 and #1530102.

Release notes for Percona Server 5.6.28-76.1 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Read more at: http://www.mysqlperformanceblog.com/

MySQL is crashing: a support engineer’s point of view

In MySQL QA Episode #12, “MySQL is Crashing, now what?,” Roel demonstrated how to collect crash-related information that will help Percona discover the issue you are experiencing and fix it.

As a Support Engineer I (Sveta) am very happy to see this post – but as a person who understands writing better than recording – I’d like to have the same information in textual form. We discussed it, and decided to do a joint blog post. Hence, this post :)

If you haven’t seen the video yet, or you do not have any experience with gdb, core files and crashes, I highly recommend watching it first.

Once you have an idea of why crashes happen, what to do after one happens in your environment, and how to open a Support issue and/or a bug report, you’re ready for the next step: what information do you need to provide? Note that the more complete and comprehensive the information you provide, the quicker the evaluation and potential fix process will go – it’s a win-win situation!

First, we need the MySQL error log file. If possible, please send us the full error log file. Users often send only the part which they think is relevant, but the error log file can contain other information recorded before the crash happened: for example, records about table corruption, lack of disk space, issues with the InnoDB dictionary, etc.

If your error log is quite large, please note it will compress very well using a standard compression tool like gzip. If for some reason you cannot send the full error log file, please send all lines written after the words “mysqld: ready for connections” (as seen the last time before the actual crash) until the end of the error log file (alternatively, you can also search for lines starting with the word “Version:”). Or, if you use scripts (or mysqld_safe) which automatically restart MySQL Server in case of disaster, search for the server start just before the crash.
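
If you want to script this, here is a minimal sketch (the log path is an example) that captures everything from the last “ready for connections” line to the end of the log and compresses it:

# Print everything from the last "ready for connections" line to the
# end of the error log, then compress the snippet for sending.
tac /var/log/mysqld.log | sed '/ready for connections/q' | tac > crash_snippet.log
gzip -c crash_snippet.log > crash_snippet.log.gz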

An example which includes an automatic restart as mentioned above:

2015-08-03 14:24:03 9911 [Note] /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld: ready for connections.
Version: '5.6.25-73.1-log'  socket: '/tmp/mysql_sandbox21690.sock'  port: 21690  Percona Server (GPL), Release 73.1, Revision 07b797f
2015-08-03 14:24:25 7f5b193f9700 InnoDB: Buffer pool(s) load completed at 150803 14:24:25
11:25:12 UTC - mysqld got signal 4 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Please help us make Percona Server better by reporting any
bugs at http://bugs.percona.com/
key_buffer_size=268435456
read_buffer_size=131072
max_used_connections=1
max_threads=216
thread_count=1
connection_count=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 348059 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x40000
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(my_print_stacktrace+0x2e)[0x8dd38e]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(handle_fatal_signal+0x491)[0x6a5dc1]
/lib64/libpthread.so.0(+0xf890)[0x7f5c58ac8890]
/lib64/libc.so.6(__poll+0x2d)[0x7f5c570fbc5d]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(_Z26handle_connections_socketsv+0x1c2)[0x5f64c2]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(_Z11mysqld_mainiPPc+0x1b5d)[0x5fd87d]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f5c57040b05]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld[0x5f10fd]
You may download the Percona Server operations manual by visiting
http://www.percona.com/software/percona-server/. You may find information
in the manual which will help you identify the cause of the crash.
150803 14:25:12 mysqld_safe Number of processes running now: 0
150803 14:25:12 mysqld_safe mysqld restarted
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld: /lib64/libssl.so.1.0.0: no version information available (required by /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld)
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld: /lib64/libcrypto.so.1.0.0: no version information available (required by /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld)
2015-08-03 14:25:12 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2015-08-03 14:25:12 0 [Note] /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld (mysqld 5.6.25-73.1-log) starting as process 10038 ...
2015-08-03 14:25:12 10038 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 50005)
2015-08-03 14:25:12 10038 [Warning] Buffered warning: Changed limits: max_connections: 214 (requested 10000)
2015-08-03 14:25:12 10038 [Warning] Buffered warning: Changed limits: table_open_cache: 400 (requested 4096)
2015-08-03 14:25:12 10038 [Note] Plugin 'FEDERATED' is disabled.
2015-08-03 14:25:12 10038 [Note] InnoDB: Using atomics to ref count buffer pool pages
2015-08-03 14:25:12 10038 [Note] InnoDB: The InnoDB memory heap is disabled
2015-08-03 14:25:12 10038 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2015-08-03 14:25:12 10038 [Note] InnoDB: Memory barrier is not used
2015-08-03 14:25:12 10038 [Note] InnoDB: Compressed tables use zlib 1.2.3
2015-08-03 14:25:12 10038 [Note] InnoDB: Using Linux native AIO
2015-08-03 14:25:12 10038 [Note] InnoDB: Using CPU crc32 instructions
2015-08-03 14:25:12 10038 [Note] InnoDB: Initializing buffer pool, size = 4.0G
2015-08-03 14:25:13 10038 [Note] InnoDB: Completed initialization of buffer pool
2015-08-03 14:25:13 10038 [Note] InnoDB: Highest supported file format is Barracuda.
2015-08-03 14:25:13 10038 [Note] InnoDB: The log sequence numbers 514865622 and 514865622 in ibdata files do not match the log sequence number 514865742 in the ib_logfiles!
2015-08-03 14:25:13 10038 [Note] InnoDB: Database was not shutdown normally!
2015-08-03 14:25:13 10038 [Note] InnoDB: Starting crash recovery.
2015-08-03 14:25:13 10038 [Note] InnoDB: Reading tablespace information from the .ibd files...
2015-08-03 14:25:14 10038 [Note] InnoDB: Restoring possible half-written data pages
2015-08-03 14:25:14 10038 [Note] InnoDB: from the doublewrite buffer...
InnoDB: Last MySQL binlog file position 0 150866, file name mysql-bin.000006
2015-08-03 14:25:16 10038 [Note] InnoDB: 128 rollback segment(s) are active.
2015-08-03 14:25:16 10038 [Note] InnoDB: Waiting for purge to start
2015-08-03 14:25:16 10038 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.25-rel73.1 started; log sequence number 514865742
2015-08-03 14:25:16 7f67ceff9700 InnoDB: Loading buffer pool(s) from .//ib_buffer_pool
2015-08-03 14:25:16 10038 [Note] Recovering after a crash using mysql-bin
2015-08-03 14:25:16 10038 [Note] Starting crash recovery...
2015-08-03 14:25:16 10038 [Note] Crash recovery finished.
2015-08-03 14:25:17 10038 [Note] RSA private key file not found: /home/sveta/sandboxes/rsandbox_Percona-Server-5_6_25/master/data//private_key.pem. Some authentication plugins will not work.
2015-08-03 14:25:17 10038 [Note] RSA public key file not found: /home/sveta/sandboxes/rsandbox_Percona-Server-5_6_25/master/data//public_key.pem. Some authentication plugins will not work.
2015-08-03 14:25:17 10038 [Note] Server hostname (bind-address): '127.0.0.1'; port: 21690
2015-08-03 14:25:17 10038 [Note]   - '127.0.0.1' resolves to '127.0.0.1';
2015-08-03 14:25:17 10038 [Note] Server socket created on IP: '127.0.0.1'.
2015-08-03 14:25:17 10038 [Warning] 'proxies_priv' entry '@ [email protected]' ignored in --skip-name-resolve mode.
2015-08-03 14:25:17 10038 [Note] Event Scheduler: Loaded 0 events
2015-08-03 14:25:17 10038 [Note] /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld: ready for connections.
Version: '5.6.25-73.1-log'  socket: '/tmp/mysql_sandbox21690.sock'  port: 21690  Percona Server (GPL), Release 73.1, Revision 07b797f

Usually the error log file contains the actual query which caused the crash. If it does not, and you know the query (for example, if your application logs errors / query problems), please send us this query too. Additionally, if possible, include the CREATE TABLE statements for any tables mentioned in the query. Working with the query is actually the first thing you can do to resolve the issue: try to run it (on a non-production/test server which is as close a copy of your production server as possible) to ensure it crashes MySQL Server consistently. If so, you can try to create a temporary workaround by avoiding this kind of query in your application.

If you have doubts as to which query caused the crash, but have the general query log turned ON, you can use the parse_general_log.pl utility from percona-qa to create a potential test case. Simply execute:

$ sudo yum install bzr
$ cd ~
$ bzr branch lp:percona-qa
$ cp /path_that_contains_your_general_log/your_log_file.sql ~
$ ~/percona-qa/parse_general_log.pl -i./your_log_file.sql -o./output.sql

And subsequently execute output.sql against mysqld on a non-production test server to see if a crash is produced. Alternatively, you may mail us the output.sql file (provided your company privacy etc. policies allow for this). If you want to try and reduce the testcase further, please see QA Episode #7 on reducing testcases.

The next thing we need is a backtrace. You usually have a simple backtrace showing in the error log directly after the crash. An example (extracted from an error log) of what this looks like:

stack_bottom = 0 thread_stack 0x40000
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(my_print_stacktrace+0x2e)[0x8dd38e]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(handle_fatal_signal+0x491)[0x6a5dc1]
/lib64/libpthread.so.0(+0xf890)[0x7f5c58ac8890]
/lib64/libc.so.6(__poll+0x2d)[0x7f5c570fbc5d]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(_Z26handle_connections_socketsv+0x1c2)[0x5f64c2]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(_Z11mysqld_mainiPPc+0x1b5d)[0x5fd87d]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f5c57040b05]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld[0x5f10fd]

Note that the above backtrace is mangled. You can send us the file like this (we can demangle it). However, if you want to work with it yourself more comfortably you can unmangle it with the help of the c++filt utility:

[email protected]:~/sandboxes/rsandbox_Percona-Server-5_6_25> cat master/data/msandbox.err | c++filt
...
terribly wrong...
stack_bottom = 0 thread_stack 0x40000
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(my_print_stacktrace+0x2e)[0x8dd38e]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(handle_fatal_signal+0x491)[0x6a5dc1]
/lib64/libpthread.so.0(+0xf890)[0x7f5c58ac8890]
/lib64/libc.so.6(__poll+0x2d)[0x7f5c570fbc5d]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(handle_connections_sockets()+0x1c2)[0x5f64c2]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld(mysqld_main(int, char**)+0x1b5d)[0x5fd87d]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f5c57040b05]
/home/sveta/SharedData/Downloads/5.6.25/bin/mysqld[0x5f10fd]
...

Now the backtrace looks much nicer. When sending error log reports to us, please do not use c++filt on them before sending. We have a list of known bugs, and to scan the known bugs list we need to receive the error log unaltered.

You can also turn core files ON. Core files are memory dump files, created when a process crashes. They are very helpful for debugging, because they contain not only the backtrace of the crashing thread, but backtraces of all threads, and much of what was in memory at the time the crash occurred.

Sidenote: Please note it is always a good idea to have the debuginfo package (for example Percona-Server-56-debuginfo.x86_64 from the Percona Repository) installed. This package provides the debugging symbols for Percona Server (there are similar packages for other distributions) and ensures that stack traces are more readable and contain more information. It is important to ensure that you have the right package version etc., as symbols are different for each version of mysqld. If you have installed Percona Server from our repository, you can simply install the debuginfo package: the version will be correct, and the package will be auto-updated when Percona Server is updated.

By default the MySQL server does not create core files. To let it do so, you can follow instructions from the “GDB Cheat sheet” (page 2 under header ‘Core Files Cheat Sheet’). In short:

  • Add the option core-file under the [mysqld] section of your configuration file (a sketch of this change follows this list)
  • Tune your OS options, so it allows mysqld to create core files as described in the cheat sheet
    echo "core.%p.%e.%s" > /proc/sys/kernel/core_pattern
    ulimit -c unlimited
    sudo sysctl -w fs.suid_dumpable=2
    Note: some systems use ‘kernel’ instead of ‘fs’: use kernel.suid_dumpable=2 instead if you get key or file warnings/errors.
  • Restart the MySQL server
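
A minimal sketch of the configuration change from the first bullet (the /etc/my.cnf path is an example; use your actual configuration file):

# Add the core-file option under [mysqld]; MySQL merges repeated sections.
sudo tee -a /etc/my.cnf <<'EOF'
[mysqld]
core-file
EOF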

Besides the core file which is generated by the MySQL server, you can also set up the operating system to dump a core file. These are two different core files (for a single crash of the mysqld binary), and the amount of information contained within may differ. The procedure above shows how to set up the one for the MySQL Server alone.

If you would like your operating system to dump a core file as well, please see the MySQL QA Episode #12 video. Also, please note that changing the ulimit and fs.suid_dumpable settings may alter the security of your system. Please read more about these options online before using them or leaving them permanently enabled on a production system.

Once a core file is generated, you can use the GDB utility to debug the core file (also called a ‘coredump’). GDB allows you to better resolve backtraces (also called ‘stack traces’ or ‘stacks’), for example by taking a backtrace of all threads instead of only the crashing thread. This is of course better than the single backtrace available in the error log file. To use GDB, you need to first start it:

gdb /path_to_mysqld /path_to_core

/path_to_core is usually your data directory (for coredumps produced by mysqld as a result of using the --core-file option in your my.cnf file), or sometimes the same directory where the crashing binary is (for coredumps produced by the OS) – though you can specify an alternate fixed location for OS coredumps, as described in the cheat sheet. Note that OS-generated dumps are sometimes written with very few privileges, so you may have to use chown/chmod/sudo to access them.

Once you’re in GDB, and all looks fine, run the commands bt (backtrace) and thread apply all bt (get a backtrace for all threads) to get the stack traces. bt should more or less match the backtrace seen in the error log, but sometimes it may differ.

For us, ideally you would run the following commands in GDB (as seen in the cheat sheet):

set trace-commands on
set pagination off
set print pretty on
set print array on
set print array-indexes on
set print elements 4096
set logging file gdb_standard.txt
set logging on
thread apply all bt
set logging off
set logging file gdb_full.txt
set logging on
thread apply all bt full
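
If you prefer to run these non-interactively, here is a sketch using GDB’s batch mode (the mysqld and core paths are placeholders, as above):

# Save the commands above to a file and let GDB execute them in batch
# mode; gdb_standard.txt and gdb_full.txt are written to the current
# directory.
cat > gdb_cmds.txt <<'EOF'
set trace-commands on
set pagination off
set print pretty on
set print array on
set print array-indexes on
set print elements 4096
set logging file gdb_standard.txt
set logging on
thread apply all bt
set logging off
set logging file gdb_full.txt
set logging on
thread apply all bt full
EOF
gdb --batch -x gdb_cmds.txt /path_to_mysqld /path_to_core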

After you run these commands and have exited (quit) GDB, please send us the resulting ./gdb_standard.txt and ./gdb_full.txt files.

Finally, we would be happy to receive the actual core file from you. In terms of security and privacy, please note that a core file often contains fragments, or sections, or even the full memory of your server.

However, a core file without mysqld is useless, so please include the mysqld binary together with the core file. If you use our compiled binaries you can also specify the exact package and file name which you downloaded, but if you use a self-compiled version of the server, the mysqld binary is required for us to resolve the backtrace and other necessary information (like variables) from your core file. Generally speaking, it’s easier just to send mysqld along.

Also, it would be really nice if you could send us the library files which are dynamically linked with the mysqld you use. You can get them with the ldd_files.sh tool from the percona-qa suite. Just create a temporary directory, copy your mysqld binary to it, and run the tool on it:

[email protected]:~/tmp> wget http://bazaar.launchpad.net/~percona-core/percona-qa/trunk/download/head:/ldd_files.sh-20150713030145-8xdk0llrd3skfsan-1/ldd_files.sh
[email protected]:~/tmp> mkdir tmp
[email protected]:~/tmp> cd tmp/
[email protected]:~/tmp/tmp> cp /home/sveta/SharedData/Downloads/5.6.25/bin/mysqld . # Copy of your mysqld
[email protected]:~/tmp/tmp> ../ldd_files.sh mysqld # Run the tool on it
cp: cannot stat ‘./mysqld: /lib64/libssl.so.1.0.0: no version information available’: No such file or directory # Ignore
cp: cannot stat ‘./mysqld: /lib64/libcrypto.so.1.0.0: no version information available’: No such file or directory # Ignore
[email protected]:~/tmp/tmp> ls
ld-linux-x86-64.so.2 libaio.so.1 libcrypto.so.1.0.0 libcrypt.so.1 libc.so.6 libdl.so.2 libgcc_s.so.1 libm.so.6 libpthread.so.0 librt.so.1 libssl.so.1.0.0 libstdc++.so.6 libz.so.1 mysqld # Files to supply in combination with mysqld

These library files are needed in case some of the frames from the stack trace are system calls, so that our developers can resolve/check those frames too.

Summary

If you hit a crash, please send us (in order of preference, but even better ‘all of these’):

  • The error log file (please send it unaltered – i.e. before c++filt was executed – which allows us to scan for known bugs)
  • The crashing query (from your application logs and/or extracted from the core file – ref the query extraction blog post)
    • Please include the matching CREATE TABLE statements
  • A resolved backtrace (and/or preferably the ./gdb_standard.txt and ./gdb_full.txt files)
  • The core file together with the mysqld binary and preferably the ldd files

Thank you!

 

The post MySQL is crashing: a support engineer’s point of view appeared first on MySQL Performance Blog.

Read more at: http://www.mysqlperformanceblog.com/

Percona Live Europe 2015! Call for speakers; registration open

Percona Live is moving from London to Amsterdam this year and the event is also expanding to three full days. Percona Live Europe 2015, September 21-23, will be at the Mövenpick Hotel Amsterdam City Centre. The call for speakers and Super Saver registration are now open. Hurry though, because the deadline for submitting a speaking proposal is June 21st and Super Saver registration ends July 5th!

This year’s conference will feature one day of tutorials and two days of keynote talks and breakout sessions related to MySQL, NoSQL and Data in the Cloud. You’ll get briefed on the hottest topics, learn about operating a high-performing deployment and hear from top-industry leaders describe the future of the ecosystem – encompassing MySQL, MariaDB, Percona Server, MongoDB (and more). Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

Have something to say? Why not lead a breakout session or a tutorial?

Breakout sessions are 50 minutes including a Q&A. Tutorial sessions focus on an immediate and practical application of in-depth MySQL and NoSQL knowledge. Tutorial speakers should assume that attendees will have laptops to work through detailed and potentially hands-on presentations. Tutorials are typically three hours long including a Q&A, however, if you have content for a full day, submissions for 6-hour-long tutorials are also being accepted. If your tutorial or breakout session is approved, you’ll receive a complimentary full-conference pass.

Huge thanks to our Conference Committee!

There would be no Percona Live without the hard work of our conference committees. Meet this year’s Percona Live Europe 2015 Conference Committee – a dedicated group of experts in MySQL, NoSQL and Data in the Cloud:

  • Erik Beebe – Founder and CTO, ObjectRocket
  • Luis Motta Campos – Database Administrator, eBay Classifieds Group
  • Colin Charles – Chief Evangelist, MariaDB
  • César Trigo Esteban – Development Director, Gigigo
  • Kenny Gorman – Chief Technologist; Data. Office of the CTO, Rackspace
  • Amrith Kumar – Founder and CTO, Tesora
  • Giuseppe Maxia – Quality Assurance Architect, VMWare
  • Shlomi Noach – Senior Systems Engineer, Booking.com
  • Konstantin Osipov – Engineering Manager, Mail.Ru
  • Morgan Tocker – MySQL Community Manager, Oracle
  • Art van Scheppingen – Head of Database Engineering, Spil Games
  • Charity Majors – Production Engineering Manager, Facebook
  • Peter Zaitsev – Co-founder and CEO, Percona

Sponsorships

Sponsorship opportunities for Percona Live Europe 2015 are now available. Sponsors become part of a dynamic and fast-growing ecosystem and interact with hundreds of DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors, and entrepreneurs who typically attend the event. This year’s conference will feature expanded accommodations and turnkey kiosks.

Planning to attend?

Super Saver registration discounts for Percona Live Europe 2015 are available through July 5th (at 11:30 p.m. CEST). Visit the Percona Live Europe 2015 website for more information about the conference. Interested community members can also register to receive email updates about Percona Live Europe 2015.

Percona has also negotiated a special hotel rate at the Mövenpick Hotel Amsterdam City Centre. If you book your hotel before July 6th your delicious breakfast is included.

I hope to see you in Amsterdam!

The post Percona Live Europe 2015! Call for speakers; registration open appeared first on MySQL Performance Blog.

Read more at: http://www.mysqlperformanceblog.com/

Reference architecture for a write-intensive MySQL deployment

We designed Percona Cloud Tools (both the hardware and software setup) to handle a very intensive MySQL write workload. For example, we already observe inserts of 1 billion+ data points per day. So I wanted to share what kind of hardware we use to achieve this result.

Let me describe what we use, and later I will explain why.

Server:

  • Chassis: Supermicro SC825TQ-R740LPB 2U Rackmount Chassis
  • Motherboard: Supermicro X9DRI-F dual socket
  • CPU: Dual Intel Xeon Ivy Bridge E5-2643v2 (6x 3.5Ghz cores, 12x HT cores, 25M L3)
  • Memory: 256GB (16x 16GB 256-bit quad-channel) ECC registered DDR3-1600
  • Raid: LSI MegaRAID 9260-4i 4-port 6G/s hardware RAID controller, 512M buffer
  • MainStorage: PCIe SSD HGST FlashMAX II 4.8TB
  • Secondary Storage (OS, logs): RAID 1 over 2x 3TB hard drives

Software:

When selecting hardware for your application, you need to look at many aspects – typically you’re looking for a solution which you already have experience working with and which has also proved to be the most efficient option. For us it has been as follows:

Cloud vs Bare Metal
We have experience with hardware hosted at a data center, as well as cash for upfront investments in hardware, so we decided to go for physical self-hosted hardware instead of the cloud. Going this route also gave us maximum flexibility in choosing a hardware setup that was optimal for our application, rather than selecting one of the stock options.

Scale Up vs Scale Out
We have designed the system from scratch to be able to utilize multiple servers through sharding – so our main concern is choosing the most optimal configuration for each server and provisioning servers as needed. In addition to raw performance we also need to consider power usage and the overhead of managing many servers, which typically makes slightly more high-end hardware worth it.

Resource Usage
Every application uses resources in different ways, so an optimal configuration will differ depending on your application. Yet all applications use the same resources, which you need to consider. Typically you want to plan for all of your resources to be substantially used, leaving some margin for spikes and maintenance.

CPU

  • Our application processes a lot of data and uses the TokuDB storage engine which uses a lot of CPU for compression, so we needed powerful CPUs.
  • Many MySQL functions are not parallel – think of executing a single query or an ALTER TABLE – so we went for a CPU with faster cores rather than a larger number of cores. The resulting configuration, with 2 sockets giving 12 cores and 24 threads, is good enough for our workloads.
  • Lower-end CPUs such as the Xeon E3 have a very attractive price/performance ratio but only support 32GB of memory, which was not enough for our application.

Memory

  • For database boxes memory is mainly used as a cache, so depending on your application you may be better off investing in memory or storage for optimal performance. Check out this blog post for more details.
  • Accessing data in memory is much faster than even on the fastest flash storage so it is still important.
    For our workload, having recent data in memory is very important, so we get as much “cheap” memory as we can, populating all 16 slots with 16GB DIMMs, which have an attractive cost per GB at this point.

Storage
There are multiple uses for the storage so there are many variables to consider

  • Bandwidth
    • We need to be able to access data on the storage device quickly and with a stable response time. HGST FlashMax II has been able to meet these very demanding needs.
  • Endurance
    • When using flash storage you need to worry about endurance – how much write load the flash storage can handle before it wears out. Some low-cost MLC SSDs would wear out within weeks if written to at maximum speed. HGST FlashMax II has an endurance rating of 10 petabytes written (for a random workload) to 30 petabytes written (for a sequential workload).
    • We also use the TokuDB storage engine, which significantly reduces the amount of writes compared to InnoDB.
  • Durability
    • Does the storage provide true durability – with data guaranteed to be persisted when a write is acknowledged at the operating system level – even when power goes down, or is loss possible?
      We do not want to risk database corruption in case of power failure, so we looked for a storage solution which guarantees durability.
      HGST FlashMax II guarantees durability, which has been confirmed by our stress tests.
  • Size
    • To scale application storage demands you need to scale both the number of IO operations the storage can handle and the storage size. For flash storage it is often the size which becomes the limiting factor.
      HGST FlashMax II’s 4.8 TB capacity is the best available on the market, which allows us to go “All Flash” and achieve very quick access to our entire data set.
  • Secondary Storage
    • Not every application need requires the properties of flash storage.
    • We have secondary storage with conventional drives for operating system and logs.
      Sequential read/write patterns work well with low-cost conventional drives, and this also allows us to extend flash lifetime by having it handle fewer writes.
    • We’re using RAID with a BBU for secondary storage, to have fully durable binary logs without paying a high performance penalty.

Why PCIe SSD over SATA SSD?
There are arguments that a SATA SSD provides good enough performance for MySQL and there is no need for PCIe. While these arguments are valid in one dimension, there are several more to consider.

First, as I said, a PCIe SSD still provides the best absolute response time, and that is an important factor for the end-user experience in SaaS systems like Percona Cloud Tools.
Second, consider maintenance operations like backups, ALTER TABLEs or slave setups. While these operations are boring and do not get as much attention as response time or throughput in benchmarks, they are still operations that DBAs perform basically daily, and it is very important to finish a backup or an ALTER TABLE in a predictable time, especially in the 3-4TB data-size range. This is where PCIe SSDs perform much better than SATA SSDs. For SATA SSDs, especially of bigger sizes, write endurance is another point of concern.

Why TokuDB engine?
The TokuDB engine is the best when it comes to insert operations into a huge dataset, and a few more factors make it a no-brainer:

  • TokuDB compression is a huge win. I estimate that into this storage (FlashMAX II 4.8TB) we will fit about 20-30TB of raw data.
  • TokuDB is SSD friendly, as it performs far fewer data writes per INSERT operation than InnoDB, which greatly extends the lifetime of the SSD (which is, well, expensive to say the least).

The post Reference architecture for a write-intensive MySQL deployment appeared first on MySQL Performance Blog.

Read more at: http://www.mysqlperformanceblog.com/

Percona Server with TokuDB (beta): Installation, configuration

My previous post was an introduction to the TokuDB storage engine, aimed at explaining the basics of its design and how it differs from InnoDB/XtraDB. This post is all about motivating you to give it a try and have a look for yourself. Percona Server does not officially support TokuDB as of today, though the guys in the development team are working hard on this and the first GA release of Percona Server with TokuDB is looming on the horizon. However, there’s a beta version available now. For the installation tests in this post I’ve used the latest version of Percona Server alongside the accompanying TokuDB complement, which was published last week.

Installing Percona Server with TokuDB on a sandbox

One of the tools Percona Support Engineers really love is Giuseppe Maxia’s MySQL Sandbox. It allows us to set up a sandbox running a MySQL instance of our choice and makes running multiple ones for testing purposes very easy. Whenever a customer reaches us with a problem happening on a particular version of MySQL or Percona Server that we can reproduce, we quickly spin off a new sandbox and test it ourselves, so it’s very handy. I’ll use one here to explore this beta version of Percona Server with TokuDB, but if you prefer you can install it the regular way using a package from our apt experimental or yum testing repositories.

We start by downloading the tarballs from here. TokuDB’s plugin has been packaged in its own tarball, so there are two to download. Once you have them, let’s decompress both and create a unified working directory, compressing it again to create the single tarball we’ll use as the source to create our sandbox:

[[email protected] ~]# tar zxvf Percona-Server-5.6.17-rel66.0-608.Linux.x86_64.tar.gz
[[email protected] ~]# tar zxvf Percona-Server-5.6.17-rel66.0-608.TokuDB.Linux.x86_64.tar.gz
[[email protected] ~]# tar cfa Percona-Server-5.6.17-rel66.0-608.Linux.x86_64.tar.gz Percona-Server-5.6.17-rel66.0-608.Linux.x86_64/

Before going ahead, verify whether you have transparent huge pages enabled, as TokuDB won’t run if they are. See this documentation page for an explanation of what this is and how to disable it on Ubuntu. On my CentOS test server it was defined in a slightly different place, and I’ve done the following to temporarily disable it:

[[email protected]]# echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
[[email protected]]# echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
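
To verify the change took effect, check that “[never]” is the selected value (a sketch for this CentOS layout; on many other kernels the path is /sys/kernel/mm/transparent_hugepage/ instead):

# The bracketed value is the active one; both should show [never]
cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
cat /sys/kernel/mm/redhat_transparent_hugepage/defrag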

We’re now ready to create our sandbox. The following command should be enough (I’ve chosen to run Percona Server on port 5617, you can use any other available one):

[[email protected] ~]# make_sandbox Percona-Server-5.6.17-rel66.0-608.Linux.x86_64.tar.gz -- --sandbox_directory=tokudb --sandbox_port=5617

If the creation process goes well you will see something like the following at the end:

.... sandbox server started
Your sandbox server was installed in $HOME/sandboxes/tokudb

You should now be able to access the MySQL console on the sandbox with the default credentials; if you cannot, check the log in $HOME/sandboxes/tokudb/data/msandbox.err:

[[email protected]~]# mysql --socket=/tmp/mysql_sandbox5617.sock -umsandbox -pmsandbox

Alternatively, you can make use of the “use” script located inside the sandbox directory, which employs the same credentials (configured in the client section of the configuration file my.sandbox.cnf):

[[email protected] ~]# cd sandboxes/tokudb/
[[email protected] tokudb]# ./use

The first thing to check is whether TokuDB is listed as an available storage engine:

mysql> SHOW ENGINES;
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
| Engine             | Support | Comment                                                                    | Transactions | XA   | Savepoints |
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+
| (...)              | (...)   | (...)                                                                      | (...)        | (...)| (...)      |
| TokuDB             | YES     | Tokutek TokuDB Storage Engine with Fractal Tree(tm) Technology             | YES          | YES  | YES        |
| InnoDB             | DEFAULT | Percona-XtraDB, Supports transactions, row-level locking, and foreign keys | YES          | YES  | YES        |
| (...)              | (...)   | (...)                                                                      | NO           | (...)| (...)      |
+--------------------+---------+----------------------------------------------------------------------------+--------------+------+------------+

If that’s not the case, you may need to load the plugins manually – I had to do so in my sandbox; you may not need to if you’re installing it from a package in a fresh setup:

mysql> INSTALL PLUGIN tokudb SONAME 'ha_tokudb.so';

TokuDB should now appear in the list of supported ENGINES, but you still need to activate the related plugins:

mysql> INSTALL PLUGIN tokudb_file_map SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_fractal_tree_info SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_fractal_tree_block_map SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_trx SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_locks SONAME 'ha_tokudb.so';
mysql> INSTALL PLUGIN tokudb_lock_waits SONAME 'ha_tokudb.so';

Please note that the INSTALL PLUGIN action results in permanent changes and thus is required only once. No modifications to MySQL’s configuration file are required to have those plugins load on subsequent server restarts.

Now you should see not only the main TokuDB plugin but also the add-ons to the INFORMATION_SCHEMA:

mysql> SHOW PLUGINS;
+-------------------------------+----------+--------------------+--------------+---------+
| Name                          | Status   | Type               | Library      | License |
+-------------------------------+----------+--------------------+--------------+---------+
| (...)                         | (...)    | (...)              | (...)        | (...)   |
| TokuDB                        | ACTIVE   | STORAGE ENGINE     | ha_tokudb.so | GPL     |
| TokuDB_trx                    | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_locks                  | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_lock_waits             | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_file_map               | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_fractal_tree_info      | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
| TokuDB_fractal_tree_block_map | ACTIVE   | INFORMATION SCHEMA | ha_tokudb.so | GPL     |
+-------------------------------+----------+--------------------+--------------+---------+

We are now ready to create our first TokuDB table – the only thing we do differently here is specify TokuDB as the storage engine to use:

mysql> CREATE TABLE test.Numbers (id INT PRIMARY KEY, number VARCHAR(20)) ENGINE=TokuDB;
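
As a quick sanity check (a sketch using the sandbox’s use script shown earlier), you can confirm the table was really created with the TokuDB engine:

# Query INFORMATION_SCHEMA for the engine of the new table
~/sandboxes/tokudb/use -e "SELECT TABLE_NAME, ENGINE FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='test' AND TABLE_NAME='Numbers';"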

Note some unfamiliar files lying in the datadir; the details surrounding those are certainly good material for future posts:

[[email protected]]# ls ~/sandboxes/tokudb/data
auto.cnf                   _test_Numbers_main_3_2_19.tokudb
ibdata1                    _test_Numbers_status_3_1_19.tokudb
ib_logfile0                tokudb.directory
ib_logfile1                tokudb.environment
log000000000000.tokulog25  __tokudb_lock_dont_delete_me_data
msandbox.err               __tokudb_lock_dont_delete_me_environment
mysql                      __tokudb_lock_dont_delete_me_logs
mysql_sandbox5617.pid      __tokudb_lock_dont_delete_me_recovery
performance_schema         __tokudb_lock_dont_delete_me_temp
tc.log                     tokudb.rollback
test

Configuration: what’s really important

As noted by Vadim long ago, “Tuning of TokuDB is much easier than InnoDB, there’re only a few parameters to change, and actually out-of-box things running pretty well“:

mysql> show variables like 'tokudb_%';
+---------------------------------+------------------+
| Variable_name                   | Value            |
+---------------------------------+------------------+
| tokudb_alter_print_error        | OFF              |
| tokudb_analyze_time             | 5                |
| tokudb_block_size               | 4194304          |
| tokudb_cache_size               | 522651648        |
| tokudb_check_jemalloc           | 1                |
| tokudb_checkpoint_lock          | OFF              |
| tokudb_checkpoint_on_flush_logs | OFF              |
| tokudb_checkpointing_period     | 60               |
| tokudb_cleaner_iterations       | 5                |
| tokudb_cleaner_period           | 1                |
| tokudb_commit_sync              | ON               |
| tokudb_create_index_online      | ON               |
| tokudb_data_dir                 |                  |
| tokudb_debug                    | 0                |
| tokudb_directio                 | OFF              |
| tokudb_disable_hot_alter        | OFF              |
| tokudb_disable_prefetching      | OFF              |
| tokudb_disable_slow_alter       | OFF              |
| tokudb_disable_slow_update      | OFF              |
| tokudb_disable_slow_upsert      | OFF              |
| tokudb_empty_scan               | rl               |
| tokudb_fs_reserve_percent       | 5                |
| tokudb_fsync_log_period         | 0                |
| tokudb_hide_default_row_format  | ON               |
| tokudb_init_flags               | 11403457         |
| tokudb_killed_time              | 4000             |
| tokudb_last_lock_timeout        |                  |
| tokudb_load_save_space          | ON               |
| tokudb_loader_memory_size       | 100000000        |
| tokudb_lock_timeout             | 4000             |
| tokudb_lock_timeout_debug       | 1                |
| tokudb_log_dir                  |                  |
| tokudb_max_lock_memory          | 65331456         |
| tokudb_pk_insert_mode           | 1                |
| tokudb_prelock_empty            | ON               |
| tokudb_read_block_size          | 65536            |
| tokudb_read_buf_size            | 131072           |
| tokudb_read_status_frequency    | 10000            |
| tokudb_row_format               | tokudb_zlib      |
| tokudb_tmp_dir                  |                  |
| tokudb_version                  | tokudb-7.1.7-rc7 |
| tokudb_write_status_frequency   | 1000             |
+---------------------------------+------------------+
42 rows in set (0.00 sec)

The most important of the tokudb_ variables is arguably tokudb_cache_size. The test server where I ran those tests (test01) has a little less than 1GB of memory, and as you can see above TokuDB is “reserving” half (50%) of it for itself. That’s the default behavior but, of course, you can change it. And you must do so if you are also going to have InnoDB tables on your server – you should not overcommit memory between the InnoDB and TokuDB engines. Shlomi Noach wrote a good post explaining the main TokuDB-specific variables and what they do. It’s definitely a worthwhile read.
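
For instance, here is a minimal sketch of capping it in the sandbox (the values are purely illustrative, not recommendations); tokudb_cache_size is not dynamic, so a restart is required afterwards:

# Append to the sandbox config; MySQL merges repeated [mysqld] sections.
cat >> ~/sandboxes/tokudb/my.sandbox.cnf <<'EOF'
[mysqld]
tokudb_cache_size = 256M
EOF
~/sandboxes/tokudb/restart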

I hope you have fun testing Percona Server with TokuDB! If you run into any problems worth reporting, please let us know.

The post Percona Server with TokuDB (beta): Installation, configuration appeared first on MySQL Performance Blog.

Read more at: http://www.mysqlperformanceblog.com/

Percona Server 5.6.19-67.0 with TokuDB (GA) now available

Percona is glad to announce the release of Percona Server 5.6.19-67.0 on July 1, 2014. Download the latest version from the Percona web site or from the Percona Software Repositories.

Based on MySQL 5.6.19, including all the bug fixes in it, Percona Server 5.6.19-67.0 is the current GA release in the Percona Server 5.6 series. All of Percona’s software is open-source and free. Complete details of this release can be found in the 5.6.19-67.0 milestone on Launchpad.

New Features:

  • Percona has merged a contributed patch by Kostja Osipov implementing the Multiple user level locks per connection feature. This feature fixes the upstream bugs: #1118 and #67806.
  • TokuDB storage engine support is now considered general availability (GA) quality. The TokuDB storage engine from Tokutek improves scalability and the operational efficiency of MySQL with faster performance and increased compression. It is available as a separate package and can be installed along with the Percona Server by following the instructions in the release documentation.
  • Percona Server now supports the MTR --valgrind option for a server that is either statically or dynamically linked with jemalloc.

Bugs Fixed:

  • The libperconaserverclient18.1 package was missing the library files. Bug fixed #1329911.
  • Percona Server introduced a regression in 5.6.17-66.0 when support for TokuDB storage engine was initially introduced. This regression caused spurious “wrong table structure” errors for PERFORMANCE_SCHEMA tables. Bug fixed #1329772.
  • Race condition in group commit code could lead to a race condition in PFS instrumentation code resulting in a server crash. Bug fixed #1309026 (upstream #72681).

Other bugs fixed: #1326348 and #1167486.

NOTE: There was no Percona Server 5.6.18 release because there was no MySQL Community Server 5.6.18 release. That version number was used for a MySQL Enterprise Edition release to address the OpenSSL “Heartbleed” issue.

Release notes for Percona Server 5.6.19-67.0 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

The post Percona Server 5.6.19-67.0 with TokuDB (GA) now available appeared first on MySQL Performance Blog.

Read more at: http://www.mysqlperformanceblog.com/

Percona Server 5.5.38-35.2 is now available

Percona is glad to announce the release of Percona Server 5.5.38-35.2 on July 2, 2014 (downloads are available here and from the Percona Software Repositories). Based on MySQL 5.5.38, including all the bug fixes in it, Percona Server 5.5.38-35.2 is now the current stable release in the 5.5 series. All of Percona‘s software is open-source and free; all the details of the release can be found in the 5.5.38-35.2 milestone at Launchpad.

Bugs Fixed:

  • Fixed the duplicate string in InnoDB version number. Bug fixed #1328421.
  • An uninitialized memory use in User Statistics has been fixed. Bug fixed #1167486.

Release notes for Percona Server 5.5.38-35.2 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

The post Percona Server 5.5.38-35.2 is now available appeared first on MySQL Performance Blog.

Read more at: http://www.mysqlperformanceblog.com/

Percona Server 5.6.16-64.2 with TokuDB engine Beta is now available


Percona is glad to announce the first BETA release of Percona Server 5.6.16-64.2 with TokuDB engine on March 27th, 2014. Downloads are available here and from the Percona Software Repositories.

Based on Percona Server 5.6.16-64.2, including all the features and bug fixes in it, and on TokuDB 7.1.5-rc.4, Percona Server 5.6.16-64.2-tokudb is the first BETA release in the Percona Server 5.6 with TokuDB engine series. All of Percona‘s software is open-source and free; all the details of the release can be found in the 5.6.16-64.2 milestone at Launchpad.

New Feature

  • The installer will now automatically install and enable the TokuDB engine on new installations.

More information on how to install and use TokuDB can be found in the documentation. This feature is currently considered BETA quality.

Release notes for Percona Server 5.6.16-64.2 with the TokuDB storage engine are available in our online documentation. We did our best to eliminate bugs and problems during the testing of this release, but this is software, so bugs are expected. If you encounter them, please report them to our bug tracking system.

The post Percona Server 5.6.16-64.2 with TokuDB engine Beta is now available appeared first on MySQL Performance Blog.

Read more at: http://www.mysqlperformanceblog.com/

Innodb redo log archiving

Percona Server 5.6.11-60.3 introduces a new “log archiving” feature. Percona XtraBackup 2.1.5 supports “apply archived logs.” What does this mean, and how can it be used?

Percona products offer three kinds of incremental backups. The first is a full scan of the data files, comparing the data with the backup data to find the delta. This approach provides a history of changes and saves disk space by storing only data deltas. But the disadvantage is the full data-file scan, which adds load to the disk subsystem. The second kind of incremental backup avoids the extra disk load of data file scans.

The idea is to read only changed data pages. The information about which specific pages were changed is provided by the server itself, which writes files with this information as it works. It’s a good alternative, but changed-pages tracking adds some small load, and Percona XtraBackup’s delta reading leads to non-sequential disk IO. There is, however, one more option.

The InnoDB engine has a data log. It writes all operations which modify database pages to log files. This log is used in the case of unexpected server termination to recover data. The InnoDB log consists of several log files which are filled sequentially in a circular fashion. The idea is to save those files somewhere and apply all modifications from the archived logs to the backup data files. The disadvantage of this approach is that it uses extra disk space. The advantage is that there is no need to do an “explicit” backup on the host server: a simple script could sit and wait for logs to appear, then scp/netcat them over to another machine.

But why not use good old replication? Replication may not have the same performance as log recovery, but it is more controlled and well known. The difference is that archived logs allow you to do any number of things with them, from just storing them to doing periodic log applying. You cannot recover from a ‘DROP TABLE’, etc. with replication, but with this framework one could maintain the idea of “point in time” backups.

So the “archived logs” feature is one more option for organizing incremental backups. It is not widely used, as it was released not long ago and there is not a good understanding of how it works and how it can be used. We are open to any suggestions about improvements and use cases. The subject of this post is to describe how it works in depth. As log archiving is closely tied to the InnoDB redo logs, the internals of the redo logs will be covered too. This post should be useful not only for DBAs but also for software engineers, because not only common principles are considered but the specific code too, and knowledge from this post can be used for further MySQL code exploration and patching.

What is the InnoDB log and how is it written?

Let’s recall what the InnoDB logs are, why they are written, and what they are used for.

The InnoDB engine has a buffer pool: a cache of database pages. Any change is made to a page in the buffer pool; the page is then considered “dirty,” which means it must be flushed, and it is pushed to the flush list, which is processed periodically by a special thread. If pages are not flushed to disk and the server is terminated unexpectedly, the changes will be lost. To avoid this, InnoDB writes changes to the redo log and recovers data from the redo log during startup. This technique allows buffer pool page flushing to be delayed. It can increase performance, because several changes to one page can be accumulated in memory and then flushed by one IO. Besides that, flushed pages can be grouped to decrease the number of non-sequential IOs. The downside of this approach is the time needed for data recovery. Let’s consider how this log is stored, generated, and used for data recovery.

Log files

The redo log consists of several log files which are treated as a circular buffer. The number and the size of the log files can be configured. Each log file has a header. The description of this header can be found in “storage/innobase/include/log0log.h” by the “LOG_GROUP_ID” keyword.

Each log file contains log records. Redo log records are written sequentially in log blocks of OS_FILE_LOG_BLOCK_SIZE, which is equal to 512 bytes by default and can be changed with an InnoDB option. Each record has its LSN. The LSN is a “Log Sequence Number” – the number of bytes written to the log from the log creation to a certain log record. Each block consists of a header, a trailer, and log records.

Log blocks

Let’s consider the log block header. The first 4 bytes of the header are the log block number. The block number is very similar to the LSN, but the LSN is measured in bytes while the block number is measured in OS_FILE_LOG_BLOCK_SIZE units. Here is the simple formula for how an LSN is converted to a block number:

return(((ulint) (lsn / OS_FILE_LOG_BLOCK_SIZE) & 0x3FFFFFFFUL) + 1);

This formula can be found in the log_block_convert_lsn_to_no() function. The next two bytes are the number of bytes used in the block. The next two bytes are the offset of the first MTR log record in this block. What an MTR is will be described below; currently it can be considered a synonym for a bunch of log records gathered together as the description of some logical operation – for example, a group of log records for inserting a new row into some table. This field is used when there are records of several MTRs in one block. The next four bytes are a checkpoint number. The trailer is four bytes of log block checksum. The above description can be found in “storage/innobase/include/log0log.h” by the “LOG_BLOCK_HDR_NO” keyword.
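
As a quick illustration (a sketch – the path is an example, and it assumes the 5.6 layout where the log file header occupies the first four 512-byte blocks), you can eyeball a block header with dd:

# Dump the first bytes of the first log block after the file header:
# bytes 0-3 = block number, 4-5 = bytes used, 6-7 = offset of the first
# MTR, 8-11 = checkpoint number (per the description above).
dd if=/var/lib/mysql/ib_logfile0 bs=512 skip=4 count=1 2>/dev/null | xxd -l 16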

Before being written to disk, log blocks must somehow be formed and stored. And the question is:

How are log blocks stored in memory and on disk?

Where are log blocks stored before flushing to disk, and how are they written and flushed?

Global log object and log buffer

The answer to the first part of the question is the log buffer. The server holds a very important global object, log_sys, in memory. It contains a lot of useful information about the logging state. The log buffer is pointed to by the log_sys->buf pointer, which is initialized in log_init(). I would highlight the following log_sys fields that are used for working with the log buffer and flushing:

log_sys->buf_size – the size of log buffer, can be set with innodb-log-buffer-size variable, the default value is 8M;
log_sys->buf_free – the offset from which the next log record will be written;
log_sys->max_buf_free – if log_sys->buf_free is greater than this value the log buffer must be flushed, see log_free_check();
log_sys->buf_next_to_write – the offset of the next log record to write to disk;
log_sys->write_lsn – the LSN up to which log is written;
log_sys->flushed_to_disk_lsn – the LSN up to which log is written and flushed;
log_sys->lsn – the last LSN in log buffer;

So log_sys->buf_next_to_write is between 0 and log_sys->buf_free, log_sys->write_lsn is less than or equal to log_sys->lsn, and log_sys->flushed_to_disk_lsn is less than or equal to log_sys->write_lsn.

The relationships between those fields can be easily traced with a debugger by setting up watchpoints.
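
For example, a minimal sketch of doing this with GDB attached to a running mysqld (assuming debugging symbols are installed; log_sys is the global object discussed above):

# Attach to a running mysqld and watch the flush-related LSN fields.
# Watchpoints are slow; do this on a test instance only.
gdb -p "$(pidof mysqld)" \
    -ex "watch log_sys->write_lsn" \
    -ex "watch log_sys->flushed_to_disk_lsn" \
    -ex "continue"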

OK, we have the log buffer, but how do log records get into this buffer?

Where do log records come from?

InnoDB has special objects that allow gathering the redo log records for some operation in one bunch before writing them to the log buffer. These objects are called “mini-transactions”, and the corresponding functions and data types have the “mtr” prefix in the code. The objects themselves are described by the mtr_t C structure. The most interesting fields of this structure are the following:

mtr_t::log – contains log records for the mini-transaction,
mtr_t::memo – contains pointers to the pages which are changed or locked by the mini-transaction; it is used to push pages to the flush list and release locks after the log records are copied to the log buffer in mtr_commit() (see mtr_memo_pop_all() called in mtr_commit()).

The mtr_start() function initializes an object of type mtr_t, and mtr_commit() writes the log records from mtr_t::log to log_sys->buf + log_sys->buf_free. So the typical sequence of any operation that changes data is the following:


mtr_start(); // initialize mtr object
some_ops... // operations on data which are logged in mtr_t::log
mtr_commit(); // write logged operations from mtr_t::log to log buffer log_sys->buf

page_cur_insert_rec_write_log() is a good example of how mtr records are written and mtr_t::memo is filled. The low-level function that writes data to the log buffer is log_write_low(). This function is invoked inside mtr_commit(); it not only copies the log records from the mtr_t object to the log buffer log_sys->buf, but also creates new log blocks inside log_sys->buf, filling their headers and trailers and calculating checksums.
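
For illustration, a simplified version of this pattern using the server’s own functions might look like the following sketch (page latching is elided; field_ptr and new_value stand for a pointer into a latched page and the value to write):

mtr_t	mtr;

mtr_start(&mtr);                        /* initialize the mini-transaction   */
/* ... latch the page to modify; the latch is remembered in mtr.memo ... */
mlog_write_ulint(field_ptr, new_value,  /* change 4 bytes on the page and    */
                 MLOG_4BYTES, &mtr);    /* append a log record to mtr.log    */
mtr_commit(&mtr);                       /* copy mtr.log into log_sys->buf via
                                           log_write_low(), release latches  */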

So the log buffer contains log blocks that are sequentially filled with log records, and those records are grouped into “mini-transactions”, each of which can be treated as one logical operation over the data consisting of a sequence of mini-operations (log records).

As log records are written sequentially into the log buffer, one mini-transaction, and even one log record, can span two neighbouring blocks. That is why the header field containing the offset of the first MTR record in the block is necessary: it gives the point from which log record parsing can be started. This field was described above.

So we have a buffer of log blocks in memory. How is data from this buffer written to disk? The MySQL documentation says that this depends on the innodb_flush_log_at_trx_commit option. There are three cases depending on the value of this option. Let’s consider each of them.
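
For reference, a minimal my.cnf fragment (semantics as documented for MySQL 5.5/5.6):

[mysqld]
# 0 - write and flush the log buffer about once a second (master thread)
# 1 - write and flush on every transaction commit (default, fully durable)
# 2 - write on every commit, flush about once a second
innodb_flush_log_at_trx_commit = 1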

Writing log buffer to disk: innodb_flush_log_at_trx_commit is 1 or 2.

The first two cases are when innodb_flush_log_at_trx_commit is 1 or 2. In these cases, on each transaction commit log records are written to the log files (value 2) or written and flushed (value 1). If innodb_flush_log_at_trx_commit is 2, the log records are flushed periodically by a special thread, which will be considered later. The low-level function that writes log records from the buffer to the files is log_group_write_buf(), but in most cases it is not called directly; rather, it is called from the higher-level log_write_up_to(). For the current case the call stack is the following:


(trx_commit_in_memory() or
trx_commit_complete_for_mysql() or
trx_prepare(), etc.)->
trx_flush_log_if_needed()->
trx_flush_log_if_needed_low()->
log_write_up_to()->
log_group_write_buf().

It is quite easy to find the higher levels of the call stack: just set a breakpoint on log_group_write_buf() and execute any SQL query that modifies InnoDB data. For example, for a simple INSERT statement the higher levels of the call stack are the following:


mysql_execute_command()->
trans_commit_stmt()->
ha_commit_trans()->
TC_LOG_DUMMY::commit()->
ha_commit_low()->
innobase_commit()->
trx_commit_complete_for_mysql()->
trx_flush_log_if_needed()-> ... .
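
A minimal gdb session for obtaining such a stack could look like this (assuming mysqld runs under gdb and an InnoDB table t exists):

(gdb) break log_group_write_buf
(gdb) continue
# in a mysql client: INSERT INTO t VALUES (1);
(gdb) backtrace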

The log_io_complete() callback is invoked when I/O is finished for the log files (see fil_aio_wait()). log_io_complete() flushes the log files unless this is forbidden by the innodb_flush_method or innodb_flush_log_at_trx_commit options.

Writing log buffer to disk: innodb_flush_log_at_trx_commit is equal to 0

The third case is when innodb_flush_log_at_trx_commit is equal to 0. In this case the log buffer is NOT written to disk on transaction commit; it is written and flushed periodically by a separate thread, srv_master_thread. With innodb_flush_log_at_trx_commit = 0 the log files are flushed by the same calls in that same thread. The call stack is the following:


srv_master_thread()->
(srv_master_do_active_tasks() or srv_master_do_idle_tasks() or srv_master_do_shutdown_tasks())->
srv_sync_log_buffer_in_background()->
log_buffer_sync_in_background()->log_write_up_to()->... .

Special cases for log flushing

While log_io_complete() flushes or not depending on the innodb_flush_log_at_trx_commit value, among other things, log_write_up_to() has its own flushing criterion: the flush_to_disk function argument. So it is possible to force log file flushing even when innodb_flush_log_at_trx_commit = 0. Here are examples of such cases:

1) buf_flush_write_block_low()
Each page contains information about the LSN of its last modification (buf_flush_write_block_low::newest_modification), and each log record is a description of a change to a certain page. Imagine we flushed some changed pages, but the log records for these pages were not flushed, and the server went down. After restart some pages would contain the newest modifications, while other pages changed by the same operations were never flushed and the corresponding log records are lost too. We would have an inconsistent database in this case. That is why log records must be flushed before the pages they refer to (see the sketch after this list).

2) srv_sync_log_buffer_in_background()
As described above, this function is called periodically by a special thread and forces flushing.

3) log_checkpoint()
When a checkpoint is made, the log files must be reliably flushed.

4) The special handlerton call innobase_flush_logs(), which can be invoked through ha_flush_logs() from the MySQL server.
For example, ha_flush_logs() is called from MYSQL_BIN_LOG::reset_logs() when “RESET MASTER” or “RESET SLAVE” is executed.

5) srv_master_do_shutdown_tasks() – on shutdown; ha_innobase::create() – on table creation; ha_innobase::delete_table() – on table removal; innobase_drop_database() – on removal of all of a database’s tables; innobase_rename_table() – on table rename; etc.
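
For case 1, the write-ahead rule inside buf_flush_write_block_low() boils down to a call like the following (a simplified sketch of the 5.5/5.6 code; bpage stands for the descriptor of the page about to be written):

/* Sketch: before writing a dirty page, force the log to disk up to the
   page's newest modification LSN (the TRUE argument means flush_to_disk). */
log_write_up_to(bpage->newest_modification, LOG_WAIT_ALL_GROUPS, TRUE);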

If the log files are treated as a circular buffer, what happens when the buffer overflows?

Briefly: InnoDB has a mechanism that avoids overflow. It is called “checkpoints.” A checkpoint is a state in which the log files are synchronized with the data files. In this state there is no need to keep the history of changes made before the checkpoint, because all pages whose last modification LSN is less than or equal to the checkpoint LSN have been flushed, so the log file space between the last written LSN and the last checkpoint LSN can be reused. We will not describe the checkpoint process here because it is a separate, interesting subject. The only thing we need to know is that when a checkpoint happens, all pages with a modification LSN less than or equal to the checkpoint LSN are reliably flushed.
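
As a rough numeric sketch (illustrative names and values, not server code), the reusable space can be pictured like this:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t log_capacity        = 2 * 48ULL * 1024 * 1024; /* e.g. two 48M log files */
	uint64_t lsn                 = 123456789;  /* current end of the log          */
	uint64_t last_checkpoint_lsn = 100000000;  /* changes up to here are flushed  */

	uint64_t age      = lsn - last_checkpoint_lsn; /* history that must be kept  */
	uint64_t reusable = log_capacity - age;        /* space free for new records */

	printf("checkpoint age: %llu bytes, reusable: %llu bytes\n",
	       (unsigned long long) age, (unsigned long long) reusable);
	return 0;
}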

How archived logs are written by the server

So the log contains information about page changes. But as we said, the log files form a circular buffer: they occupy a fixed amount of disk space, and the oldest records can be overwritten by the newest ones, because there are points at which the data files are synchronized with the log files (checkpoints) and there is no need to store the earlier history of log records to guarantee database consistency. The idea is to save all log records somewhere so they can later be applied to backed-up data, giving a kind of incremental backup – for example, if we want to keep an archive of log records. As the log consists of log files, it is reasonable to store the archived records in such files too; these files are called “archived logs.”

Archived log files are written to a directory that can be set with a special InnoDB option. Each file has the same size as the InnoDB log file, and the suffix of each archived file is the LSN from which it starts.

Like the log writing system, the log archiving system stores its data in the global log_sys object. Here are the most valuable fields in log_sys from my point of view:

log_sys->archive_buf, log_sys->archive_buf_size – the log archive buffer and its size; log records are copied from the log buffer log_sys->buf to this buffer before being written to disk;
log_sys->archiving_phase – the current phase of log archiving: LOG_ARCHIVE_READ when log records are being copied from log_sys->buf to log_sys->archive_buf, LOG_ARCHIVE_WRITE when log_sys->archive_buf is being written to disk;
log_sys->archived_lsn – the LSN up to which the log has been written to archived log files;
log_sys->next_archived_lsn – the LSN up to which write operations have been invoked but not yet finished;
log_sys->max_archived_lsn_age – the maximum allowed difference between log_sys->lsn and log_sys->archived_lsn; if this difference is exceeded, the logs are archived synchronously, i.e. the difference is decreased;
log_sys->archive_lock – an rw-lock used for synchronizing the LOG_ARCHIVE_WRITE and LOG_ARCHIVE_READ phases; it is x-locked during the LOG_ARCHIVE_WRITE phase.

So how is data copied from log_sys->buf to log_sys->archive_buf? log_archive_do() is used for this. It not only sets the proper state for the archiving fields in log_sys, but also invokes log_group_read_log_seg() with corresponding arguments, which both copies data from the log buffer to the archive buffer and invokes an asynchronous write operation for the archive buffer. log_archive_do() can wait until the I/O operations are finished, using log_sys->archive_lock, if the corresponding function parameter is set.

The main question is under what circumstances log_archive_do() is invoked, i.e. when log records are written to the archived log files. The first call stack is the following:


log_free_check()->
log_check_margins()->
log_archive_margin()->
log_archive_do().

Here is the text of log_free_check() with comments:


/*********************************************************************//**
Checks if there is need for a log buffer flush or a new checkpoint, and does
this if yes. Any database operation should call this when it has modified
more than about 4 pages. NOTE that this function may only be called when the
OS thread owns no synchronization objects except the dictionary mutex. */
UNIV_INLINE
void
log_free_check(void)
/*================*/
{
#ifdef UNIV_SYNC_DEBUG
	ut_ad(sync_thread_levels_empty_except_dict());
#endif /* UNIV_SYNC_DEBUG */

	if (log_sys->check_flush_or_checkpoint) {

		log_check_margins();
	}
}

log_sys->check_flush_or_checkpoint is set when there is not enough free space in the log buffer, when it is time to make a checkpoint, or in other boundary cases. log_archive_margin() does any work only if the limit on the difference between log_sys->lsn and log_sys->archived_lsn is exceeded. Let’s refer to this difference as the archived LSN age.

One more call to log_archive_do() is made from log_open() when the archived LSN age exceeds some limit. log_open() is called on each mtr_commit(), and in this case the archived logs are written synchronously.

The next synchronous call is from log_archive_all() during shutdown.

Summarizing all of the above: archived logs begin to be written when the log buffer is full enough, when a checkpoint happens, or when the server is shutting down. There is no delay between writing to the archive log buffer and writing to disk; that is, there is no way to say that archived logs must be written once per second, as is possible for redo logs with innodb_flush_log_at_trx_commit = 0. As soon as data is copied to the buffer, the write operation for it is invoked immediately. The archive log buffer is not filled on each mtr_commit(), so it does not slow down the usual logging process. The exception is when there are a lot of I/O operations, which can cause the archived LSN age to grow too big; the result is synchronous archived log writing during mtr_commit(). Memory-to-memory copying is quite fast, which is why data is copied to the archive log buffer and written to disk asynchronously, minimizing the delays that log archiving could cause.

PS: Here is another call stack for writing the archive log buffer to the archived log files:

log_io_complete()->log_io_complete_archive()->log_archive_check_completion_low()->log_archive_groups().

I propose that you explore this stack yourself.

The log recovery process: how it starts and works inside. Applying archived logs.

So we have discovered how InnoDB redo logging works and how redo logs are archived. The last uncovered thing is how recovery works and how archived logs are applied. These two processes are very similar, which is why they are discussed in one section of this post.

The story begins with innobase_start_or_create_for_mysql(), which is invoked from innobase_init(). The following three-way branch in innobase_start_or_create_for_mysql() can be used to find the relevant code:


if (create_new_db) {
...
} else if (srv_archive_recovery) {
...
} else {
...
}

The second branch and the last one are the places from which archived log applying and InnoDB log recovery, respectively, start. These two blocks wrap two pairs of functions:


recv_recovery_from_archive_start()
recv_recovery_from_archive_finish()

and

recv_recovery_from_checkpoint_start()
recv_recovery_from_checkpoint_finish()

And all the magic happens in these pairs. Just as there is a global log_sys object for redo logging, there is a global recv_sys object for InnoDB recovery and archived log applying. It is created and initialized in the recv_sys_create() and recv_sys_init() functions, respectively. The following fields of the recv_sys object are the most important from my point of view:

recv_sys->limit_lsn – the LSN up to which recovery should be made; this value is initialized with the maximum value of uint64_t (see #define LSN_MAX) for the recovery process, and with a certain value passed as an argument of recv_recovery_from_archive_start() – which can be set via an xtrabackup option – for log applying;
recv_sys->parse_start_lsn – the LSN from which log parsing starts; for log recovery this value equals the last checkpoint LSN, for log applying it is the last applied LSN;
recv_sys->scanned_lsn – the LSN up to which the log files have been scanned;
recv_sys->recovered_lsn – the LSN up to which log records have been applied; this value is <= recv_sys->scanned_lsn;

The first thing that must be done to start the recovery process is to find the point in the log files from which recovery must start: the last checkpoint LSN. recv_find_max_checkpoint() does this. As we can see in log_group_checkpoint(), the following code writes checkpoint info into one of two places in the first log file, depending on the checkpoint number:


/* We alternate the physical place of the checkpoint info in the first
log file */

if ((log_sys->next_checkpoint_no & 1) == 0) {
	write_offset = LOG_CHECKPOINT_1;
} else {
	write_offset = LOG_CHECKPOINT_2;
}

So recv_find_max_checkpoint() reads the checkpoint info from both places and selects the latest checkpoint.
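
In sketch form (a hypothetical helper; the real code in recv_find_max_checkpoint() also validates each slot’s checksum before trusting it):

#include <stdint.h>

/* One checkpoint slot, as read from LOG_CHECKPOINT_1 or LOG_CHECKPOINT_2
   (fields are illustrative). */
struct checkpoint_info {
	uint64_t checkpoint_no;
	uint64_t checkpoint_lsn;
};

/* Keep the slot with the larger checkpoint number: it is the latest one. */
static struct checkpoint_info
pick_latest(struct checkpoint_info slot1, struct checkpoint_info slot2)
{
	return (slot1.checkpoint_no > slot2.checkpoint_no) ? slot1 : slot2;
}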

The same idea is applied for archived log applying, too, but there the last applied LSN must be found instead of the last checkpoint LSN. Here is the call stack for reading the last applied LSN:


innobase_start_or_create_for_mysql()->
open_or_create_data_files()->
fil_read_first_page().

The last applied LSN is stored in the first page of the data files, in the (min|max)_flushed_lsn fields (see the FIL_PAGE_FILE_FLUSH_LSN offset). These values are written by the fil_write_flushed_lsn_to_data_files() function on server shutdown.
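
As a sketch, the flushed LSN can be read from the first page of the system tablespace like this (FIL_PAGE_FILE_FLUSH_LSN is byte offset 26 of the page; the value is stored big-endian, and the file name is illustrative):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned char	page[34];	/* enough to cover offset 26 + 8 bytes */
	uint64_t	lsn = 0;
	int		i;
	FILE		*f = fopen("ibdata1", "rb");

	if (f == NULL) {
		perror("ibdata1");
		return 1;
	}
	if (fread(page, 1, sizeof(page), f) != sizeof(page)) {
		perror("read");
		fclose(f);
		return 1;
	}
	fclose(f);

	/* FIL_PAGE_FILE_FLUSH_LSN: 8 big-endian bytes at offset 26. */
	for (i = 0; i < 8; i++) {
		lsn = (lsn << 8) | page[26 + i];
	}
	printf("flushed lsn: %llu\n", (unsigned long long) lsn);
	return 0;
}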

So the main difference between log applying and the recovery process at this stage is the manner of calculating the LSN from which log records will be read. For log applying the last flushed LSN is used, but for the recovery process it is the last checkpoint LSN. Why does this difference exist? Logs can be applied periodically. Assume we gather archived logs and apply them once an hour to keep a fresh backup. After applying the previous bunch of log files there can be unfinished transactions. During the recovery process any unfinished transactions are rolled back, to reach a consistent database state at server start. But for the log applying process there is no need to roll them back, because any unfinished transactions can be finished during the next log applying.

After calculating the start LSN, the sequence of actions is the same for both recovering and applying. The next step is reading and parsing log records: see recv_group_scan_log_recs(), which is invoked from recv_recovery_from_checkpoint_start_func() for log recovery and from recv_recovery_from_archive_start()->log_group_recover_from_archive_file() for log applying.

First, log records are read into a buffer, and then recv_scan_log_recs() is invoked to parse them. recv_scan_log_recs() checks each log block for consistency (checksum, plus comparing the log block number written in the block with the block number calculated from the block LSN) and other edge cases, and copies it to the parsing buffer recv_sys->buf with the recv_sys_add_to_parsing_buf() function. The parsing buffer is then parsed by recv_parse_log_recs(). Log records are stored in the hash table recv_sys->addr_hash. The key for this hash table is calculated from the (space id, page number) pair, which identifies the page to which the log records must be applied. The value in the hash table is an object of type recv_addr_t, which contains the rec_list field: the list of log records to apply to that (space id, page num) page (see recv_add_to_hash_table()).
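
In sketch form, the structure looks roughly like this (names and types simplified from recv_addr_t and the records it holds; the mixing function is illustrative):

#include <stdint.h>

/* One parsed log record for a page (simplified). */
struct recv_rec {
	uint64_t	 start_lsn;
	uint64_t	 end_lsn;
	/* ... record type and body ... */
	struct recv_rec	*next;		/* next record for the same page */
};

/* Hash value: all records for one (space id, page number) pair
   (simplified from recv_addr_t). */
struct recv_addr {
	uint32_t	  space_id;
	uint32_t	  page_no;
	struct recv_rec	 *rec_list;	/* records to apply, in LSN order */
	struct recv_addr *hash_next;	/* collision chain */
};

/* The hash key is calculated from the (space id, page number) pair. */
static unsigned addr_hash_cell(uint32_t space_id, uint32_t page_no,
			       unsigned n_cells)
{
	return (space_id * 31u + page_no) % n_cells;
}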

After the log records are parsed and stored in the recv_sys->addr_hash hash table, they are applied. The function responsible for applying log records is recv_apply_hashed_log_recs(). It is invoked from recv_scan_log_recs() if there is not enough memory to store the log records, and at the end of the recovery/applying process. For each element of recv_sys->addr_hash, i.e. for each DB page that must be changed by log records, recv_recover_page() is invoked. It can be invoked either from recv_apply_hashed_log_recs(), if the page is already in the buffer pool, or from buf_page_io_complete() on I/O completion, i.e. just after the page has been read from storage. Applying log records on page read completion is necessary and very convenient. Assume the log records have not yet been applied, because we had enough memory to store the whole set of recovery log records, but we want, for example, to boot the DB dictionary. In this case any records that concern the dictionary pages will be applied to those pages just after they are read from storage into the buffer pool.

The function that applies log records to a certain page is recv_recover_page_func(). It gets the list of log records for that page from the recv_sys->addr_hash hash table, and for each element of this list it compares the LSN of the last page change with the LSN of the record; if the record’s LSN is the greater one, the record is applied to the page.
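
A sketch of that comparison, reusing the illustrative types from the previous listing:

/* Apply only the records that are newer than the page's last change. */
static void apply_recs(const struct recv_addr *addr, uint64_t page_lsn)
{
	const struct recv_rec *rec;

	for (rec = addr->rec_list; rec != NULL; rec = rec->next) {
		if (rec->end_lsn > page_lsn) {
			/* ... redo this change on the page ... */
			page_lsn = rec->end_lsn; /* page now reflects the record */
		}
	}
}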

After applying all log records from the archived logs, xtrabackup writes the last applied LSN to the (min|max)_flushed_lsn fields of each data file and finishes execution. The log recovery process rolls back all unfinished transactions, unless this is forbidden with the innodb-force-recovery parameter.

Conclusion

We have covered the processes of redo log writing and recovery in depth. These are very important processes, as they provide data consistency on crashes. These two processes became the basis for the log archiving and applying features. As log records can describe any data change, the idea is to store these records somewhere and then apply them to backups, giving a kind of incremental backup.

The features were implemented a short time ago and are not yet widely used. So if you have something to say about them, you are welcome to comment and start a discussion.

The post Innodb redo log archiving appeared first on MySQL Performance Blog.

Read more at: http://www.mysqlperformanceblog.com/