How To Test and Deploy the Kubernetes Operator for MySQL (PXC) on OSX/macOS

In this blog post, I’m going to show you how to test Kubernetes locally on OSX/macOS. Testing Kubernetes without access to a cloud provider in a local lab is not as easy as it sounds, and I’d like to share some of my experiences from this adventure. For those already experienced with the VirtualBox & Vagrant combination, I can tell you that it doesn’t work here. Since Kubernetes itself requires virtualization, nesting a virtual environment inside another VirtualBox VM raises several issues. After trying to bring up a cluster for a day or two, I gave up on my traditional lab and discovered that Kubernetes has an alternative solution called minikube.

Installation

If your OSX/macOS doesn’t have brew, I strongly recommend installing it. My OSX/macOS version at the time of this post was macOS 10.14.3 (18D109).

$ brew update && brew install kubectl && brew cask install docker minikube virtualbox

Once minikube is installed, we’ll need to start the virtual environment that is required to run our operator.

I’m starting my minikube environment with 4 GB of memory, since our Percona XtraDB Cluster (PXC) will have 3 MySQL nodes + 1 ProxySQL pod.

$ minikube start --memory 4096
😄  minikube v0.35.0 on darwin (amd64)
🔥  Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
📶  "minikube" IP address is 192.168.99.100
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.13.4 ...
🚀  Launching Kubernetes v1.13.4 using kubeadm ...
⌛  Waiting for pods: apiserver proxy etcd scheduler controller addon-manager dns
🔑  Configuring cluster permissions ...
🤔  Verifying component health .....
💗  kubectl is now configured to use "minikube"
🏄  Done! Thank you for using minikube!

We’re now ready to install Percona XtraDB Cluster on Kubernetes.

Setup

Clone the Kubernetes Operator for MySQL repository:

$ git clone -b release-0.2.0 https://github.com/percona/percona-xtradb-cluster-operator
Cloning into 'percona-xtradb-cluster-operator'...
remote: Enumerating objects: 191, done.
remote: Counting objects: 100% (191/191), done.
remote: Compressing objects: 100% (114/114), done.
remote: Total 10321 (delta 73), reused 138 (delta 67), pack-reused 10130
Receiving objects: 100% (10321/10321), 17.04 MiB | 3.03 MiB/s, done.
Resolving deltas: 100% (3526/3526), done.
Checking out files: 100% (5159/5159), done.
$ cd percona-xtradb-cluster-operator

Here we have to make the following modifications for this operator to work on OSX/macOS.

  1. Reduce memory allocation for each pod. (Note: Kubernetes reads a lowercase m suffix as milli-units, so 500M or 500Mi is the conventional way to spell 500 megabytes.)
  2. Reduce CPU usage for each pod.
  3. Change the topology type (because we want to run all PXC instances on one node).

$ sed -i.bak 's/1G/500m/g' deploy/cr.yaml
$ grep "memory" deploy/cr.yaml
        memory: 500m
      #   memory: 500m
        memory: 500m
      #   memory: 500m
$ sed -i.bak 's/600m/200m/g' deploy/cr.yaml
$ grep "cpu" deploy/cr.yaml
        cpu: 200m
      #   cpu: "1"
        cpu: 200m
      #   cpu: 700m
$ grep "topology" deploy/cr.yaml
      topologyKey: "kubernetes.io/hostname"
    #   topologyKey: "failure-domain.beta.kubernetes.io/zone"
$ sed -i.bak 's|kubernetes.io/hostname|none|g' deploy/cr.yaml
$ grep "topology" deploy/cr.yaml
      topologyKey: "none"
    #   topologyKey: "failure-domain.beta.kubernetes.io/zone"
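A note on that last substitution: sed's default / delimiter collides with the slash inside kubernetes.io/hostname, so the pattern needs an alternate delimiter such as |. Here's a minimal self-contained sketch on a throwaway file (the /tmp path is just for illustration):

```shell
# Reproduce the topologyKey substitution on a scratch copy, using "|" as the
# sed delimiter so the "/" inside the pattern is treated literally.
printf 'topologyKey: "kubernetes.io/hostname"\n' > /tmp/cr-snippet.yaml
sed -i.bak 's|kubernetes.io/hostname|none|g' /tmp/cr-snippet.yaml
cat /tmp/cr-snippet.yaml   # topologyKey: "none"
```

The `-i.bak` form works on both BSD (macOS) and GNU sed, which is why it is used throughout this post.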

We’re now ready to deploy our PXC via the operator.

$ kubectl apply -f deploy/crd.yaml
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbbackups.pxc.percona.com created
$ kubectl create namespace pxc
namespace/pxc created
$ kubectl config set-context $(kubectl config current-context) --namespace=pxc
Context "minikube" modified.
$ kubectl apply -f deploy/rbac.yaml
role.rbac.authorization.k8s.io/percona-xtradb-cluster-operator created
rolebinding.rbac.authorization.k8s.io/default-account-percona-xtradb-cluster-operator created
$ kubectl apply -f deploy/operator.yaml
deployment.apps/percona-xtradb-cluster-operator created
$ kubectl apply -f deploy/secrets.yaml
secret/my-cluster-secrets created
$ kubectl apply -f deploy/configmap.yaml
configmap/pxc created
$ kubectl apply -f deploy/cr.yaml
perconaxtradbcluster.pxc.percona.com/cluster1 created

Now we’re ready to monitor the progress of our deployment.

$ kubectl get pods
NAME                                               READY   STATUS              RESTARTS   AGE
cluster1-pxc-node-0                                0/1     ContainerCreating   0          86s
cluster1-pxc-proxysql-0                            1/1     Running             0          86s
percona-xtradb-cluster-operator-5857dfcb6c-g7bbg   1/1     Running             0          109s

If any of the pods has difficulty reaching the Running state, you can inspect it with kubectl describe:

$ kubectl describe pod cluster1-pxc-node-0
Name:               cluster1-pxc-node-0
Namespace:          pxc
Priority:           0
.
..
...
Events:
  Type     Reason            Age                     From               Message
  ----     ------            ----                    ----               -------
  Warning  FailedScheduling  3m47s (x14 over 3m51s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         3m47s                   default-scheduler  Successfully assigned pxc/cluster1-pxc-node-0 to minikube
  Normal   Pulling           3m45s                   kubelet, minikube  pulling image "perconalab/pxc-openshift:0.2.0"
  Normal   Pulled            118s                    kubelet, minikube  Successfully pulled image "perconalab/pxc-openshift:0.2.0"
  Normal   Created           117s                    kubelet, minikube  Created container
  Normal   Started           117s                    kubelet, minikube  Started container
  Warning  Unhealthy         89s                     kubelet, minikube  Readiness probe failed:

At this stage, we’re ready to verify our cluster as soon as we see the following output (READY 1/1):

$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
cluster1-pxc-node-0                                1/1     Running   0          7m38s
cluster1-pxc-node-1                                1/1     Running   0          4m46s
cluster1-pxc-node-2                                1/1     Running   0          2m25s
cluster1-pxc-proxysql-0                            1/1     Running   0          7m38s
percona-xtradb-cluster-operator-5857dfcb6c-g7bbg   1/1     Running   0          8m1s

In order to connect to this cluster, we’ll need to deploy a client pod with shell access.

$ kubectl run -i --rm --tty percona-client --image=percona:5.7 --restart=Never -- bash -il
If you don't see a command prompt, try pressing enter.
bash-4.2$ mysql -h cluster1-pxc-proxysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3617
Server version: 5.5.30 (ProxySQL)
Copyright (c) 2009-2019 Percona LLC and/or its affiliates
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> \s
--------------
mysql  Ver 14.14 Distrib 5.7.25-28, for Linux (x86_64) using  6.2
Connection id:		3617
Current database:	information_schema
Current user:		[email protected]e
SSL:			Not in use
Current pager:		stdout
Using outfile:		''
Using delimiter:	;
Server version:		5.5.30 (ProxySQL)
Protocol version:	10
Connection:		cluster1-pxc-proxysql via TCP/IP
Server characterset:	latin1
Db     characterset:	utf8
Client characterset:	latin1
Conn.  characterset:	latin1
TCP port:		3306
Uptime:			14 min 1 sec
Threads: 1  Questions: 3  Slow queries: 0
--------------

A few things to remember:

  • Secrets for this setup are defined in deploy/secrets.yaml; you can decode a value via:
$ echo -n '{secret}' | base64 -D
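For example, decoding a made-up value (not a real secret from this setup; note that Linux coreutils spells the decode flag -d, while the base64 shipped with macOS uses -D):

```shell
# Decode a base64-encoded secret value; the encoded string below is an
# illustrative example, not one of the operator's real secrets.
encoded='cm9vdF9wYXNzd29yZA=='
echo -n "$encoded" | base64 -d   # prints: root_password
```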

  • To reconnect to the client shell:
$ kubectl run -i --tty percona-client --image=percona:5.7 -- sh

  • To redeploy the cluster, delete it first and then repeat the steps above without configuration changes:
$ kubectl delete -f deploy/cr.yaml

  • To stop and delete the minikube virtual environment:
$ minikube stop

$ minikube delete

Credits


Photo by frank mckenna on Unsplash

Read more at: http://www.mysqlperformanceblog.com/

MongoDB on ARM Processors


ARM processors have been around for a while. In mid-2015/2016 there were a couple of attempts by the community to port MongoDB to work with this architecture. At the time, the main storage engine was MMAP and most of the available ARM boards were 32-bits. Overall, the port worked, but the fact is having MongoDB running on a Raspberry Pi was more a hack than a setup. The public cloud providers didn’t yet offer machines running with these processors.

The ARM processors are power-efficient and, for this reason, they are used in smartphones, smart devices and, now, even laptops. It was just a matter of time to have them available in the cloud as well. Now that AWS is offering ARM-based instances you might be thinking: “Hmmm, these instances include the same amount of cores and memory compared to the traditional x86-based offers, but cost a fraction of the price!”.

But do they perform alike?

In this blog, we selected three different AWS instances to compare: one powered by an ARM processor, the second backed by a traditional x86_64 Intel processor with the same number of cores and memory as the ARM instance, and finally another Intel-backed instance that costs roughly the same as the ARM instance but carries half as many cores. We acknowledge these processors are not supposed to be “equivalent”, and we do not intend to go deeper into CPU architecture in this blog. Our goal is purely to check how the ARM-backed instance fares in comparison to the Intel-based ones.

These are the instances we will consider in this blog post.

Methodology

We will use the Yahoo Cloud Serving Benchmark (YCSB, https://github.com/brianfrankcooper/YCSB) running on a dedicated instance (c5d.4xlarge) to simulate load in three distinct tests:

  1. a load of 1 billion documents in one collection having only the primary key (which we’ll call Inserts).
  2. a workload comprised of exclusively reads (Reads)
  3. a workload comprised of a mix of 70% reads, 5% scans, and 25% updates (Reads/Updates)

We will run each test with a varying number of concurrent threads (32, 64, and 128), repeating each set three times and keeping only the second-best result.

All instances will run the same MongoDB version (4.0.3, installed from a tarball and running with default settings) and operating system, Ubuntu 16.04. We chose this setup because MongoDB’s offering includes an ARM version for Ubuntu-based machines.

All the instances will be configured with:

  • 100 GB EBS with 5000 PIOPS and 20 GB EBS boot device
  • Data volume formatted with XFS, 4k blocks
  • Default swappiness and disk scheduler
  • Default kernel parameters
  • Enhanced cloud watch configured
  • Free monitoring tier enabled

Preparing the environment

We start with the setup of the benchmark software we will use for the test, YCSB. The first task was to spin up a powerful machine (c5d.4xlarge) to run the software and then prepare the environment:

YCSB requires Java, Maven, Python, and pymongo, which do not come by default with our Linux version (Ubuntu Server x86). Here are the steps we used to configure our environment:

Installing Java

sudo apt-get install default-jdk   # java-devel is an RPM name; on Ubuntu the JDK package is default-jdk

Installing Maven

wget http://ftp.heanet.ie/mirrors/www.apache.org/dist/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.tar.gz
sudo tar xzf apache-maven-*-bin.tar.gz -C /usr/local
cd /usr/local
sudo ln -s apache-maven-* maven
sudo vi /etc/profile.d/maven.sh

Add the following to maven.sh

export M2_HOME=/usr/local/maven
export PATH=${M2_HOME}/bin:${PATH}
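As a quick sanity check (a sketch assuming the /usr/local/maven symlink created above), the two exports should put Maven's bin directory first on the PATH:

```shell
# Apply the same variables maven.sh sets, then inspect the first PATH entry.
export M2_HOME=/usr/local/maven
export PATH=${M2_HOME}/bin:${PATH}
echo "${PATH%%:*}"   # prints: /usr/local/maven/bin
```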

Installing Python 2.7

sudo apt-get install python2.7

Installing pip to resolve the pymongo dependency

sudo apt-get install python-pip

Installing pymongo (driver)

sudo pip install pymongo

Installing YCSB

curl -O --location https://github.com/brianfrankcooper/YCSB/releases/download/0.5.0/ycsb-0.5.0.tar.gz
tar xfvz ycsb-0.5.0.tar.gz
cd ycsb-0.5.0

YCSB comes with different workloads, and also allows for the customization of a workload to match your own requirements. If you want to learn more about the workloads, have a look at https://github.com/brianfrankcooper/YCSB/blob/master/workloads/workload_template

First, we will edit the workloads/workloada file to perform 1 billion inserts (for our first test) while also preparing it to later perform only reads (for our second test):

recordcount=1000000
operationcount=1000000
workload=com.yahoo.ycsb.workloads.CoreWorkload
readallfields=true
readproportion=1
updateproportion=0.0

We will then change the workloads/workloadb file so as to provide a mixed workload for our third test. We break it down into 70% reads, 25% updates, and 5% scans, while also placing a cap on the maximum number of scanned documents (2000) in an effort to emulate real traffic – workloads are not perfect, right?

recordcount=10000000
operationcount=10000000
workload=com.yahoo.ycsb.workloads.CoreWorkload
readallfields=true
readproportion=0.7
updateproportion=0.25
scanproportion=0.05
insertproportion=0
maxscanlength=2000
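As a sanity check on the file above, the four *proportion settings should add up to 1.0. A small sketch that verifies this against a copy of the values (the /tmp path is only for the check):

```shell
# Copy the proportion lines from workloadb and confirm they sum to 1.00.
cat > /tmp/workloadb-check <<'EOF'
readproportion=0.7
updateproportion=0.25
scanproportion=0.05
insertproportion=0
EOF
awk -F= '/proportion/ { s += $2 } END { printf "%.2f\n", s }' /tmp/workloadb-check   # 1.00
```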

With that, we have the environment configured for testing.

Running the tests

With all instances configured and ready, we ran the stress test against our MongoDB servers using the following command:

./bin/ycsb [load/run] mongodb -s -P workloads/workload[ab] -threads [32/64/128] \
 -p mongodb.url=mongodb://xxx.xxx.xxx.xxx:27017/ycsb0000[0-9] \
 -jvm-args="-Dlogback.configurationFile=disablelogs.xml"

The parameters between brackets varied according to the instance and operation being executed:

  • [load/run] load means insert data while run means perform action (update/read)
  • workload[a/b] references the different workloads we’ve used
  • [32/64/128] indicate the number of concurrent threads being used for the test
  • ycsb0000[0-9] is the database name we’ve used for the tests (for reference only)

Results

Without further ado, the results of our tests are summarized below:

[Results tables from the original post]

Performance cost

Considering throughput alone – and in the context of those tests, particularly the last one – you may get more performance for the same cost. That’s certainly not always the case, which our results above also demonstrate. And, as usual, it depends on “how much performance do you need” – a matter that is even more pertinent in the cloud. With that in mind, we had another look at our data under the “performance cost” lens.

As we saw above, the c5.4xlarge instance performed better than the other two instances for a little over 50% more (in terms of cost). Did it deliver 50% more (performance) as well? Well, sometimes it did even more than that, but not always. We used the following formula to extrapolate the OPS (Operations Per Second) data we got from our tests into OPH (Operations Per Hour), so we could then calculate how much bang (operations) for the buck (US$1) each instance was able to provide:

transactions/hour/US$1 = (OPS * 3600) / instance cost per hour
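Plugging made-up numbers into the formula (illustrative only, not our benchmark data): an instance sustaining 20,000 OPS at US$0.68/hour would deliver:

```shell
# (20000 ops/s * 3600 s) / $0.68 per hour = operations per hour per US$1
ops=20000
cost_per_hour=0.68
awk -v ops="$ops" -v cost="$cost_per_hour" \
    'BEGIN { printf "%.0f operations/hour per US$1\n", (ops * 3600) / cost }'
```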

This is, of course, an artificial metric that aims to correlate performance and cost. For this reason, instead of plotting the raw values, we have normalized the results using the best performing instance as the baseline (100%):

[Normalized results charts from the original post]

The intent behind these charts was only to demonstrate another way to evaluate how much we’re getting for what we’re paying. Of course, you need a clear understanding of your own requirements in order to make a balanced decision.

Parting thoughts

We hope this post awakens your curiosity not only about how MongoDB may perform on ARM-based servers, but also by demonstrating another way you can perform your own tests with the YCSB benchmark. Feel free to reach out to us through the comments section below if you have any suggestions, questions, or other observations to make about the work we presented here.


Using pg_repack to Rebuild PostgreSQL Database Objects Online

In this blog post, we’ll look at how to use pg_repack to rebuild PostgreSQL database objects online.

We’ve seen a lot of questions regarding the options available in PostgreSQL for rebuilding a table online. We created this blog post to explain the pg_repack extension, available in PostgreSQL for this requirement. pg_repack is a well-known extension that was created, and is maintained, as an open source project by several authors.

There are three main reasons why you might need to use pg_repack on a PostgreSQL server:

  1. Reclaim free space from a table to disk, after deleting a huge chunk of records
  2. Rebuild a table to re-order the records and shrink/pack them into a smaller number of pages. This may let a query fetch just one page (or fewer than n pages) instead of n pages from disk; in other words, less IO and more performance.
  3. Reclaim free space from a table that has grown in size with a lot of bloat due to improper autovacuum settings.

You might have already read our previous articles that explained what bloat is, and discussed the internals of autovacuum. After reading these articles, you can see there is an autovacuum background process that removes dead tuples from a table and allows the space to be re-used by future updates/inserts on that table. Over a period of time, tables that take the most updates or deletes may accumulate a lot of bloated space due to poorly tuned autovacuum settings. This leads to slow-performing queries on these tables. Rebuilding the table is the best way to avoid this.

Why is just autovacuum not enough for tables with bloat?

We have discussed several parameters that change the behavior of an autovacuum process in this blog post. There cannot be more than autovacuum_max_workers autovacuum processes running in a database cluster at a time. At the same time, due to untuned autovacuum settings and no manual vacuuming of the database as weekly or monthly jobs, many tables can be skipped by autovacuum. We have discussed in this post that the default autovacuum settings run autovacuum on a table with ten records more often than on a table with a million records. So it is very important to tune your autovacuum settings, set table-level customized autovacuum parameters, and enable automated jobs to identify tables with huge bloat and run manual vacuum on them as scheduled jobs during off-peak times (after thorough testing).

VACUUM FULL

VACUUM FULL is the default option available with a PostgreSQL installation that allows us to rebuild a table. This is similar to ALTER TABLE in MySQL. However, this command acquires an exclusive lock, blocking both reads and writes on the table.

VACUUM FULL tablename;

pg_repack

pg_repack is an extension available for PostgreSQL that helps us rebuild a table online. This is similar to pt-online-schema-change for online table rebuild/reorg in MySQL. However, pg_repack works only for tables with a primary key or a NOT NULL unique key.

Installing pg_repack extension

In RedHat/CentOS/OEL from PGDG Repo

Obtain the latest PGDG repo from https://yum.postgresql.org/ and perform the following step:

# yum install pg_repack11

Similarly, for PostgreSQL 10:

# yum install pg_repack10

In Debian/Ubuntu from PGDG repo

Add certificates, add the repo, and install pg_repack:

The following certificate may change; please validate it before performing these steps.
# sudo apt-get install wget ca-certificates
# wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
# sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
# sudo apt-get update
# apt-get install postgresql-server-dev-11
# apt-get install postgresql-11-repack

Loading and creating pg_repack extension

Step 1:

You need to add pg_repack to shared_preload_libraries. To do so, set this parameter in the postgresql.conf or postgresql.auto.conf file:

shared_preload_libraries = 'pg_repack'

Setting this parameter requires a restart.

$ pg_ctl -D $PGDATA restart -mf

Step 2:

In order to start using pg_repack, you must create the extension in each database where you wish to run it:

$ psql
\c percona
CREATE EXTENSION pg_repack;

Using pg_repack to Rebuild Tables Online

Similar to pt-online-schema-change, you can use the --dry-run option to see whether a table can be rebuilt using pg_repack. When you rebuild a table using pg_repack, all of its associated indexes are rebuilt automatically. You can also use -t instead of --table as an argument to rebuild a specific table.

This is the success message you see when a table satisfies the requirements for pg_repack:

$ pg_repack --dry-run -d percona --table scott.employee
INFO: Dry run enabled, not executing repack
INFO: repacking table "scott.employee"

And this is the error message when a table does not satisfy the requirements for pg_repack:

$ pg_repack --dry-run -d percona --table scott.sales
INFO: Dry run enabled, not executing repack
WARNING: relation "scott.sales" must have a primary key or not-null unique keys

Now, to execute the rebuild of the table scott.employee online, you can use the following command. It is just the previous command without --dry-run.

$ pg_repack -d percona --table scott.employee
INFO: repacking table "scott.employee"

Rebuilding Multiple Tables using pg_repack

Use an additional --table for each table you wish to rebuild.

Dry Run

$ pg_repack --dry-run -d percona --table scott.employee --table scott.departments
INFO: Dry run enabled, not executing repack
INFO: repacking table "scott.departments"
INFO: repacking table "scott.employee"

Execute

$ pg_repack -d percona --table scott.employee --table scott.departments
INFO: repacking table "scott.departments"
INFO: repacking table "scott.employee"

Rebuilding an entire Database using pg_repack

You can rebuild an entire database online using -d. Any table that is not eligible for pg_repack is skipped automatically.

Dry Run

$ pg_repack --dry-run -d percona
INFO: Dry run enabled, not executing repack
INFO: repacking table "scott.departments"
INFO: repacking table "scott.employee"

Execute

$ pg_repack -d percona
INFO: repacking table "scott.departments"
INFO: repacking table "scott.employee"


Running pg_repack in parallel jobs

To perform a parallel rebuild of a table, you can use the -j option. Please ensure that you have sufficient free CPUs that can be allocated to run pg_repack in parallel.

$ pg_repack -d percona -t scott.employee -j 4
NOTICE: Setting up workers.conns
INFO: repacking table "scott.employee"

Running pg_repack remotely

You can always run pg_repack from a remote machine. This helps in scenarios where we have PostgreSQL databases deployed on Amazon RDS. To run pg_repack from a remote machine, you must have the same version of pg_repack installed on the remote machine as on the database server (say, AWS RDS).


Upcoming Webinar Wed 2/6: Percona Software News and Roadmap Update

Join Percona CEO Peter Zaitsev as he presents Percona Software News and Roadmap Update on Wednesday, February 6, 2019, at 11:00 AM PST (UTC-8) / 2:00 PM EST (UTC-5).

Register Now

Come and listen to Percona CEO Peter Zaitsev discuss what’s new in Percona open source software. Topics include Percona Server for MySQL and MongoDB, Percona XtraBackup, Percona Toolkit, Percona XtraDB Cluster and Percona Monitoring and Management.

During this webinar, Peter will talk about newly released features in Percona software. He will also show a few quick demos and share with you highlights from the Percona open source software roadmap.

Peter will also talk about new developments in Percona commercial services and finish with a Q&A.

Register today to join Peter for his Percona Software News and Roadmap Update.


New Percona Package Signing Key Requires Update on RHEL and CentOS

On December 20th, 2018 we began to sign our packages with a new encryption key. Our percona-release package contains both the latest and older package signing keys. However, older versions of the percona-release rpm package do not contain our latest key. Users with older percona-release packages installed, that have not been updated, may see an error message when trying to install our newer packages.

Red Hat Enterprise Linux (RHEL) and CentOS users may see an error similar to the following:

The GPG keys listed for the "Percona-Release YUM repository - x86_64" repository are already installed but they are not correct for this package.
Check that the correct key URLs are configured for this repository.

Thankfully, the solution to this problem is simple. You will need to update your percona-release package before installing packages that are signed with the latest encryption key:

$ sudo yum update percona-release

Ubuntu and Debian systems will not encounter this error as package signing and key verification works differently on those systems.


Photo by Markus Spiske on Unsplash


Upcoming Webinar Thurs 2/7: Top Trends in Modern Data Architecture for 2019

Please join Percona’s PMM Product Manager, Michael Coburn, for a webinar on the Top Trends in Modern Data Architecture for 2019, hosted by DBTA, on Thursday, February 7th at 11:00 AM PST (UTC-8) / 2:00 PM EST (UTC-5).

Register Now

A strong data architecture strategy is critical to supporting your organization’s data-driven goals. AI and machine learning, data discovery and real-time analytics reflect that notion. Additionally, greater speed, flexibility, and scalability are common wish-list items. Smarter data governance and security capabilities are not that far behind. What’s more, many new technologies and approaches have come to the forefront of data architecture discussions. Data lakes, in-memory databases and engines like Spark and cloud services of all shapes and sizes are just a few examples.

In order to learn more about the top trends in modern data architecture for 2019, register for this webinar today.


Percona Server for MongoDB 3.6.10-3.0 Is Now Available


Percona announces the release of Percona Server for MongoDB 3.6.10-3.0 on February 6, 2019. Download the latest version from the Percona website or the Percona Software Repositories. This release is also available for Ubuntu 18.10 (Cosmic Cuttlefish).

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 3.6 Community Edition. It supports MongoDB 3.6 protocols and drivers.

Percona Server for MongoDB extends Community Edition functionality by including the Percona Memory Engine storage engine, as well as several enterprise-grade features. It also includes MongoRocks storage engine (which is now deprecated). Percona Server for MongoDB requires no changes to MongoDB applications or code.

In Percona Server for MongoDB 3.6.10-3.0, data at rest encryption becomes GA. The data at rest encryption feature now covers the temporary files used for external sorting and the rollback files. You can decrypt and examine the contents of the rollback files using the new perconadecrypt command line tool.

In this release, Percona Server for MongoDB supports the ngram full-text search engine. Thanks to Sunguck Lee (@SunguckLee) for this contribution. To enable the ngram full-text search engine, create an index passing ngram to the default_language parameter:

mongo > db.collection.createIndex({name:"text"}, {default_language: "ngram"})

New Features

  • PSMDB-276: The perconadecrypt tool is now available for decrypting the encrypted rollback files.
  • PSMDB-250: The ngram full-text search engine has been added to Percona Server for MongoDB. Thanks to @SunguckLee on GitHub.

Bugs Fixed

  • PSMDB-234: It was possible to use a key file for encryption the owner of which was not the owner of the mongod process.
  • PSMDB-269: In some cases, a hot backup was not using the correct path to the keydb directory designated for data encryption.
  • PSMDB-273: When using data at rest encryption, temporary files for external sorting and rollback files were not encrypted.
  • PSMDB-272: mongos could crash when running the createBackup command.
  • PSMDB-233: WiredTiger encryption options were silently ignored at server startup, although a storage engine different from WiredTiger was used.
  • PSMDB-257: MongoDB could not be started with a group-readable key file owned by root.
  • PSMDB-266: In some cases, it was possible to add arbitrary collections to the keydb directory which may only store encryption data.

Other bugs fixed: PSMDB-239, PSMDB-243.

The Percona Server for MongoDB 3.6.10-3.0 release notes are available in the official documentation.


Column Families in MyRocks

In my webinar How To Rock with MyRocks, I briefly mentioned the column families feature in MyRocks, which allows fine tuning of indexes and primary keys.

Let’s review it in more detail.

To recap, MyRocks is based on the RocksDB library, which stores all data in [key => value] pairs. When this translates to MySQL, all primary keys (data) and secondary keys (indexes) are stored in [key => value] pairs as well, which by default are assigned to the “default” column family.

Each column family has an individual set of:

  • SST files, and their parameters
  • Memtable and its parameters
  • Bloom filters, and their parameters
  • Compression settings

There is an N:1 relation between tables/indexes and column families, so schematically it looks like this:

[diagram: tables and indexes mapping to column families in MyRocks]

How do you assign tables and indexes to a column family?

It is defined in the COMMENT section for a key or primary key:

CREATE TABLE tab1(
  a INT,
  b INT,
  PRIMARY KEY (a) COMMENT 'cfname=cf1',
  KEY key_b (b) COMMENT 'cfname=cf2'
);

Now, if you want to define individual parameters for column families, you should use the rocksdb_override_cf_options server variable. For example:

rocksdb_override_cf_options='cf1={compression=kNoCompression};cf2={compression=kLZ4Compression,bottommost_compression=kZSTD}'
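If you manage many column families, the override string is easy to generate programmatically. Below is a minimal illustrative sketch in Python; the helper name and its exact serialization are my own, not part of MyRocks, so compare the output against the documented option syntax before using it:

```python
def build_cf_override(cf_options):
    """Serialize per-column-family options into the
    rocksdb_override_cf_options string format shown above."""
    parts = []
    for cf, opts in cf_options.items():
        # Options inside one CF's braces are comma-separated,
        # CF entries themselves are separated by semicolons.
        inner = ",".join(f"{k}={v}" for k, v in opts.items())
        parts.append(f"{cf}={{{inner}}}")
    return ";".join(parts)

print(build_cf_override({
    "cf1": {"compression": "kNoCompression"},
    "cf2": {"compression": "kLZ4Compression",
            "bottommost_compression": "kZSTD"},
}))
# cf1={compression=kNoCompression};cf2={compression=kLZ4Compression,bottommost_compression=kZSTD}
```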

Be careful of defining too many column families: as I mentioned, each column family will use an individual memtable, which takes 64MB of memory by default.
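To put a rough number on that, here is a back-of-the-envelope sketch using the defaults visible in ROCKSDB_CF_OPTIONS (write_buffer_size=67108864, max_write_buffer_number=2); actual usage depends on the workload, this is only an upper bound on active+immutable memtables:

```python
# Defaults as reported by INFORMATION_SCHEMA.ROCKSDB_CF_OPTIONS
WRITE_BUFFER_SIZE = 64 * 1024 * 1024   # 67108864 bytes per memtable
MAX_WRITE_BUFFER_NUMBER = 2            # memtables per column family

def memtable_budget_mb(num_column_families):
    """Worst-case memory pinned by memtables across all CFs, in MB."""
    total = num_column_families * WRITE_BUFFER_SIZE * MAX_WRITE_BUFFER_NUMBER
    return total // (1024 * 1024)

print(memtable_budget_mb(10))  # 10 CFs can pin up to 1280 MB in memtables
```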

There is also an individual set of SST files per column family. You can see how they perform with:

SHOW ENGINE ROCKSDB STATUS;

  Type: CF_COMPACTION
  Name: cf1
Status: 
** Compaction Stats [cf1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0     0.00 KB   0.0      0.0     0.0      0.0       1.5      1.5       0.0   1.0      0.0     99.1        15       37    0.410       0      0
  L5      3/0   197.05 MB   0.8      0.4     0.4      0.0       0.4      0.4       0.0   1.0     75.6     75.6         6        1    5.923   8862K      0
  L6      7/0   341.24 MB   0.0      1.7     1.3      0.5       0.8      0.3       0.0   0.6     42.8     19.5        42        7    5.933     61M      0
 Sum     10/0   538.29 MB   0.0      2.2     1.7      0.5       2.7      2.2       0.0   1.8     35.5     44.1        63        45    1.392     70M     0
 Int      0/0     0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0         0        0    0.000       0      0
  Type: CF_COMPACTION
  Name: cf2
Status: 
** Compaction Stats [cf2] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0     0.00 KB   0.0      0.0     0.0      0.0       0.3      0.3       0.0   1.0      0.0     13.5        22       22    1.023       0      0
  L6      4/0   178.61 MB   0.0      0.6     0.3      0.3       0.4      0.2       0.0   1.5      9.3      7.3        61        5   12.243     72M      0
 Sum      4/0   178.61 MB   0.0      0.6     0.3      0.3       0.7      0.5       0.0   2.5      6.8      9.0        84       27    3.100     72M      0
 Int      0/0     0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0         0        0    0.000       0      0

To check the current column families and their settings you can use:

SELECT * FROM INFORMATION_SCHEMA.ROCKSDB_CF_OPTIONS;
| cf1        | COMPARATOR                                                      | RocksDB_SE_v3.10                                                                                          |
| cf1        | MERGE_OPERATOR                                                  | NULL                                                                                                      |
| cf1        | COMPACTION_FILTER                                               | NULL                                                                                                                                                                     
| cf1        | COMPACTION_FILTER_FACTORY                                       | Rdb_compact_filter_factory                                                                                                                                               
| cf1        | WRITE_BUFFER_SIZE                                               | 67108864                                                                                                       
| cf1        | MAX_WRITE_BUFFER_NUMBER                                         | 2                                                                                                                                                                        
| cf1        | MIN_WRITE_BUFFER_NUMBER_TO_MERGE                                | 1                                                                                                                                                                        
| cf1        | NUM_LEVELS                                                      | 7                                                                                                                                                                        
| cf1        | LEVEL0_FILE_NUM_COMPACTION_TRIGGER                              | 4                                                                                                                                                                        
| cf1        | LEVEL0_SLOWDOWN_WRITES_TRIGGER                                  | 20                                                                                                                                                                       
| cf1        | LEVEL0_STOP_WRITES_TRIGGER                                      | 36                                                                                                                                                                       
| cf1        | MAX_MEM_COMPACTION_LEVEL                                        | 0                                                                                                                                                                        
| cf1        | TARGET_FILE_SIZE_BASE                                           | 67108864                                                                                                                                                                 
| cf1        | TARGET_FILE_SIZE_MULTIPLIER                                     | 1                                                                                                                                                                        
| cf1        | MAX_BYTES_FOR_LEVEL_BASE                                        | 268435456                                                                                                                                                                
| cf1        | LEVEL_COMPACTION_DYNAMIC_LEVEL_BYTES                            | ON                                                                                                                                                                       
| cf1        | MAX_BYTES_FOR_LEVEL_MULTIPLIER                                  | 10.000000                                                                                                                                                                
| cf1        | SOFT_RATE_LIMIT                                                 | 0.000000                                                                                                                                                                 
| cf1        | HARD_RATE_LIMIT                                                 | 0.000000                                                                                                                                                                 
| cf1        | RATE_LIMIT_DELAY_MAX_MILLISECONDS                               | 100                                                                                                                                                                      
| cf1        | ARENA_BLOCK_SIZE                                                | 0                                                                                                                                                                        
| cf1        | DISABLE_AUTO_COMPACTIONS                                        | OFF                                                                                                                                                                      
| cf1        | PURGE_REDUNDANT_KVS_WHILE_FLUSH                                 | ON                                                                                                                                                                       
| cf1        | MAX_SEQUENTIAL_SKIP_IN_ITERATIONS                               | 8                                                                                                                                                                        
| cf1        | MEMTABLE_FACTORY                                                | SkipListFactory                                                                                                                                                          
| cf1        | INPLACE_UPDATE_SUPPORT                                          | OFF                                                                                                                                                                      
| cf1        | INPLACE_UPDATE_NUM_LOCKS                                        | ON                                                                                                                                                                       
| cf1        | MEMTABLE_PREFIX_BLOOM_BITS_RATIO                                | 0.000000                                                                                                                                                                 
| cf1        | MEMTABLE_PREFIX_BLOOM_HUGE_PAGE_TLB_SIZE                        | 0                                                                                                                                                                        
| cf1        | BLOOM_LOCALITY                                                  | 0                                                                                                                                                                        
| cf1        | MAX_SUCCESSIVE_MERGES                                           | 0                                                                                                                                                                        
| cf1        | OPTIMIZE_FILTERS_FOR_HITS                                       | ON                                                                                                                                                                       
| cf1        | MAX_BYTES_FOR_LEVEL_MULTIPLIER_ADDITIONAL                       | 1:1:1:1:1:1:1                                                                                                                                                            
| cf1        | COMPRESSION_TYPE                                                | kNoCompression                                                                                                                                                           
| cf1        | COMPRESSION_PER_LEVEL                                           | NUL                                                                                                                                                                      
| cf1        | COMPRESSION_OPTS                                                | -14:-1:0                                                                                                                                                                 
| cf1        | BOTTOMMOST_COMPRESSION                                          | kLZ4Compression                                                                                                                                                          
| cf1        | PREFIX_EXTRACTOR                                                | NULL                                                                                                                                                                     
| cf1        | COMPACTION_STYLE                                                | kCompactionStyleLevel                                                                                                                                                    
| cf1        | COMPACTION_OPTIONS_UNIVERSAL                                    | {SIZE_RATIO=1; MIN_MERGE_WIDTH=2; MAX_MERGE_WIDTH=4294967295; MAX_SIZE_AMPLIFICATION_PERCENT=200; COMPRESSION_SIZE_PERCENT=-1; STOP_STYLE=kCompactionStopStyleTotalSize} |
| cf1        | COMPACTION_OPTION_FIFO::MAX_TABLE_FILES_SIZE                    | 1073741824                                                                                                                                                               
| cf1        | TABLE_FACTORY::FLUSH_BLOCK_POLICY_FACTORY                       | FlushBlockBySizePolicyFactory(0x4715df0)                                                                                                                                 
| cf1        | TABLE_FACTORY::CACHE_INDEX_AND_FILTER_BLOCKS                    | 1                                                                                                                                                                        
| cf1        | TABLE_FACTORY::CACHE_INDEX_AND_FILTER_BLOCKS_WITH_HIGH_PRIORITY | 0                                                                                                                                                                        
| cf1        | TABLE_FACTORY::PIN_L0_FILTER_AND_INDEX_BLOCKS_IN_CACHE          | 1                                                                                                                                                                        
| cf1        | TABLE_FACTORY::INDEX_TYPE                                       | 0                                                                                                                                                                        
| cf1        | TABLE_FACTORY::HASH_INDEX_ALLOW_COLLISION                       | 1                                                                                                                                                                        
| cf1        | TABLE_FACTORY::CHECKSUM                                         | 1                                                                                                                                                                        
| cf1        | TABLE_FACTORY::NO_BLOCK_CACHE                                   | 0                                                                                                                                                                        
| cf1        | TABLE_FACTORY::BLOCK_CACHE                                      | 0x470c880                                                                                                                                                                
| cf1        | TABLE_FACTORY::BLOCK_CACHE_NAME                                 | LRUCache                                                                                                                                                                 
| cf1        | TABLE_FACTORY::BLOCK_CACHE_OPTIONS                              |                                                                                                                                                                          
| cf1        | TABLE_FACTORY::CAPACITY                                         | 536870912                                                                                                                                                                
| cf1        | TABLE_FACTORY::NUM_SHARD_BITS                                   | 6                                                                                                                                                                        
| cf1        | TABLE_FACTORY::STRICT_CAPACITY_LIMIT                            | 0                                                                                                                                                                        
| cf1        | TABLE_FACTORY::HIGH_PRI_POOL_RATIO                              | 0.000                                                                                                                                                                    
| cf1        | TABLE_FACTORY::BLOCK_CACHE_COMPRESSED                           | (nil)                                                                                                                                                                    
| cf1        | TABLE_FACTORY::PERSISTENT_CACHE                                 | (nil)                                                                                                                                                                    
| cf1        | TABLE_FACTORY::BLOCK_SIZE                                       | 16384                                                                                                                                                                    
| cf1        | TABLE_FACTORY::BLOCK_SIZE_DEVIATION                             | 10                                                                                                                                                                       
| cf1        | TABLE_FACTORY::BLOCK_RESTART_INTERVAL                           | 16                                                                                                                                                                       
| cf1        | TABLE_FACTORY::INDEX_BLOCK_RESTART_INTERVAL                     | 1                                                                                                                                                                        
| cf1        | TABLE_FACTORY::METADATA_BLOCK_SIZE                              | 4096                                                                                                                                                                     
| cf1        | TABLE_FACTORY::PARTITION_FILTERS                                | 0                                                                                                                                                                        
| cf1        | TABLE_FACTORY::USE_DELTA_ENCODING                               | 1                                                                                                                                                                        
| cf1        | TABLE_FACTORY::FILTER_POLICY                                    | rocksdb.BuiltinBloomFilter                                                                                                                                               
| cf1        | TABLE_FACTORY::WHOLE_KEY_FILTERING                              | 1                                                                                                                                                                        
| cf1        | TABLE_FACTORY::VERIFY_COMPRESSION                               | 0                                                                                                                                                                        
| cf1        | TABLE_FACTORY::READ_AMP_BYTES_PER_BIT                           | 0                                                                                                                                                                        
| cf1        | TABLE_FACTORY::FORMAT_VERSION                                   | 2                                                                                                                                                                        
| cf1        | TABLE_FACTORY::ENABLE_INDEX_COMPRESSION                         | 1                                                                                                                                                                        
| cf1        | TABLE_FACTORY::BLOCK_ALIGN                                      | 0                   
    

As a reminder, MyRocks is available in Percona Server for MySQL 5.7 and Percona Server for MySQL 8.0. You can try it and share your experience!


Photo by Debby Hudson on Unsplash

Read more at: http://www.mysqlperformanceblog.com/

Percona Responds to MySQL LOCAL INFILE Security Issues

In this post, we'll cover Percona's thoughts about the current MySQL community discussion happening around MySQL LOCAL INFILE security issues.

This post is released given the already public discussion of this particular issue, with the exploitation code currently redacted to ensure forks of MySQL client libraries have sufficient time to implement their response strategies.

This post has since been updated to include the previously redacted content: in line with responsible disclosure, sufficient time has passed to allow forks to update and get those updates out for circulation.

Background

MySQL's LOCAL INFILE feature is fully documented by Oracle MySQL, and there is a legitimate use for the LOCAL INFILE feature to upload data to a MySQL server in a single statement from a file on the client system.

However, some MySQL clients can be coerced into sending content local to the machine they are running on, without having issued a LOCAL INFILE directive. This appears to be how the Adminer PHP web interface was attacked: it was pointed at a maliciously crafted MySQL service in order to extract file data from the host on which Adminer was deployed. This malicious "server" has, it would appear, existed since early 2013.

The attack requires the use of a malicious/crafted MySQL “server”, to send a request for the file in place of the expected response to the SQL query in the normal query response flow.

If, however, the client checks for the expected response, there is no file exfiltration without further effort. This was noted with Java and ProxySQL testing: a specific response was expected, and not sending the expected response would cause the client to retry.

I use the term "server" loosely here, as often this is simply a service emulating the MySQL protocol (protocol version 10) that does not actually provide complete MySQL interaction capability. That is theoretically possible, though, given enough effort, or by adapting a proxy to carry out this attack while backing onto a real MySQL server for the interaction capability.

For example, the “server” always responds OK to any auth attempt, regardless of credentials used, and doesn’t interpret any SQL sent. Consequently, you can send any string as a query, and the “server” responds with the request for a file on the client, which the client dutifully provides if local_infile is enabled.

There is potential, no doubt, for a far more sophisticated “server”. However, in my testing I did not go to this length, and instead produced the bare minimum required to test this theory—which proved to be true where local_infile was enabled.

The attack flow is as follows:

  1. The client connects to the MySQL server and performs MySQL protocol handshaking to agree on capabilities.
  2. Authentication handshake (the "server" often accepts any credentials passed to it).
  3. The client issues a query, e.g. SET NAMES or other SQL (the "server" ignores this and immediately responds with the file request of step 4).
  4. The server responds with a packet that is normally reserved for when it has been issued a "LOAD DATA LOCAL INFILE…" SQL statement (0xFB…).
  5. If vulnerable, the client responds with the full content of the requested file path, if it is present on the local file system and permissions allow it to be read.
    1. Client handling here varies: the client may drop the connection with a malformed packet error, or continue.
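The file-request packet in step 4 has a recognizable shape on the wire. The sketch below is illustrative Python, not the redacted test harness, and the file path is a hypothetical example; it builds a minimal request packet the way a malicious "server" would, and shows the byte check behind the |00 00 01 FB| IDS signature discussed later in this post:

```python
def build_infile_request(filename, seq=1):
    """Build a minimal MySQL protocol packet asking the client to
    send `filename`: 3-byte little-endian payload length, 1-byte
    sequence number, then the 0xFB marker followed by the path."""
    payload = b"\xfb" + filename.encode()
    return len(payload).to_bytes(3, "little") + bytes([seq]) + payload

def is_infile_request(packet):
    """True if a raw packet looks like a LOCAL INFILE file request
    (0xFB immediately after the 4-byte packet header)."""
    return len(packet) >= 5 and packet[4] == 0xFB

pkt = build_infile_request("/etc/passwd")  # hypothetical target path
print(pkt[:5].hex())          # 0c000001fb -> the 00 00 01 fb signature
print(is_infile_request(pkt))
```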

Exploitation testing

The following MySQL clients were tested via their respective Docker containers and default configurations. The bash script which orchestrated this is as follows: <REDACTED>

This tests the various forks of the MySQL client; along with some manual testing, the results were:

  • Percona Server for MySQL 5.7.24-26 (Not vulnerable)
    • PS 5.7.x aborts after server greeting
  • Percona Server for MySQL 5.6.42-64.2  (Not vulnerable)
    • PS 5.6 accepts the server greeting, proceeds to log in, aborts without handling malicious payload.
  • MariaDB 5.5
    • Susceptible to LOCAL INFILE abuse in testing
      • MariaDB has stated they will release a fix that tracks in the client to ensure the SQL for LOAD LOCAL INFILE was requested and otherwise drops the server request without handling.
  • MariaDB 10.0
    • Susceptible to LOCAL INFILE abuse in testing
      • MariaDB has stated they will release a fix that tracks in the client to ensure the SQL for LOAD LOCAL INFILE was requested and otherwise drops the server request without handling.
  • MariaDB 10.1.37
    • Susceptible to LOCAL INFILE abuse in testing
      • MariaDB has stated they will release a fix that tracks in the client to ensure the SQL for LOAD LOCAL INFILE was requested and otherwise drops the server request without handling.
  • MariaDB 10.4.1
    • Susceptible to LOCAL INFILE abuse in testing
      • MariaDB has stated they will release a fix that tracks in the client to ensure the SQL for LOAD LOCAL INFILE was requested and otherwise drops the server request without handling.
  • MySQL 5.7 (Not vulnerable by default)
    • Not susceptible to LOCAL INFILE abuse by default; enabling local_infile, however, makes it susceptible.
  • MySQL 5.6 (Not vulnerable by default)
    • Not susceptible to LOCAL INFILE abuse by default; enabling local_infile, however, makes it susceptible.
  • MySQL 8.0.14 (Not vulnerable by default)
    • Not susceptible to LOCAL INFILE abuse by default; enabling local_infile, however, makes it susceptible.
  • PHP 7 mysqli
    • Depends on libmysqlclient in use (As PHP’s mysqli is a C wrapper of the underlying library).
  • Ruby
    • Depends on libmysqlclient in use
    • Note: I couldn't get this to build on my laptop due to a reported syntax error in mysql.c. However, given this wraps libmysqlclient, I would suggest the result likely mirrors PHP's test.
  • ProxySQL
    • Underlying library is known susceptible to LOCAL INFILE abuse.
    • ProxySQL issues SQL to the backend MySQL server, and protocol commands such as PING, and expects specific results for queries it issues. This makes it difficult for the malicious server to stay generic; a targeted service that specifically seeks to attack ProxySQL is likely possible, however this has not been explored at this time.
  • Java
    • com.mysql.jdbc.Driver
      • As with ProxySQL, in testing this driver issues "background" SQL and expects a specific response. While it is theoretically possible to build a malicious service targeting this driver, this has not been explored at this time.
  • Connector/J

There are many more clients out there ranging from protocol compatible implementations to wrappers of the underlying c library.

Your own research will ensure you are taking appropriate measures should you choose/need to mitigate this risk in your controls.

Can/Should this be fixed?

This is a particularly tricky issue to correct in code, as the MySQL client needs to be aware of a LOAD DATA LOCAL INFILE SQL statement being sent; MariaDB's proposed fix implements this tracking. Even then, if a stored procedure issues a file request via LOAD DATA LOCAL INFILE, the client has no awareness of this being needed until the packet with the request is received, and local_infile can be abused. However, the intent is to allow the feature to load data, and as such DBAs/admins should seek to employ compensating controls to reduce the risk to their organization:

Mitigation

  • DO NOT implement any stored procedures which trigger a LOAD DATA LOCAL INFILE.

  • Close/remove/secure access to ANY web admin interfaces.
    • Remember, security through obscurity is no security at all. This only delays time to access, it does not prevent access.
  • Deploy mandatory access controls
    • SELinux, AppArmor, GRSecurity, etc. can all help to ensure your client is not reading anything unexpected, lowering your risk of exposure through proper configuration.
  • Deploy Egress controls on your application nodes to ensure your application server can only reach your MySQL service(s) and does not attempt to connect elsewhere (As the exploit requires a malicious MySQL service).
    • Iptables/firewalld/ufw/pfsense/other firewall/etc.
    • This ensures that your vulnerable clients are not connecting to anything you do not know about.
    • This does not protect against a skilled adversary. Your application needs to communicate out to the internet to serve pages, and running a malicious MySQL service on a suitably high random port can help to "hide" this network traffic.
  • Be aware of Domain Name Service (DNS) rebinding attacks if you are using a Fully Qualified Domain Name (FQDN) to connect between application and database server. Use an IP address or socket in configurations if possible to negate this attack.
  • Deploy MySQL Transport Layer Security (TLS) configuration to ensure the server you expect requires the use of TLS during connection, set your client (if possible) to VERIFY_IDENTITY to ensure TLS “fails closed” if the client fails to negotiate TLS, and to perform basic identity checking of the server being connected to.
    • This will NOT dissuade a determined adversary who has a presence in your network long enough to perform certificate spoofing (in theory), and nothing but time to carry this out.
    • mysslstrip can also lead to issues if your configuration does "fail open"; as such, it is imperative you have:
      • In my.cnf: ssl_mode=VERIFY_IDENTITY
      • On the CLI: --ssl-mode=VERIFY_IDENTITY
      • Be aware: This performs verification of the CA (Certificate Authority) and certificate hostname, this can lead to issues if you are using self-signed certificates and the CA is not trusted.
    • This is ONLY an issue if an adversary has the capability of being able to Man in the middle your Application <-> MySQL servers;
      • If they have this capability; this feature abuse is only a single avenue of data ex-filtration they can perform.
  • Deploy a Network Intrusion Detection System
    • There are many open source software (OSS) options, for example:
    • Set alerts on the logs, curate a response process to handle these alerts.
  • Client option mitigation may be possible; however, this varies from client to client and from underlying library to library.
    • MariaDB client binary.
      • Add to my.cnf: local_infile = 0
      • Or set --local-infile=0 on the command line
    • PHP / Ruby / Anything that relies on libmysqlclient
      • Replace libmysqlclient with a version that does not enable local_infile by default
        • This can be difficult, so ensure you test your process before running anything on production!
      • Switch to use PDO MySQL over MySQLi (PDO implementation implicitly sets, local_infile to 0 at the time of writing in php’s C code).
        • Authors note: mysqli_options($conn, MYSQLI_OPT_LOCAL_INFILE, false); failed to mitigate this in testing, YMMV (Your Mileage May Vary).
        • Attempting to set a custom handler to return nothing also failed to mitigate this. Again, YMMV.
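Taken together, the configuration mitigations above amount to a few option-file lines. This is a minimal sketch; note that ssl-mode is a MySQL (5.7.11+) client option, while MariaDB clients use --ssl-verify-server-cert instead, so adapt to your client and version:

```ini
# Applies to the mysql command-line client and anything else
# reading the standard option files.
[client]
# TLS "fails closed": require a trusted CA and a matching hostname.
ssl-mode=VERIFY_IDENTITY
# Never answer a LOCAL INFILE request packet from the server.
local-infile=0
```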

IDS Rule example

Here I provide an example “FAST” format rule for your IDS/IPS system:

Note, however, YMMV: this works with Snort and Suricata, and _may_ work with Zeek (formerly Bro), OSSEC, etc. Please test and adapt as needed:

alert tcp any any <> any any (msg:"MySQL LOCAL INFILE request packet detected"; content:"|00 00 01 FB|"; rawbytes; sid:1000001; rev:1;)

Note that this is only an example; it does not detect packets flowing over TLS connections.

If you are running an Intrusion Prevention System (IPS), you should change the rule action from alert to drop.

Here the rule is set to any any because an adversary may choose not to use port 3306 in an attempt to avoid detection; you can, of course, change this as desired to suit your needs.

You must also assess whether your applications use local_infile legitimately, and conduct your own threat modeling and impact analysis, prior to implementing such a rule.

Note that increasing the amount of alert “noise” your team receives will likely only result in the team becoming desensitized to it, and potentially missing an important alert as a result.

For example, you could modify the left and right side any any so that the rule matches anything in your internal network range communicating with anything outside that range:

alert tcp 192.168.1.0/24 any <> !192.168.1.0/24 any (msg:"MySQL LOCAL INFILE request packet detected"; content:"|00 00 01 FB|"; rawbytes; sid:1000002; rev:1;)

Adapting to your environment is key for this IDS rule to be effective.
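If you would rather alert from your own tooling, the signature the rule above matches can be checked in a few lines of Python. This is a minimal sketch, assuming the standard MySQL wire format (a 3-byte little-endian payload length, a 1-byte sequence number, then the payload, where a leading 0xFB byte marks a LOCAL INFILE request):

```python
def is_local_infile_request(packet: bytes) -> bool:
    """Return True if a raw MySQL packet is a LOCAL INFILE file request.

    Wire layout: 3-byte little-endian payload length, 1-byte sequence
    number, then the payload; a payload starting with 0xFB is the
    server asking the client to send it a local file.
    """
    if len(packet) < 5:
        return False
    payload_len = int.from_bytes(packet[0:3], "little")
    # Sanity check: declared length must cover the marker and filename.
    if payload_len != len(packet) - 4:
        return False
    return packet[4] == 0xFB

# The request a rogue server might send for /proc/self/environ;
# for short filenames, bytes 1..4 are exactly the |00 00 01 FB|
# signature the IDS rule matches on.
filename = b"/proc/self/environ"
payload = b"\xfb" + filename
packet = len(payload).to_bytes(3, "little") + b"\x01" + payload
print(is_local_infile_request(packet))  # True
```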

Further reading

As noted, this issue is already being publicly discussed; as such, I add links here to sources relevant to this discussion and its exploitation.

Exploitation Network flow

Here I am using Wireshark to show the TCP communication flow from the client to the MySQL ‘server’; the ‘server’ here is of course malicious, and is set to send the file request on each receipt of ‘Request Query’. As can be seen here, the ‘server’ masquerades as a 5.1.66 MySQL server running on Debian squeeze.


Now we jump to the malicious response packet, sent in reply to the earlier ‘Request Query’. In legitimate use of SQL, where LOCAL INFILE is issued, this request packet would be the expected response from the ‘server’. In this case, however, the ‘server’ is requesting the file /proc/self/environ; this file can contain a wealth of information, including anything you may have stored in an environment variable. In practice, this can be set to any full filepath.

I’ve highlighted the two key parts of this packet, the tail of mysql.packet_length (blue) which heads the request and the file path being requested (red).

Here the client responds with the content of the requested file path, and displays no indication the file content has been sent to the MySQL ‘server’.

As shown here the complete TCP conversation flow between the MySQL binary client and MySQL ‘server’ (content has been redacted).

Note that the SQL sent was I AM MYSQL BINARY, which is not valid SQL; however, this also demonstrates that the MySQL client does not parse SQL before sending it to the ‘server’.
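That last point is visible in the wire format itself. The sketch below, assuming the standard COM_QUERY framing (the usual 3-byte length and 1-byte sequence header, then command byte 0x03 followed by the raw statement text), shows that the client frames whatever text it is given without parsing it:

```python
def build_com_query(sql: str) -> bytes:
    """Frame a statement exactly as the client sends it: no parsing,
    just COM_QUERY (0x03) plus the raw bytes of the text."""
    payload = b"\x03" + sql.encode()
    return len(payload).to_bytes(3, "little") + b"\x00" + payload

pkt = build_com_query("I AM MYSQL BINARY")
# The "statement" travels verbatim; validity is the server's problem.
print(pkt[5:].decode())  # I AM MYSQL BINARY
```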

If you’re someone who enjoys CTF (Capture The Flag) challenges, you may find a version of the above here: https://github.com/Oneiroi/ctf/tree/master/pcap/mysql/data_exfil; this includes a pcap file showing a similar packet flow.

I have shared my modified version of the rogue MySQL server on GitHub here: https://github.com/Oneiroi/Rogue-MySql-Server/tree/local_infile_abuse_test; some minor edits were required to make it functional (there are ~40 forks of the original project on GitHub, some with additional functionality).

Thanks

This assessment was not a single-person effort; here I would like to link to and give thanks to the following individuals who helped with this investigation:

Willem de Groot – For sharing insights into the Adminer exploitation and for graciously responding to an inquiry from myself (this helped me get the PoC working, thank you).

Gifts – original author of the evil MySQL server implementation (in 2013!), which I was able to adapt to function for this investigation.

Ceri Williams – for helping me with ProxySQL testing.

Marcelo Altman – for discussing MySQL protocol in depth.

Sergei Golubchik – for responding to my email notice for MariaDB and implementing a workaround mitigation so quickly, as well as providing me with a notice of the Connector/J announcement URL.

Peter Zaitsev – for linking me to the original reddit discussion and for feedback.

Read more at: http://www.mysqlperformanceblog.com/

ProxySQL 1.4.14 and Updated proxysql-admin Tool

ProxySQL 1.4.14


ProxySQL 1.4.14, released by ProxySQL, is now available for download in the Percona Repository along with an updated version of Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and database servers in the MySQL ecosystem (such as Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

The ProxySQL 1.4.14 source and binary packages available from the Percona download page for ProxySQL include ProxySQL Admin – a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 1.4.14 are available as well. You can download the original ProxySQL from GitHub. GitHub hosts the documentation in the wiki format.

This release introduces an improvement to how proxysql-admin works with the --max-connections option. In previous releases, this option was always equal to 1000. Now, proxysql_galera_checker uses the value of the --max-connections option set by the user, either on the command line (proxysql-admin --max-connections) or in the configuration file.

If the user doesn’t set this option, it defaults to 1000.

Improvements

  • PSQLADM-130: Every time a node is removed and then added back, the proxysql_galera_checker script restores the custom value of the --max-connections option set using proxysql-admin --max-connections.
  • The --syncusers option of proxysql-admin now supports MariaDB. Thanks to Jonas Kint (@jonaskint) for this contribution.

ProxySQL is available under the open source GPLv3 license.
