Percona Server 5.6 Webinar follow-up and Q&A

Good news everyone! I recently presented a webinar: Percona Server 5.6: Enterprise Grade MySQL. It was also recorded so you can watch along or view the slide deck. As with all my talks, I am not simply reading the slides, so it really is worth listening to the audio rather than just glancing through the slide deck.

There were a number of great questions asked which I’ll answer below:

Q: How does Stewart feel about this version of 5.6 taking into consideration “Stewart’s .20 rule?” (ref 2013 Percona Live Conference).

A: For those who aren’t familiar with it, I have a rule which I call “Stewart’s dot twenty rule” which I’ve posted a few times about on my personal blog. It states: “a piece of software is never really mature until a dot twenty release.” I would say that MySQL 5.6 (and Percona Server 5.6) are both in really good states currently.

I strongly recommend the excellent series of “Fun With Bugs” posts by Valeriy Kravchuk. The latest is Fun with Bugs #20 – welcome MySQL 5.6.13!, and it is certainly worth a read. I feel rather safe in saying that the first GA release of MySQL 5.6 was by far the best first GA release of any MySQL version ever, and subsequent MySQL 5.6 releases have improved upon that. It is quite likely that 5.6 will work perfectly for you today.

If you are really conservative with software upgrades and want as few surprises as possible, then you can of course wait – but I’d certainly recommend kicking the tyres of 5.6 over the next few months and starting to plan a migration.

Q: Any estimate on availability of XtraDB Cluster using 5.6?

A: Since Percona XtraDB Cluster is built upon both Percona Server and Galera it’s only natural to build upon a GA release of Percona Server and a GA release of Galera.

Q: What’s the bird’s name?

Spike the cockatiel

A: (Background: at one point during the webinar you could hear one of our pet birds burst into song.) I’m glad you asked, as it gives me an excellent opportunity to include gratuitous photos of our birds! They’re both cockatiels. People often think cockatoo (specifically the Sulphur-Crested Cockatoo) rather than cockatiel. A cockatoo is any of the 21 species belonging to the bird family Cacatuidae, and the cockatiel is the smallest of the 21 species.

Beaker the cockatiel helping with Percona Server 5.6

We have both a boy (Spike) and a girl (Beaker). Spike is the one who sings (while Beaker, like the muppet, goes meep) and could be heard for a moment during the webinar. Beaker has also been spotted helping with Percona Server 5.6 releases.

Q: The ‘first in Percona Server’ optimizations, did Oracle implement Percona code or write their own?

A: It would be accurate to say that there are changes in MySQL 5.6 that have been inspired by our work, and previously there has been Percona code that has made its way into MySQL (see COPYING.Percona in the MySQL bzr repository). For a multitude of reasons that aren’t worth going into here, it has historically been problematic getting code into MySQL if you didn’t work for the company that owned MySQL. This has been true of MySQL AB, Sun and Oracle and is certainly nothing new or unique to Oracle. What is different now is that things seem to be changing for the better and there is likely to be more cooperation with Oracle going forward.

Q: Has HandlerSocket been cooked into your 5.6 releases yet? Have there been any other improvements on that front?

A: We don’t currently have HandlerSocket in Percona Server 5.6. There has been a very small amount of adoption of HandlerSocket and we’ve taken the approach that we’ll see if the HandlerSocket team ports to 5.6 and if there is adequate demand for HandlerSocket in 5.6. So far, you’re the first person to request it.

Q: What Oracle 5.6 features have not yet been copied or reimplemented in Percona 5.6?

A: Everything in Oracle MySQL 5.6 is in Percona Server 5.6 and has been from the very first Percona Server 5.6 release.

Q: Was innodb fake changes picked up by Oracle?

A: No, at least not yet :)

Q: Has Percona developed or found solutions for migrating a production Percona Server 5.5 to a production Percona Server 5.6 without any downtime? Previously I solved this by making the newer version of Percona Server a replica of the older version of either Percona Server or MySQL. Then I would point the application servers to the new replica to complete the deployment with trivial downtime. It seems like this approach is not valid given the new replication design.

A: You can still do the old replication trick.

Q: Can Xtrabackup 5.6 be used on a system running Percona Server 5.5?

A: Percona XtraBackup 2.1 (the current stable release, which works with MySQL 5.6 and Percona Server 5.6) will also work with MySQL 5.5, Percona Server 5.5, Percona Server 5.1 and MySQL 5.1 running the InnoDB plugin. There is also support for various MariaDB versions.

Q: A question on replication: my database has no partitioned tables, so multi-threaded replication (a feature of 5.6) is not going to help. Am I right?

A: Currently the multi-threaded replication slave partitions work across database schemas. It doesn’t matter whether your tables are partitioned or not; it matters what database (schema) they’re in. If all your tables are in the same schema, then the parallel slave will not currently help.
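
As a minimal sketch (the worker count of 4 here is an arbitrary example, not a recommendation), enabling the per-schema parallel slave on a 5.6 replica looks roughly like this:

```shell
# Enable the multi-threaded slave; with slave_parallel_workers > 0,
# MySQL 5.6 distributes transactions to worker threads by database (schema).
mysql -e "STOP SLAVE SQL_THREAD;
          SET GLOBAL slave_parallel_workers = 4;
          START SLAVE SQL_THREAD;"
```

Since work is split by schema, a value higher than the number of actively written schemas buys nothing.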

Q: Is Percona Server 5.6 a drop in replacement for 5.5 or is there an upgrade process? If so, what is involved to roll back to 5.5 if necessary?

A: The upgrade process should be fairly painless and could well be a simple drop-in replacement. It does, of course, depend on what features you may be using, along with the type and size of workload. We have an In-Place upgrading from Percona Server 5.5 to Percona Server 5.6 section in our Percona Server 5.6 manual, and along with the Changed in Percona Server 5.6 section this should provide a fair amount of insight into what you can expect from the Percona Server side of things. There is also the Upgrading from MySQL 5.5 to 5.6 section of the MySQL manual, which is well worth a read.
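
For the in-place path, the rough shape of the upgrade is shown below; the package and service names are distribution-specific examples only, so follow the linked manual sections for the authoritative steps:

```shell
# Hypothetical in-place upgrade sketch for an RPM-based system.
sudo service mysql stop
sudo yum install Percona-Server-server-56   # replaces the 5.5 server packages
sudo service mysql start
mysql_upgrade                               # checks/repairs system tables for 5.6
```

The mysql_upgrade step is the important one: it must be run against the new server before putting it back into production.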

There is a section in the MySQL manual on downgrading from 5.6 to 5.5 and I don’t think there should be any extra limitations imposed by Percona Server on going from 5.6 back to 5.5. That being said, downgrading is certainly not as well tested as upgrading and I would consider it more of a last resort than something to jump to quickly.

Q: When does production Percona server 5.6 release?

A: Soon. The current Percona Server 5.6 releases are fairly solid and I can certainly recommend trialling them.

Q: Are there any known mysqllib binding issues or deprecations for 5.6?

A: None that I’m aware of.

Q: Is there a white paper or other docs on migrating from Percona Server 5.1 to Percona Server 5.6?

A: Not currently. Generally, the recommended practice is to go through each major version (going through 5.5 before heading to 5.6). There is upgrade documentation for upgrading 5.1 to 5.5 and for 5.5 to 5.6 – and you can certainly run 5.5 for only a few minutes before upgrading to 5.6.

Q: Will you offer training on 5.6?

A: Yes! There is a Moving to MySQL 5.6 training course offered by Percona which covers both MySQL 5.6 and Percona Server 5.6.

Q: I didn’t notice any mention of the improved NUMA support in PS 5.5 (http://www.percona.com/doc/percona-server/5.5/performance/innodb_numa_support.html). Is this carried over to Oracle and/or Percona 5.6?

A: Yes, it has made it into Percona Server 5.6. See http://www.percona.com/doc/percona-server/5.6/performance/innodb_numa_support.html for the 5.6 documentation on it. I am not aware of Oracle having implemented it, though.

Q: Have you made tests of user_stats overhead compared to performance_schema in 5.6?

A: I’m not aware of any published benchmarks for 5.6 although it would be great to see some.

Q: Does this release support the live table changes?

A: For some types of changes, yes.

Q: Is “Warning: Using a password on the command line interface can be insecure.” error being filtered out in the Percona release?

A: No. It’s not a good idea to provide passwords on the command line.
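
As a sketch of the safer alternative that ships with the 5.6 clients (the login-path name `local` and user `app` are made up for illustration), you can store credentials once with mysql_config_editor and keep them off the command line entirely:

```shell
# mysql_config_editor prompts for the password interactively, so it never
# appears in the process list or shell history; it is stored (obfuscated)
# in ~/.mylogin.cnf.
mysql_config_editor set --login-path=local --host=localhost --user=app --password

# Later invocations need no password on the command line:
mysql --login-path=local -e "SELECT 1"
```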

Q: He also promised a migration blog post ;-)

A: As promised, I am right now going to pester people about writing various posts on migrating from 5.5 to 5.6.

The post Percona Server 5.6 Webinar follow-up and Q&A appeared first on MySQL Performance Blog.

Big Data with MySQL and Hadoop at MySQL Connect 2013

I will be talking about Big Data with MySQL and Hadoop at MySQL Connect 2013 (Sept. 21-22) in San Francisco as well as at Percona University in Washington, DC (September 12, 2013). Apache Hadoop is a very popular Big Data solution and we can nowadays easily integrate it with MySQL. I will start with a brief introduction of Apache Hadoop and its components (HDFS, Map/Reduce, Hive, HBase/HCatalog, Flume, Sqoop, etc). Next I will show 2 major Big Data scenarios:

  • From file to Hadoop to MySQL. This is an example of an “ELT” process: Extract data from an external source; Load the data into Hadoop; Transform/analyze the data; Extract the results to MySQL. It is similar to the original Data Warehouse ETL (Extract; Transform; Load) process; however, instead of “transforming” the data before loading it into the Data Warehouse, we load it “as is” and then run the data analysis. As a result of this analysis (a map/reduce process) we can generate a report and load it into MySQL (using Sqoop export). To illustrate this process I will show 2 classical examples: clickstream analysis and Twitter feed analysis. On top of those examples I will also show how to use MySQL / Full Text Search solutions to perform near real-time reporting from HBase.

Picture 1: ELT pipeline, from File to Hadoop to MySQL

  • From OLTP MySQL to Hadoop to MySQL reporting. In this scenario we extract data (potentially close to real-time) from MySQL, load it to Hadoop for storage and analysis and later generate reports to load it into another MySQL instance (reporting), which can be used to generate and display graphs.

Picture 2: From OLTP MySQL to Hadoop to MySQL reporting.

Note: The reason why we need additional storage for the MySQL reports is that it may take a long time to generate a Hive report (as it is executed with Map/Reduce, which reads all the files and has no indexes). So it makes sense to “offload” the results of common reports into separate storage (MySQL).

In both scenarios we will need a way to integrate Hadoop and MySQL. In my previous post, MySQL and Hadoop integration, I demonstrated how to integrate Hadoop and MySQL with Sqoop and the Hadoop Applier for MySQL. This case is similar; however, we can use a different toolset. In scenario 1 (i.e. clickstream) we can use Apache Flume to grab files (or read “events”) and load them into Hadoop. With Flume we can define a “source” and a “sink”. Flume supports a range of different sources, including HTTP requests, syslog, TCP, etc. The HTTP source is interesting, as we can convert all (or a number of) HTTP requests (the “source”) into “events” which can be loaded into Hadoop (the “sink”).

During my presentation I will show the exact configurations for the sample Clickstream process, including:

  1. Flume configuration
  2. HiveQL queries to generate a report
  3. Sqoop export queries to load the report into MySQL
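
To give a flavour of those three pieces ahead of the talk, here is a rough sketch; the agent, table and path names (clickstream, clicks, report, /user/hive/warehouse/report) are invented for illustration, and the real configurations will be shown in the presentation:

```shell
# 1. Flume: run an agent whose source collects click events and whose
#    sink writes them into HDFS (sources/sinks are defined in flume.conf)
flume-ng agent -n clickstream -f flume.conf

# 2. Hive: aggregate the raw events into a small report via Map/Reduce
hive -e "INSERT OVERWRITE TABLE report
         SELECT url, COUNT(*) AS hits FROM clicks GROUP BY url"

# 3. Sqoop: export the aggregated report from HDFS into MySQL
sqoop export --connect jdbc:mysql://dbhost/reports \
             --table report --export-dir /user/hive/warehouse/report
```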

See you at MySQL Connect 2013!

The post Big Data with MySQL and Hadoop at MySQL Connect 2013 appeared first on MySQL Performance Blog.

The top 5 proactive measures to minimize MySQL downtime

I’m happy to announce that the recording for my recent webinar “5 Proactive Measures to Minimize MySQL Downtime” is now available, along with the slides. They can both be found here.

My webinar focused on the top 5 operational measures that prevent or reduce downtime — along with the related business impact in a significant number of customer emergency scenarios.

As a senior consultant on Percona’s 24×7 Emergency Consulting team, I’ve helped resolve a myriad of client emergencies related to MySQL downtime and know that every emergency situation is unique and has its own set of challenges. However, when cases are systematically studied and analyzed, patterns of what typically causes MySQL downtime and how it can be best avoided emerge. And that’s what my webinar focused on. Thanks to everyone who attended, asked questions, and sent me thank you emails after the fact.

If you were not able to attend but are interested in the material, be sure to watch the recording, as the slides include only a small part of the information presented. If you have questions, please leave them in the comments section below and I’ll answer them as best I can.

The post The top 5 proactive measures to minimize MySQL downtime appeared first on MySQL Performance Blog.

Percona XtraBackup 2.1.4 is now available

Percona is glad to announce the release of Percona XtraBackup 2.1.4 on August 9th, 2013. Downloads are available from our download site here and from the Percona Software Repositories.

This release is the current GA (Generally Available) stable release in the 2.1 series. Percona XtraBackup is an open source, free MySQL hot backup software that performs non-blocking backups for InnoDB and XtraDB databases.

New Features:

  • Percona XtraBackup has introduced additional options to handle locking during FLUSH TABLES WITH READ LOCK. These options can be used to minimize the amount of time MySQL operates in read-only mode.
  • Percona XtraBackup has now been rebased on MySQL versions 5.1.70, 5.5.30 and 5.6.11, and Percona Server versions 5.1.70-rel14.8 and 5.5.31-rel30.3.
  • In order to speed up the backup process, the slave thread is not stopped while copying non-InnoDB data when the innobackupex –no-lock option is used, as using this option requires the absence of DDL or DML to non-transactional tables during the backup.
  • The source tarball (and Debian source) now includes all the MySQL source trees required for the build. This means an Internet connection is no longer required during the package build.
  • Two new options, innobackupex –decrypt and innobackupex –decompress, have been implemented to make the decryption and decompression processes more user friendly.
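
For illustration, an invocation using the new locking options might look like the following; check `innobackupex --help` on your version for the exact option names, and note the timeout values here are arbitrary examples:

```shell
# Wait up to 60s for a quiet moment before issuing FLUSH TABLES WITH READ
# LOCK, and kill queries still running after 20s, rather than letting the
# global lock queue up behind long-running statements.
innobackupex --lock-wait-timeout=60 --lock-wait-threshold=10 \
             --kill-long-queries-timeout=20 /data/backups
```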

Bugs Fixed:

  • There were no 2.1.x release packages available for Ubuntu Raring. Bug fixed #1199257.
  • During the backup process, loading tablespaces was started before the log copying, which could lead to a race between the datafile state in the resulting backup and xtrabackup_logfile. A tablespace created at a sensitive time would be missing both from the backup itself and from the corresponding log record in xtrabackup_logfile, so it would not be created by innobackupex –apply-log either. Bug fixed #1177206.
  • Fixed the libssl.so.6 dependency issues in binary tarballs releases. Bug fixed #1172916.
  • innobackupex did not encrypt non-InnoDB files when doing local (i.e. non-streaming) backups. Bug fixed #1160778.
  • A difference in behavior between the InnoDB 5.5 and 5.6 codebases, in cases when a newly created tablespace has an uninitialized first page at the time XtraBackup opens it while creating the list of tablespaces to back up, would cause an assertion error. Bug fixed #1187071.
  • xbcrypt could sometimes fail when reading encrypted stream from a pipe or network. Bug fixed #1190610.
  • innobackupex could not prepare the backup if there was no xtrabackup_binary file in the backup directory and the xtrabackup binary was not specified explicitly with innobackupex –ibbackup option. Bug fixed #1199190.
  • Debug builds would fail due to compiler errors on Ubuntu Quantal/Raring builds. Fixed compiler warnings by backporting the corresponding changes from upstream. Bug fixed #1192454.
  • innobackupex would terminate with an error if innobackupex –safe-slave-backup option was used for backing up the master server. Bug fixed #1190716.
  • Under some circumstances XtraBackup could fail on a backup prepare with innodb_flush_method=O_DIRECT when XFS filesystem was being used. Bug fixed #1190779.
  • Percona XtraBackup didn’t recognize checkpoint #0 as a valid checkpoint on xtrabackup –prepare which would cause an error. Bug fixed #1196475.
  • Percona XtraBackup didn’t recognize the O_DIRECT_NO_FSYNC value for innodb_flush_method which was introduced in MySQL 5.6.7. Fixed by adding the value to the list of supported values for innodb_flush_method in xtrabackup_56. Bug fixed #1206363.
  • innobackupex would terminate if the innobackupex –galera-info option was specified when backing up a non-Galera server. Bug fixed #1192347.

Other bug fixes: bug fixed #1097434, bug fixed #1201599, bug fixed #1198220, bug fixed #1097444, bug fixed #1042796, bug fixed #1204463, bug fixed #1197644, bug fixed #1197249, bug fixed #1196894, bug fixed #1194813, bug fixed #1183500, bug fixed #1181432, bug fixed #1201686, bug fixed #1182995, bug fixed #1204085, bug fixed #1204083, bug fixed #1204075, bug fixed #1203672, bug fixed #1190876, bug fixed #1194879, bug fixed #1194837.

Known issues:

  • Backups of MySQL/Percona Server 5.6 versions prior to 5.6.11 cannot be prepared with Percona XtraBackup 2.1.4. Until the upstream bug #69780 is fixed and merged into Percona XtraBackup, Percona XtraBackup 2.1.3 should be used to prepare and restore such backups. This issue is reported as bug #1203669.

Release notes with all the bug fixes for Percona XtraBackup 2.1.4 are available in our online documentation. All of Percona’s software is open source and free; all the details of the release can be found in the 2.1.4 milestone at Launchpad. Bugs can be reported on the Launchpad bug tracker.

The post Percona XtraBackup 2.1.4 is now available appeared first on MySQL Performance Blog.

Can you really be Bias free ?

When we started Percona about 7 years ago, one of the ideas behind the business was to provide customer focused services, which I referred to as “bias free” at the time.

What do I mean by customer focused services? Isn’t this something most companies out there would claim? Not by our definition! What we observed years ago is that many companies view the services organization as part of the sales force, looking to recommend the products and services the company sells whether they fit or not. In fact, I have had conversations with consultants from a number of companies, asking them a direct question: if you found during project delivery that your solution is not the best fit, can you recommend against implementing it? In most cases the answer was no, either due to direct management instructions or a fear of confrontation – the sales team could be quite unhappy about a completed sale being rolled back because of the “consultant’s fault”. We wanted to be different. We wanted to ensure our services are the best value and the best customer fit to our knowledge.

Our first focus was bias, because in a smaller organization the other things are less of an issue – if I both sell and then deliver the project, there is no one to be upset if I decide something else is a better fit. I just have my own integrity and bias to care about.

As time passed, though, I came to realize it is quite Utopian to strive to be bias free. You are biased whether you want it or not, and an attempt to be “bias free” is much more likely to push you into denial about your existing biases than anything else. We are biased by our knowledge and experiences – this is just how life goes – and I think the most important thing we can do in this regard as professionals is to acknowledge our biases to ourselves. For example, I’m biased against using Windows on servers, and much more biased towards Linux compared to other operating systems… and of course I will have a lot of facts to back up my bias. (This is what happens – we tend to filter information which supports our predispositions.)

As I came to understand that neither I nor other members of our team can be bias free, I needed a different idea about how we can provide customer services with the highest integrity possible. Here are my current thoughts:

Acknowledge your bias. You are biased. Be honest with yourself about that. This is the first step towards reducing the impact of your bias.
Maintain your integrity. Being biased is OK. Lacking integrity is not. When you’re working with a customer you need to act in that customer’s interests. Your and your company’s interests come second. Some other industries have laws on the subject; ours does not, but I think if we all enforce this rule on ourselves we will all be much better off.
Understand the customer’s reality. Integrity is not enough if you do not really understand your customer, as your best recommendations, coming right from your heart, may be very wrong if you do not really understand the customer’s situation. For example, I already told you I’m biased towards Linux when it comes to the server operating system of choice, but I have more than once recommended that a customer stay on Windows, because it was a completely Windows shop with no Linux experience whatsoever, and getting people to install Linux for the first time in their life for the purpose of running a mission critical database is not very wise. Another example would be the Percona Server product. We do recommend this product to many customers, due to some specific valuable features we have added to stock MySQL. We may also recommend MariaDB if some of its advanced features are needed. In many cases, however, we do not recommend any change, because the system is working fine with Community MySQL and any change comes with its risks and costs.
Problems have multiple solutions. A big mistake often comes from the “school-like” approach to problems – thinking each one has “the solution”. In fact there is probably more than one solution to the same problem, and it is often impossible to predict in advance which one will be better. Accepting the existence of multiple solutions, and your limits in understanding all of their implications, helps to keep an open mind. Frankly, it was a very eye-opening experience for me when I thought there was no way the client’s approach would work, but it actually worked out quite fine – better than the more complicated solution I was advocating.

There is another danger, though, with this approach – if you do not push your product hard (as other vendors may do) you might come across as unsure about your product or not interested in the business. It is a fine line to walk. I teach my team to come up with options and to highlight the benefits and drawbacks we are aware of for each solution, allowing customers to make the choice. Even if we strive to understand the customer’s reality, there is a lot of internal process and politics which you will likely never know about.

We can also use this as a learning opportunity – if we’re not a fit, is this an opportunity to improve our products and services? Sometimes it is, and sometimes it simply falls outside of the area you want to focus on as a business.

So I do not claim to be bias free any more. I also do not claim that we are perfect – over the last 7 years there have been many mistakes made by the team, causing pain to our customers. At the same time, I am staying committed to working hard at having products and services which are a great fit for our customers, and to really pursuing the customers where we can make a positive difference in their lives, understanding there are always situations when something else is a better fit.

The post Can you really be Bias free ? appeared first on MySQL Performance Blog.

ipcalc using mysql function

I am trying to make full use of the functions MySQL already provides. One of them is an ipcalc implementation; Fedora ships an ipcalc binary (/bin/ipcalc), which is used to calculate network addresses.

INSTALL:
1) Copy the code below
2) Connect to mysql or use phpMyAdmin
3) Paste it

You can check that 3 functions will be created:
1) ipcalc
2) mask2prefix = converts a netmask to a prefix
3) prefix2mask = converts a prefix to a netmask

CODE:

-- ipcalc: calculate IPv4 address
-- nawawi <mohd.nawawi(at)gmail.com>
-- http://www.ronggeg.net/

DELIMITER //

-- mask2prefix: count the leading 1-bits in a (contiguous) netmask,
-- e.g. 255.255.255.0 -> 24
CREATE FUNCTION `mask2prefix`(mask VARCHAR(15)) RETURNS bigint(10)
    DETERMINISTIC
BEGIN

DECLARE m BIGINT(10);
DECLARE c INT;

SET m=INET_ATON(mask);
SET c=0;
-- shift the mask left one bit at a time (clamped to 32 bits) and count
-- iterations until every set bit has been shifted out
WHILE(m > 0) DO
 SET m=m << 1 & 0xffffffff;
 SET c=c+1;
END WHILE;
RETURN c;
END//

-- prefix2mask: build the netmask with `prefix` leading 1-bits,
-- e.g. 24 -> 255.255.255.0
CREATE FUNCTION `prefix2mask`(prefix INT(3)) RETURNS varchar(15) CHARSET utf8
    DETERMINISTIC
BEGIN
RETURN INET_NTOA(~((1 << (32 - prefix)) - 1) & 0xffffffff);
END//

-- ipcalc: opt 'n' = network address (ip AND mask),
--         'b' = broadcast address (network OR inverted mask),
--         'p' = prefix length
CREATE FUNCTION `ipcalc`(opt CHAR(1), ip VARCHAR(15), mask VARCHAR(15)) RETURNS varchar(15) CHARSET utf8
    DETERMINISTIC
BEGIN

DECLARE i BIGINT(10);
DECLARE m BIGINT(10);
DECLARE r BIGINT(10);

SET i=(INET_ATON(ip));
SET m=(INET_ATON(mask));

CASE opt
 WHEN 'p' THEN RETURN mask2prefix(mask);
 WHEN 'n' THEN SET r=i & m;
 WHEN 'b' THEN SET r=(i & m) | ~m & 0xffffffff;
ELSE
 SET r=NULL;
END CASE;
RETURN INET_NTOA(r);

END//

DELIMITER ;

USAGE:
ipcalc(option, ip, netmask);
option: n = network, b = broadcast, p = prefix

mysql> select ipcalc('n','192.168.0.1','255.255.255.0') as network;
+-------------+
| network     |
+-------------+
| 192.168.0.0 |
+-------------+
1 row in set (0.00 sec)

mysql> select ipcalc('b','192.168.0.1','255.255.255.0') as broadcast;
+---------------+
| broadcast     |
+---------------+
| 192.168.0.255 |
+---------------+
1 row in set (0.00 sec)

mysql> select ipcalc('p','192.168.0.1','255.255.255.0') as prefix;
+--------+
| prefix |
+--------+
| 24     |
+--------+
1 row in set (0.00 sec)

mysql> select ipcalc('n','192.168.0.1',prefix2mask(24)) as network;
+-------------+
| network     |
+-------------+
| 192.168.0.0 |
+-------------+
1 row in set (0.00 sec)

mysql> select ipcalc('b','192.168.0.1',prefix2mask(24)) as broadcast;
+---------------+
| broadcast     |
+---------------+
| 192.168.0.255 |
+---------------+
1 row in set (0.00 sec)

mysql> CREATE DATABASE  `kejap` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)

mysql> use kejap;
Database changed
mysql> CREATE TABLE  kejap.ip (ip VARCHAR( 15 ) NOT NULL ,mask VARCHAR( 15 ) NOT NULL) ENGINE = INNODB;
Query OK, 0 rows affected (0.05 sec)

mysql> INSERT INTO  kejap.ip (ip,mask) VALUES ('192.168.0.1',  '255.255.255.0'), ('192.168.2.1',  '255.255.0.0');
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> select ip,mask,ipcalc('n',ip,mask) as network from ip;
+-------------+---------------+-------------+
| ip          | mask          | network     |
+-------------+---------------+-------------+
| 192.168.0.1 | 255.255.255.0 | 192.168.0.0 |
| 192.168.2.1 | 255.255.0.0   | 192.168.0.0 |
+-------------+---------------+-------------+
2 rows in set (0.00 sec)

mysql> select ip,mask,ipcalc('n',ip,mask) as network,mask2prefix(mask) as prefix from ip;
+-------------+---------------+-------------+--------+
| ip          | mask          | network     | prefix |
+-------------+---------------+-------------+--------+
| 192.168.0.1 | 255.255.255.0 | 192.168.0.0 |     24 |
| 192.168.2.1 | 255.255.0.0   | 192.168.0.0 |     16 |
+-------------+---------------+-------------+--------+
2 rows in set (0.00 sec)

mysql> ALTER TABLE `ip` CHANGE `ip` `ip` VARCHAR(15) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL, CHANGE `mask` `mask` VARCHAR(15) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL
    -> ;
Query OK, 0 rows affected (0.00 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> select ip,mask,CONCAT(ipcalc('n',ip,mask),'/',mask) as network,mask2prefix(mask) as prefix from ip;
+-------------+---------------+---------------------------+--------+
| ip          | mask          | network                   | prefix |
+-------------+---------------+---------------------------+--------+
| 192.168.0.1 | 255.255.255.0 | 192.168.0.0/255.255.255.0 |     24 |
| 192.168.2.1 | 255.255.0.0   | 192.168.0.0/255.255.0.0   |     16 |
+-------------+---------------+---------------------------+--------+
2 rows in set (0.00 sec)

Have fun trying it out!