Using FDWs

If we want to move from MySQL/MariaDB to PostgreSQL, there is more than one way to succeed. The use of FDWs is an alternative to pg_chameleon and offers a way to quickly fetch the schema, as well as the data, and import it into PostgreSQL. The ability to connect MySQL and PostgreSQL has been around for quite a while, and therefore FDWs are definitely a field that can be exploited to your advantage.

Basically, the mysql_fdw extension works just like any other FDW out there. Compared to other, less well-known FDWs, it is actually quite powerful and offers the following features:

  • Writing to MySQL/MariaDB
  • Connection pooling
  • The WHERE clause pushdown (which means that filters that are applied to a table can actually be executed remotely for better performance; see the sketch after this list)
  • Column pushdown (only the columns that are needed are fetched from the remote side; older versions used to fetch all the columns, which led to more network traffic)
  • Prepared statements on the remote side
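
To see the WHERE clause pushdown in action, you can inspect the execution plan of a query against a foreign table. The following is a minimal sketch, assuming a foreign table called t_customer with id and name columns that has already been mapped through mysql_fdw:

EXPLAIN (VERBOSE) 
SELECT id, name 
FROM t_customer 
WHERE id = 42;

If the verbose plan shows the id = 42 filter as part of the query sent to the remote side, the condition is evaluated by MySQL and only the matching rows travel over the network.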

The most convenient way to use the mysql_fdw extension is to make use of the IMPORT FOREIGN SCHEMA statement, which creates the foreign table definitions for us so that the data can then be moved over to PostgreSQL. Fortunately, this is fairly easy to do on a Unix system. Let's take a look at the steps in detail:

  1. The first thing we have to do is download the code from GitHub:
git clone https://github.com/EnterpriseDB/mysql_fdw.git
  2. Then, run the following commands to compile the FDW. Note that the paths might differ on your system; for this chapter, I have assumed that both MySQL and PostgreSQL reside under the /usr/local directory:
$ export PATH=/usr/local/pgsql/bin/:$PATH 
$ export PATH=/usr/local/mysql/bin/:$PATH
$ make USE_PGXS=1
$ make USE_PGXS=1 install
  3. Once the code has been compiled, the FDW can be added to our database:
CREATE EXTENSION mysql_fdw;
  4. The next step is to create the server we want to migrate from:
CREATE SERVER migrate_me_server 
FOREIGN DATA WRAPPER mysql_fdw
OPTIONS (host 'host.example.com', port '3306');
  5. Once the server has been created, we can create the desired user mappings:
CREATE USER MAPPING FOR postgres 
SERVER migrate_me_server
OPTIONS (username 'joe', password 'public');
  6. Finally, it's time to do the real migration. The first thing to do is import the schema. I suggest creating a special schema for the linked tables first:
CREATE SCHEMA migration_schema;
  7. When running the IMPORT FOREIGN SCHEMA statement, we can use this schema as the target schema in which all the database links will be stored. The advantage is that we can delete it conveniently after the migration.
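A minimal sketch of this step, assuming that the source MySQL database is called mysql_db (this name is an assumption; use the name of your own database):

IMPORT FOREIGN SCHEMA mysql_db 
FROM SERVER migrate_me_server 
INTO migration_schema;

After the migration, the schema and all of the links it contains can be removed in one go:

DROP SCHEMA migration_schema CASCADE;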
  8. Once we are done with the IMPORT FOREIGN SCHEMA statement, we can create real tables. The easiest way to do this is to use the LIKE keyword that's provided by the CREATE TABLE clause. It allows us to copy the structure of a table and create a real, local PostgreSQL table. Fortunately, this also works if the table you are cloning is just a foreign table. Here's an example of this:
CREATE TABLE t_customer 
(LIKE migration_schema.t_customer);
  9. Then, we can copy the data over:
INSERT INTO t_customer 
SELECT * FROM migration_schema.t_customer;

This is actually the point where we can correct the data, eliminate junk rows, or do a bit of processing. Given the low-quality origin of the data, it can be useful to apply constraints and so on only after moving the data for the first time, as this tends to be less painful.
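
If the data needs a bit of cleanup on the way in, the transformation can be written straight into the INSERT. Here is a sketch, assuming hypothetical id, name, and email columns in t_customer:

INSERT INTO t_customer 
SELECT id, trim(name), lower(email) 
FROM migration_schema.t_customer 
WHERE id IS NOT NULL;

This way, junk rows never make it into the local table in the first place.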

Once the data has been imported, we are ready to deploy all of the constraints, indexes, and so on. At this point, you will actually start to see some nasty surprises because, as I stated previously, you can't expect the data to be rock solid. In general, migration can be pretty hard in the case of MySQL.
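
As an example, the following statements deploy a primary key, a NOT NULL constraint, and an index on the hypothetical t_customer table from the previous steps. Any row that violates one of these rules will make the corresponding statement fail and point you directly at the data that needs fixing:

ALTER TABLE t_customer ADD PRIMARY KEY (id);
ALTER TABLE t_customer ALTER COLUMN name SET NOT NULL;
CREATE INDEX idx_customer_email ON t_customer (email);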
