Whether you've been using Zabbix for a while or you are setting up your first production instance, database management is important right from the start. People often set up their Zabbix database without realizing how large it will eventually grow. Once your database grows beyond a certain size, the Zabbix housekeeper simply can't keep up, and that's when we need to look at different options.
In this chapter, we'll look into keeping our Zabbix database from using up 100% of the disk space when the Zabbix housekeeper can't keep up. For MySQL users, we'll use database partitioning to keep our database in check. For PostgreSQL users, we'll look at the brand-new TimescaleDB support. Last but not least, we'll also check out how to secure the connection between the Zabbix server and the database.
We'll do all this in the following recipes:
Without further ado, let's get started on these recipes and learn all about managing our database.
We are going to need some new servers for these recipes. One Linux server needs to run Zabbix server 6 with MySQL (MariaDB) set up; we'll call this host lar-book-mysql-mgmt. We will also need a Linux server running Zabbix server 6 with PostgreSQL, which we'll call lar-book-postgresql-mgmt.
We'll also need two servers for creating a secure Zabbix database setup. One server will be running the MySQL (MariaDB) database; let's call this server lar-book-secure-db. Then, connecting externally to a Zabbix database, we'll have our Zabbix server, which we'll call lar-book-secure-zbx.
The code files can also be accessed from the GitHub repository here:
When working with a MySQL database, the biggest issue we face is how MySQL stores its data by default: there is no real ordering to the data that we can use when we want to drop large chunks of it at once. MySQL partitioning solves this issue; let's see how we can configure it for our Zabbix database.
Important Note
Here at Opensource ICT Solutions, we have fixed the script to work with MySQL 8, so it should once more work with any MySQL setup. Check out the link for more information: https://github.com/OpensourceICTSolutions/zabbix-mysql-partitioning-perl.
For this recipe, we are going to need a running Zabbix server with a MySQL database. I'll be using MariaDB in my example, but any MySQL flavor should be the same. The Linux host I'll be using is called lar-book-mysql-mgmt, which already meets the requirements.
If you are running these steps in a production environment, make sure to create your database backups first as things can always go wrong.
The RHEL-based command is as follows:
dnf install tmux
The Ubuntu command is as follows:
apt install tmux
tmux
Important Note
Running the partitioning in a tmux window isn't required, but it is definitely smart: partitioning a big database can take a long time. You could also move your database to another machine with ample resources (CPU, memory, disk speed) to do the partitioning, or, if that's not possible, stop the Zabbix server process for the duration of the partitioning process.
mysql -u root -p
USE zabbix;
SELECT FROM_UNIXTIME(MIN(clock)) FROM `history`;
You will receive an output like this:
SELECT FROM_UNIXTIME(MIN(clock)) FROM `history`;
SELECT FROM_UNIXTIME(MIN(clock)) FROM `history_uint`;
SELECT FROM_UNIXTIME(MIN(clock)) FROM `history_str`;
SELECT FROM_UNIXTIME(MIN(clock)) FROM `history_text`;
SELECT FROM_UNIXTIME(MIN(clock)) FROM `history_log`;
ALTER TABLE `history` PARTITION BY RANGE (clock)
(PARTITION p2020_11_05 VALUES LESS THAN (UNIX_TIMESTAMP("2020-11-06 00:00:00")) ENGINE = InnoDB,
PARTITION p2020_11_06 VALUES LESS THAN (UNIX_TIMESTAMP("2020-11-07 00:00:00")) ENGINE = InnoDB,
PARTITION p2020_11_07 VALUES LESS THAN (UNIX_TIMESTAMP("2020-11-08 00:00:00")) ENGINE = InnoDB,
PARTITION p2020_11_08 VALUES LESS THAN (UNIX_TIMESTAMP("2020-11-09 00:00:00")) ENGINE = InnoDB,
PARTITION p2020_11_09 VALUES LESS THAN (UNIX_TIMESTAMP("2020-11-10 00:00:00")) ENGINE = InnoDB,
PARTITION p2020_11_10 VALUES LESS THAN (UNIX_TIMESTAMP("2020-11-11 00:00:00")) ENGINE = InnoDB,
PARTITION p2020_11_11 VALUES LESS THAN (UNIX_TIMESTAMP("2020-11-12 00:00:00")) ENGINE = InnoDB);
Tip
If we only have 7 days of history data, creating this list by hand is not that hard. For a big existing database, though, the list can get long, and it is easier to generate it with software like Excel or with a small script.
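As a sketch of such a small script, the following shell function generates the daily PARTITION clauses for you. The start date and day count are placeholders (assumptions, not from this recipe); set them to match the oldest clock value you found and the number of days you need:

```shell
# Generate daily PARTITION clauses for the history tables.
# START and DAYS are placeholders - set START to the oldest day found
# with FROM_UNIXTIME(MIN(clock)) and DAYS to cover up to today.
gen_partitions() {
  START="2020-11-05"
  DAYS=7
  i=0
  while [ "$i" -lt "$DAYS" ]; do
    # Partition pYYYY_MM_DD holds everything up to midnight of the next day
    name=$(date -d "$START +$i day" +p%Y_%m_%d)
    upper=$(date -d "$START +$((i + 1)) day" +"%Y-%m-%d 00:00:00")
    printf 'PARTITION %s VALUES LESS THAN (UNIX_TIMESTAMP("%s")) ENGINE = InnoDB,\n' "$name" "$upper"
    i=$((i + 1))
  done
}
gen_partitions
```

Paste the output into the ALTER TABLE statement, remembering to change the trailing comma on the last line into the closing );.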
SELECT FROM_UNIXTIME(MIN(clock)) FROM `trends`;
SELECT FROM_UNIXTIME(MIN(clock)) FROM `trends_uint`;
ALTER TABLE `trends` PARTITION BY RANGE (clock)
(PARTITION p2020_11 VALUES LESS THAN (UNIX_TIMESTAMP("2020-12-01 00:00:00")) ENGINE = InnoDB,
PARTITION p2020_12 VALUES LESS THAN (UNIX_TIMESTAMP("2021-01-01 00:00:00")) ENGINE = InnoDB,
PARTITION p2021_01 VALUES LESS THAN (UNIX_TIMESTAMP("2021-02-01 00:00:00")) ENGINE = InnoDB);
wget https://raw.githubusercontent.com/OpensourceICTSolutions/zabbix-mysql-partitioning-perl/main/mysql_zbx_part.pl
Alternatively, you can download the partitioning script using the Packt GitHub here:
mkdir /usr/lib/zabbix/
mv mysql_zbx_part.pl /usr/lib/zabbix/
vim /usr/lib/zabbix/mysql_zbx_part.pl
We need to edit some text at the following part:
Tip
If you are using a version of Zabbix before 2.2 or a MySQL version before 5.6 or if you are running MySQL 8, then there are some extra lines of configuration that need to be commented and uncommented in the script. If this applies to you, read the comments in the mysql_zbx_part.pl script file and edit it. Additionally, check out the GitHub repo mentioned in the introduction of this recipe.
The RHEL-based commands are as follows:
dnf config-manager --set-enabled PowerTools
dnf update
dnf install perl-Sys-Syslog
dnf install perl-DateTime
The Ubuntu commands are as follows:
apt install liblogger-syslog-perl
apt install libdatetime-perl
chmod +x /usr/lib/zabbix/mysql_zbx_part.pl
/usr/lib/zabbix/mysql_zbx_part.pl
journalctl -t mysql_zbx_part
crontab -e
0 0 * * * /usr/lib/zabbix/mysql_zbx_part.pl
That concludes our Zabbix database partitioning setup.
Database partitioning seems like a daunting task at first, but once you break it down into chunks, it is not that hard to do. It is simply the process of breaking our most important Zabbix database tables into time-based partitions. Once these partitions are set up, we only need to manage them with a script and we're ready.
Look at the following figure and let's say today is 07-11-2020. We have a lot of partitions managed by the script. All of our History data today is going to be written to the partition for this day and all of our Trends data is going to be written into the partition for this month:
The actual script does only two things. It creates new partitions and it deletes old partitions.
For deleting partitions, once a partition reaches an age older than specified in the $tables variable, it drops the entire partition.
For creating partitions, every time the script is run, it creates 10 partitions in the future starting from today. Except of course when a partition already exists.
This is better than using the housekeeper for one clear reason. It's simply faster! The Zabbix housekeeper goes through our database data line by line to check the UNIX timestamp and then it deletes that line when it reaches data older than specified. This takes time and resources. Dropping a partition, though, is almost instant.
One downside of partitioning a Zabbix database, though, is that we can no longer use the frontend item history and trend configuration. This means we can't specify different history and trends for different items; it's all global now.
When I first started using Zabbix, I did not have a book like this one. Instead, I relied heavily on the resources available online and my own skillset. There are loads of great guides for partitioning and other stuff available on the internet. If something isn't mentioned in this book, make sure to Google it and see if there's something available online. You might also want to check out some amazing books written by our Zabbix peers, and of course, if you've figured out something by yourself, sharing is caring!
TimescaleDB is an open source relational PostgreSQL database extension for time-based series data. Using PostgreSQL TimescaleDB is a solid way to work around using the Zabbix housekeeper to manage your PostgreSQL database. In this recipe, we will go over the installation of PostgreSQL TimescaleDB on a new server and how to set it up with Zabbix.
We will need an empty Linux server. I'll be using my server called lar-book-postgresql-mgmt.
We have a bit of a different process for RHEL-based and Ubuntu systems, which is why we have split this How to do it… section in two. We will start with Ubuntu systems.
echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -c -s)-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
apt update
add-apt-repository ppa:timescale/timescaledb-ppa
apt update
apt install timescaledb-postgresql
systemctl enable postgresql
systemctl start postgresql
dnf -qy module disable postgresql
dnf install https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
dnf install postgresql13 postgresql13-server
/usr/pgsql-13/bin/postgresql-13-setup initdb
vim /etc/yum.repos.d/timescale_timescaledb.repo
[timescale_timescaledb]
name=timescale_timescaledb
baseurl=https://packagecloud.io/timescale/timescaledb/el/8/$basearch
repo_gpgcheck=1
gpgcheck=0
enabled=1
gpgkey=https://packagecloud.io/timescale/timescaledb/gpgkey
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
dnf install timescaledb-postgresql-13
In this section, we'll go over how to set up TimescaleDB after finishing the installation process. There's a lot more to configure, so let's check it out:
timescaledb-tune
timescaledb-tune --pg-config=/usr/pgsql-13/bin/pg_config
systemctl restart postgresql
rpm -Uvh https://repo.zabbix.com/zabbix/6.0/rhel/8/x86_64/zabbix-release-6.0-1.el8.noarch.rpm
dnf clean all
dnf install zabbix-server-pgsql zabbix-web-pgsql zabbix-apache-conf zabbix-agent2
The Ubuntu commands are as follows:
wget https://repo.zabbix.com/zabbix/6.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_6.0-1+ubuntu20.04_all.deb
dpkg -i zabbix-release_6.0-1+ubuntu20.04_all.deb
apt update
apt install zabbix-server-pgsql zabbix-frontend-php php-pgsql zabbix-apache-conf zabbix-agent2
sudo -u postgres createuser --pwprompt zabbix
sudo -u postgres createdb -O zabbix zabbix
zcat /usr/share/doc/zabbix-server-pgsql*/create.sql.gz | sudo -u zabbix psql zabbix
vim /etc/zabbix/zabbix_server.conf
DBHost=
DBPassword=password
echo "CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;" | sudo -u postgres psql zabbix
gunzip /usr/share/doc/zabbix-sql-scripts/postgresql/timescaledb.sql.gz
cat /usr/share/doc/zabbix-sql-scripts/postgresql/timescaledb.sql | sudo -u zabbix psql zabbix
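Under the hood, the timescaledb.sql script converts the Zabbix history and trends tables into TimescaleDB hypertables with time-based chunks. The following is only an illustrative sketch of the kind of statement it runs; the exact tables and parameters vary by Zabbix version, and the shipped script handles everything for you, so there is no need to run this by hand:

```sql
-- Illustrative only: convert a table into a hypertable with
-- time-based chunks (here, one-day chunks for history data).
SELECT create_hypertable('history', 'clock',
    chunk_time_interval => 86400, migrate_data => true);
```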
vim /var/lib/pgsql/13/data/pg_hba.conf
# "local" is for Unix domain socket connections only
local   all             all                                     md5
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
On RHEL-based systems:
systemctl restart zabbix-server zabbix-agent2 httpd php-fpm
systemctl enable zabbix-server zabbix-agent2 httpd php-fpm
On Ubuntu systems:
systemctl restart zabbix-server zabbix-agent2 apache2 php-fpm
systemctl enable zabbix-server zabbix-agent2 apache2 php-fpm
Using the TimescaleDB functionality with your Zabbix setup is a solid way to integrate time-series management directly into your PostgreSQL database. The extension is supported by Zabbix, and you can expect it to only get better in the near future.
Now, how TimescaleDB works is by dividing up your PostgreSQL hypertable into time-based chunks. If we look at the following figure, we can see how that looks:
These time-based chunks are a lot faster to drop from the database than using the Zabbix housekeeper. The Zabbix housekeeper goes through our database data line by line to check the UNIX timestamp and then it drops the line when it reaches data older than specified. This takes time and resources. Dropping a chunk though is almost instant.
Another great thing about using TimescaleDB with a Zabbix database is that we keep managing retention from the frontend housekeeping configuration. On top of that, TimescaleDB can compress our data, to keep databases smaller.
The downside is that, to benefit from fast chunk dropping, we enable the global housekeeping override, which means we can't specify different history and trends periods for different items; it's all global.
This recipe details the installation of PostgreSQL TimescaleDB. As this process is constantly changing, you might need to include some new information from the official TimescaleDB documentation. Check out their documentation here:
https://docs.timescale.com/latest/getting-started/installation/rhel-centos/installation-yum
Another great added feature for the Zabbix server is the ability to encrypt data between the database and the Zabbix components. This is particularly useful when you are running a split setup, with the database and the Zabbix server communicating over the network, where a Man-in-the-Middle (MITM) or other attack could be used to gain access to your monitoring data.
In this recipe, we'll set up MySQL encryption between Zabbix components and the database to add another layer of security.
We are going to need a Zabbix setup that uses an external database. I'll be using the Linux lar-book-secure-db and lar-book-secure-zbx hosts.
The new server called lar-book-secure-zbx will be used to connect externally to the lar-book-secure-db database server. The database server won't run our Zabbix server; that process will run on lar-book-secure-zbx.
Make sure that MariaDB is already installed on the lar-book-secure-db host and that you are running a recent supported version that is able to use encryption. If you don't know how to upgrade your database, check out the Upgrading Zabbix database from older MariaDB versions to MariaDB 10.5 recipe in Chapter 11, Maintaining Your Zabbix Setup, or check the documentation online.
vim /etc/hosts
10.16.16.170 lar-book-secure-db
10.16.16.171 lar-book-secure-zbx
mysql -u root -p
create database zabbix character set utf8mb4 collate utf8mb4_bin;
create user 'zabbix'@'10.16.16.171' identified BY 'password';
grant all privileges on zabbix.* to 'zabbix'@'10.16.16.171';
flush privileges;
mariadb-secure-installation
rpm -Uvh https://repo.zabbix.com/zabbix/6.0/rhel/8/x86_64/zabbix-release-6.0-1.el8.noarch.rpm
dnf clean all
wget https://downloads.mariadb.com/MariaDB/mariadb_repo_setup
chmod +x mariadb_repo_setup
./mariadb_repo_setup
dnf install zabbix-server-mysql zabbix-web-mysql zabbix-apache-conf zabbix-agent2 zabbix-sql-scripts mariadb-client
Use the following Ubuntu command:
apt install zabbix-server-mysql zabbix-frontend-php zabbix-apache-conf zabbix-agent2 mariadb-client
zcat /usr/share/doc/zabbix-sql-scripts/mysql/server.sql.gz | mysql -h 10.16.16.170 -uzabbix -p zabbix
vim /etc/pki/tls/openssl.cnf
countryName_default = XX
stateOrProvinceName_default = Default Province
localityName_default = Default City
0.organizationName_default = Default Company Ltd
organizationalUnitName_default =
dir = /etc/pki/CA # Where everything is kept
mkdir -p /etc/pki/CA/private
openssl req -new -x509 -keyout /etc/pki/CA/private/cakey.pem -out /etc/pki/CA/cacert.pem -days 3650 -newkey rsa:4096
touch /etc/pki/CA/index.txt
echo 01 > /etc/pki/CA/serial
mkdir /etc/pki/CA/unsigned
mkdir /etc/pki/CA/newcerts
mkdir /etc/pki/CA/certs
openssl req -nodes -new -keyout /etc/pki/CA/private/zbx-srv_key.pem -out /etc/pki/CA/unsigned/zbx-srv_req.pem -newkey rsa:2048
openssl req -nodes -new -keyout /etc/pki/CA/private/mysql-srv_key.pem -out /etc/pki/CA/unsigned/mysql-srv_req.pem -newkey rsa:2048
The response will look like this:
Important Note
Our certificates need to be created without a password; otherwise, our MariaDB and Zabbix applications won't be able to use them. Make sure to specify the -nodes option.
openssl ca -policy policy_anything -days 365 -out /etc/pki/CA/certs/zbx-srv_crt.pem -infiles /etc/pki/CA/unsigned/zbx-srv_req.pem
openssl ca -policy policy_anything -days 365 -out /etc/pki/CA/certs/mysql-srv_crt.pem -infiles /etc/pki/CA/unsigned/mysql-srv_req.pem
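Before copying the files around, it can be worth verifying that each signed certificate actually belongs to its private key, by comparing the modulus of both files. The snippet below demonstrates the check on a throwaway self-signed pair generated in /tmp (an assumption for the demo); in practice, point -in at your real files, such as /etc/pki/CA/certs/zbx-srv_crt.pem and /etc/pki/CA/private/zbx-srv_key.pem:

```shell
# Demonstrate the modulus check on a throwaway key/certificate pair.
# In practice, point -in at your real certificate and key files.
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout /tmp/demo_key.pem -out /tmp/demo_crt.pem \
  -days 1 -subj "/CN=demo" 2>/dev/null

# Hash the modulus of each file; matching hashes mean a matching pair
cert_mod=$(openssl x509 -noout -modulus -in /tmp/demo_crt.pem | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in /tmp/demo_key.pem | openssl md5)

if [ "$cert_mod" = "$key_mod" ]; then
  echo "certificate and key match"
else
  echo "certificate and key DO NOT match" >&2
fi
```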
mkdir /etc/my.cnf.d/certificates/
chown -R mysql. /etc/my.cnf.d/certificates/
scp /etc/pki/CA/private/mysql-srv_key.pem [email protected]:/etc/my.cnf.d/certificates/mysql-srv.key
scp /etc/pki/CA/certs/mysql-srv_crt.pem [email protected]:/etc/my.cnf.d/certificates/mysql-srv.crt
scp /etc/pki/CA/cacert.pem [email protected]:/etc/my.cnf.d/certificates/cacert.crt
chown -R mysql:mysql /etc/my.cnf.d/certificates/
chmod 400 /etc/my.cnf.d/certificates/mysql-srv.key
chmod 444 /etc/my.cnf.d/certificates/mysql-srv.crt
chmod 444 /etc/my.cnf.d/certificates/cacert.crt
vim /etc/my.cnf.d/server.cnf
bind-address=lar-book-secure-db
ssl-ca=/etc/my.cnf.d/certificates/cacert.crt
ssl-cert=/etc/my.cnf.d/certificates/mysql-srv.crt
ssl-key=/etc/my.cnf.d/certificates/mysql-srv.key
mysql -u root -p
alter user 'zabbix'@'10.16.16.171' require ssl;
flush privileges;
Make sure the IP matches the IP from the Zabbix server (and one for the Zabbix frontend, if separated), just like we did in Step 2.
systemctl restart mariadb
mkdir -p /var/lib/zabbix/ssl/
cp /etc/pki/CA/cacert.pem /var/lib/zabbix/ssl/
cp /etc/pki/CA/certs/zbx-srv_crt.pem /var/lib/zabbix/ssl/zbx-srv.crt
cp /etc/pki/CA/private/zbx-srv_key.pem /var/lib/zabbix/ssl/zbx-srv.key
vim /etc/zabbix/zabbix_server.conf
DBHost=lar-book-secure-db
DBName=zabbix
DBUser=zabbix
DBPassword=password
DBTLSConnect=verify_full
DBTLSCAFile=/var/lib/zabbix/ssl/cacert.pem
DBTLSCertFile=/var/lib/zabbix/ssl/zbx-srv.crt
DBTLSKeyFile=/var/lib/zabbix/ssl/zbx-srv.key
chown -R zabbix:zabbix /var/lib/zabbix/ssl/
chmod 400 /var/lib/zabbix/ssl/zbx-srv.key
chmod 444 /var/lib/zabbix/ssl/zbx-srv.crt
chmod 444 /var/lib/zabbix/ssl/cacert.pem
RHEL-based systems:
systemctl restart zabbix-server zabbix-agent2 httpd php-fpm
systemctl enable zabbix-server zabbix-agent2 httpd php-fpm
Ubuntu systems:
systemctl restart zabbix-server zabbix-agent2 apache2 php-fpm
systemctl enable zabbix-server zabbix-agent2 apache2 php-fpm
This was quite a long recipe, so let's break it down quickly:
Going through all these steps, setting up your Zabbix database securely can seem like quite a daunting task, and it can be. Certificates, login procedures, loads of settings, and more can all add up to become very complicated, which is why I'd always recommend diving deeper into encryption methods before trying to set this up yourself.
If your setup requires encryption, though, this recipe is a solid starting point for your first-time setup. It works very well in an internal setting, as we are using private certificates.
Make sure to renew them yearly, as they are only valid for 365 days.
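To avoid being caught out by that 365-day validity, you can check how much time a certificate has left with openssl x509 -checkend. The snippet below demonstrates this on a throwaway one-day certificate (generated only for the demo); in practice you would point -in at a file such as /var/lib/zabbix/ssl/zbx-srv.crt:

```shell
# Demonstrate an expiry check on a throwaway 1-day certificate.
# -checkend N exits 0 if the certificate is still valid N seconds from now.
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout /tmp/exp_key.pem -out /tmp/exp_crt.pem \
  -days 1 -subj "/CN=expiry-demo" 2>/dev/null

if openssl x509 -checkend $((30 * 24 * 3600)) -noout -in /tmp/exp_crt.pem; then
  echo "certificate is valid for at least another 30 days"
else
  echo "certificate expires within 30 days - time to renew"
fi
```

A check like this is easy to drop into cron so you get a warning well before the database or Zabbix server starts rejecting connections.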
All Zabbix components, except for communication between the Zabbix server and Zabbix frontend, can be encrypted as shown in the following diagram:
We've set up encryption between the following:
This means that when our Zabbix server or frontend requests or writes data to our database, it will be encrypted. Because our Zabbix applications are running on a different server than our Zabbix database, this might be important. For example, our setup might look like this:
Let's say the cloud labeled Some company in the figure is a network that isn't managed by us. There are several switches and routers in this network, serving numerous clients with their own VLANs. If one of these devices gets compromised somehow, all of our Zabbix data could be seen by others.
Even if the network equipment is ours, there might still be a compromised device on the network through which our data can be seen. This is why you might want to add encryption: that extra layer of security. Whether you are securing against breaches in another company's network or against breaches in your own, securing your database connection as we did in this recipe might just save you from leaking all that data.