
Corruption in mariadb binlog

Database Administrators Asked by ctutte on October 18, 2020

I have an installation with mariadb version 10.0.21.

I need to back up the binlog file to TSM every 5 minutes, but since the binlog is written to every second, I am afraid the file could become corrupted when copied.
Is it possible for this to happen?
If the end of the binlog gets corrupted, will the entire file be unusable, or can I just discard the last events?
Is there any other solution you would propose? We already have master/slave replication, but for security and audit reasons we need to keep a copy of the binlog.

Regards

2 Answers

I answered a similar question not long ago (https://dba.stackexchange.com/a/246571/30545); I'll paste it here for convenience:

Keeping compressed archives of binlogs is a very common way of achieving point-in-time recovery for your database, assuming you have a full backup from which you can roll forward.

Just copying away the binary logs with a remote transfer procedure (e.g. scp) would be fine; they can be applied as-is. However, it can be inconvenient if the binary logs are in use: being rotated or purged, or, above all, still being written because they are the most recent ones. While you could work around that, there is an easier method:
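As a minimal sketch of that plain-copy approach (the paths and host name are placeholders, not from the original answer):

```shell
# Copy the numbered binlog files to a remote host. The file currently
# being written to (shown by SHOW MASTER STATUS) is still growing, so it
# should be skipped here and re-copied after the next FLUSH LOGS rotation.
rsync -av /var/lib/mysql/mysql-bin.[0-9]* backuphost:/backups/binlogs/
```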

mysqlbinlog is a utility included in the MySQL client code which not only lets you transform binlogs into SQL, but also lets you stream them remotely over the MySQL protocol (so you don't need to set up additional infrastructure or code). In particular, the -R (--read-from-remote-server) option connects to a remote server and streams its binlogs locally. Combined with the --raw and --stop-never options, this gives you a built-in way to stream the binary logs in near real time, in their original format, without having to juggle the individual files, lose events, etc. You may still want to wrap it so that you can handle connection errors and other problems that could arise during the remote copy.
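A hedged sketch of such a wrapper, assuming a reachable master and a local target directory (host, user, starting file, and paths are illustrative, not from the original answer):

```shell
#!/bin/sh
# Stream binlogs from the master in their original (raw) format,
# reconnecting whenever mysqlbinlog exits due to a network or server error.
while true; do
    mysqlbinlog --read-from-remote-server \
        --host=master.example.com --user=backupuser --password \
        --raw --stop-never \
        --result-file=/backups/binlogs/ \
        mysql-bin.000001    # earliest binlog still present on the master
    echo "mysqlbinlog exited; reconnecting in 10s..." >&2
    sleep 10
done
```

On reconnect, mysqlbinlog with --raw resumes writing the current binlog file, so the loop keeps the local copy close to real time without manual intervention.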

How do I only get the binary logs since the exact time of the last mysqldump?

For that, you need to run mysqldump with the --master-data or --dump-slave option (along with --single-transaction, or some other way of achieving consistency) and gather the binlog or GTID coordinates. That way you can configure where to start gathering the binlogs (or, easier, where to start applying them). A trick that makes things even easier is to run FLUSH LOGS just before the dump, so you have less data to move around when doing a point-in-time recovery.
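For example (connection details omitted, file names illustrative), the dump and the recorded coordinates might look like this:

```shell
# Rotate the logs first so the dump starts near the beginning of a fresh binlog
mysql -e "FLUSH LOGS"

# --master-data=2 records the binlog coordinates as a comment in the dump;
# --single-transaction takes a consistent snapshot for InnoDB tables
mysqldump --single-transaction --master-data=2 --all-databases > full_dump.sql

# The coordinates appear near the top of the dump as a commented-out
# CHANGE MASTER TO ... MASTER_LOG_FILE=..., MASTER_LOG_POS=... statement:
grep "CHANGE MASTER" full_dump.sql
```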

If another transaction started before that time and finished after it, will it still be included properly in the log?

It may or may not be in the log, but the coordinates reported by mysqldump will not include it. Transactions are only written to the binlog at commit time; until then they are cached in memory or on disk. While it is possible to split the logs as you may be thinking, I would suggest keeping them as close as possible to the master's original format and using the coordinates to handle consistency across the different backups you take, both for simplicity and so you can reuse the same files to recover from several backups.

Can I use mysqlbinlog to only get the changes since the last mysqldump?

If you still prefer to keep them separate, use the rotation I suggested above, though you may need to add (otherwise unnecessary) database locking to make the log rotation and the backups consistent. Or you can just use the --start-position and --stop-position mysqlbinlog options for a one-time copy.
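A one-time extraction between two known coordinates might look like this (the file name and offsets are illustrative; use the positions recorded by your own dump):

```shell
# Extract only the events between the position recorded by the last dump
# (--start-position) and the point you want to stop at (--stop-position)
mysqlbinlog --start-position=154 --stop-position=7743 \
    mysql-bin.000042 > incremental_changes.sql
```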

Answered by jynus on October 18, 2020

If you work under Linux, you can use Percona XtraBackup - https://www.percona.com/software/mysql-database/percona-xtrabackup - for hot backups that include the binary log position.
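A minimal invocation as a sketch (user, password, and target directory are placeholders):

```shell
# Take a hot physical backup; the binlog file name and position at the
# moment of the backup are recorded in the xtrabackup_binlog_info file
# inside the target directory.
xtrabackup --backup --user=backupuser --password=secret \
    --target-dir=/backups/full/
```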

Answered by a_vlad on October 18, 2020
