
Have zmanda split files >5gb automatically



Antonio
April 25th, 2008, 01:24 PM
I want to configure zmanda to back up to an S3-mounted directory. I'm using 'in flight' bzip2 to handle compression, but even so a bzip2'd mysqldump image of some of our databases is currently 6.6GB in size.

Can I configure zmanda to split that output automatically? For example, this is the command it currently runs:

xxx.weekly:backup:INFO: Command used for logical backup is mysqldump --opt --extended-insert --single-transaction --create-options --default-character-set=utf8 --routines --master-data=2 --user="xxxxxx" --password="*****" --host="xxxxxxxxxxxxxxx" --port="3306" --socket="/tmp/mysql-xxx.sock" --all-databases | "/util/bin/misc/compress-encrypt" > "/var/lib/mysql-zrm/ods.weekly/20080425135113/backup-sql"

I could try to hack the source or something to replace that '>' with a pipe to split, but is there a better / easier way?

zmanda_jacob
April 25th, 2008, 01:47 PM
The ZRM for MySQL 2.1 supports pre and post backup plugins. These can be simple scripts that execute the bzip2 command after a successful backup and then copy the backups to your mounted s3 directory. A sample post backup plugin is located in /usr/share/mysql-zrm/plugins and is called post-backup.pl.
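
For example, a post-backup plugin can be little more than a copy to the mounted bucket. A minimal sketch (untested; I'm assuming here that ZRM hands the plugin the backup directory via a --backup-directory argument, so check the sample post-backup.pl for the exact interface, and /var/s3/mysql-backups is just a placeholder mount point):

#!/bin/sh
# Minimal post-backup plugin sketch: copy the finished backup set to the
# S3-mounted directory once ZRM has written it.
S3_MOUNT=/var/s3/mysql-backups   # placeholder mount point

# Pull the backup directory out of the arguments ZRM passes
# (assumed to be --backup-directory <dir>; verify against post-backup.pl).
while [ $# -gt 0 ]; do
    case "$1" in
        --backup-directory) BACKUP_DIR="$2"; shift 2 ;;
        *) shift ;;
    esac
done

[ -d "$BACKUP_DIR" ] || exit 1
cp -r "$BACKUP_DIR" "$S3_MOUNT"/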

zmanda_jacob
April 25th, 2008, 02:11 PM
Which version of the ZRM are you running?

Antonio
April 25th, 2008, 03:30 PM
The ZRM for MySQL 2.1 supports pre and post backup plugins. These can be simple scripts that execute the bzip2 command after a successful backup and then copy the backups to your mounted s3 directory. A sample post backup plugin is located in /usr/share/mysql-zrm/plugins and is called post-backup.pl.

Yes, I know I can do that. This is what I've currently done:

- a custom compression plugin that both bzip2s and encrypts the data from stdin -> stdout (a rough sketch of the idea is below). I did this because I don't want zmanda to make a second pass over what was already written just to do the 'encryption' cycle; I want everything streamed: mysqldump -> compress -> encrypt -> write to the local filesystem (but without writing files >5GB!).
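
The filter itself is just a stdin -> stdout pipe, roughly like this (a simplified sketch of the idea rather than the real script; the cipher and the passphrase-file path are placeholders):

#!/bin/sh
# compress-encrypt sketch: read the dump on stdin, bzip2 it, encrypt it,
# and write the result to stdout, so nothing is ever re-read from disk.
# /etc/mysql-zrm/backup.pass is a placeholder passphrase file.
bzip2 -9 -c | openssl enc -aes-256-cbc -salt -pass file:/etc/mysql-zrm/backup.pass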

I have a post-backup plugin that:

- iterates over the backup directory and splits any files >5gb into multiple pieces
- copies the directory over to s3
- deletes the entire local directory (rough outline below)
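
In outline it's something like this (a simplified sketch rather than the exact script; the mount point is a placeholder and the argument handling depends on how ZRM invokes the plugin):

#!/bin/sh
# Sketch of the post-backup steps described above.
BACKUP_DIR="$1"                  # however ZRM passes the backup dir to the plugin
S3_MOUNT=/var/s3/mysql-backups   # placeholder for the mounted bucket
LIMIT=5368709120                 # 5GB, S3's current max object size

# 1. split anything over 5GB into 1GB pieces (foo.part-aa, foo.part-ab, ...)
find "$BACKUP_DIR" -type f -size +${LIMIT}c | while read f; do
    split -b 1024m "$f" "$f.part-" && rm "$f"
done

# 2. copy the whole directory over to the mounted bucket
cp -r "$BACKUP_DIR" "$S3_MOUNT"/

# 3. delete the local copy
rm -rf "$BACKUP_DIR"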


If I could have zmanda's backup directory point directly at our locally mounted S3 bucket, it'd be perfect. Using the enterprise GUI, a non-DBA developer would be able to restore from S3 without me having to cobble something together that fetches the files, re-joins any split pieces (if necessary), and copies them back to the /var/lib/mysql-zmanda/ directory so zmanda can take over.

I am using MySQL-zrm-2.0-1

zmanda_jacob
April 28th, 2008, 02:49 PM
Can you explain how you are mounting S3? The ZRM can write its backups to any directory that is mounted in the filesystem, so provided S3 can be mounted it should not be a problem.
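
If it helps, the backup directory for a backup set is controlled by the destination parameter in its mysql-zrm.conf, so pointing ZRM at the mount should just be something like this (the path here is only an example):

# in the backup set's mysql-zrm.conf
destination=/var/s3/mysql-backups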

Antonio
April 28th, 2008, 04:01 PM
Currently I am using Jungledisk to mount an S3 bucket at /var/s3. I plan on switching to s3fs-fuse in the future (http://code.google.com/p/s3fs-fuse/). The problem is that S3's max object size is currently 5GB, and I have not found a 'mount S3 as a filesystem' solution that allows files >5GB.
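
For reference, the s3fs mount itself would be something along these lines (a sketch only; option names and the credentials file format vary between s3fs versions, so check whichever build you end up with, and the bucket name is a placeholder):

# store the key pair where s3fs expects it (location/format may differ by version)
echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

# mount the bucket at /var/s3
s3fs my-backup-bucket /var/s3 -o passwd_file=/etc/passwd-s3fs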

I might end up hacking zrm, removing the '> backup.sql' redirection and replacing it with a pipe to split.
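
Roughly (untested; the 4096m chunk size is just to stay safely under the 5GB object limit, and the restore side would need to cat the pieces back together before ZRM sees them):

mysqldump ... --all-databases | "/util/bin/misc/compress-encrypt" \
    | split -b 4096m - "/var/lib/mysql-zrm/ods.weekly/20080425135113/backup-sql.part-"

# restore side: reassemble before handing the file back to ZRM
# cat backup-sql.part-* > backup-sql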

Delta
May 28th, 2012, 06:20 AM
I have a post-backup plugin that:

- iterates over the backup directory and splits any files >5gb into multiple pieces


Well, I am searching for a way to split large files too. Would it be possible for you to send me your post-backup plugin?