Backup large database



prandini
August 3rd, 2007, 01:41 AM
I am using ZRM to back up large databases, e.g. greater than 2GB,
with mysqldump mode.
If I use it to back up a db < 2GB using mysqldump everything is OK,
otherwise I get an error...HELP

kkg
August 3rd, 2007, 03:10 AM
It would help if you could provide the verbose logs.

--kkg

prandini
August 11th, 2007, 12:55 AM
I am enclosing new versions of mysql-zrm-purge
and mysql-zrm-backup (version 1.2)
that solve the problem of BIG backups (I mean > 2GB).
I hope that someone from Zmanda will have a look and
integrate the patches into the next release...so I won't have
to patch again...

kkg
September 3rd, 2007, 10:20 PM

Hi,

Sorry for the late reply. Was busy with something else.

Thanks for the patch.

From what I can see, the main changes you have made are:

a) In mysql-zrm-purge, you have replaced 'unlink $_' with 'system("rm $_")'.
b) In mysql-zrm-backup, you have replaced the file-size calculation using '-s $_' with a function that parses the output of 'ls -l'.
c) While calculating the md5 checksum, you have removed the -b option that was being passed to md5sum.
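
Just to make sure we are reading the patch the same way, here is a rough sketch (not your actual code; the helper names are mine) of the kind of substitutions described in (a) and (b):
--------------------------------------------------------
#!/usr/bin/perl
# Illustrative sketch only, not the submitted patch.
use strict;
use warnings;

# (b) read the size from `ls -ln` instead of using -s, to avoid an
#     -s result that may be limited to a signed 32-bit value
sub file_size_via_ls {
    my ($file) = @_;
    my @fields = split /\s+/, `ls -ln "$file"`;
    return $fields[4];    # 5th column of ls -l is the size in bytes
}

# (a) delete through an external rm instead of unlink
sub remove_via_rm {
    my ($file) = @_;
    system("rm", $file) == 0 or warn "rm failed for $file\n";
}

if (@ARGV) {
    print "$ARGV[0] is ", file_size_via_ls($ARGV[0]), " bytes\n";
}
--------------------------------------------------------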

Could you explain the reasons for these changes?
--kkg

prandini
September 3rd, 2007, 10:37 PM
The main reason behind all those mods is the strange behaviour of Perl
(but not so strange, if you think about it) with files greater than 2GB in size;
those sizes cannot be represented in a signed 32-bit value, and Perl seems
to fail on them, at least in the Linksys NSLU1 environment.
Let us examine each case:
a) unlink seems unable to delete files greater than 2GB, while rm can
b) -s crashes when file sizes are greater than 2GB; it seems unable to
return its result as a floating-point number, while ls of course returns the right value
c) not all releases of md5sum have the -b option, so it fails in some cases,
while the option seems mostly unnecessary to me and can be safely removed
In conclusion: the reason is PORTABILITY. Without these changes it won't run
on the NSLU1, at least, which is otherwise a wonderfully cheap device for backups.
And the patches don't break operation on the other platforms anyway...
Now I am left with some problems with mysqldump on really large rows,
where the dump gets abruptly interrupted, AND with a backup crash on mirrored MySQL
dbs when the other peer is not online.
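
To make point (c) concrete, here is a tiny illustration (just a sketch with a placeholder file name, not code from the patch) of calling md5sum without -b; on GNU coreutils the flag only selects binary mode and does not change the digest on Linux, while some stripped-down md5sum builds simply reject the option:
--------------------------------------------------------
#!/usr/bin/perl
# Illustrative sketch only; "backup.sql" is a placeholder name.
use strict;
use warnings;

my $file = "backup.sql";

# `md5sum file` prints "<digest>  <name>"; keep the first field.
my ($md5) = split /\s+/, `md5sum "$file"`;
print "md5 of $file is $md5\n";
--------------------------------------------------------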

kkg
September 3rd, 2007, 10:53 PM
Could you let me know which version of PERL you are running?

--kkg

prandini
September 3rd, 2007, 10:59 PM
This is perl, v5.8.7 built for armeb-linux

kkg
September 3rd, 2007, 11:29 PM
Hi,

I unfortunately do not have any device with armeb-linux :-(

I just ran the following program on SuSE using PERL 5.8.7
--------------------------------------------------------
#!/usr/bin/perl
#Prog name tp.pl

my $s = -s $ARGV[0];    # file size in bytes via the -s file test

print "Size of $ARGV[0] = $s\n";
--------------------------------------------------------

The following is the output I got.
kkg@kkg:~> ./tp.pl vm.tar
Size of vm.tar = 7509084160
kkg@kkg:~> ls -l vm.tar
-rw-r--r-- 1 kkg kkg 7509084160 2007-09-04 12:47 vm.tar
kkg@kkg:~>

On SUSE there seem to be no problems with using -s. I also tried unlink, and that also worked fine on SUSE for files greater than 2GB. I am also unable to find any documentation that mentions such a limitation.

So all I can think of is that this is probably a very specific limitation of armeb-linux. It would be very helpful if you could point me to any documentation that lists the limitations of PERL 5.8.7 on armeb-linux.
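
In case it helps narrow things down, whether a given perl binary was built with large-file support can be checked through the Config module (or with perl -V:uselargefiles from the shell). This is only a diagnostic sketch, not part of ZRM:
--------------------------------------------------------
#!/usr/bin/perl
# Print the build-time large-file settings of this perl.
use strict;
use warnings;
use Config;

# On a build that can stat files larger than 2GB, uselargefiles is
# normally 'define' and lseeksize is 8.
for my $key (qw(uselargefiles lseeksize ivsize nvsize)) {
    my $val = defined $Config{$key} ? $Config{$key} : "(undef)";
    print "$key = $val\n";
}
--------------------------------------------------------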

Thanks!
--kkg

prandini
September 3rd, 2007, 11:54 PM
I searched a lot but didn't find anything about such limitations.
I agree that it is probably an implementation limitation; I did
the same checks on RHEL (not on SuSE) with the same results.
Anyway, it doesn't work correctly on the NSLU1, so I sent a patch to make
it work, as you have noticed; those problems have nevertheless been
reported by many users of other Perl versions (mostly 5.6.x), so
the workarounds could be useful to other people as
well. In any case they don't do any harm or slow things down in a
noticeable way (they are mostly executed only once).
But if you think they should not be used, I can agree with that as
well; I need them, but YMMV.

kkg
September 4th, 2007, 12:36 AM
No problem. We will do some more analysis and testing, and if it works well for us we will modify the code.

--kkg