Too many open file descriptors problem

March 27th, 2007, 08:59 AM
I would love to use ZRM, but when I try to back up all of my databases, it lumps them all together and passes the whole list to mysqlhotcopy in one go, leading to:

Tue Mar 27 11:25:55 2007: ERROR: Output of command: 'mysqlhotcopy' is {
DBD::mysql::db do failed: Out of resources when opening file './database/table.MYD' (Errcode: 24) at /usr/bin/mysqlhotcopy line 467.

Which means:
> perror 24
OS error code 24: Too many open files

Now, I'm set up with the largest limit possible within Linux:
> cat /proc/sys/fs/file-max

The only thing running on this server is MySQL, the table_cache is set to 512, and a current check of open file descriptors yields:
> cat /proc/sys/fs/file-nr
2265 0 65536
> lsof | wc -l

So, this just looks like the result of trying to back up a ton of databases, each with a fair number of tables, all at once.
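One thing worth noting: error 24 (EMFILE, "Too many open files") is a per-process limit, not the system-wide fs/file-max ceiling, so the numbers above don't tell the whole story. A quick way to compare the two (a sketch using Linux-specific paths; the mysqld PID lookup in the last line is an assumption about the setup):

```shell
# Compare the system-wide ceiling with the per-process limit. Errno 24
# (EMFILE) is raised when a single process hits its own open-file limit,
# no matter how high fs/file-max is set.
sysmax=$(cat /proc/sys/fs/file-max 2>/dev/null || echo unknown)
procmax=$(ulimit -n)
echo "system-wide ceiling: $sysmax, per-process limit here: $procmax"
# For the server process itself, on kernels that expose /proc/PID/limits:
# grep 'Max open files' /proc/$(pidof mysqld)/limits
```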

What I need is an option telling ZRM (or ZRM being smart enough) to send only a limited number of databases to mysqlhotcopy at a time.

I'm going to try to work on a patch for ZRM to do this, but I wanted to post here first to see (a) whether it would be a waste of time because I'm missing something obvious, and (b) whether someone's already working on it.

March 27th, 2007, 11:54 AM
Here's my hack to do it -- the replacement doMySqlHotCopy function ($segment_size controls how many databases go to mysqlhotcopy per invocation):

sub doMySqlHotCopy()
{
    my $hotcopy_cmd;
    my $segment_size = 25;  # databases per mysqlhotcopy invocation
    my @localdblist = split(/ /, $_[0]);
    while (@localdblist) {
        # Rebuild $_[0] as the next segment of at most $segment_size names
        $_[0] = '';
        for (my $i = 0; ($i < $segment_size) && (@localdblist); $i++) {
            $_[0] .= (shift @localdblist) . ' ';
        }
        ## original code:
        if( $inputs{"copy-plugin"} ){
            $hotcopy_cmd = $inputs{"copy-plugin"};
        } else {
            $hotcopy_cmd = $MYSQLHOTCOPY;
        }
        my $p = &addMySQLParams($hotcopy_cmd);
        $p .= " --quiet ";
        my $command = " ".$_[0]." \"".$inputs{"destination"}."\"";
        $command = $command." > ".$LOGGER." 2>&1";
        if( $verbose ) {
            &printLog( "Command used for raw backup is ".$p.$command."\n" );
        }
        if( $abort_flag ){
            &abortAndDie( );
        }
        my $ti = time();
        my $r = system($p.$command);
        $readLocksTime = $readLocksTime + (time() - $ti);
        if( $abort_flag ){
            &abortAndDie( );
        }
        if( $r > 0 ) {
            &printCommandOutputToLog( "ERROR", "mysqlhotcopy", $LOGGER );
            &printAndDie("mysqlhotcopy command did not succeed.\n Command used is ".$p.$command."\nReturn value is ".$r."\n");
        } else {
            if( $verbose ){
                &printCommandOutputToLog( "INFO", $MYSQLHOTCOPY, $LOGGER );
            }
        }
    } ## End while(@localdblist)
} ## End doMySqlHotCopy
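In shell terms, the batching this hack performs looks like the following (a stand-alone sketch only; the database names, destination path, and segment size of 3 are illustrative, and the real work is done inside ZRM's Perl):

```shell
# Split a space-separated database list into groups of at most $segment_size
# and echo one mysqlhotcopy invocation per group (echoed, not executed).
segment_size=3          # 25 in the patch above; 3 here to keep output short
dblist="db1 db2 db3 db4 db5 db6 db7"
batches=0               # counts how many invocations would be made
set -- $dblist
while [ $# -gt 0 ]; do
    batch=""
    i=0
    while [ $# -gt 0 ] && [ $i -lt $segment_size ]; do
        batch="$batch $1"
        shift
        i=$((i + 1))
    done
    batches=$((batches + 1))
    echo "mysqlhotcopy$batch /path/to/destination"
done
```

With seven databases and a segment size of three, this prints three invocations: two with three databases each and one with the leftover.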

March 27th, 2007, 08:50 PM

Thanks for the workaround. This is discussed in the following bug, and it is something we will try to fix in the next release.


The context of that bug is different: the issue there was that mysqlhotcopy locks all of the databases at the same time. But the requirement is the same, and so is the reasoning.

The idea behind specifying all of the databases together is to ensure that they are locked together and backed up consistently, without any of them going out of sync with one another. System administrators want this primarily because there may be strong relationships between the different databases, so they need to be backed up at the same moment; otherwise the databases could end up out of sync with each other.

ZRM follows the same philosophy. Today, if users don't need to back up all of their databases together in one go, all they need to do is create different backup sets.
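A minimal sketch of what that looks like, assuming ZRM's layout of one configuration directory per backup set (the paths and the `databases` parameter name reflect that layout and may differ in your version):

```
# /etc/mysql-zrm/appset1/mysql-zrm.conf
databases=appdb1 appdb2

# /etc/mysql-zrm/appset2/mysql-zrm.conf
databases=appdb3 appdb4
```

Each set can then be backed up on its own schedule, e.g. mysql-zrm --action backup --backup-set appset1 (invocation likewise assumed from ZRM's command-line conventions).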

At the same time, I have also added the following comment to the bug:

"But since this is a pain to do when you have a lot of databases, I think we can enhance ZRM to add a flag called --backup-separately which if specified, will make ZRM backup each database using a separate call to mysqlhotcopy."


March 28th, 2007, 03:39 AM
That seems like a good and reasonable solution -- I would suggest (and I'll add it to your bug if I can) allowing --backup-separately=X (defaulting to 1) to back up more than one database at a time, for better backup performance. I have over 200 databases, and I imagine there would be a noticeable drop in speed if hotcopy had to run more than 200 times instead of the 9 runs I can do it in right now.

Alternatively, I would *really* like to use your backup sets, but my database list is entirely dynamic: people add databases all the time. I do enforce naming conventions on the databases, so if you allowed wildcards in database names within backup sets, I could use them... Although I suppose I could also write a cron job that updates the backup set conf files from the list of databases. Anyhow, wildcard support would be nice; I'll file that as a feature request if I can.
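The cron-job idea can be sketched like this. The filter runs on a fixed list here so it works without a live server, and the naming convention (a cust_ prefix), the conf path, and the `databases` parameter name are all assumptions for illustration:

```shell
# In the real job the list would come from:  mysql -N -e 'SHOW DATABASES'
dblist="cust_alpha cust_beta mysql information_schema cust_gamma"

# Keep only the databases matching the naming convention (cust_ prefix here)
filtered=$(printf '%s\n' $dblist | grep '^cust_' | tr '\n' ' ')

# Write the backup set's database list (path and parameter name assumed):
# echo "databases=$filtered" > /etc/mysql-zrm/customers/mysql-zrm.conf
echo "databases=$filtered"
```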

March 28th, 2007, 03:43 AM
Please feel free to add to the bug; that will help us keep better track of your requirements.

BTW, there is also a bug open for wildcard character support.