
Configuring centralized backup server



rvempati
May 30th, 2008, 04:25 PM
I am looking for a solution for my database backups. Maybe this is a known solution using Zmanda, but I tried to set it up myself and couldn't get it to work the way I expected.

I have two MySQL database servers, both with the same configuration:

1) Master database (bin-logs enabled; of course we need them for replication)
2) Slave database.

We have around 200 databases with a total size of 300GB.

I would like to take FULL BACKUPS from the SLAVE and INCREMENTAL backups from the MASTER.

Right now I have Zmanda installed and configured on both the MASTER and the SLAVE, and I am taking backups individually.

I tried to set up one Zmanda backup server configured to take both INCREMENTAL and FULL backups using the SSH-COPY plugin. The process works great if I back up smaller databases, or one database at a time.

Here is the problem I noticed.

If I take a backup from the centralized Zmanda backup server using the SSH-COPY plugin, the process first uses mysqlhotcopy to create a snapshot of all the databases in a temporary directory, and then uses the SSH-COPY plugin to copy it to the destination directory on the backup server. But as I mentioned, I have around 300GB of data and do not have that much free space on the temporary directory partition, so the backups cannot finish successfully. The only option I have is taking backups individually on each local system: FULL BACKUPS on the SLAVE and INCREMENTAL on the MASTER.

But as you know, taking backups individually makes restoration difficult, because the bin-log file and log position differ between the servers, and I am having difficulties when it comes to recovery.

Can someone please advise me on how to set up a centralized backup server for my situation?

(OR)

How do I take individual backups from both the MASTER and the SLAVE while keeping restoration easy?

Thanks in advance.

-- Vempati

zmanda_jacob
June 2nd, 2008, 10:17 AM
One problem that I see is that you will also need periodic full backups of the master server in order to restore with the incremental backups: ZRM must first restore the full backup and then restore the incremental backups in the correct order to rebuild the database.

Would it be possible for you to turn binary logging on in your slave environment? This would allow you to back up using the replication option and to do full and incremental backups on your slave server without impacting your master server. It would also simplify restores, since you would only need to work with one server.
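For reference, enabling binary logging on the slave would look something like the following in my.cnf (the server-id and log path are just examples; log-slave-updates is the standard MySQL option that makes a slave record replicated events in its own binary log, which incremental backups need):

```ini
# /etc/my.cnf on the slave (values are examples)
[mysqld]
server-id        = 2
log-bin          = /var/lib/mysql/slave-bin
log-slave-updates    # write replicated events to the slave's own binlog
```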

Since 300GB of data is too large for your backup environment, I would also recommend creating more than one backup set and breaking the data into smaller pieces: maybe 100 databases per set (A-L, M-Z, or something like that), which would let you back up in smaller chunks. Although the documentation says the temporary directory should have enough space to hold the entire backup set temporarily, it will most likely not use all 300GB. You can also specify a different location for temporary space, so if you need to, you can always add a USB drive or something similar for extra room.
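As a sketch of the split-into-sets idea (parameter names are as I remember them from the ZRM documentation, so please verify them against your version; database names and paths are made up for illustration), each backup set gets its own configuration listing a subset of the databases:

```
# /etc/mysql-zrm/set-a-l/mysql-zrm.conf  (illustrative values)
backup-mode=logical
databases="accounts billing crm inventory"    # the A-L group of databases
copy-plugin=/usr/share/mysql-zrm/plugins/ssh-copy.pl
tmpdir=/mnt/bigscratch/zrm-tmp    # point temp space at a roomier partition

# A second set, e.g. /etc/mysql-zrm/set-m-z/mysql-zrm.conf, lists the
# remaining databases; each set is then scheduled separately.
```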

rvempati
June 6th, 2008, 06:45 AM
1) One problem that I see is that you will also need periodic full backups of the master server in order to restore with the incremental backups: ZRM must first restore the full backup and then restore the incremental backups in the correct order to rebuild the database.

-- I am doing periodic full backups every week, on Saturday at midnight. If we need to restore the data from backups, we can at least start from the last full backup.

2) Would it be possible for you to turn binary logging on in your slave environment? This would allow you to back up using the replication option and to do full and incremental backups on your slave server without impacting your master server. It would also simplify restores, since you would only need to work with one server.

-- Well, our system supports automatic switch-over to the slave in case the master database server crashes or becomes unavailable due to technical problems. Enabling binary logging on the slave is not feasible for us, because at any time the slave server may become the master. So we cannot even make the slave the dedicated source for data backups.

3) Since 300GB of data is too large for your backup environment, I would also recommend creating more than one backup set and breaking the data into smaller pieces: maybe 100 databases per set (A-L, M-Z, or something like that), which would let you back up in smaller chunks. Although the documentation says the temporary directory should have enough space to hold the entire backup set temporarily, it will most likely not use all 300GB. You can also specify a different location for temporary space, so if you need to, you can always add a USB drive or something similar for extra room.

-- We can break the data set into smaller pieces, but that doesn't solve the problem if we take backups from a centralized Zmanda backup server, because it still needs to run mysqlhotcopy on the slave server into the temporary backup partition and then SSH-COPY the result onto the backup server. At the same time, we cannot run Zmanda on the SLAVE, as I explained above.

SOLUTION I AM LOOKING FOR: Instead of writing the mysqlhotcopy snapshot from the SLAVE or MASTER onto a temporary local disk, can we write it directly onto a shared mount point that is accessible (shared) by the Master, Slave, and Backup server? That way we would not need the extra SSH-COPY from temp space to the backup storage mount point.
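For what it's worth, the shared-mount idea can be sketched like this (an NFS export from the backup server, mounted at the same path on both database servers; hostnames and paths here are made up for illustration):

```
# On the backup server: export a scratch area over NFS
# /etc/exports:
#   /export/zrm-scratch  master(rw,no_root_squash) slave(rw,no_root_squash)

# On the master and slave: mount it where ZRM's temp directory points
mount -t nfs backupserver:/export/zrm-scratch /mnt/zrm-scratch

# Then point the temporary directory (or the backup destination) in
# mysql-zrm.conf at /mnt/zrm-scratch so no second copy step is needed.
```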

Finally, we would like to purchase the Zmanda enterprise version for one server and use it on one centralized backup server to manage all the backups across the different sets of database servers.

Please advise.

Thanks,
Vempati

zmanda_jacob
June 9th, 2008, 07:33 AM
In order to do an incremental backup, binary logging is required. So it sounds like you will have to do both your full and incremental backups on one of the two servers: the full and incremental backups must be taken from the same server if you want to be able to restore properly using ZRM.
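To illustrate the ordering requirement (the backup-set name and timestamped directories below are examples, not real paths), a ZRM restore proceeds from the last full backup, then applies each incremental in chronological order:

```
# Hypothetical restore sequence; directory names are illustrative
mysql-zrm-restore --backup-set dailyrun \
    --source-directory /var/lib/mysql-zrm/dailyrun/20080607000001   # last full
mysql-zrm-restore --backup-set dailyrun \
    --source-directory /var/lib/mysql-zrm/dailyrun/20080608000001   # incremental 1
mysql-zrm-restore --backup-set dailyrun \
    --source-directory /var/lib/mysql-zrm/dailyrun/20080609000001   # incremental 2
```

If the full backup came from one server and the incrementals from the other, the bin-log positions would not line up and this sequence could not be replayed cleanly.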

Also, your idea about mounting shared storage for the backup will work; however, ZRM will still have to copy the data afterwards. Are you using the SSH-COPY plugin for security reasons? If not, the SOCKET-COPY plugin is actually faster. Have you been testing with the enterprise version? The SOCKET-COPY plugin is part of the ZRM client package. It requires xinetd to function, but it will give you faster backups.

I also recommend using snapshots for the backups, as they impact your running servers the least: the backup is taken from a filesystem snapshot rather than the live data files. This requires that your data and log files reside on LVM.
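Roughly, the snapshot technique that ZRM automates looks like this done by hand (the volume group, logical volume, and mount point names are examples; the mysql client's system command keeps the global read lock held in one session while the snapshot is created):

```
# Take the snapshot while holding a global read lock (single session)
mysql <<'SQL'
FLUSH TABLES WITH READ LOCK;
SYSTEM lvcreate --snapshot --size 2G --name mysql-snap /dev/vg0/mysql;
UNLOCK TABLES;
SQL

# Mount the snapshot, copy the data off, then drop the snapshot
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ backupserver:/backups/mysql/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap
```

Because the lock is only held for the instant it takes lvcreate to run, the running server is barely interrupted, and the copy happens from the snapshot at leisure.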