60TB / 250 million file implementation?

January 12th, 2010, 07:31 PM
As my first post, thank you to all who contribute to the AMANDA project and community! I have been searching for a good backup strategy for a couple of months now and have finally gotten serious about it. I am not looking for HOW to implement it, but rather a quick fact check to see if there are any obvious snags I would be up against. My implementation would be the following:
- Main Storage System: 60TB IBRIX storage system - http://en.wikipedia.org/wiki/IBRIX_Fusion This system is at 57TB, containing over 250 million files with roughly 20,000-30,000 modifications/additions per week.
- Clients / OSes: Various Windows and Linux Boxes (Both metal and virtual)
- Backup system: The backup server will be a recommended OS flavor with AMANDA installed, plus a SAS-attached Dell TL2000 tape library with 2 magazines and a billion tapes
- Backup Data Type: Most of our data is multimedia content in the form of WMA and MP3 files. Little to no compression is expected.
- Network: Everything is connected through 1Gbps Ethernet to a Cisco 4006. There is approximately 6Gbps of network throughput from the IBRIX system to the switch (link aggregation). The TL2000 is attached to a PowerEdge 860 with 2 x 1Gbps links, so theoretical throughput to the backup system is 2Gbps, which equates to about 2.7 days of continuous transfer in a perfect world. I'm guessing I would be happy with 4 or 5 days.
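A quick back-of-the-envelope sketch of the transfer-time math above (assuming a full 2Gbps of usable bandwidth and ignoring tape-drive, seek, and filesystem overhead, which would stretch this out in practice):

```python
# Rough backup window estimate. Assumptions: 2 Gbps usable link speed,
# decimal terabytes, no tape or filesystem overhead.
data_tb = 57                        # current data set, TB
link_gbps = 2                       # 2 x 1 Gbps aggregated links
data_bits = data_tb * 1e12 * 8      # TB -> bits
seconds = data_bits / (link_gbps * 1e9)
days = seconds / 86400
print(f"{days:.1f} days")           # about 2.6 days at full line rate
```

Real-world tape throughput per drive and small-file overhead on 250 million files will likely dominate, which is why the 4-5 day target is more realistic.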
The question I guess I need to ask is Theoretically I should be able to pull this off?

April 26th, 2010, 04:52 PM

Well, I know this is a little late, but what you're wanting to do is possible. In general, breaking the data down into smaller pieces will be the key. What I mean is that it sounds like the bulk of your data will be coming from a single host/filer. Backing up 250 million files in one piece will be more challenging than the 60TB itself. Rather than adding a single entry for "/", you would fare better adding entries for /subdir1, /subdir2, etc. If you can find a logical way to break down the data, that will help. We have customers who are successfully backing up this much data with a large number of files (maybe not 250 million, but certainly a lot of files).
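For illustration, splitting a single large mount into multiple disklist entries (DLEs) might look something like this in AMANDA's disklist file; the hostname and paths here are hypothetical, and `user-tar` stands in for whatever dumptype your amanda.conf defines:

```
# disklist -- one DLE per subdirectory instead of one entry for the
# whole filer (hostname, paths, and dumptype are examples only)
ibrix-filer.example.com  /data/subdir1  user-tar
ibrix-filer.example.com  /data/subdir2  user-tar
ibrix-filer.example.com  /data/subdir3  user-tar
```

Smaller DLEs also let AMANDA's planner spread full dumps across the cycle and run estimates and dumps for different subtrees in parallel.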


April 26th, 2010, 05:41 PM
Thanks for the post; no reply is ever too late :cool: We will be implementing a proof of concept in the coming months and will post back with feedback and suggestions on how to accomplish this.

October 1st, 2010, 08:19 AM
I was curious to hear if you ever ended up successfully deploying Zmanda in your environment.

We have about 52TB and 15 million files ourselves. We're getting ready to run our first test backups this weekend with Zmanda on Solaris.

If you did get it working, do you have any tips for other Zmanda users with large amounts of data?