
Thread: Large disk - very slow dumping (stays on 0.0% for 1 hour+)

  1. #1

Large disk - very slow dumping (stays on 0.0% for 1 hour+)

    Hi all,

    Hope someone can give me some insight.

We're backing up a directory with thousands of subdirectories, with a total size of about 1.5TB. This has been split up into several DLEs (so that each DLE is roughly 100-200GB) by using include/exclude lists in the disklist.
    Compression is turned off, and estimation is effectively minimized (well, estimate is set to server).
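    For reference, the setup looks roughly like this (the hostnames, dumptype name and include globs are made up for illustration; the real lists are longer):

    ```
    # dumptype shared by all the big DLEs (name and settings as described above)
    define dumptype big-nocomp-tar {
        program "GNUTAR"
        compress none
        estimate server
    }

    # disklist: one /data mount split into ~100-200GB DLEs via include globs
    client.example.com /data-a /data {
        big-nocomp-tar
        include "./projects-[a-f]*"
    }
    client.example.com /data-b /data {
        big-nocomp-tar
        include "./projects-[g-m]*"
    }
    ```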

    When we start a backup, it seems to spend an hour sitting on "Dumping 0.0%" for each DLE. After that it'll start the actual backup of that DLE, and then the same thing repeats for the next DLE.

    On the client, at the point where it's waiting, it does seem to be running a tar forked from sendbackup. What's it doing? My gut instinct is that maybe it's generating the index, and because of the size of this folder (i.e. the many subdirectories) that's causing problems?

    Any insight would be greatly appreciated - especially if anyone has experience using Amanda with this amount of data.


  2. #2


    Do you have estimates turned on? I would turn that off to see if it helps. I ran into this problem when I was backing up my data directory, which has a ton of small 1k files in it; after I turned off the estimate for that DLE it became super fast.

  3. #3


    Quote Originally Posted by aram535 View Post
    Do you have estimate turned on?
    I have "estimate server" set in my dumptype definition, which means all DLEs are initially assumed to be 1GB by the server. I don't think it can be turned off completely, unless I'm wrong?

  4. #4


    Sorry, you're right: you can't turn off the estimate. I checked my config, and I set etimeout to -120 (2 minutes per file system). I also have "estimate client", not sure if that helped or not.
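    For anyone finding this later, that's a one-line change in amanda.conf. One caveat: if I'm reading the man page right, a negative etimeout is actually interpreted as a *total* estimate timeout per client, rather than per DLE:

    ```
    # amanda.conf
    etimeout -120    # negative: total estimate timeout per client, in seconds
    ```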

  5. #5

    amfetchdump

    I'm fairly confident now the issue is with generating the index.

    I know I can set "index no" in the dumptype to turn this off. However, I also believe this will stop amrecover from working (because there's no index, heh).

    Am I right in believing I can still use amfetchdump to restore backups? (Though I presume only *entire* DLEs, rather than individual files.)
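    In case it helps anyone else, here's what I'm planning to try. The dumptype name is made up, and it inherits from a hypothetical base dumptype:

    ```
    # dumptype with indexing disabled (this breaks amrecover for these DLEs)
    define dumptype big-noindex {
        big-nocomp-tar      # hypothetical base dumptype: GNUTAR, no compression
        index no
    }
    ```

    And for restores, amfetchdump's -p option pipes a single dump image to stdout, which can go straight into tar to extract the whole DLE (config name, hostname and diskname are illustrative):

    ```
    amfetchdump -p DailySet1 client.example.com /data-a | tar -xpf -
    ```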

    Clarification would be appreciated.

    Last edited by rikbrown; January 14th, 2009 at 03:10 AM. Reason: forgot a title!

  6. #6
    Join Date: Mar 2007 (Chicago, IL)


    Try turning index off for a run, to see if it helps. Unfortunately, tar is just slow on huge directories (well, the filesystem itself is often slow in such circumstances), so this may not help. But it will eliminate at least one variable.
