
Thread: default maxdumpsize is not working ?

  1. #11
    Join Date
    Nov 2005
    Location
    Canada
    Posts
    1,049

    Default

    The attached patch fixes the "There are 3452494G of dumps left in the holding disk." message.
    Attached Files

  2. #12
    Join Date
    Oct 2008
    Location
    Campinas-SP
    Posts
    34

    Default no way !

    Hi !

    No way! maxdumpsize is not working, for sure!

    What I did: I set maxdumpsize to 15000000000 to limit backups to 15G (I am still using a 36/72G tape), just to avoid using 2 tapes per day, and no luck!
    Even so, today's backup took 72019M/51204M (original/compressed)! See below.

    STATISTICS:
                               Total      Full      Incr.
                             --------  --------  --------
    Estimate Time (hrs:min)     0:04
    Run Time (hrs:min)          3:49
    Dump Time (hrs:min)         2:30      1:39      0:51
    Output Size (meg)        51204.6   27332.5   23872.1
    Original Size (meg)      72019.0   46598.2   25420.9
    Avg Compressed Size (%)     58.7      58.6      59.6   (level:#disks ...)
    Filesystems Dumped            48        17        31   (1:30 2:1)
    Avg Dump Rate (k/s)       5830.9    4706.8    8025.4
    Even after every DLE has been backed up more than once, the planner is still not selecting DLEs so as to keep the total backup under the maxdumpsize limit.

    This is very frustrating. I am spending twice the tapes I expected.
    And I am sure this problem is new to this version; I didn't have this problem with the previous version.

    What should I do? Should I post this problem to the developers list?

    I don't have a second tape drive on a second system. This server is the production machine and I can't install a beta version or anything like that on it....

  3. #13
    Join Date
    Nov 2005
    Location
    Canada
    Posts
    1,049

    Default

    I think maxdumpsize is in KB by default, so
    'maxdumpsize 15000000000' means 15000000000 KB, which is about 15000 GB.
    It is always safer to write the unit you want:
    maxdumpsize 15GB
    maxdumpsize 15000MB
    maxdumpsize 15000000KB
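
    A quick back-of-envelope check of that pitfall (bash arithmetic; binary units assumed, as Amanda uses):

```shell
# 15 GB written in Amanda's default KB unit, versus reading the bare
# configured number 15000000000 as KB (bash integer arithmetic):
echo "15 GB in KB:            $((15 * 1024 * 1024))"
echo "Configured value in GB: $((15000000000 / 1024 / 1024))"
```

    So the unsuffixed value asks the planner for roughly a thousand times more than intended.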

  4. #14
    Join Date
    Mar 2007
    Location
    Chicago, IL
    Posts
    688

    Default

    Looking at amdump.1 in the zip file you attached, I still see a lot of "(new disk, can't switch to degraded mode)" The planner then tries to delay those dumps to get its total size back down, but can't: "[dumps too big, 381630 KB, but cannot incremental dump new disk]" so it cancels those dumps altogether. The size after this is done is:
    Code:
      delay: Total size now 34902048.
    which is just under your maxdumpsize, as requested. Grepping out the PARTDONE lines from the dump and adding them up, I get 35312787k, which is pretty close to the mark - certainly within the margin of error of the compression estimates.

    All of this is to say that, as of posting those logs, martineau was exactly right -- Amanda is doing what you've asked of it. Most recently, you've said all DLEs have been dumped several times, and now Amanda is not hitting its target sizes. Please post a new amdump log so we can see what's going on.
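
    The PARTDONE tally described above can be scripted; here is a sketch, assuming the part size in KB is the last field of each PARTDONE line. The sample log lines below are illustrative, not a real Amanda log excerpt, so adjust the awk field for your version's format.

```shell
# Write a tiny illustrative sample of PARTDONE lines (hypothetical format).
cat > /tmp/amdump.sample <<'EOF'
driver: PARTDONE pid 1234 taper DailySet1-03 1 1024
driver: PARTDONE pid 1234 taper DailySet1-03 2 2048
EOF
# Sum the last field ($NF, assumed to be the part size in KB) of each line.
grep PARTDONE /tmp/amdump.sample | awk '{sum += $NF} END {print sum "k"}'
```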

  5. #15
    Join Date
    Oct 2008
    Location
    Campinas-SP
    Posts
    34

    Default

    Thanks martineau, thanks dustin!

    martineau, you are right: maxdumpsize has an implicit multiplier, which is K. So I added an explicit unit myself to force a specific multiplier. (I think this must be fixed: the comment in advanced.conf says it is the number of bytes; it does not mention any implicit multiplier.)
    Code:
    root@bigslam:/var/log/amanda/diario>amgetconf diario maxdumpsize
    31457280
    root@bigslam:/var/log/amanda/diario>grep maxdumpsize /etc/amanda/diario/advanced.conf 
    maxdumpsize 30GB   # Maximum number of bytes the planner will schedule	
    root@bigslam:/var/log/amanda/diario>
    Dustin, after almost two weeks and 18 tapes there are no new disks anymore. Even so, the planner is selecting too much data to back up. In my opinion, maxdumpsize must be respected by the planner, no matter how much data needs to be backed up.

    However, the "new disk" situation is not the case anymore.
    I am sending the log files for the last 2 runs. Please check them!

    I appreciate your help very much, guys. I am really annoyed by this behaviour of Amanda. As I said before, my previous setup was running fine using just one tape per day. I upgraded due to a bug in the previous version of amrecover with files/folders containing accented characters. We have an internal policy that backups must be retained for at least 30 days, so I had to buy a bunch of new tapes because I am using 2 tapes per day.

    Have a nice day!

    PS: Please let me know if any of you need more info, like log files of other days or configuration details, ok? See'ya.
    Attached Files

  6. #16
    Join Date
    Nov 2005
    Location
    Canada
    Posts
    1,049

    Default

    I found the following in a report:
    -----------------
    small estimate: bigslam var 0
    est: 1G out 10G
    -----------------
    As I already told you, Amanda uses the estimate (1G), but the result is 10 times larger, so you should expect to use 9G more tape space than planned.
    Is bigslam a 64-bit system? gtar has a problem backing up one of the big log files (I don't remember which one); you could exclude it.

    Since many of your DLEs are smaller than 1G, you should change 'displayunit' to 'm'; it will make the report more readable.
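
    A minimal amanda.conf fragment for that suggestion (a sketch; check amanda.conf(5) on your version for the accepted values):

```
# amanda.conf: report sizes in megabytes rather than gigabytes
displayunit "m"
```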

  7. #17
    Join Date
    Mar 2007
    Location
    Chicago, IL
    Posts
    688

    Default

    I fixed advanced.conf in Subversion r1400. Thanks for the tip!

  8. #18
    Join Date
    Oct 2008
    Location
    Campinas-SP
    Posts
    34

    Default

    Quote Originally Posted by martineau View Post
    I found the following in a report:
    -----------------
    small estimate: bigslam var 0
    est: 1G out 10G
    I made some changes to the var DLE, excluding some big and useless folders. How can I check right now whether it solved the problem with the backup size estimate, without waiting for tonight's backup report?

    The var DLE now looks like:
    Code:
    bigslam         var             /var                    {
            root-apice
            exclude file optional append "./lib/samba/netlogon/profiles"
            exclude file optional append "./lib/dhcp/dev"
            exclude file optional append "./lib/named/dev"
            exclude file optional append "./lib/amanda/holdings"
            exclude file optional append "./run"
            exclude file optional append "./spool/amavis"
            exclude file optional append "./spool/postfix"
    
    }
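
    One hedged way to spot-check new excludes before the nightly run, outside Amanda entirely: GNU tar can report the total archive size without writing it. This is demonstrated on a scratch tree (the paths below are made up for the demo); pointing -C at /var with the real exclude list would approximate the client-side estimate for the DLE.

```shell
# Build a scratch tree with one path that should be excluded.
mkdir -p /tmp/dlecheck/etc /tmp/dlecheck/lib/amanda/holdings
echo "keep me" > /tmp/dlecheck/etc/config
echo "skip me" > /tmp/dlecheck/lib/amanda/holdings/dump1
# --totals prints the bytes that WOULD be written; /dev/null discards them.
tar --totals --exclude=./lib/amanda/holdings \
    -cf /dev/null -C /tmp/dlecheck . 2>&1
```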

  9. #19
    Join Date
    Oct 2008
    Location
    Campinas-SP
    Posts
    34

    Thumbs up finally solved !

    Hey guys,

    It looks like the problem was solved by excluding the holding disk folder from the backup.
    This was the main problem after all. The holding disk is empty at the beginning of a backup, but during the backup it grows and messes with the estimate made by the planner.

    Thanks to martineau, who pointed out a problem with the estimates in the var DLE.

    It took some time to figure this out because in my previous disklist I was excluding the path "./amanda/holdings", but the correct path to the holding disk is "./lib/amanda/holdings". As a result, the holding disk was not excluded, which explains the big difference between the size estimated at the beginning of the backup and the real size during the backup, when several files are sitting in the holding disk waiting to go to tape!

    Besides that, don't you think maxdumpsize and runtapes MUST BE respected at all costs?

    Anyway, thanks a lot for helping me fix this issue.

    PS: Did you see my other post about an error in amcheckdump when checking a samba DLE? I'd appreciate any comments about that error too.

    cheers,
    Last edited by marozsas; November 26th, 2008 at 05:16 AM.
