
Thread: Incremental backup using zfs send

  1. #11
    Join Date
    May 2008
    Posts
    45

    Default

    Hi martineau,

    Quote Originally Posted by martineau View Post
    Thanks for the patch. I started to look at it and I have one question:

    Why is zfs_purge_snapshot called before the backup is attempted?
    I think it should be called only if the backup succeeds, just before zfs_rename_snapshot.

    If amanda tries a level 0 and it fails, it can try a higher level on the next run.
    I think it was because I managed to get into a failure mode where I had a snapshot that had not resulted in a valid backup, and the next time I tested, amanda chose the next higher level; that is, I had valid level 0, 1 and 3 backups but no valid level 2 backup. By doing it this way, I made sure the next backup retried the failed level, in the example above a level 2 backup.

    Hope this makes sense ;-)

    There is still a failure mode where a stale current snapshot is left behind if the client crashes before amanda finishes. We may need to remove that in zfs_purge_snapshot, as otherwise we will have one failed backup before the client is in sync again.
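    The ordering being discussed can be sketched as follows. This is a minimal illustration, not the actual patch: zfs_purge_snapshot and zfs_rename_snapshot are the script's real function names, but the driver and the run_backup callback here are hypothetical.

```python
# Sketch of the snapshot lifecycle ordering discussed above.
# Purge/rename happen only after a successful backup, so a failed run
# leaves the snapshot in place and the same level is retried next time.

def backup_cycle(run_backup, zfs_purge_snapshot, zfs_rename_snapshot):
    """Run one backup cycle; return the list of actions performed."""
    actions = ["snapshot"]        # take the 'current' snapshot
    ok = run_backup()             # zfs send piped to amanda
    if ok:
        zfs_purge_snapshot()      # remove the now-obsolete old snapshot
        zfs_rename_snapshot()     # promote 'current' to its level name
        actions += ["purge", "rename"]
    # On failure the current snapshot is left behind; a crashed client
    # needs it cleaned up on the next run (the stale-snapshot case above).
    return actions
```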

    /glz
    Last edited by glowkrantz; January 16th, 2009 at 05:48 AM. Reason: More info

  2. #12
    Join Date
    May 2008
    Posts
    45

    Default

    Hi Nick,

    Quote Originally Posted by nick.smith@techop.ch View Post
    Hi all,

    I think the current proposal will have significant performance issues from piping the output through wc (especially during the estimate phase!).

    Could you not use:

    $cmd = "$self->{pfexec_cmd} $self->{zfs_path} get -Hp -o value used $self->{filesystem}\@$self->{snapshot}"

    for snapshots with level > 0?

    Regards,

    Nick
    I understand, and am searching all over the place for other ways to do it.

    This method is in any case faster than snapshot/gtar, especially on compressed filesystems. We have a few of them handling Oracle and PostgreSQL backups and logs, and with a 2.2x compression ratio on 30 to 40G referenced, it's almost a factor of 3.

    I have no actual numbers for my test system just now but I was able to go back to the default etime when I switched from snapshot/gtar to this setup.

    /glz
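    For reference, Nick's suggestion amounts to asking zfs directly for the snapshot's used property in raw bytes (the -Hp flags give script-friendly, unformatted output) instead of counting the send stream with wc. A minimal sketch; the filesystem and snapshot names are illustrative:

```python
# Sketch: build the estimate command Nick suggests and parse its output.

def used_bytes_command(pfexec, zfs, filesystem, snapshot):
    """Command printing the snapshot's 'used' property in raw bytes."""
    return "%s %s get -Hp -o value used %s@%s" % (
        pfexec, zfs, filesystem, snapshot)

def parse_used(output):
    """`zfs get -Hp -o value used` prints a single raw byte count."""
    return int(output.strip())
```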

  3. #13

    Default

    Quote Originally Posted by martineau View Post
    You didn't try to remove files.

    If you remove /rpool/test/sol-nv-b101-x86-dvd.iso.1 before creating the level 3 snapshot, then the reference for the level 3 snapshot will be 7.37G; you will get:
    # zfs list -r rpool/test
    NAME           USED  AVAIL  REFER  MOUNTPOINT
    rpool/test    10.4G   305G  10.4G  /rpool/test
    rpool/test@0    17K      -  2.15G  -
    rpool/test@1    17K      -  5.22G  -
    rpool/test@2    17K      -  7.37G  -
    rpool/test@3      0      -  7.37G  -

    but the backup will be 2.15G; how do you compute it?
    Ah well yes, very good point!

    I'll play around a bit more and see if, by examining both the used and refer values for all the snapshots, we can get a solution.
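    One way to sidestep guessing the stream size from used/refer is a dry-run send: on ZFS releases newer than the ones discussed in this thread, `zfs send -nP -i old fs@new` prints a parsable estimated stream size without sending anything. This is an assumption about modern ZFS output, not something available to the posters here; verify the format on your release. A sketch of parsing that output:

```python
def parse_send_size(output):
    """Parse the 'size' line from `zfs send -nP` dry-run output.

    Assumed shape (modern ZFS, tab-separated; verify on your release):
        incremental\t@0\ttank/fs@1\t1234
        size\t1234
    Returns the byte count, or None if no size line is found.
    """
    for line in output.splitlines():
        fields = line.split("\t")
        if fields and fields[0] == "size":
            return int(fields[1])
    return None
```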

    Not directly related but of interest: I asked the zfs-discuss mailing list what output 'zfs send -v snapshot-name' should produce and got the following:

    Quote Originally Posted by chris.kirby@sun.com
    Nick,

    Specifying -v to zfs send doesn't result in much extra information, and only in certain cases. For example, if you do an incremental send, you'll get this piece of extra output:

    # zfs send -v -I tank@snap1 tank@snap2 >/tmp/x
    sending from @snap1 to tank@snap2
    #

    zfs recv -v is a bit more chatty, but again, only in certain cases.

    Output from zfs send -v goes to stderr; output from zfs recv -v goes to stdout.

    -Chris
    Regards,

    Nick

  4. #14
    Join Date
    Nov 2005
    Location
    Canada
    Posts
    1,049

    Default

    Quote Originally Posted by glowkrantz View Post
    Hi martineau,

    I think it was because I managed to get into a failure mode where I had a snapshot that had not resulted in a valid backup, and the next time I tested, amanda chose the next higher level; that is, I had valid level 0, 1 and 3 backups but no valid level 2 backup. By doing it this way, I made sure the next backup retried the failed level, in the example above a level 2 backup.

    Hope this makes sense ;-)

    There is still a failure mode where a stale current snapshot is left behind if the client crashes before amanda finishes. We may need to remove that in zfs_purge_snapshot, as otherwise we will have one failed backup before the client is in sync again.

    /glz
    amanda should not increase the level if the backup was invalid.
    Do you remember what was invalid?
    Was it a dump to holding disk or direct to tape?
    Was the backup itself successful, but the server unable to save it to holding disk or tape?
    Or was it a zfs error that zfs didn't report to the server?
    Or something else?

  5. #15
    Join Date
    May 2008
    Posts
    45

    Default

    I think you are correct; this is not a problem. My failed sequence was probably due to a bug in the original script I had, and after getting that right, I never removed the purge phase. The way it's done now will force Amanda to go one level lower than needed after a failed backup. The purge was originally there to remove any stray current snapshots, but while I was testing the scripts it got changed to delete the expected snapshot.

    I have created a new version that works the expected way; I will submit it as soon as it has a few backups under its belt.

    /glz

  6. #16
    Join Date
    Nov 2005
    Location
    Canada
    Posts
    1,049

    Default

    I committed your previous patch with many fixes.
    If you want to send me a new patch, make it relative to the current source tree.

  7. #17
    Join Date
    May 2008
    Posts
    45

    Default

    While testing 20090122, I have found one more thing: the level 0 estimate on a compressed file system. We need to multiply by the compression ratio to get closer to the actual backup size.
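    The adjustment described above can be sketched as follows. This is an illustration, not the attached patch; it assumes the ratio string comes from `zfs get -H -o value compressratio`, which prints a value like "2.20x" (the trailing "x" is stripped if present).

```python
def adjusted_level0_estimate(used_bytes, compressratio):
    """Scale a level 0 estimate by the filesystem's compression ratio.

    compressratio is the string reported by zfs, e.g. "2.20x"; since
    zfs send transmits uncompressed data, the on-disk 'used' figure
    underestimates the stream size by roughly that factor.
    """
    ratio = float(compressratio.rstrip("x"))
    return int(used_bytes * ratio)
```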

    Attached patch adds this to the estimate call.

    Otherwise, this seems to be working just fine.

    /glz
    Attached Files
    Last edited by glowkrantz; January 26th, 2009 at 10:52 PM. Reason: Better english, spelling
