Thread: RAIT Performance Issue

  1. #1

    RAIT Performance Issue

    Hi,

    I've been using Amanda 3.4.3 with an LTO6 drive over Fibre Channel for a few months. Since the write speed of a single LTO6 drive (160 MB/sec) is not high enough for us, I'm investigating RAIT. With three LTO6 drives (two data drives plus one parity drive), I expect a total write speed of 160 x 2 = 320 MB/sec to be achievable.
    I've completed the RAIT configuration with these three drives and confirmed that simultaneous recording works. The problem is that write performance is worse than expected.
    I measured the write speed of each tape drive with "iostat-scsi.stp" and it showed 84 MB/sec per drive on average, i.e. an effective write speed of 84 x 2 = 168 MB/sec rather than the expected 320 MB/sec.
    Suspecting a bottleneck at the HBA or the holding disk, I ran the same test with a two-drive RAIT configuration. In that test each drive wrote at almost 160 MB/sec, so the system has at least 320 MB/sec of bandwidth from holding disk to tape. Given that, the three-drive configuration should have reached at least 320 / 3 x 2 = 213 MB/sec effective (320 MB/sec split across three drives, times two data drives).
    My configuration is as follows. I'd appreciate any information or suggestions.

    ---Amanda.conf (excerpt)---
    inparallel 10 # maximum dumpers that will run in parallel
    dumporder "BTBTBTBTBTBT" # specify the priority order of each dumper
    taperalgo first # The algorithm used to choose which dump image to send
    runtapes 3 # number of tapes to be used in a single run of amdump
    tapedev "rait:{/dev/st0,/dev/st1,/dev/st2}"

    holdingdisk hd1 {
    comment "main holding disk"
    directory "/dumps/amanda" # where the holding disk is
    use -100 Mb # how much space can we use on it
    chunksize 3Gb # size of chunk if you want big dump to be
    }

    define tapetype LTO6x3 {
    comment "Created by amtapetype; compression enabled"
    length 7328456544 kbytes
    filemark 0 kbytes
    speed 510000 kps
    }

    define dumptype comp-user-tar {
    user-tar
    compress client fast
    priority high
    global
    program "GNUTAR"
    comment "root partitions dumped with tar"
    compress none
    index
    # exclude list "/etc/amanda/exclude.gtar"
    }

    ---HW Configuration---
    2 HBAs, 8 Gb/s: one with a single port, the other with two ports
    Holding disk: RAM disk (DDR2), mounted as tmpfs
    3 LTO6 drives
    CentOS 7

    ---Other Test Condition---
    Dump Size: ~14 GB

    Thanks,
    Yuichi

  2. #2

    RAIT is for redundancy, not for increasing throughput.
    Your 2-drive RAIT test moved 160MB/s from the holding disk and wrote 2x160MB/s (320MB/s) to the tapes.
    A 3-drive RAIT running at full speed would have to move 320MB/s from the holding disk and write 3x160MB/s (480MB/s) to the tapes.

    Can you monitor the taper process to find its bottleneck: memory, CPU, I/O, holding disk, tape drive, ...?
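
    One way to do that (a rough sketch; it assumes the sysstat tools are installed and that Amanda's writer process is named "taper"):
    pidstat -u -r -d -p $(pgrep -f taper) 5   # CPU, memory and disk I/O of the taper, sampled every 5 s
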
    You can test the speed of the holding disk by reading a large file with dd. Don't forget to flush the kernel disk cache before each test.
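
    For example (a sketch; the dump file name is hypothetical, and dropping caches requires root):
    sync
    echo 3 > /proc/sys/vm/drop_caches                        # flush the kernel page cache before the read test
    dd if=/dumps/amanda/<some-dump-file> of=/dev/null bs=1M  # sequential read speed of the holding disk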

    You should increase the device block size:
    DEVICE_PROPERTY "BLOCK-SIZE" "1m"

    If you do not need redundancy, then you could test sending the parity stream (the last device in the RAIT set) to /dev/null, which keeps the striping without the redundancy:
    tapedev "rait:{/dev/st0,/dev/st1,/dev/null}"
    tapedev "rait:{/dev/st0,/dev/st1,/dev/st2,/dev/null}"

    You could also configure all tapes independently, sending a different DLE to each tape; that can help if you have many DLEs (see the sketch below).
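
    A minimal sketch of that layout, assuming the chg-multi changer and an Amanda release with taper-parallel-write (3.3 and later); the device names are taken from your config, and this replaces the rait tapedev line:
    tpchanger "chg-multi:{/dev/st0,/dev/st1,/dev/st2}"
    taper-parallel-write 3   # run three tapers, one per drive
    runtapes 3
    Each DLE then streams whole to a single drive, so three dumps can write at the full 160 MB/s each, provided the holding disk can feed them all.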
