View Full Version : disk IO problem with split_diskbuffer

April 10th, 2008, 06:15 AM
I have the following in my dump type:

tape_splitsize 40 Gb
split_diskbuffer "/dumps/amanda/split_diskbuffer"

While running amdump for a 2 TB filesystem, iostat shows that Amanda is simultaneously reading from and writing to the filesystem where split_diskbuffer is located.

On our system, /dumps/amanda/split_diskbuffer is a dedicated disk (a logical volume on dedicated drives in a HP array). There are no other logical volumes on the same spindle.

What can be done to ensure that AMANDA uses the disks in a more sensible way?

Do I need to have 40GB of RAM and buffer the split filesystem in RAM?

I can get 80 MB/s over the network. The 2TB array is on a Solaris 10 machine with a massive RAID array. Yet my backups are struggling along at just 20 MB/s or less.

avg-cpu: %user %nice %system %iowait %steal %idle
0.50 0.00 7.96 25.62 0.00 65.92

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
cciss/c0d0 0.00 0.00 0.00 0 0
cciss/c0d1 576.24 18712.87 32938.61 18900 33268
dm-0 0.00 0.00 0.00 0 0
dm-1 0.00 0.00 0.00 0 0
dm-2 0.00 0.00 0.00 0 0
dm-3 0.00 0.00 0.00 0 0
dm-4 8465.35 18712.87 32435.64 18900 32760
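For scale, the gap between those two rates matters a lot over 2 TB. A back-of-the-envelope estimate (assuming binary units, i.e. 2 TB = 2097152 MB):

```shell
# Rough wall-clock time to dump 2 TB at the observed vs. network-limited rate.
total_mb=$((2 * 1024 * 1024))                        # 2 TB expressed in MB (binary units)
echo "at 20 MB/s: ~$((total_mb / 20 / 3600)) hours"  # prints ~29 hours
echo "at 80 MB/s: ~$((total_mb / 80 / 3600)) hours"  # prints ~7 hours
```

So the split-buffer contention is roughly the difference between an overnight window and a full day.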

April 10th, 2008, 06:55 AM
What version of Amanda are you running?

April 10th, 2008, 08:22 AM
[email protected]:~$ dpkg --list | grep amanda
ii amanda-client 1:2.5.2p1-2 Advanced Maryland Automatic Network Disk Arc
ii amanda-common 1:2.5.2p1-2 Advanced Maryland Automatic Network Disk Arc
ii amanda-server 1:2.5.2p1-2 Advanced Maryland Automatic Network Disk Arc

April 11th, 2008, 05:10 AM
Given the quantity of data you're working with, it's best to use the latest and greatest. Version 2.6.0 includes a completely rewritten Device API, including a new implementation of splitting.

I don't believe that 2.5.2 has any reason to read from disk while using split buffers, but I am even more confident that 2.6.0 does not.
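As a stopgap on 2.5.x, you shouldn't need 40 GB of RAM: if no split_diskbuffer is usable, the taper buffers parts in memory instead, with the buffer (and effective part size) capped by fallback_splitsize. A sketch of such a dumptype, assuming 2.5.x amanda.conf syntax (the dumptype name here is hypothetical; check the option names against your version's man page):

define dumptype big-fs-memsplit {
    # inherit your usual base dumptype here
    tape_splitsize 40 Gb
    # No split_diskbuffer: parts are buffered in RAM instead.
    # fallback_splitsize caps that buffer (and becomes the part size),
    # so a modest value avoids needing tens of GB of memory.
    fallback_splitsize 512 Mb
}

Smaller parts mean slightly more tape overhead, but they remove the read/write contention on the buffer disk entirely.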