RAID


It may be too late for this re-sync, but in the future do this:

echo 100000 > /proc/sys/dev/raid/speed_limit_max

The default speed limit is 10000 KB/s, which is what you are getting. So you have hit a software-imposed limit, not a hardware limit.
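To check what you are currently limited to, and to raise both limits for this boot (the 50000 floor below is just an example value, not a recommendation):

cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
echo 100000 > /proc/sys/dev/raid/speed_limit_max
echo 50000 > /proc/sys/dev/raid/speed_limit_min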

http://www.spinics.net/lists/raid/threads.html

http://www.silentpcshop.nl/escalade.asp (Escalade 8500-8)


http://www.cse.unsw.edu.au/~neilb/source/mdadm/

http://www.tldp.org/HOWTO/Root-RAID-HOWTO.html

RAID-5 (block-level striping with distributed parity): size = (N-1)*S, where N = number of drives and S = size of the smallest drive.

RAID-1 (mirroring): each disk is duplicated, so size = (N/2)*S.

RAID-0 (striping): good for non-critical data; no redundancy, but the best performance.

RAID-10: a combination of RAID-0 and RAID-1; the drives are first mirrored and then striped. The best option (very high redundancy plus performance) but expensive to build.

JBOD (just a bunch of drives): each drive is seen as such.
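A quick worked example with made-up numbers: with four 500 GB drives, RAID-5 gives (4-1)*500 = 1500 GB usable, RAID-1 and RAID-10 give (4/2)*500 = 1000 GB, RAID-0 gives the full 4*500 = 2000 GB with no redundancy, and JBOD simply gives the sum of all the drives.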

Tuning Linux VM parameters may help to increase read performance, depending on your RAID type, application, and other factors. You can try these settings and see if they help performance in your situation.

To make the change without having to reboot (the change will not survive a reboot), type the following at a command prompt:

echo "512" >/proc/sys/vm/min-readahead echo "512" >/proc/sys/vm/max-readahead

To make the change permanent, modify /etc/sysctl.conf and add the following lines:

vm.max-readahead=512
vm.min-readahead=512

In addition, you can modify the bdflush parameter:

sysctl -w "vm.bdflush=10 500 0 0 500 3000 10 20 0"
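After editing /etc/sysctl.conf you can load the new values without rebooting (standard sysctl usage, not specific to this setup):

sysctl -p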

Other information on Linux system tuning is available from:


Creating Software RAID
==========


Use mdadm. For RAID-1 (mirror):

mdadm -C /dev/md0 -l 1 -n 2 /dev/hdb /dev/hdc

This creates a RAID device on /dev/md0 of type RAID-1 with two devices, followed by the devices to be used.
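The same flags extend to other levels; a RAID-5 over three disks would look like this (a sketch, device names assumed):

mdadm -C /dev/md0 -l 5 -n 3 /dev/hdb /dev/hdc /dev/hdd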

mdadm -D /dev/md0
mdadm -E /dev/hdb (or mdadm -E /dev/hdc)
mdadm -Q /dev/md0

These give details. Note that the output will show State: dirty; this is as it should be. The only time the State should be clean is when the raid device is offline.

watch -n 1 cat /proc/mdstat

This also shows the rebuild status as it progresses.
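mdadm can also watch the array by itself and mail you on failure events (a minimal sketch; root as the mail address is just an example):

mdadm --monitor --scan --mail=root --daemonise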

To test the array:

mdadm /dev/md0 -f /dev/hdb

marks that disk as faulty

mdadm /dev/md0 -r /dev/hdb

removes the disk from the array

mdadm /dev/md0 -a /dev/hdb

adds the disk back to the array
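Since mdadm accepts several manage-mode operations in one invocation, the fail and remove steps can also be combined into a single command:

mdadm /dev/md0 -f /dev/hdb -r /dev/hdb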

echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
mdadm --detail --scan >> mdadm.conf

creates a config file
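The resulting mdadm.conf looks roughly like this (the UUID below is a made-up placeholder; yours comes from the mdadm --detail --scan output):

DEVICE /dev/hd*[0-9] /dev/sd*[0-9]
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=a1b2c3d4:e5f6a7b8:c9d0e1f2:a3b4c5d6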


GROWING RAID (in size)
==========

(requires kernel 2.6) First take one of the hd devices off the raid, then run:

mdadm --grow /dev/md0 --size=max

and wait for the sync. Then either remove this device and re-attach the old device (-a), OR stop the array (--stop), regrow the array, and re-attach the other drive; the raid will then rebuild.
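One reading of those steps, written out as commands (device names assumed, matching the examples above; double-check against your own setup before running):

mdadm /dev/md0 -f /dev/hdc
mdadm /dev/md0 -r /dev/hdc
mdadm --grow /dev/md0 --size=max
watch -n 1 cat /proc/mdstat
mdadm /dev/md0 -a /dev/hdc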

mdadm -A /dev/md0 /dev/hdb /dev/hdc -v

or

mdadm -A /dev/md0 /dev/hdb -v
mdadm -A /dev/md0 /dev/hdc -v

Assembling them separately will help to see which drive hasn't been grown yet, using mdadm -D /dev/md0.

When you grow the raid array the first time, it will assume the larger size, which means it won't allow the second disc to join it, giving weird errors when you try to attach that disk, because the two are supposed to be identical. lvm gets confused by this as well. If both discs are in the array, it will only grow one of them and not the other, causing it to drop that disc from the array.

WARNING: YOU HAVE TO BACK UP THE DATA AND RESTORE IT BECAUSE LVM SUCKS.

Set md_component_detection = 1 in /etc/lvm/lvm.conf; for some reason it defaults to 0. In theory you could do a vgcfgbackup first. lvmdiskscan will show a different size to vgdisplay. To grow the metadata you should be able to use:

pvcreate --setphysicalvolumesize 152G /dev/md0 -ff

(where 152G is what lvmdiskscan shows), but that doesn't work. If something goes wrong you can do a pvdisplay to find the uuid, insert that into the /etc/lvm/backup/tripserv_vol file, and then do a vgcfgrestore tripserv_vol to get back to the old situation. As you're only changing the metadata, the data itself should be untouched.
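Written out as commands, that recovery path looks roughly like this (tripserv_vol is the volume group on this particular machine; substitute your own):

pvdisplay /dev/md0
(note the PV UUID and insert it into /etc/lvm/backup/tripserv_vol)
vgcfgrestore tripserv_vol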

To get pvdisplay and lvmdiskscan to agree, install lvm2 from Debian testing and use the pvresize tool (not available in Sarge; see installing packages from testing on stable machines using apt.txt in the knowledgebase):

pvresize /dev/md0

which actually works. The lv will resize with the pv.
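To confirm that the two now agree after the resize (same tools as above):

pvdisplay /dev/md0
lvmdiskscan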

For monitoring the Compaq Proliant machines:

(Debian packages cpqarrayd and

http://starbreeze.knoware.nl/~hugo/compaq/
http://h18000.www1.hp.com/products/servers/linux/softwaredrivers.html

(http://www.joelschneider.net/compaq_proliant_1500_debian_potato.html - just use the df2 disks)