Hi,

I recently set up a RAID5 array with 6 200GB drives. I've been using it
successfully for about 2 months now. Today, in an effort to educate
myself a little more and to set up email notification in case of a
drive failure, I ran a few commands (output below). The output from
these commands raised some red flags: it looks like one of my drives
has already failed.

1. Can someone please interpret the output and advise me on how to
proceed? (A tentative plan is at the end of this message.)
2. What steps do I need to take to convert the drive letter names to
their UUID? My rough guess at an mdadm.conf is sketched just after the
BTW below.
3. Is there any other probing I should do to help diagnose the
problem? The commands I'm considering are also at the end.

BTW: I'm running SUSE 10.2. I have a total of 7 drives, 6 of which
are part of the array.
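
For question 2, here is my rough guess at what /etc/mdadm.conf should
look like. The UUID is copied from the "mdadm --detail" output below,
and the email address is just a placeholder for whatever I end up
using for notifications. Please correct me if I'm off base:

# scan all partitions for md superblocks instead of naming drives
DEVICE partitions
# identify the array by its UUID rather than by member device names
ARRAY /dev/md0 UUID=ac448627:1e5cd283:9947c0c4:b6c73c4f
# where mdadm's monitor mode should send failure notifications
MAILADDR me@example.com

My understanding is that "mdadm --monitor --scan" running as a daemon
would then mail that address when a drive fails, but I haven't
confirmed that part yet.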


###### cat /proc/mdstat
Personalities : [raid5] [raid4]
md0 : active raid5 hda1[0] hdm1[4] hdg1[3] hde1[2] hdc1[1]
976751360 blocks level 5, 128k chunk, algorithm 2 [6/5] [UUUUU_]

unused devices: <none>


###### mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sun Dec 10 00:40:22 2006
Raid Level : raid5
Array Size : 976751360 (931.50 GiB 1000.19 GB)
Device Size : 195350272 (186.30 GiB 200.04 GB)
Raid Devices : 6
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Jan 17 09:37:17 2007
State : clean, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 128K

UUID : ac448627:1e5cd283:9947c0c4:b6c73c4f
Events : 0.267694

Number Major Minor RaidDevice State
0 3 1 0 active sync /dev/hda1
1 22 1 1 active sync /dev/hdc1
2 33 1 2 active sync /dev/hde1
3 34 1 3 active sync /dev/hdg1
4 88 1 4 active sync /dev/hdm1
5 0 0 5 removed


###### df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hdi2 20972152 2303188 18668964 11% /
udev 258216 192 258024 1% /dev
/dev/hdi3 94137984 7978812 86159172 9% /home
/dev/md0 976721544 629630984 347090560 65% /content
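
In case it's useful, here is what I was planning to run next for
question 3 (and, tentatively, for question 1). The /dev/hdX name is a
stand-in, since I still need to figure out which of the 7 drives
actually dropped out of the array:

# look for the kernel messages from when the drive was kicked out
dmesg | grep -i -e md0 -e raid
# check the SMART health of the suspect drive (hdX is a placeholder)
smartctl -a /dev/hdX
# dump the md superblock on the suspect member partition
mdadm --examine /dev/hdX1
# only if the drive checks out healthy: re-add it and let it resync
mdadm /dev/md0 --add /dev/hdX1

Does that look like a sane plan, or am I missing something?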