We have a server on which we are trying to set up and verify software RAID 5;
we set up the RAID as follows :-

Load NSSMU from the console, go into RAID devices, create a RAID 5 device,
add the 4x750GB drives (698000 segment size) to the RAID, hit Go, and it
builds the RAID. So far so good. Then we create a pool (Datapool), and on
that a volume (Data), which all goes fine. We can map a drive to Data and
copy data on and off with a workstation, so everything seems to be working
OK.

However, we want to test the redundancy and give the techs here some
experience of replacing a failed drive. When we do this, we cannot get the
RAID to rebuild.
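For anyone less familiar with why a single-drive failure should be recoverable here: RAID 5 stores one parity segment per stripe, computed as the XOR of the data segments, so any one missing segment can be rebuilt from the survivors. This is a generic illustration of that principle (plain Python, nothing NSS-specific; the segment contents are made up):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data segments plus one parity segment, as on a 4-drive RAID 5 stripe.
data = [b"SEG1", b"SEG2", b"SEG3"]
parity = xor_blocks(data)

# Simulate losing one drive (the second segment) and rebuild it
# by XOR-ing the surviving segments with the parity segment.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]  # the lost segment is recovered
```

This is only the math behind the redundancy; on NetWare the actual rebuild is managed by NSS once the replacement segment is added back to the RAID device.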

We have tried 2 scenarios :-

Firstly, we downed the server and swapped one of the drives for a blank one.
When the server came back up, the RAID came back in degraded mode. However,
when you go to look at the RAID there are only 3 of the 4 segments listed,
so we cannot delete the 'bad' segment. Trying to expand the RAID with F3
allows us to add the free space, but when we hit F3 again to expand it we
get the error '559 unable to expand raid device' (that may be 565).

Secondly, we tried actually disconnecting a disk, and got a different result
depending on which disk we pulled. When I pulled one of the disks the RAID
completely disappeared, whereas pulling one of the others yielded the same
result as above.

So does anyone know how we should proceed? Obviously we want to verify
that we can recover if one of the disks actually does fail.

We will also try upgrading to the latest Support Pack and see if that makes
any difference.