Just recently, I upgraded the RAID controller & hard drives in my BM server.
I re-installed NW65SP3 from scratch, set up BM3.8, applied BM38SP3, used CJ's
proxy.cfg, and configured my settings and ACL rules. It ran perfectly for about
4 days and then crashed hard - I received messages saying that my cache
volumes were deactivated due to a device driver failure or something similar.
I have six cache volumes (traditional, 8KB block size, no compression or
suballocation, DOS name space only, 18GB each). I have read that cache volumes
over 8GB are not recommended, but I ran six 18GB volumes previously and
everything worked fine.
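For reference, the cache volumes are mounted from AUTOEXEC.NCF and the proxy
cache directories point at them, roughly like this (typed from memory, so the
exact lines and menu path may differ slightly):

   # AUTOEXEC.NCF - mount the dedicated cache volumes after SYS
   MOUNT CACHE1
   MOUNT CACHE2
   MOUNT CACHE3
   MOUNT CACHE4
   MOUNT CACHE5
   MOUNT CACHE6

   # Cache locations are set in NWADMN32 (BorderManager Setup > Caching),
   # one directory per volume: CACHE1:\, CACHE2:\, ... CACHE6:\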

Whenever I reboot the server, the same thing happens again - sometimes the
same day, sometimes the next day. The server restarts itself and then hangs
at or just before the point where it scans for devices and partitions. I even
rebuilt the server a second time with the same results. Could the RAID
adapter be causing this? I have two other identical adapters that work fine
with NetWare. They are IBM ServeRAID 3L's with the latest firmware from IBM's
support site. SYS is two 18GB drives in a RAID-1 container, and CACHE1-6 are
individual RAID-0 containers.
--
Josh Messerschmitt
Certified Novell Engineer