This is my third post concerning problems with this simple 3-node OES2
SP1 cluster. I hope this isn't a harbinger of things to come.

This is all Dell equipment with an EMC fiber channel SAN. One of my
people built the SLES 10 SP2 servers (no OES yet) and Dell techs came
in and configured them for the SAN. They installed EMC's PowerPath
multipathing management software, fully licensed. I've built
Linux-based OES clusters before using DM-MPIO with HP EVAs, but I've
never used PowerPath.

I installed OES2 SP1 and fully patched it. We then configured the SAN
LUNs, presented them, and got PowerPath to see them. They show up as
/dev/emcpowera (or b, c, etc.) for the device and /dev/emcpowera1 for
the partition.
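
For reference, this is roughly how I check what PowerPath has claimed.
The powermt CLI comes with PowerPath; emcpowera is just an example name
from my boxes:

  # Show every LUN PowerPath manages and the native paths behind each one
  powermt display dev=all

  # Or drill into a single pseudo device
  powermt display dev=emcpowera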

The odd thing is that with two HBAs and four paths, I also see all of
the individual devices for each path (/dev/sdf, /dev/sdg, etc.), a long
list of them. I don't know if this is normal.
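
My rough sanity check, assuming the path count is right, is that each
LUN should appear as four /dev/sd* entries (one per path) plus one
/dev/emcpower* pseudo device, so I've been comparing the two lists:

  # Block devices the kernel sees: one sd entry per path per LUN, plus local disks
  cat /proc/partitions

  # PowerPath pseudo devices layered on top of them
  ls /dev/emcpower*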

I wouldn't mind any of this except that when I migrate a resource off a
node, the NSS volume remains fully accessible on that node. In fact,
the volume remains accessible on any node it has ever landed on.

Let me make this clear. An "nss /pools" shows the pool as deactivated.
An "ncpcon volumes" does not list the volume name. Yet if I cd to
/media/nss/volname, I see the full content of the volume, including
subdirectories and files. What's more, I can create a file on it. This
is true even if the resource is offline.
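
To be concrete, this is roughly the sequence I run on a node where the
resource is offline. VOLNAME is a placeholder for the real volume name,
and the /proc/mounts check is just my own attempt to see whether the
kernel still has the volume mounted:

  nss /pools                      # pool shows as deactivated
  ncpcon volumes                  # volume is not listed
  ls /media/nss/VOLNAME           # yet the full contents are there
  touch /media/nss/VOLNAME/test   # and I can write to it
  cat /proc/mounts | grep nss     # does the kernel still think it's mounted?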

Essentially, the NSS cluster volumes are writable from any node the
resource has ever landed on, even though NSS shows the pool as
deactivated and the volume as not mounted.

Now I may be a tad paranoid, but I see this as a wonderful opportunity
to corrupt my data.

Does anyone have any suggestions? And, please, don't tell me this is