Hi All!

I have a two-node cluster where the volume resource only fails over from
node 2 to node 1, never from node 1 to node 2. The servers are identical
(IBM 345, QL2300.ham 6.80.03). After a cold reboot of node 1 (the node it
can't fail over to), the volume mounts fine if it isn't already mounted on
node 2. It also mounts correctly when manually migrated to either server.
The end result is a comatose state, though the resource can be manually
offlined and onlined just fine after the failed failover.
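
When it goes comatose, bringing it back by hand is just the standard
console commands (SHARK_SERVER here is a stand-in for the actual resource
name):

cluster offline SHARK_SERVER
cluster online SHARK_SERVER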


clstrlib /hmo=off is set in ldncs.ncf on both nodes
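
That is, the CLSTRLIB load line in ldncs.ncf reads as below on each node;
the /HMO=OFF switch is the only change from stock:

LOAD CLSTRLIB /HMO=OFF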


Scripts:
Load:

set allow ip address duplicates=on
nss /poolactivate=SHARK
mount DATA VOLID=254
CLUSTER CVSBIND ADD APPSVS 172.17.10.96
NUDP ADD APPSVS 172.17.10.96
add secondary ipaddress 172.17.10.96
load broker .ficoh-cl-broker.ficoh /allowdup /ipaddress=172.17.10.96
CIFS ADD .CN=APPSVS.O=DEV.T=DEV-TREE.
load ndpsm .ndps-manager.ficoh /dbvolume=nocheck /ipaddress=172.17.10.96
set allow ip address duplicates=off
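
After the load script completes on a node, everything looks right from the
console — these are just the standard checks I use, for context:

volumes
display secondary ipaddress

DATA shows as mounted and 172.17.10.96 shows as bound.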

Unload:

unload ndpsm
unload broker
delay 15
del secondary ipaddress 172.17.10.96
NUDP DEL APPSVS 172.17.10.96
CLUSTER CVSBIND DEL APPSVS 172.17.10.96
CIFS DEL .CN=APPSVS.O=DEV.T=DEV-TREE.
nss /pooldeactivate=SHARK /overridetype=question
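
For completeness, the manual migration that does work in both directions is
just the console command (again, SHARK_SERVER and NODE2 stand in for my
real resource and server names):

cluster migrate SHARK_SERVER NODE2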


Thanks for any thoughts!

Joe