We run a two-node cluster on NetWare 6 SP5 with file, DHCP/DNS and NDPS
services. Shared storage is a NetWare 6.5 iSCSI target.

NDPS is configured carefully according to the relevant TIDs and
documentation. There is a dedicated NSS pool and volume for NDPS with
all cluster-related NDS objects, proper IP settings and rights. Some of
the printer agents also serve as spoolers for old-fashioned print queues.
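For reference, the resource's load script follows the usual pattern from
the clustering TIDs. The names, address and switches below are placeholders
written from memory, not a verbatim copy of our script (some setups use
/dbvolume=nochange instead of naming the cluster volume object):

    nss /poolactivate=NDPSPOOL
    mount NDPSVOL VOLID=254
    add secondary ipaddress 10.10.10.50
    load broker .NDPS_BROKER.ORG /allowdup
    load ndpsm .NDPS_MANAGER.ORG /dbvolume=NDPSCLUSTER_NDPSVOL.ORG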

When there is a failover or a migration between the two nodes, the
Broker cannot enable the RMS service and reports varying error messages.
They all boil down to the fact that the Broker uses the "old" server
name as a reference for the volume. Broker and Manager have definitely
been configured with the corresponding cluster objects for volume and
server. If you migrate the resource back, there is also an error message
plus a prompt for the directory path; submit the path and RMS loads.
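For completeness, the path submitted at that prompt is simply the RMS
data directory on the cluster volume, along these lines (volume name is
a placeholder; RESDIR is the NDPS default directory, as far as I
remember):

    NDPSVOL:NDPS\RESDIR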

The NDS attributes of the Broker object show the correct value for the
volume, but the host value changes depending on the node where NDPS is
currently loaded.
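You can watch this happen by reading the object over LDAP; a quick check
along these lines should show it (DN, credentials and especially the
LDAP attribute names are my assumptions, the mapping of "Host Server"
etc. may differ in your tree):

    ldapsearch -h 10.10.10.10 -D "cn=admin,o=ORG" -w secret \
        -b "cn=NDPS_BROKER,o=ORG" -s base "(objectClass=*)" \
        hostServer hostResourceName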

Any hint or idea?