I'm having some intermittent connectivity problems on my network. There are
three NetWare 6.5 SP6 servers: one is used only for backups, while the other
two are what I would term the production servers. On those two machines we
have a two-node cluster set up with multiple cluster volumes.

Periodically, some users can't get to one or more of the network resources
they need. The weird thing about this issue is that there doesn't seem to be
any rhyme or reason to what does or doesn't work. I'd say that 90% of the
time everything works the way it should. When things aren't working, the
symptoms typically show up as failed drive mappings during the login process.
Even stranger (to me, anyway) is that there is absolutely no pattern to which
drives fail to map. You can sit down at a machine, log in, and not get the
mappings that reside on one of our physical servers and/or cluster volumes,
then restart the machine only to see a completely different set of failed
drive mappings on different physical servers and/or cluster volumes. Twenty
minutes or so later, everything is working fine again. To make matters even
stranger, it's not consistent across machines either: I have one machine that
I use for testing that has literally never had a problem, even when other
users are experiencing problems. Once the mappings are made, you retain
access to them for as long as your session lasts. If you don't get them at
login (through the drive mappings), you can't get to them any other way
either (through Network Neighborhood, etc.).

I've checked and rechecked our SLP settings to make sure that all of our
servers are communicating and that they all know where all of the resources
are supposed to be. The servers are completely consistent with regard to
where things should be located.
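
In case it helps with troubleshooting, here is a rough sketch of the kind of
client-side logging script I've been thinking of running to record exactly
which volumes become unreachable and when. The UNC paths below are
placeholders, not our real server or volume names:

    import os
    import time
    import datetime

    # Placeholder UNC paths -- substitute the real physical servers
    # and cluster volumes on your network.
    PATHS = [
        r"\\PRODSERVER1\DATA",
        r"\\PRODSERVER2\APPS",
        r"\\CLUSTERVOL1\SHARED",
    ]

    def check_paths(logfile):
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        for path in PATHS:
            # os.path.isdir() returns False if the volume can't be
            # reached or browsed at this moment.
            status = "OK" if os.path.isdir(path) else "FAILED"
            logfile.write(f"{stamp}\t{path}\t{status}\n")
        logfile.flush()

    if __name__ == "__main__":
        # Poll every five minutes so failures can be correlated with
        # specific servers/volumes and times of day.
        with open("volume_check.log", "a") as log:
            while True:
                check_paths(log)
                time.sleep(300)

If nothing else, a log like this should show whether the failures cluster
around particular volumes, particular machines, or particular times of day.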

Does anyone have any ideas about possible causes and/or solutions to
this issue?