We're about to upgrade all of our hubs and switches. We have a gigabit-capable
switch for use with our servers. There is room left in the budget to replace
the NICs in the slightly older servers (which are Fast Ethernet), but we have
a concern about duplex settings.

In the past, we got bitten by duplex mismatches. Transfer rates between
specific servers were horrible (some data was even lost) until we hard-set
the speed and duplex settings to match on both ends.
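
For reference, we forced the settings on the server side with driver LOAD
parameters along these lines (this is from memory for Intel's CE100B.LAN
Fast Ethernet driver; parameter names vary by driver, so treat it as a
sketch and check the driver docs before using it):

    LOAD CE100B SLOT=2 SPEED=100 FORCEDUPLEX=2

(SPEED is in Mbps; FORCEDUPLEX=2 means full duplex. The matching switch
port was hard-set to 100/full as well.)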

The new switch is fully auto everything, and we'll be picking up identical
NICs for our core NetWare servers. Based on everything I've read, we
should leave all devices set to auto-negotiate speed and duplex settings
for optimal performance.
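
(My understanding of the failure mode, in case it matters: if one end is
hard-set and the other is left on auto, the auto side sees no negotiation
pulses, detects only the link speed, and falls back to half duplex.
Something like:

    switch port forced 100/full  +  NIC on auto  ->  NIC links at 100/half (mismatch)
    switch port on auto          +  NIC on auto  ->  both negotiate 100/full

So the rule of thumb I keep seeing is: both ends auto, or both ends
hard-set, never mixed.)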

There are lots of TIDs that contradict this. It seems that the
implementation of autonegotiation has historically been inconsistent from
vendor to vendor.

Is this still the case?

I've also read that you can't hard-set speed and duplex on the gigabit NICs
we are looking at (Intel server NICs; I don't have the model handy).
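
(If I understand the standard right, that would make sense: 1000BASE-T
requires autonegotiation anyway to work out master/slave timing between the
two ends, so "forcing" gigabit usually just limits what the card advertises
rather than disabling negotiation. I'd welcome a correction on that.)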

Any thoughts/experiences with this?

Thanks in advance,