We have a network that isn't quite covered by the documentation or the
clustering DHCP appnote. I have worked out this solution in the lab and it
seems to work, but I was hoping all y'all gurus out there could poke holes
in it for me.

Our existing NetWare 5.1 DHCP server has one NIC with TCP/IP bindings
like this:

server IP      subnet mask      purpose of subnet
172.30.100.3   255.255.255.0    server subnet, no DHCP scope
172.30.101.3   255.255.255.0    BOOTP/DHCP print servers, Unix hosts
172.30.102.3   255.255.255.0    manually assigned DHCP clients
172.30.104.3   255.255.252.0    dynamic DHCP scope, 4 class C subnets
                                in a supernet (172.30.104.xx-107.xx)
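For context, those four bindings are just repeated BIND statements on
the one board in AUTOEXEC.NCF. A minimal sketch - the board name
E100B_1 is my placeholder here, substitute whatever your NIC driver
registers:

--AUTOEXEC.NCF excerpt (sketch):--
# Multihome the single NIC; E100B_1 is a placeholder board name
BIND IP E100B_1 ADDR=172.30.100.3 MASK=255.255.255.0
BIND IP E100B_1 ADDR=172.30.101.3 MASK=255.255.255.0
BIND IP E100B_1 ADDR=172.30.102.3 MASK=255.255.255.0
BIND IP E100B_1 ADDR=172.30.104.3 MASK=255.255.252.0
--end AUTOEXEC.NCF excerpt--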

We have found that with our routers, it's critical for the DHCP server to
have a "leg" in each of these networks. When we tried (several years ago)
to bind only one IP address on the DHCP server and point the routers'
BOOTP helper at that address, we ended up with network traffic storms
from all the extra forwarded DHCP traffic.
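For anyone comparing notes, that failed single-address setup looked
roughly like the following on the router side. This is a Cisco
IOS-style sketch with placeholder interface names and addresses, not
our actual router config:

--router config (IOS-style sketch):--
! Each client-facing interface forwards DHCP/BOOTP broadcasts
! to the single address bound on the DHCP server
interface Vlan102
 ip address 172.30.102.1 255.255.255.0
 ip helper-address 172.30.100.3
--end router config--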

The goal, then, is to cluster this DHCP service. (We'll do it with three
nodes, but I'll write it here with two for simplicity.) My first attempt
looked like this, on the theory that I would need a secondary IP address
for each subnet:

TCP/IP bindings:

cluster node 1    cluster node 2
172.30.100.4      172.30.100.5
172.30.101.4      172.30.101.5
172.30.102.4      172.30.102.5
172.30.104.4      172.30.104.5

--cluster load script:--
NSS /poolactivate=DHCPVOL
mount DHCPVOL volid=252
CLUSTER CVSBIND ADD MYSERVER 172.30.100.10
NUDP ADD MYSERVER 172.30.100.10
add secondary ipaddress 172.30.100.10
add secondary ipaddress 172.30.101.10
add secondary ipaddress 172.30.102.10
add secondary ipaddress 172.30.104.10

cluster dhcp cn=server.ou=orgunit.o=org.t=tree
dhcpsrvr -d3
--end cluster load script--
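The appnote pairs every load script with an unload script, so for
completeness, the teardown for the script above is essentially its
reverse. A sketch, using the same names as above:

--cluster unload script (sketch):--
unload dhcpsrvr
del secondary ipaddress 172.30.104.10
del secondary ipaddress 172.30.102.10
del secondary ipaddress 172.30.101.10
del secondary ipaddress 172.30.100.10
NUDP DEL MYSERVER 172.30.100.10
CLUSTER CVSBIND DEL MYSERVER 172.30.100.10
dismount DHCPVOL /force
NSS /pooldeactivate=DHCPVOL /overridetype=question
--end cluster unload script--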

The documentation says to use --servaddr A.B.C.D as a parameter on the
dhcpsrvr load line to force DHCP to talk over the virtual server IP
address. But in my case I don't need just one address; I need four
addresses on four different subnets to respond to DHCP requests, so I
just left it off. What happens then is that clients get a proper DHCP
lease, but no matter what I tried, ipconfig /all showed their DHCP
server as the 10x.4 or 10x.5 address, NOT the virtual 10x.10 address.
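(For reference, a single-subnet setup per the docs would presumably
load it like this, with the virtual address filled in - I did not end
up running it this way:

    dhcpsrvr -d3 --servaddr 172.30.100.10
)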

If the cluster fails over, client leases are still valid. When a client
needs a lease renewal, it will try to talk to its known server, fail,
and then broadcast for a new server. The new server responds and
everything works. So I have simply decided to drop all the secondary IP
addresses and let the two cluster nodes talk DHCP using their own
addresses. This results in a little more traffic as clients find the
new server after a failover, but really it's no different from manually
unloading dhcpsrvr on one traditional NetWare server and reloading it
with the same scopes on another, except that we are using a virtual NCP
server.
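To put rough numbers on that extra traffic: per RFC 2131 a client
renews by unicast to its known server at T1 (default 0.5 x lease
length) and only starts broadcasting at T2 (default 0.875 x lease) or
once the unicast goes unanswered. With, say, a 3-day lease, that's a
unicast attempt at 36 hours, so after a failover the worst case is a
short burst of broadcast DHCPREQUESTs until the surviving node answers.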

So here is the modified cluster load script without all those secondary
IP addresses:

--cluster load script:--
NSS /poolactivate=DHCPVOL
mount DHCPVOL volid=252
CLUSTER CVSBIND ADD MYSERVER 172.30.100.10
NUDP ADD MYSERVER 172.30.100.10
add secondary ipaddress 172.30.100.10

cluster dhcp cn=server.ou=orgunit.o=org.t=tree
dhcpsrvr -d3
--end cluster load script--
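(The matching unload script is the same as the earlier sketch, minus
the three extra del secondary ipaddress lines.) After bringing the
resource online, and again after a test migration, two console
commands make it easy to sanity-check: CLUSTER STATUS shows which node
is hosting the resource, and TCPIP.NLM's DISPLAY SECONDARY IPADDRESS
confirms 172.30.100.10 is bound on that node:

--console check:--
CLUSTER STATUS
DISPLAY SECONDARY IPADDRESS
--end console check--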

Hope this was clear - if anyone else is supporting a similar network, or
sees any problem with this setup, I would appreciate a response.

Phillip E. Thomas