The Novell documentation says that a mixed OES 1 and OES 2 Linux cluster
is supported, which should let you perform a rolling upgrade of the nodes
one at a time.

I tried this out in a lab (upgrading one OES 1 node to OES 2 and leaving
the other intact) and was really impressed by how well it worked, even
though the lab setup was quite scrappy in many ways!

Unfortunately, when I came to do this on the live system it went
appallingly. Both cluster nodes seemed totally confused about each
other's presence, with the result that resources were listed as
"comatose" on one server and "running" on the other. And although the NSS
volumes were mounted on the "running" server, they were not bound to the
NCP server and so were inaccessible to users over NCP. This state of
affairs persisted, with the online status of the resources flapping every
hour or so. I'm puzzled as to why it went so well in the lab. The upgrade
had proceeded without a single error, so I had assumed I was going to
have a short day. How stupid of me!
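For anyone seeing the same symptoms, this is roughly how the mounted-but-not-bound state showed itself. A sketch from memory of the OES 2 command-line tools, so double-check the exact syntax against your own box; the pool and volume names below are placeholders:

```shell
# On the node where the resource shows as "running":

# 1. What the cluster thinks is happening
cluster view      # membership as seen from this node
cluster status    # per-resource state (here: "comatose" on one node,
                  # "running" on the other)

# 2. Is the NSS volume actually mounted at the Linux level?
mount | grep POOL1    # POOL1 is a placeholder pool name

# 3. Is the volume bound to the NCP server?
ncpcon volumes        # the affected volume was missing from this list

# 4. In principle a volume can be bound to NCP by hand, e.g.:
# ncpcon mount VOL1   # VOL1 is a placeholder volume name
```

I've left the manual mount commented out: with the resource states flapping, anything bound by hand risks being undone again on the next state change.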

Anyway, if anyone has any light to shed on this I'd much appreciate it.
Otherwise, just take it as advice to avoid such a scenario.

To save anyone wasting time suggesting silly stuff: eDirectory is totally
healthy (according to all the standard health checks), timesync is fine,
the hardware is fine, both servers can see the SAN, and patch levels are
right up to date. The production OES 1 servers _were_ all functioning
very well (apart from the occasional NSS-related kernel panic/oops, which
is why I was so keen to move to OES 2 in the first place).