After applying the "May 2013 Scheduled Maintenance for OES2SP3 8566" patch on OES2 SP3, old cluster volumes go comatose.
If I remove the "novcifs --add --vserver" command from the resource load script, old cluster volumes go online properly, but clients can then only connect by IP address (no longer by host name).
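For reference, the line I mean in the load script looks roughly like this (quoting and IP address are from memory, so take it as a sketch; the vserver DN is the one from the CIFS log below):

# CIFS binding of the resource (the IP address is a placeholder)
exit_on_error novcifs --add '--vserver=".cn=CL_ETU_SERVER.o=hec.t=HEC_TREE."' --ip-addr=10.1.1.10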

Even before the patches, I already needed a "sleep" to wait until "novcifs -sl" listed the volume; otherwise the resource went comatose.
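The workaround in the load script was essentially "wait until the share shows up", something along these lines (volume name and timings are only examples):

# wait until "novcifs -sl" lists the ETU share before the load script continues
for i in $(seq 1 30); do
    novcifs -sl | grep -q ETU && break
    sleep 2
done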

CIFS log:
WARNING: CODIR: AllocateDirCacheEntry: Invalid Patth /media/nss/ETU/
ERROR: CODIR: GetEntryFromDirCache: Failed to allocate a dir cache entry
WARNING: CODIR: Failed to find cache entry for given path.
ERROR: ENTRY: CIFSNDSReadFromNDS: Error adding a new share (CIFSNDSPutSharePointInfo): -1
CRITICAL: CLI: AddServer: CIFSNDSReadFromNDS(): Unable to read attributes of ".cn=CL_ETU_SERVER.o=hec.t=HEC_TREE." from NDS, error = -1

The "dir cache entry" does not seem to be the culprit, because on the same server an old cluster volume goes comatose and a new cluster volume goes online properly.

This error does not occur with new NSS cluster volumes created under NetWare/OES.
What I call "old cluster volumes" are volumes that were created a long time ago (NetWare 3 or 4) and were not clustered at that time.

By comparing the attributes of the cluster virtual server objects (CL_..._SERVER) of an old volume ("ETU") and a new volume ("ADM"), I notice these differences (an ldapsearch sketch for dumping them follows the list):
- nfapCIFSShares:
  -> ETU: 'ETU:\' 'ETU' 0 'ETU'
  -> ADM: 'ADM' 'ADM' 0 'NSS Volume'
- ACL:
  -> ETU: has no "[Root] - nfapCIFSServerName - 3" entry
  -> ADM: has the [Root] trustee with Compare and Read rights on the nfapCIFSServerName attribute
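
For anyone who wants to compare on their side, the attributes can be dumped with an ldapsearch along these lines (server, bind DN and the CL_ADM_SERVER name are placeholders/guesses on my part):

# dump the CIFS-related attributes of both virtual server objects
ldapsearch -x -H ldaps://oes-server -D "cn=admin,o=hec" -W \
  -b "cn=CL_ETU_SERVER,o=hec" -s base nfapCIFSShares nfapCIFSServerName ACL
ldapsearch -x -H ldaps://oes-server -D "cn=admin,o=hec" -W \
  -b "cn=CL_ADM_SERVER,o=hec" -s base nfapCIFSShares nfapCIFSServerName ACL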

How can I add the [Root] trustee to give it a try? A tentative sketch of what I have in mind is below.
Has anyone experienced such an issue?
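
Unless someone has a better idea, what I was considering is adding the missing ACL value over LDAP, roughly like this. The eDirectory LDAP ACL value format is privileges#scope#trustee#protectedAttrName, and 3 should be Compare + Read, but whether the tree root trustee can really be written as "[Root]" here is a guess on my part, and the server and bind DN are placeholders:

# sketch: grant the tree root Compare + Read on nfapCIFSServerName of the old virtual server object
# the "[Root]" trustee spelling is an assumption and may need to be the tree root object's DN instead
ldapmodify -x -H ldaps://oes-server -D "cn=admin,o=hec" -W <<'EOF'
dn: cn=CL_ETU_SERVER,o=hec
changetype: modify
add: ACL
ACL: 3#entry#[Root]#nfapCIFSServerName
EOF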

Best regards.