Hi,


I have a 2-node cluster that we upgraded from OES11 to OES11 SP1 at the beginning of August.

Last week we created a new resource on the primary node (let's say NODE1), but when we tried to migrate this new resource to the other node (let's say NODE2), the resource went comatose.

On node 2, here is what I can see in /var/log/messages:


Aug 20 16:42:17 node2 ncs-resourced: Try LDAP for POOLDATA20_SERVER
Aug 20 16:42:17 node2 ncs-resourced: LDAP failed: <class 'ldap.SERVER_DOWN'>
Aug 20 16:42:53 node2 ncs-resourced: Error preprocessing script POOLDATA20_SERVER.load
Aug 20 16:42:53 node2 ncs-resourced: POOLDATA20_SERVER.load: CRM: Tue Aug 20 16:42:53 2013
Aug 20 16:42:53 node2 ncs-resourced: POOLDATA20_SERVER.load: /bin/sh: /var/run/ncs/POOLDATA20_SERVER.load: No such file or directory
Aug 20 16:42:53 node2 ncs-resourced: resourceMonitor: POOLDATA20_SERVER load status=127
Aug 20 16:42:54 node2 ncs-resourced: Error preprocessing script POOLDATA20_SERVER.unload
Aug 20 16:42:54 node2 ncs-resourced: POOLDATA20_SERVER.unload: CRM: Tue Aug 20 16:42:54 2013
Aug 20 16:42:54 node2 ncs-resourced: POOLDATA20_SERVER.unload: /bin/sh: /var/run/ncs/POOLDATA20_SERVER.unload: No such file or directory
Aug 20 16:42:54 node2 ncs-resourced: resourceMonitor: POOLDATA20_SERVER unload status=127

I tried to change the configuration using a new.conf file, as described in the documentation:

CONFIG_NCS_CLUSTER_DN="cn=svr1_oes2_cluster.o=context"
CONFIG_NCS_LDAP_INFO="ldaps://10.1.1.102:636,ldaps://10.1.1.101:636"
CONFIG_NCS_ADMIN_DN="cn=admin.o=context"
CONFIG_NCS_ADMIN_PASSWORD="password"

As the root user, I ran the following command on node1 and on node2:

/opt/novell/ncs/install/ncs_install.py -l -f new.conf

and then:

cluster exec "/opt/novell/ncs/bin/ncs-configd.py -init"


I rebooted node2, but the result is exactly the same.
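Given the ldap.SERVER_DOWN error, one quick check from node2 is whether the LDAPS ports answer at all. A minimal bash sketch (the IPs and port are the ones from my new.conf; probe_port is just a throwaway helper, and it only needs bash and coreutils' timeout):

```shell
# Probe a TCP port using bash's /dev/tcp pseudo-device; prints whether the
# port accepted the connection within the timeout.
probe_port() {
  # $1 = host, $2 = port, $3 = timeout in seconds (default 5)
  if timeout "${3:-5}" bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed/unreachable"
  fi
}

# LDAPS replicas from new.conf (adjust for your environment):
probe_port 10.1.1.102 636
probe_port 10.1.1.101 636
```

If the ports are open, my understanding is that ldap.SERVER_DOWN can also be a TLS problem, so the next thing I would look at is the server certificate (for example with openssl s_client -connect 10.1.1.102:636), since certificates are a common casualty of upgrades.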


Any ideas?

Stéphane