I'm muddling my way through setting up Sentinel here to learn more about it. I'm effectively new to Sentinel, not having looked at it in several years. I started with the Sentinel 7.4.0 VMware OVF, and that's up and running fine. I have an eDirectory / IDM identity vault that I'm configuring to send events to the Sentinel server.

Initially, I stumbled into the mismatched SSL configuration, similar to this thread:


So auditds is loaded in eDirectory, and the Platform Agent is caching events in files on the local system. Sentinel is logging the SSL handshake errors. It looks just like TID 7014219.
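For anyone following along, the Platform Agent's behavior, including where it caches events when it can't reach the server, is driven by logevent.conf. The snippet below writes a sample conf to /tmp purely for illustration; the usual /etc/logevent.conf location, hostname, and cache path shown are assumptions, not values from my setup:

```shell
# Illustrative logevent.conf fragment (paths/values are assumptions):
cat > /tmp/logevent.conf <<'EOF'
LogHost=sentinel.example.com
LogCacheDir=/var/opt/novell/naudit/cache
EOF

# Pull out the settings that control where cached events accumulate;
# listing the LogCacheDir directory on a real system shows the backlog:
grep -E "LogHost|LogCache" /tmp/logevent.conf
```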

So, I did the "upgrade to latest version" dance. I got eDirectory up to Hot Fix 2, ensured that the eDirectory instrumentation was updated, and upgraded the Platform Agent to Platform-Agent_2011.1r2.zip. As far as I can see, I'm on the latest of everything on the platform side, with no changes to the Sentinel server. I restarted everything, and the Sentinel logs continued to report the SSL handshake error.

At this point, I could see successful audit event transfers to Sentinel, but for new events only. If I caused the platform to generate an audit event (a login, etc.), I'd see it in Sentinel. The Platform Agent was still attempting to send the previously cached events, though, and the Sentinel server was still rejecting them with SSL handshake errors.

On a hunch, I tried the "workaround" in TID 7014219, allowing <1024-bit keys. I restarted Sentinel, and the Platform Agent successfully transferred all of the cached events. I then changed the java.security configuration back to disallowing <1024-bit keys and restarted Sentinel again, and new events continue to transfer OK.
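For reference, the setting in question lives in the JRE's java.security file; the path below is an assumption for a typical Sentinel appliance install, and the edit amounts to relaxing the certificate-path key-size restriction:

```
# Sketch of the TID 7014219 workaround. Assumed file location (adjust to
# your Sentinel install's bundled JRE):
#   /opt/novell/sentinel/jre/lib/security/java.security

# Stock Java 7+ setting, which rejects certificates with RSA keys < 1024 bits:
jdk.certpath.disabledAlgorithms=MD2, RSA keySize < 1024

# Relaxed setting, used temporarily so the cached events could drain:
# jdk.certpath.disabledAlgorithms=MD2
```

Remember to restart Sentinel after each change, and to put the restriction back afterward, since sub-1024-bit RSA keys are disabled for good reason.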

I haven't found any reference to the Platform Agent encrypting the cached data and reusing the old certificate when later transferring it to Sentinel, but I think that's what must have been happening. Can anybody confirm that it does this? If so, that'd be interesting. Or, alternately, confirm that it does not, in which case I haven't a clue why it was doing what it was doing.
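As a side note, one way to test the short-key theory is to check the key size of the certificate the Platform Agent presents. The sketch below generates a throwaway 512-bit self-signed certificate in /tmp purely to illustrate the inspection command; I haven't confirmed where the real certificate lives on the agent side:

```shell
# Create a throwaway 512-bit RSA certificate to stand in for an old
# Platform Agent certificate (purely illustrative):
openssl req -x509 -newkey rsa:512 -nodes -days 1 \
    -subj "/CN=pa-test" \
    -keyout /tmp/pa-key.pem -out /tmp/pa-cert.pem

# Print the public key size; anything under 1024 bits will trip the
# default jdk.certpath.disabledAlgorithms restriction on the Sentinel side:
openssl x509 -in /tmp/pa-cert.pem -noout -text | grep "Public-Key"
```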