With the release of NFR2 and the redesign of the back-end, I was optimistic that this was an upgrade with interesting potential.

My experience, however, has been less than stellar. I have 6 main fileservers, ranging from 1TB to 6TB of data (up to 10 million files on the largest). NFR version 1 seemed to handle this fairly well: the time it took to inventory a server was appropriate for its size, navigation of the scan data was sufficiently fast, and the server the engine ran on didn't have to be overbuilt for the task. NFR 2 has introduced scalability problems in my usage so far, and I'm hoping I've simply done something wrong.

I've tried both a Windows Server 2008 R2 system and a Windows 7 system as the engine, with agents installed on all fileservers that need scanning. The agent portion of the scan runs fast enough, but the time to import the data into the database has been upwards of 48 hours on a couple of scan imports. Then, once the data is in the engine, the "Folder Summary" report is unusable because it errors out, probably because it can't retrieve the data quickly enough.

Does anyone have tuning suggestions for the built-in PostgreSQL database, or other suggestions to get this into a workable condition? If not, we'll just have to drop this product from our contract as useless.
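For context, these are the postgresql.conf settings I'd guess matter most for a bulk-import-heavy workload like this. The values below are just starting points based on general PostgreSQL tuning advice, not anything NFR-specific I've verified:

```ini
# postgresql.conf -- guessed starting points for a machine with ~8GB RAM;
# these are generic PostgreSQL tuning values, not NFR-validated settings
shared_buffers = 1GB            # default is very small on older builds
work_mem = 64MB                 # per-sort/hash memory; may help report queries
maintenance_work_mem = 512MB    # speeds index builds after large imports
checkpoint_segments = 32        # fewer checkpoints during long imports (pre-9.5 setting)
wal_buffers = 16MB
effective_cache_size = 4GB      # planner hint about available OS cache
synchronous_commit = off        # trades some durability for import throughput
```

If anyone has tried adjusting these (or knows whether NFR overwrites its bundled config on restart), I'd appreciate hearing what actually worked.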