We have a fairly large eDirectory deployment that is primarily
read-only (although there are brief periods, generally off-hours, where
large numbers of changes get pushed into eDirectory from PeopleSoft
through IDM). For the most part, though, we service about a
quarter-billion LDAP queries per day, spread across ten eDirectory
instances running on five servers (two instances per server). The
average dibset size is around 4.5 GB.

The largest partition contains about 300,000 objects, and there are
about 800,000 objects in the tree, in eight partitions total.

Each of the servers has 32 GB of RAM, and the hardware and software are
64-bit. eDirectory is the only process running on these SuSE 11.1
servers outside of the base server build.

Most (if not all) of the documentation we've found from Novell on
configuring and tuning the memory settings seems to be tailored toward
smaller memory models and dibs, and toward 32-bit technology, and some
of it is contradictory to a degree. For instance, some of it says to set
the maximum cache size to four or five times the size of the dib, while
other documents recommend not setting the static limit above 50-75% of
total physical memory and avoiding more than 1 GB of memory allocated to
the eDirectory database cache. As you can see, even setting it 1:1 would
put the maximum cache setting at 4.5 GB.
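
For reference, this is the sort of thing we've been putting in each
instance's _ndsdb.ini (the byte value is just our 4.5 GB dib expressed
1:1; my understanding from the tuning docs is that cache= sets a hard
limit in bytes and blockcachepercent splits it between block and entry
cache, so please correct me if I've got that wrong):

    cache=4831838208
    blockcachepercent=50

As far as I know, the change only takes effect after restarting ndsd.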

The reason for this post is to ask what the best setting would be for
obtaining 99% cache hits, or as close to that as possible. We can
obviously cache the entire dib (three or more times over); however, we
have read that you can give it *too* much cache, and that this slows
down processing, as eDirectory still has to go to disk (?) to find what
it's looking for. From TID 3178089:

"It is a common misconception that more memory is better.
Remembering that the main bottleneck is the file system, it does make
sense to load as much of the directory data as you can into memory.
However, too much memory allocated toward Novell eDirectory can cause
unwanted effects. By default, eDirectory database cache will consume up
to 80% of available RAM. Often times, in large environments, this is too
much. It becomes very costly for the server to manage a large amount of
memory. As items are cached, the cache must be continually scanned for
required entries. If the entries are not available, the disk must be
accessed to get them.

If, for instance, there is a 4 GB database and the hardware limits
memory to 2 GB for database cache, it would be unwise to allocate all of
the 2 GB for database cache. The reason for this is that each entry can
potentially be written to cache 3 or more times. This means that
eDirectory would need up to 16 GB to cache the entire database. Basic
mathematics suggests that eDirectory will be going to disk to get
entries more than cache. It does not make sense to spend most of the
time scanning large amounts of memory and then going to disk."

This is confusing.

We do have the 4 GB+ dibs mentioned there, and we have up to 15 GB of
RAM available for each instance (which would leave 2 GB for the OS,
which should be plenty). Setting a maximum cache limit of 750 MB or even
1 GB seems inherently low. How would setting it to 4 GB cause eDirectory
to have to go to disk to get the required entries?
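
Just to sanity-check the TID's math against our own numbers (this is
purely my own back-of-the-envelope arithmetic, not anything from the
docs; the 3x multiplier is the TID's "each entry can potentially be
written to cache 3 or more times"):

    dib_gb = 4.5            # average dibset size per instance
    multiplier = 3          # TID: each entry may be cached 3+ times
    ram_per_instance_gb = 15

    cache_needed_gb = dib_gb * multiplier
    print(cache_needed_gb)                         # 13.5
    print(cache_needed_gb <= ram_per_instance_gb)  # True

At the 4x implied by the TID's 16 GB figure we'd need about 18 GB,
which is a little more than one instance has, but at 3x it fits in the
15 GB with room to spare - which is why the 750 MB guidance seems so
low to me.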

The ultimate goal here is to reduce the number of faults per request.
Here's an idea of what we're currently seeing. This is with the maximum
cache size set to dib X 2, or almost 8GB:

Database Cache Statistics (Entry Cache and Block Cache):

    Hit Looks
    Fault Looks
    Requests Serviced from Cache

[the actual figures from iMonitor did not come through in this post]

Setting the maximum cache size to 750 MB, though, seems worse.
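
In case it helps, this is how I'm calculating the hit percentage from
those counters (just my own arithmetic; the numbers below are made up
for illustration):

    def cache_hit_pct(hit_looks, fault_looks):
        # share of cache looks serviced from cache rather than disk
        return 100.0 * hit_looks / (hit_looks + fault_looks)

    # e.g. 99,000,000 hits and 1,000,000 faults -> 99.0
    print(cache_hit_pct(99000000, 1000000))

The goal is to get that ratio to 99% (or better) for both the entry
cache and the block cache.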

Thanks for any help or suggestions that may be offered.


samthendsgod's Profile: https://forums.netiq.com/member.php?userid=206
View this thread: https://forums.netiq.com/showthread.php?t=46116