Performance Tuning
This chapter describes best practices for tuning your Lightweight Directory Access Protocol (LDAP) environment. The IBM Tivoli Directory Server (ITDS) works well out of the box for most small directories, but most enterprise directories need additional tuning. This chapter covers the main areas involved in performance tuning: the LDAP caches, DB2 settings, and operating system settings that need to be addressed.
Performance tuning is an art form; there is no cookie-cutter approach that fits all directories on all occasions. But there are basic starting settings that will get you in the ballpark.
Memory is LDAP's best friend; it is the one thing that can help right away with any directory. 32-bit operating systems usually limit memory to about 4 GB, and 64-bit operating systems to about 16 GB, depending on the OS. Consult your operating system vendor's documentation to find out the limits that apply to you.
Tuning for optimal performance is primarily a matter of adjusting the relationships between the LDAP server and DB2 according to the nature of your workload. Because each workload is different, instead of providing exact values for tuning settings, guidelines are provided, where appropriate, for how to determine the best settings for your system.
Table 16-1 Tasks
Step 1: If the IBM Tivoli Directory Server has never been started, start it now to complete the server configuration. The initial DB2 database tables are not created until the first startup of the directory server. (See 16.4.3.)
Step 2: Optionally, back up the IBM Directory Server using DB2 backup. It is always a good idea to back up the server before making any major change; in this case, the change is performance tuning. (See 16.9.6.)
Step 3: On a UNIX operating system, do the operating system performance tuning tasks. These vary by operating system but are mostly related to system resource limits. (See 16.10 for AIX, 16.11 for Solaris.)
Step 4: Check whether the IBM Directory Server change log is configured. By default it is not. Configure it only if it is required by other integrating products; performance is faster without the change log. (See 16.15.1.)
Step 5: Check whether the IBM Directory Server audit log is turned off. Performance is faster with the audit log turned off; use it only for troubleshooting or if you require it for security monitoring. (See 16.15.2.)
Step 6: Decide whether you are going to use Transaction and Event Notification; if not, turn them off. They are on by default. (See 16.3.)
Step 7: Make the IBM Directory LDAP cache settings. (See 16.2.)
Step 8: Make the slapd and ibmslapd configuration file changes. (See 16.4.)
Step 9: Perform the DB2 parameter performance tuning tasks. (See 16.5.)
Step 10: Run reorgchk and reorg on indexes and tables as needed. (See 16.7.2.)
Step 11: Check whether you have the needed indexes. Performance is greatly enhanced by having the right indexes in DB2. (See 16.7.3.)
Step 12: Use the monitoring outputs to help you make decisions on changes to the LDAP and DB2 settings. (See 16.16.)
16.1 ITDS application components
Between an LDAP client and the server you have a network, and one of the things you need to verify is how that network is put together. The biggest problem in networks arises when you mix different speeds and half-duplex and full-duplex connections. Windows workstations almost always default to an auto-negotiate setting, which can cause a number of problems: there will be times when Windows boots in auto mode, fails to negotiate the connection correctly, and ends up in half-duplex mode when the switch or server is set to full duplex, or vice versa. One telltale sign of this is slow file transfers and a high rate of collisions on an Ethernet network. You should hard code your network cards to a specific speed and duplex setting, and never use auto mode. If you are going to use full duplex, then everything that talks to that server needs to be full duplex, point to point. Have your network person check this out and make sure the workstations, switches, and servers are hard coded to the same speed and connection type.
The query follows a path from the IBM Tivoli Directory Server client to the LDAP server, to DB2, to the physical disks in search of entries that match the query’s search filter settings. The shorter the path to matching entries, the better overall performance you can expect from your system.
For example, if a query locates all the matching entries in the LDAP server within the LDAP Cache then access to DB2 and the disks is not necessary. If the matching entries are not found in the LDAP server cache, the query continues on to DB2 buffer pools and, if necessary, to the physical disks as a last resort. Because of the time and resources it takes to retrieve data from disk, it is better from a performance standpoint to allocate a significant amount of memory to the LDAP server caches and DB2 buffer pools.
16.2 ITDS LDAP caches
IBM Tivoli Directory Server LDAP caches and DB2 buffer pools store previously retrieved data and can significantly improve performance by reducing disk access. When requested data is found within a cache or buffer pool, it is called a cache hit. A cache miss occurs when requested data is not located in a cache or buffer pool.
A cache miss is not necessarily bad; it becomes a problem when the miss rate keeps rising after the cache has reached its maximum size, because the server then has to go elsewhere to get the information.
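One way to reason about cache sizing is with the hit ratio: hits divided by total lookups. The snippet below is a minimal illustration; the counter values are made up for the example, and on a live server they would come from a cn=monitor search.

```python
def hit_ratio(hits: int, misses: int) -> float:
    """Fraction of lookups served from the cache."""
    total = hits + misses
    return hits / total if total else 0.0

# Illustrative counters; a real server reports these through cn=monitor.
hits, misses = 9000, 1000
ratio = hit_ratio(hits, misses)
print(f"cache hit ratio: {ratio:.1%}")  # cache hit ratio: 90.0%
```

A ratio that keeps falling after the cache has filled is the warning sign described above: the cache is too small for the working set.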
Because the type of information in each cache and buffer pool is different, it is useful to understand why and when each cache is accessed. We will cover DB2 buffer pools in “DB2 tuning” on page 491.
16.2.1 LDAP caches
LDAP caches are fast storage buffers in memory used to store LDAP information such as queries, answers, and user authentication for future use. Tuning the LDAP caches is crucial to improving performance.
An LDAP search that accesses the LDAP cache can be faster than one that requires a connection to DB2, even if the information is cached in DB2. For this reason, tuning LDAP caches can improve performance by avoiding calls to the database.
The LDAP caches are especially useful for applications that frequently retrieve repeated cached information. Keep in mind that every workload is different, and some experimentation will likely be required in order to find the best settings for your workload.
Note that cache sizes for the filter cache, ACL cache, and entry cache are measured in numbers of entries.
The LDAP caches have changed over time with each major version. In versions older than SecureWay 3.2.2, any write to the database invalidated all of the LDAP caches, which then had to be reloaded; this caused performance problems severe enough to warrant not using the LDAP caches at all. With SecureWay 3.2.2 and newer, the only cache invalidated by an update is the filter cache, and this is still true today, although in future releases the filter cache is expected to become smarter about how it invalidates itself. With the release of ITDS 5.2, a new cache called the attribute cache was added to the product. It helps with this problem but does not fix it. We cover each of these LDAP caches in this section.
Changes to all of these caches are made in the LDAP configuration file. For versions earlier than 3.2.2 it is called slapd.conf, for versions 3.2.2 through 4.1 it is called slapd32.conf, and for 5.1 and 5.2 it is called ibmslapd.conf. In each of these versions you can find the file in the .../ldap/etc directory on all platforms.
 
Note: On Windows systems, the etc\slapd32.conf or etc\ibmslapd.conf file is not located at the root of the disk drive. You must search each disk to find it.
16.2.2 LDAP filter cache
When the client issues a query for data, the query first goes to the filter cache. This cache contains cached entry IDs. There are two things that can happen when a query arrives at the filter cache:
The IDs that match the filter settings used in the query are located in the filter cache. If this is the case, the list of the matching entry IDs is sent to the entry cache.
The matching entry IDs are not cached in the filter cache (a cache miss). In this case, the query must access DB2 in search of the desired data. When the information comes back from DB2, it updates the filter cache, and the list of matching entry IDs is sent to the entry cache. This continues until you reach the maximum cache limits set in the LDAP configuration file.
To determine how big your filter cache should be, run your workload with the filter cache set to different values and measure the differences in operations per second. This setting is measured in the number of entries and this is the maximum number of entries in the Filter Cache.
In 5.1 and later it is called:
ibm-slapdFilterCacheSize: 25000
In 4.1 and earlier it is called:
ibm-slapdSetEnv: RDBM_FCACHE_SIZE=25000
We will cover later how you can monitor this to see if it is reaching the maximum limits.
One thing to remember is that you want to minimize and batch updates (add, modify, modrdn, delete) when possible. This will lessen the problem with the filter cache being invalidated with any update.
There is no performance benefit in allocating any memory to the filter cache if even a small fraction of the operations in the workload are updates. If this proves to be the case for your workload, the only way to retain the performance advantage of a filter cache when updates are involved is to batch your updates. This allows long intervals during which there are only searches. If you cannot batch updates, specify a filter cache size of zero and allot more memory to other caches and buffer pools.
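For an update-heavy workload, the advice above amounts to setting the filter cache size to zero. A hedged sketch of that change as an ldapmodify input file, using the 5.1-style attribute and the cn=Front End DN shown later in this chapter:

```ldif
dn: cn=Front End, cn=Configuration
changetype: modify
replace: ibm-slapdFilterCacheSize
ibm-slapdFilterCacheSize: 0
```

Apply it with ldapmodify as shown in 16.2.7, then allot the reclaimed memory to the entry cache and DB2 buffer pools.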
16.2.3 Filter cache bypass limits
The filter cache bypass limit configuration variable limits which searches can be added to the filter cache. For example, if the bypass limit is set to 1,000, search filters that match more than 1,000 entries are not added to the filter cache. This prevents large, uncommon searches from overwriting useful cache entries. To determine the best filter cache bypass limit for your workload, run your workload repeatedly and measure the throughput. Setting the limit too low degrades performance by preventing valuable filters from being cached. Setting the filter bypass limit to approximately 100 appears to be the best size for most workloads; setting it any larger benefits performance only slightly. A value of 0 means no limit. This setting is measured in the number of entries.
In 5.1 and later it is called:
ibm-slapdFilterCacheBypassLimit: 100
In 4.1 and earlier it is called:
ibm-slapdSetEnv: RDBM_CACHE_BYPASS_LIMIT=100
With 4.1 and earlier you also have the following setting, which does the same for the entry cache:
ibm-slapdSetEnv: RDBM_ENTRY_CACHE_BYPASS=YES
If this variable is set (to any value), the entries associated with a search that matched more than RDBM_CACHE_BYPASS_LIMIT entries will also not be cached in the entry cache.
With 5.1 and later there is no separate setting for this.
16.2.4 LDAP entry cache
The entry cache contains cached entry data. Entry IDs are sent to the entry cache; if the entries that match the entry IDs are in the entry cache, the results are returned to the client. If the entry cache does not contain the entries that correspond to the entry IDs, the query goes to DB2 in search of the matching entries. To determine how big your entry cache should be, run your workload with the entry cache set to different sizes and measure the differences in operations per second. You can use the cn=monitor command (discussed later in this section) to help set this to a good level for your application.
This setting is measured in the number of entries and specifies the maximum number of entries in the entry cache.
In 5.1 and later it is called:
ibm-slapdEntryCacheSize: 25000
In 4.1 and earlier it is called:
ibm-slapdSetEnv: RDBM_CACHE_SIZE=25000
16.2.5 Measuring filter and entry cache sizes
Filter cache and entry cache sizes are measured in numbers of entries. When determining how much memory to allocate to your LDAP caches, it can be useful to know how big the entries in your cache are. The following example shows how to measure the size of cached entries:
Note that this example calculates the average size of an entry in a sample entry cache, but the average filter cache entry size can be calculated similarly.
1. From the LDAP server:
a. Set the filter cache size to zero.
b. Set the entry cache size to a small value; for example, 200.
c. Start ibmslapd (or slapd for 4.1 or earlier).
2. From the client:
a. Run your application.
b. Find the entry cache population (call this population1) using the following command:
 • On a Unix server:
ldapsearch -h servername -s base -b cn=monitor objectclass=* | grep entry_cache_current
 • For a Windows server use the following command and search for entry_cache_current:
ldapsearch -h servername -s base -b cn=monitor objectclass=*
3. From the LDAP Server:
a. Find the memory used by ibmslapd (or slapd for 4.1 or earlier); call this ibmslapd1:
 • On AIX operating systems, use ps v.
 • On Windows operating systems, use the VM size column in the Task Manager.
b. Stop ibmslapd (on 4.1 and earlier slapd).
c. Increase the size of the entry cache but keep it smaller than your working set.
d. Start ibmslapd (on 4.1 and earlier slapd).
4. Run your application again and find the entry cache population (call this population2). See step 2b for the command syntax.
5. Find the memory used by ibmslapd (on 4.1 and earlier slapd - call this ibmslapd2). See step 3a for the command syntax.
6. Calculate the size of an entry cache entry using the following formula:
(ibmslapd size2 - ibmslapd size1) / (entry cache population2 - entry cache population1)
For example, using this formula with the 500,000-entry database results in the following measurement:
(192084 KB - 51736 KB) / (48485 - 10003) = 3.65 KB per entry
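The arithmetic above can be sketched as a small helper; the numbers are the sample values from the text (memory sizes in KB, cache populations in entries).

```python
def entry_size_kb(mem2_kb: int, mem1_kb: int, pop2: int, pop1: int) -> float:
    """Average process memory consumed per cached entry, in KB."""
    return (mem2_kb - mem1_kb) / (pop2 - pop1)

# Sample values from the 500,000-entry database measurement above.
size = entry_size_kb(192084, 51736, 48485, 10003)
print(f"{size:.2f} KB per entry")  # 3.65 KB per entry
```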
16.2.6 LDAP ACL Cache
The ACL cache could not be changed until version 4.1. It holds the ACLs for the users that are in the LDAP directory. The current default setting should be enough for your needs; there is a monitor output, covered later in this section, that you can use to see whether it needs to be raised. There are two settings for the ACL cache: one controls whether the ACL cache is used at all, and the other sets the maximum ACL cache size.
ibm-slapdACLCache: TRUE
ibm-slapdACLCacheSize: 25000
16.2.7 Setting other LDAP cache configuration variables
You can set LDAP configuration variables using the Web Administration Tool or the command line.
Using the Web Administration Tool
To set LDAP configuration variables using the Web Administration Tool:
1. Expand the Manage server properties category in the navigation area of the Web Administration tool.
2. Click Performance.
3. You can modify any of the following configuration variables:
 – Cache ACL information. This option must be selected for the Maximum number of elements in ACL cache settings to take effect.
 – Maximum number of elements in ACL cache (ACL cache size). The default is 25,000.
 – Maximum number of elements in entry cache (entry cache size). Specify the maximum number of elements in the entry cache. The default is 25,000.
 – Maximum number of elements in search filter cache (filter cache size).
The search filter cache consists of the requested search filters and resulting entry identifiers that matched. On an update operation, all filter cache entries are invalidated. The default is 25,000.
 – Maximum number of elements from a single search added to search filter cache (filter cache bypass limit). If you select Elements, you must enter a number. The default is 100. Otherwise select Unlimited. Search filters that match more entries than the number specified here are not added to the search filter cache.
4. When you are finished, click OK to apply your changes, or click Cancel to exit the panel without making any changes.
Using the command line
To set LDAP configuration variables using the command line, issue the following command:
ldapmodify -D AdminDN -w Adminpassword -i filename
Where the file filename contains:
 
Note: Make sure that there is a “-” line between each attribute being changed on the same DN, that there are no blank lines within the same DN entry, and that there is exactly one blank line between DN entries.
dn: cn=Directory,cn=RDBM Backends,cn=IBM Directory,cn=Schemas,cn=Configuration
changetype: modify
replace: ibm-slapdDbConnections
ibm-slapdDbConnections: 30
dn: cn=Front End, cn=Configuration
changetype: modify
replace: ibm-slapdACLCache
ibm-slapdACLCache: TRUE
-
replace: ibm-slapdACLCacheSize
ibm-slapdACLCacheSize: 25000
-
replace: ibm-slapdEntryCacheSize
ibm-slapdEntryCacheSize: 25000
-
replace: ibm-slapdFilterCacheSize
ibm-slapdFilterCacheSize: 25000
-
replace: ibm-slapdFilterCacheBypassLimit
ibm-slapdFilterCacheBypassLimit: 100
16.2.8 LDAP Attribute Cache (only on 5.2 and later)
The attribute cache was built into ITDS 5.2 to help with the filter cache's problem of being invalidated by any update. The attribute cache stores configured attributes and their values in memory. When a search is performed using a filter that contains all cached attributes, and the filter is of a type supported by the attribute cache manager, the filter can be resolved in memory. Resolving filters in memory leads to improved search performance.
When the client issues a query for some data, the first place the query goes is the attribute cache. There are two things that can happen when a query arrives at the attribute cache:
All attributes used in the search filter are cached and the filter is of a type that can be resolved by the attribute cache manager. If this is the case, the list of matching entry IDs is resolved in memory using the attribute cache manager.
The attribute cache manager can resolve simple filters of the following types:
 – Exact match filters
 – Presence filters
The attribute cache manager can resolve complex filters only if they are conjunctive. In addition, the sub filters within the complex filters must be of the following types:
 – Exact match filters
 – Presence filters
 – Conjunctive filters
Filters containing attributes with language tags are not resolved by the attribute cache manager.
For example, if the attributes objectclass, uid, and cn are all cached, the following filters can be resolved in memory within the attribute cache manager:
 – (cn=Karla)
 – (cn=*)
 – (&(objectclass=eperson)(cn=Karla))
 – (&(objectclass=eperson)(cn=*)(uid=1234567))
 – (&(&(objectclass=eperson)(cn=*))(uid=1234567))
 – (&(uid=1234567)(&(objectclass=eperson)(cn=*)))
Either some or all of the attributes used in the search filter are not cached or the filter is of a type that cannot be resolved by the attribute cache manager. If this is the case, the query is sent to the filter cache for further processing.
 
Note: If there are no attributes in the attribute cache, the attribute cache manager determines this quickly, and the query is sent to the filter cache.
For example, if the attributes objectclass, uid, and cn are the only cached attributes, the following filters will not be able to be resolved in memory by the attribute cache manager:
(sn=Smith)
(cn=K*)
(|(objectclass=eperson)(cn~=Karla))
(&(objectclass=eperson)(cn=K*)(uid=1234567))
(&(&(objectclass=eperson)(cn<=Karla))(uid=1234567))
(&(uid=1234567)(&(objectclass=eperson)(sn=*)))
Determining which attributes to cache
To determine which attributes to cache, experiment with adding some or all of the attributes listed in the cached_attribute_candidate_hit attribute to the attribute cache. Then run your workload and measure the differences in operations per second. Keep in mind that you only want to cache the attributes that are actually used in your searches: caching attributes takes up more memory, and the cache is dynamically updated whenever the values of the cached attributes change.
16.2.9 Configuring attribute caching
You can configure attribute caching for the directory database, the changelog database, or both. Typically, there is no benefit from configuring attribute caching for the changelog database unless you perform very frequent searches of the changelog.
Using the Web Administration Tool
To configure the attribute cache using the Web Administration Tool:
1. Expand the Manage server properties category in the navigation area of the Web Administration Tool and select the Attribute cache tab.
2. You can change the amount of memory available to the directory attribute cache by changing the Directory cached attribute size (in kilobytes) field. The default is 16,384 KB (16 MB).
3. You can change the amount of memory available to the changelog attribute cache by changing the Changelog cached attribute size (in kilobytes) field. The default is 16,384 KB (16 MB).
 
Note: This selection is disabled if a changelog has not been configured.
To add attributes to the attribute cache:
1. Select the attribute that you want to add as a cached attribute from the Available attributes menu. Only those attributes that can be cached are displayed in this menu; for example, sn.
 
Note: An attribute remains in the list of available attributes until it has been placed in both the cn=directory and the cn=changelog containers.
2. Click either ‘Add to cn=directory’ or ‘Add to cn=changelog’. The attribute is displayed in the appropriate list box. You can list the same attribute in both containers.
 
Note: ‘Add to cn=changelog’ is disabled if a changelog has not been configured.
3. Repeat this process for each attribute you want to add to the attribute cache.
4. When you are finished, click Apply to save your changes without exiting, or click OK to apply your changes and exit, or click Cancel to exit this panel without making any changes.
Using the command line
To configure the attribute cache through the command line, issue the following command:
ldapmodify -D <adminDN> -w<adminPW> -i<filename>
Where <filename> contains, for example, the following.
For the directory database:
 
Note: Make sure that there is a “-” line between each attribute being changed on the same DN, that there are no blank lines within the same DN entry, and that there is exactly one blank line between DN entries.
dn: cn=Directory, cn=RDBM Backends, cn=IBM Directory, cn=Schemas, cn=Configuration
changetype: modify
add: ibm-slapdCachedAttribute
ibm-slapdCachedAttribute: sn
-
add: ibm-slapdCachedAttribute
ibm-slapdCachedAttribute: cn
-
add: ibm-SlapdCachedAttributeSize
ibm-SlapdCachedAttributeSize: 16384
For the changelog database:
 
Note: Make sure that there is a “-” line between each attribute being changed on the same DN, that there are no blank lines within the same DN entry, and that there is exactly one blank line between DN entries.
dn: cn=changelog, cn=RDBM Backends, cn=IBM Directory, cn=Schemas, cn=Configuration
changetype: modify
add: ibm-slapdCachedAttribute
ibm-slapdCachedAttribute: changetype
-
add: ibm-SlapdCachedAttributeSize
ibm-SlapdCachedAttributeSize: 16384
See the IBM Tivoli Directory Server Version 5.2 Administration Guide for more information.
16.3 Transaction and Event Notification
Transaction processing enables an application to group a set of entry updates together in one operation. Normally each individual LDAP operation is treated as a separate transaction with the database. Grouping operations together is useful when one operation is dependent on another operation because if one of the operations fails, the entire transaction fails. Transaction settings determine the limits on the transaction activity allowed on the server.
ibm-slapdTransactionEnable: TRUE
If you do not need to group operations together, that is, if each operation stands on its own, you should turn this off. This is done by setting this to FALSE:
ibm-slapdTransactionEnable: FALSE
Some other settings that deal with transaction processing:
ibm-slapdMaxNumOfTransactions: 20
ibm-slapdMaxOpPerTransaction: 5
ibm-slapdMaxTimeLimitOfTransactions: 300
The event notification function allows a server to notify a registered client that an entry in the directory tree has been changed, added, or deleted. This notification is in the form of an unsolicited message. When an event occurs, the server sends a message to the client as an LDAP v3 unsolicited notification. The messageID is 0 and the message is in the form of an extended operation response. The responseName field is set to the registration OID. The response field contains the unique registration ID and a timestamp for when the event occurred. The time field is in UTC time format.
ibm-slapdEnableEventNotification: TRUE
If you do not need the event notification function, turn it off. This is done by setting this to FALSE:
ibm-slapdEnableEventNotification: FALSE
Some other settings that deal with event notification:
ibm-slapdMaxEventsPerConnection: 100
ibm-slapdMaxEventsTotal: 0
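If you decide to turn both features off, the changes can be made with one ldapmodify input file. This is a hedged sketch: the DNs below are assumptions about where these attributes live in ibmslapd.conf, so verify them against your own configuration file before applying.

```ldif
dn: cn=Transaction, cn=Configuration
changetype: modify
replace: ibm-slapdTransactionEnable
ibm-slapdTransactionEnable: FALSE

dn: cn=Event Notification, cn=Configuration
changetype: modify
replace: ibm-slapdEnableEventNotification
ibm-slapdEnableEventNotification: FALSE
```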
16.4 Additional slapd and ibmslapd settings
This section provides additional slapd and ibmslapd settings that can be used for tuning.
16.4.1 Tune the IBM Directory Server configuration file
In this section we discuss tuning the IBM Directory Server configuration file.
ibm-slapdSizeLimit
Change the slapd size limit for LDAP searches. Edit either the slapd32.conf or the ibmslapd.conf configuration file, and change the ibm-slapdSizeLimit parameter to a number other than 0 (unlimited). Setting this value affects all LDAP searches. If this value is left unlimited (0), the time to compile and list all users increases, affecting system performance.
Tune this parameter so that a reasonable number is used for all LDAP searches. For example, the ibm-slapdSizeLimit parameter affects the number of DNs that are listed by the ldapsearch command.
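As a hedged sketch, a change like the following sets a finite limit; the DN and the value 500 are illustrative assumptions, so confirm where ibm-slapdSizeLimit appears in your slapd32.conf or ibmslapd.conf before applying.

```ldif
dn: cn=Configuration
changetype: modify
replace: ibm-slapdSizeLimit
ibm-slapdSizeLimit: 500
```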
ibm-slapdDbConnections and ibm-slapdSetEnv
Increase the number of IBM Directory Server connections to DB2 by editing the slapd32.conf or ibmslapd.conf file and increasing the ibm-slapdDbConnections value.
For AIX: With IDS 4.1 and later, memory loopback is installed automatically, so LDAP can use more than 256 MB of memory. With loopback you can increase the number of DB2 connections to 30. Older documentation said this value should be 15 or even 9; that guidance changed with memory loopback. With 3.2.2 and earlier, without loopback, you had a maximum of 8; if you manually configured loopback, you could use 30 or more. In our testing, however, little was gained by using anything greater than 32.
All other operating systems are set to 30.
If you migrate from an older version to a newer one, the migration sometimes keeps the old settings in the configuration file. It is best to check this setting and make sure it has been updated to the higher level.
The number of DB2 connections determines the amount of processing concurrency between the IBM Directory Server and DB2; one database connection is established for each worker thread. If the number of DB2 connections is increased beyond its maximum value, the IBM Directory Server reverts to the maximum value.
With SecureWay 3.2.2 and earlier, you also want to change the ibm-slapdSetEnv: SLAPD_WORKERS setting to match your ibm-slapdDbConnections value. These need to match: what you do not want is more worker threads than database connections, because worker threads will then sit waiting for connections and performance will suffer.
With IBM Directory 4.1 and later there is only ibm-slapdDbConnections, which is also used as the number of worker threads.
16.4.2 Suffixes
Preferably, add only one suffix for all user directory objects. For example:
dn: cn=Directory,cn=RDBM Backends,cn=IBM Directory,cn=Schemas,cn=Configuration
ibm-slapdSuffix: localhost
ibm-slapdSuffix: user_suffix
Where user_suffix is the suffix to be used for user objects like o=ibm,c=us.
Note that it is recommended that only one suffix be used for user objects. You can separate the user namespace within the suffix by using multiple directory container objects. If more than one suffix is used, additional directory searches are necessary to find user objects, which slows down IBM Tivoli Directory server performance. For one or two additional suffixes, the performance slows down by approximately 10 percent.
Order the suffixes in the IBM Directory Server configuration file. After the set of suffixes to be added has been determined, order them in the slapd32.conf or ibmslapd.conf file for best performance. The goal is to get the IBM Directory Server to return first the suffixes that are most likely to contain authenticating users.
For ITDS Version 4.1 and earlier, suffixes are returned in the reverse of the order in which they appear in the configuration file.
For ITDS 5.x, suffixes are returned in the same order as they appear in the configuration file.
16.4.3 Recycle the IBM Directory Server
To recycle the IBM Directory Server to make it aware of any changes, do one of the following:
To stop the IBM Directory Server, do the following:
On UNIX systems with ITDS Version 4.1 and earlier, enter the following:
ps -ef | grep slapd
# find the slapd process ID
kill -9 <slapd_process_id>
On UNIX systems with ITDS Version 5.1 and newer, enter the following:
ps -ef | grep ibmslapd
# find the ibmslapd process ID
kill -9 <ibmslapd_process_id>
To start the IBM Tivoli Directory server, do the following:
On UNIX systems with ITDS Version 4.1 and earlier, enter the following:
slapd
On UNIX systems with ITDS Version 5.1 and newer, enter the following:
ibmslapd
On Windows systems, stop and start the IBM Tivoli Directory Server service.
16.4.4 Verify suffix order
To verify that the suffixes are ordered for performance, enter the following on one line:
ldapsearch -h ldap_host -D cn=root -w ldap_passwd -s base -b "" "objectclass=*"
Where cn=root is the IBM Directory Server root administrator user, ldap_host is the host name of the IBM Directory Server, and ldap_passwd is the root administrator’s password. The output lists the suffixes in the order in which they are read, along with some other server information.
There are several additional settings that affect performance by putting limits on client activity, minimizing the impact to server throughput and resource usage, such as:
With ITDS 5.1 and earlier:
ibm-slapdTimeLimit: 900
ibm-slapdIdleTimeOut: 300
With ITDS 5.2 and later:
ibm-slapdPagedResAllowNonAdmin: TRUE
ibm-slapdPagedResLmt: 3
ibm-slapdSortKeyLimit: 3
ibm-slapdSortSrchAllowNonAdmin: TRUE
 
Note: Default values are shown.
16.5 DB2 tuning
IBM Tivoli Directory Server uses DB2 as the data store and Structured Query Language (SQL) as the query retrieval mechanism. While the LDAP server caches LDAP queries, answers, and authentication information, DB2 caches tables, indexes, and statements. As a best practice, the DB2 instance that resides with the LDAP server should hold only the LDAP database; do not share this DB2 instance with any other application. One reason is that the DB2 license that comes with IBM Tivoli Directory Server is licensed only for use by LDAP. The main reason, though, is performance: you will be tuning this DB2 instance specifically for the LDAP workload, which is not a typical relational workload, so if the instance is shared with other applications the settings will work against each other.
Many DB2 configuration parameters affect either the memory (buffer pools) or disk resources. Since disk access is usually much slower than memory access, the key database performance tuning objective is to decrease the amount of disk activity.
We will be covering the following types of DB2 tuning:
DB2 buffer pool tuning
Other DB2 configuration parameters
Optimization and organization (reorgchk and reorg)
Backing up and restoring the database (backup and restore)
16.5.1 Warning when IBM Directory Server is running
DB2 parameter tuning commands make use of db2 terminate. If the IBM Directory Server slapd or ibmslapd process is running when this command is issued, the server is left only partially functional. Cached searches appear to respond correctly, but other searches might simply return no results or produce error messages. The recovery is to recycle the IBM Directory Server.
It is best to stop the IBM Directory Server when changing the DB2 tuning parameters.
For detailed information about IBM DB2 commands, see the IBM DB2 documentation at the following Web site:
 
Attention: Only users listed as database administrators can run the DB2 commands. Be sure the user ID running the DB2 commands is a user in the dbsysadm group (UNIX operating systems) or a member of the Administrator group (Windows operating systems). This includes the DB2 instance owner and root.
If you have any trouble running the DB2 commands, check to ensure that the DB2 environment variables have been established by running db2profile (if not, the db2 get and db2 update commands will not work). The script file db2profile is located in the sqllib subdirectory under the instance owner’s home directory (the default path is /home/ldapdb2/sqllib/db2profile). If you need to tailor this file, follow the comments inside the file to set your instance name, user paths, and default database name. It is assumed that the user is logged in as ibm-slapdDbUserId. If logged in as the root user on a UNIX operating system, it is possible to switch to the instance owner as follows:
su - instance_owner
Where instance_owner is the defined owner of the LDAP database.
To log on as the database administrator on a Windows 2000 operating system, run the following command:
runas /user:instance_owner db2cmd
The instance_owner is the defined owner of the LDAP database.
 
Note: If you have problems connecting to the database on Windows systems, check the DB2INSTANCE environment variable. By default this variable is set to DB2. However, to connect to the database, the environment variable must be set to the database instance name.
set DB2INSTANCE=instance_name
The instance_name is by default ldapdb2.
For most changes that you make you need to connect to the instance:
db2 connect to database_name
Where database_name is the name of the database. The default is ldapdb2.
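Putting the preceding steps together, a session on a UNIX system might look like the following sketch. The instance owner name and database name shown are the defaults (ldapdb2); substitute your own values.

```shell
# Sketch of a typical DB2 tuning session on UNIX (default names assumed).
su - ldapdb2                                  # switch to the instance owner
. ~/sqllib/db2profile                         # establish the DB2 environment
db2 connect to ldapdb2                        # connect to the directory database
db2 get database configuration for ldapdb2    # review current settings
```

Remember from the note below that configuration changes only take effect after db2stop and db2start.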
For additional stability and performance enhancements, upgrade to the latest version of DB2.
 
Note: Changes to DB2 configuration parameters do not take effect until the database is restarted with db2stop and db2start.
16.5.2 DB2 buffer pool tuning
DB2 buffer pool tuning is one of the most significant types of DB2 performance tuning. A buffer pool is a data cache between LDAP and the physical DB2 database files for both tables and indexes. DB2 buffer pools are searched when entries and their attributes are not found in the entry cache. Buffer pool tuning typically needs to be done when the database is initially loaded and when the database size changes significantly.
There are several considerations to keep in mind when tuning the DB2 buffer pools; for example:
If there are no buffer pools, all database activity results in disk access.
If the size of each buffer pool is too small, LDAP must wait for DB2 disk activity to satisfy DB2 SQL requests.
If one or more buffer pools is too large, memory on the LDAP server might be wasted.
If the total amount of space used by the LDAP caches and both buffer pools is larger than physical memory available on the server, operating system paging (disk activity) will occur.
To get the current DB2 buffer pool sizes, run the following commands:
db2 connect to database_name
db2 "select bpname,npages,pagesize from sysibm.sysbufferpools"
Where database_name is the name of the database.
Example 16-1 shows the default settings for the example above.
Example 16-1 Display DB2 buffer pool size default settings
BPNAME NPAGES PAGESIZE
 
IBMDEFAULTBP 29500 4096
LDAPBP 1230 32768
 
2 record(s) selected
This gives you approximately 154 MB of buffer pool memory used by LDAP within DB2.
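The memory figure can be derived from the query output by multiplying each buffer pool's NPAGES by its PAGESIZE and summing the results. A quick shell sketch, using the default values from Example 16-1:

```shell
# Derive total buffer pool memory from NPAGES and PAGESIZE
# (default values from Example 16-1).
IBMDEFAULTBP_PAGES=29500; IBMDEFAULTBP_PAGESIZE=4096
LDAPBP_PAGES=1230;        LDAPBP_PAGESIZE=32768

TOTAL_BYTES=$(( IBMDEFAULTBP_PAGES * IBMDEFAULTBP_PAGESIZE + LDAPBP_PAGES * LDAPBP_PAGESIZE ))
TOTAL_MB=$(( (TOTAL_BYTES + 524288) / 1048576 ))   # rounded to the nearest MB
echo "Total buffer pool memory: ${TOTAL_MB} MB"
```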
Starting with SecureWay 3.2.2 and continuing through IBM Tivoli Directory Server 5.2, the LDAP directory database (DB2) has two buffer pools: LDAPBP (32 K page size) and IBMDEFAULTBP (4 K page size). The size of each buffer pool needs to be set separately, but the method for determining how big each should be is the same: Run your workload with the buffer pool sizes set to different values and measure the differences in operations per second.
 
Note: DB2 does not allow buffer pools to be set to zero.
16.5.3 LDAPBP buffer pool size
This buffer pool contains cached entry data (ldap_entry) and all of the associated indexes. LDAPBP is similar to the entry cache, except that LDAPBP uses different algorithms in determining which entries to cache. It is possible that an entry that is not cached in the entry cache is located in LDAPBP.
To determine the best size for your LDAPBP buffer pool, run your workload with the LDAPBP buffer pool size set to different values and measure the differences in operations per second.
16.5.4 IBMDEFAULTBP buffer pool size
DB2 system information, including system tables and other information that is useful in resolving filters, is cached in the IBMDEFAULTBP buffer pool. You might need to adjust the IBMDEFAULTBP cache settings for better performance in the LDAPBP.
To determine the best size for your IBMDEFAULTBP buffer pool, run your workload with the buffer pool sizes set to different values and measure the differences in operations per second.
16.5.5 Setting buffer pool sizes
As a general guideline, a 3 to 1 ratio between memory allocated to the IBMDEFAULTBP (4-K pages) and LDAPBP (32-K pages) is good for performance. By default, the IBMDEFAULTBP is created with a size of 29500 (4-K) pages. By default, the LDAPBP buffer pool is created with a size of 1230 (32-K) pages.
On an LDAP Server with minimal memory configuration, this allocates roughly 60 percent of physical memory to the DB2 buffer pools.
Use the alter bufferpool command to set the IBMDEFAULTBP and LDAPBP buffer pool sizes. The following examples show some IBMDEFAULTBP and LDAPBP buffer pool settings and their real memory usage:
The defaults for LDAP 3.2.2 and later use 154 MB of memory:
 – db2 alter bufferpool ibmdefaultbp size 29500
 – db2 alter bufferpool ldapbp size 1230
A 3 to 1 ratio uses 259.375 MB of memory:
 – db2 alter bufferpool ibmdefaultbp size 49800
 – db2 alter bufferpool ldapbp size 2075
Doubling the previous 3 to 1 example uses 518.75 MB of memory:
 – db2 alter bufferpool ibmdefaultbp size 99600
 – db2 alter bufferpool ldapbp size 4150
Doubling again uses 1037.5 MB of memory:
 – db2 alter bufferpool ibmdefaultbp size 199200
 – db2 alter bufferpool ldapbp size 8300
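The 3 to 1 ratio can be generalized. The following sketch allocates three quarters of a memory budget to IBMDEFAULTBP (4 KB pages) and one quarter to LDAPBP (32 KB pages), matching the ratio of the examples above, and prints the corresponding commands. The 512 MB budget is illustrative only; choose a budget that fits your server's physical memory and the warnings in the next section.

```shell
# Hypothetical helper: split a memory budget 3 to 1 between
# IBMDEFAULTBP (4 KB pages) and LDAPBP (32 KB pages).
BUDGET_MB=512                                  # illustrative budget only
TOTAL_BYTES=$(( BUDGET_MB * 1048576 ))
IBM_PAGES=$(( TOTAL_BYTES * 3 / 4 / 4096 ))    # three quarters as 4 KB pages
LDAP_PAGES=$(( TOTAL_BYTES / 4 / 32768 ))      # one quarter as 32 KB pages
echo "db2 alter bufferpool ibmdefaultbp size ${IBM_PAGES}"
echo "db2 alter bufferpool ldapbp size ${LDAP_PAGES}"
```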
16.5.6 Warnings about buffer pool memory usage
If any of the buffer pools are set too high, DB2 can fail to start due to insufficient memory. If this occurs there might be a core dump file, but usually there is no error message.
On AIX systems, the system error log might report a memory allocation failure. To view this log, enter the following:
errpt -a | more
Restoring a database that was backed up on a system with buffer pool sizes that are too large for the target system might cause the restoration to fail.
If DB2 fails to start due to buffer pool sizes being too large, redo the DB2 tuning parameters.
16.5.7 Other DB2 configuration parameters
There are a number of other configuration settings worth examining, especially some of the heap settings. Functional problems can occur if one of the heap configuration parameters is set too low or too high. We have included some of the settings that are worth looking at. Some of these settings depend on other settings being set; these dependencies are noted where they occur.
 
Note: If DB2 recognizes that a parameter is configured insufficiently, the problem is posted to the diagnostic log (db2diag.log). We have included some of the error codes that you might get with some of these settings.
16.5.8 Warning about MINCOMMIT
Do not set the MINCOMMIT DB2 tuning parameter to anything other than 1. Previous versions of the documentation said to set the MINCOMMIT parameter to 25. A setting other than 1 might cause time-outs on update operations and might slow down the performance of these updates when you are using replication.
16.5.9 More DB2 configuration settings
In this section we discuss more DB2 configuration settings.
Utility Heap Size configuration parameter - util_heap_sz
The information is:
Default [Range]: 5000 [16 - 524 288]
Unit of Measure: Pages (4 KB)
When Allocated: As required by the database manager utilities
When Freed: When the utility no longer needs the memory
This parameter indicates the maximum amount of memory that can be used simultaneously by the BACKUP, RESTORE, and LOAD (including load recovery) utilities.
Our recommendation is to use the default value unless your utilities run out of space, in which case you should increase this value. If memory on your system is constrained, you may wish to lower the value of this parameter to limit the memory used by the database utilities. If the parameter is set too low, you may not be able to run utilities concurrently. You need to set this parameter large enough to accommodate all of the buffers that you want to allocate for the concurrent utilities. To set this heap to the default value, run:
db2 update database configuration for ldapdb2 using UTIL_HEAP_SZ 5000
Application Control Heap Size configuration parameter - app_ctl_heap_sz
The information is:
Database server with local and remote clients: 128 [1-64 000]
Database server with local clients:
 – 64 [1-64 000] (for non-UNIX platforms)
 – 128 [1-64 000] (for UNIX-based platforms)
Partitioned database server with local and remote clients: 512 [1-64 000]
Unit of Measure: Pages (4 KB)
When Allocated: When an application starts
When Freed: When an application completes
This parameter specifies the average size of the shared memory area allocated for an application. There is one application control heap per connection per partition.
The application control heap is required primarily for sharing information between agents working on behalf of the same request.
This heap is also used to store descriptor information for declared temporary tables. The descriptor information for all declared temporary tables that have not been explicitly dropped is kept in this heap's memory and cannot be dropped until the declared temporary table is dropped.
Our recommendation is to start with the default value (128). You may have to set the value higher if you are running complex applications, if you have a system that contains a large number of database partitions, or if you use declared temporary tables. The amount of memory needed increases with the number of concurrently active declared temporary tables. A declared temporary table with many columns has a larger table descriptor size than a table with few columns, so having a large number of columns in an application's declared temporary tables also increases the demand on the application control heap. To set this heap to the default value, run:
db2 update database configuration for ldapdb2 using APP_CTL_HEAP_SZ 128
If the APP_CTL_HEAP_SZ is set to an inadequate value, the following error message is issued when you import data into a database from shape files:
GSE0214N An INSERT statement failed. SQLERROR = SQL0973N Not enough storage is available in the "APP_CTL_HEAP" heap to process the statement.
Application Heap Size configuration parameter - applheapsz
The information is:
Default [Range]
 – 32-bit Database server with local and remote clients: 256 [16 - 60 000]
 – 64-bit Database server with local and remote clients: 256 [16 - 60 000]
 – 32-bit Partitioned database server with local and remote clients: 64 [16 - 60 000]
 – 64-bit Partitioned database server with local and remote clients: 128 [16 - 60 000]
Unit of Measure: Pages (4 KB)
When Allocated: When an agent is initialized to do work for an application
When Freed: When an agent completes the work to be done for an application
This parameter defines the number of private memory pages available to be used by the database manager on behalf of a specific agent or subagent.
The heap is allocated when an agent or subagent is initialized for an application. The amount allocated will be the minimum amount needed to process the request given to the agent or subagent. As the agent or subagent requires more heap space to process larger SQL statements, the database manager will allocate memory as needed, up to the maximum specified by this parameter.
Our recommendation for most DB2 applications is an APPLHEAPSZ parameter value of at least 2048. Increase the value of this parameter if your applications receive an error indicating that there is not enough storage in the application heap.
The application heap (applheapsz) is allocated out of agent private memory.
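Following the pattern used for the other heaps in this section, the recommended value could be applied as shown below. The value 2048 is the recommendation above; adjust it for your own workload.

```shell
db2 update database configuration for ldapdb2 using APPLHEAPSZ 2048
```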
If the APPLHEAPSZ is set to an inadequate value, the following error message is issued when you try to enable a database for spatial operations:
GSE0009N Not enough space is available in DB2's application heap.
 
GSE0213N A bind operation failed. SQLERROR = SQL0001N Binding or precompilation did not complete successfully. SQLSTATE=00000.
Sort Heap Size configuration parameter - sortheap
The information is:
Default [Range]
 – 32-bit platforms: 256 [16 - 524 288]
 – 64-bit platforms: 256 [16 - 4 194 303]
Unit of Measure: Pages (4 KB)
When Allocated: As needed to perform sorts
When Freed: When sorting is complete
This parameter defines the maximum number of private memory pages to be used for private sorts, or the maximum number of shared memory pages to be used for shared sorts. If the sort is a private sort, then this parameter affects agent private memory. If the sort is a shared sort, then this parameter affects the database shared memory. Each sort has a separate sort heap that is allocated as needed, by the database manager. This sort heap is the area where data is sorted. If directed by the optimizer, a smaller sort heap than the one specified by this parameter is allocated using information provided by the optimizer.
Our recommendation is 2500.
When working with the sort heap, you should consider the following: Appropriate indexes can minimize the use of the sort heap.
Hash join buffers and dynamic bitmaps (used for index ANDing and Star Joins) use sort heap memory. Increase the size of this parameter when these techniques are used.
Increase the size of this parameter when frequent large sorts are required.
When increasing the value of this parameter, you should examine whether the sheapthres parameter in the database manager configuration file also needs to be adjusted.
The sort heap size is used by the optimizer in determining access paths. You should consider rebinding applications (using the REBIND command) after changing this parameter.
SQL5155W: The update completed successfully. The current value of SORTHEAP may adversely affect performance.
Explanation: The value of SORTHEAP is currently greater than half the value of the database manager configuration parameter SHEAPTHRES. This may cause performance to be less than optimal.
User Response: Increase the value of the database manager configuration parameter SHEAPTHRES and/or decrease the value of SORTHEAP so that SHEAPTHRES is at least twice as large as SORTHEAP.
A larger ratio is desirable in most cases. See the Administration Guide for recommendations on configuration parameter tuning.
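The SHEAPTHRES-to-SORTHEAP relationship can be sanity-checked before applying changes. This sketch uses the sortheap recommendation from this section (2500) and the UNIX 32-bit sheapthres default (20000):

```shell
# Verify SHEAPTHRES is at least twice SORTHEAP (avoids SQL5155W).
SORTHEAP=2500        # recommended sortheap from this section
SHEAPTHRES=20000     # default sheapthres on UNIX 32-bit platforms
if [ "$SHEAPTHRES" -ge $(( 2 * SORTHEAP )) ]; then
    echo "ratio ok"
else
    echo "increase SHEAPTHRES or decrease SORTHEAP"
fi
```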
SQL0955C: Sort memory cannot be allocated to process the statement. Reason code = reason-code.
Explanation:
Insufficient virtual memory is available to the database agent for sort processing, as indicated by the reason code:
1. Insufficient private process memory.
2. Insufficient shared memory in the database-wide shared memory area designated for sort processing.
The statement cannot be processed but other SQL statements may be processed.
User response: One or more of the following:
Decrease the value of the sort heap parameter (sortheap) in the corresponding database configuration file.
For reason code 1, increase the private virtual memory available, if possible. For example, on UNIX systems you can use the ulimit command to increase the maximum size of the data area for a process.
For reason code 2, increase the size of the database-wide shared memory area designated for sort processing. To increase the size of this area without affecting the sortheap threshold for private sorts, increase the value of the SHEAPTHRES_SHR database configuration parameter.
To increase both the size of the database-wide shared memory area designated for sort processing as well as the sortheap threshold for private sorting, increase the value of the SHEAPTHRES database manager configuration parameter and set SHEAPTHRES_SHR to 0.
SQL3537N: Sort memory could not be allocated during the execution of the LOAD utility.
Explanation: Insufficient process virtual memory is available to the LOAD utility for sort processing.
User Response: Terminate the application on receipt of this message. Ensure there is enough virtual memory available for sort processing.
Possible solutions include:
Disconnect all applications from the database and decrease the size of the sort heap parameter (sortheap) in the corresponding database configuration file.
Remove background processes and/or terminate other currently executing applications.
Increase the amount of virtual memory available.
Sort Heap Threshold configuration parameter - sheapthres
The information is:
Configuration Type: Database manager
Default [Range]
 – UNIX 32-bit platforms: 20 000 [250 -- 2 097 152]
 – Windows platforms: 10 000 [250 -- 2 097 152]
 – 64-bit platforms: 20 000 [250 -- 2 147 483 647]
 – Unit of Measure: Pages (4 KB)
For shared sorts, this parameter is a database-wide hard limit on the total amount of memory consumed by shared sorts at any given time. When this limit is reached, no further shared-sort memory requests will be allowed.
The Sort Heap Threshold parameter, as a database manager configuration parameter, applies across the entire DB2 instance. The only way to set this parameter to different values on different nodes or partitions is to create more than one DB2 instance. This requires managing different DB2 databases over different database partition groups, an arrangement that defeats many of the advantages of a partitioned database environment.
Recommendation: Ideally, you should set this parameter to a reasonable multiple of the largest sortheap parameter you have in your database manager instance. This parameter should be at least two times the largest sortheap defined for any database within the instance.
If you are doing private sorts and your system is not memory constrained, an ideal value for this parameter can be calculated using the following steps:
Calculate the typical sort heap usage for each database:
(typical number of concurrent agents running against the database)
multiplied by
(sortheap, as defined for that database)
Calculate the sum of the above results, which provides the total sort heap that could be used under typical circumstances for all databases within the instance.
You should use benchmarking techniques to tune this parameter to find the proper balance between sort performance and memory usage.
You can use the database system monitor to track the sort activity, using the post threshold sorts (post_threshold_sorts) monitor element. The parameter is set with the following command:
db2 update dbm cfg using sheapthres 20000
Statement Heap Size configuration parameter - stmtheap
The information is:
Default [Range]: 2048 [128 - 65 535]
Unit of Measure: Pages (4 KB)
When Allocated: For each statement during precompiling or binding
When Freed: When precompiling or binding of each statement is complete
The statement heap is used as a work space for the SQL compiler during compilation of an SQL statement. This parameter specifies the size of this work space.
This area does not stay permanently allocated, but is allocated and released for every SQL statement handled. Note that for dynamic SQL statements, this work area will be used during execution of your program; whereas, for static SQL statements, it is used during the bind process but not during program execution.
Recommendation: In most cases the default value of this parameter will be acceptable. If you have very large SQL statements and the database manager issues an error (that the statement is too complex) when it attempts to optimize a statement, you should increase the value of this parameter in regular increments (such as 256 or 1024) until the error situation is resolved. This is the command used to set it:
db2 update database configuration for ldapdb2 using stmtheap 2048
SQL0101N: The statement is too long or too complex.
Explanation: The statement could not be processed because it exceeds a system limit for either length or complexity, or because too many constraints or triggers are involved.
If the statement is one that creates or modifies a packed description, the new packed description may be too large for its corresponding column in the system catalogs.
Note that where character data conversions are performed for applications and databases running under different codepages, the result of the conversion may exceed the length limit.
User response: Either:
Break the statement up into shorter or less complex SQL statements.
Increase the size of the statement heap (stmtheap) in the database configuration file.
Reduce the number of check or referential constraints involved in the statement or reduce the number of indexes on foreign keys.
Reduce the number of triggers involved in the statement.
SQL0437W: Performance of this complex query may be sub-optimal. Reason code: reason-code.
Explanation: The statement may achieve sub-optimal performance since the complexity of the query requires resources that are not available or optimization boundary conditions were encountered.
The following is a list of reason codes:
1. The join enumeration method was altered due to memory constraints.
2. The join enumeration method was altered due to query complexity.
3. Optimizer cost underflow.
4. Optimizer cost overflow.
5. Query optimization class was too low.
6. Optimizer ignored an invalid statistic.
The statement will be processed.
User Response: One or more of the following:
Increase the size of the statement heap (stmtheap) in the database configuration file (Reason code 1).
Break the statement up into less complex SQL statements (Reason codes 1,2,3,4).
Ensure predicates do not over-specify the answer set (Reason code 3).
Change the current query optimization class to a lower value (Reason codes 1,2,4).
Issue Runstats for the tables involved in the query (Reason codes 3,4).
Change the current query optimization class to a higher value (Reason code 5).
Reissue RUNSTATS for both the tables involved in the query and their corresponding indexes, that is, use the AND INDEXES ALL clause so that table and index statistics are consistent (Reason code 6).
Package Cache Size configuration parameter - pckcachesz
The information is:
Default [Range]
 – 32-bit platforms: -1 [-1, 32 -- 128 000]
 – 64-bit platforms: -1 [-1, 32 -- 524 288]
Unit of Measure: Pages (4 KB)
When Allocated: When the database is initialized
When Freed: When the database is shut down
This parameter is allocated out of the database shared memory, and is used for caching of sections for static and dynamic SQL statements on a database. In a partitioned database system, there is one package cache for each database partition.
Caching packages allows the database manager to reduce its internal overhead by eliminating the need to access the system catalogs when reloading a package; or, in the case of dynamic SQL, eliminating the need for compilation. Sections are kept in the package cache until one of the following occurs:
The database is shut down.
The package or dynamic SQL statement is invalidated.
The cache runs out of space.
This caching of the section for a static or dynamic SQL statement can improve performance especially when the same statement is used multiple times by applications connected to a database. This is particularly important in a transaction processing application.
By taking the default (-1), the value used to calculate the page allocation is eight times the value specified for the maxappls configuration parameter. The exception to this occurs if eight times maxappls is less than 32. In this situation, the default value of -1 will set pckcachesz to 32.
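The default calculation just described can be sketched as follows. The maxappls value of 40 is purely hypothetical; substitute the value from your own database configuration.

```shell
# Compute the pckcachesz value implied by the default (-1) setting.
MAXAPPLS=40                       # hypothetical maxappls value
PCKCACHESZ=$(( 8 * MAXAPPLS ))    # default is eight times maxappls...
if [ "$PCKCACHESZ" -lt 32 ]; then # ...but never less than 32 pages
    PCKCACHESZ=32
fi
echo "$PCKCACHESZ"
```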
Recommendation: When tuning this parameter, you should consider whether the extra memory being reserved for the package cache might be more effective if it was allocated for another purpose, such as the buffer pool or catalog cache. For this reason, you should use benchmarking techniques when tuning this parameter. If the cache is too large, memory is wasted holding copies of the initial sections.
The following monitor elements can help you determine whether you should adjust this configuration parameter:
pkg_cache_lookups (package cache lookups)
pkg_cache_inserts (package cache inserts)
pkg_cache_size_top (package cache high water mark)
pkg_cache_num_overflows (package cache overflows)
 
Note: The package cache is a working cache, so you cannot set this parameter to zero. There must be sufficient memory allocated in this cache to hold all sections of the SQL statements currently being executed. If there is more space allocated than currently needed, then sections are cached. These sections can simply be executed the next time they are needed without having to load or compile them.
The limit specified by the pckcachesz parameter is a soft limit. This limit may be exceeded, if required, if memory is still available in the database shared set. You can use the pkg_cache_size_top monitor element to determine the largest that the package cache has grown, and the pkg_cache_num_overflows monitor element to determine how many times the limit specified by the pckcachesz parameter has been exceeded. The command for setting this is:
db2 update database configuration for ldapdb2 using pckcachesz 380
Statistics Heap Size configuration parameter - stat_heap_sz
The information is:
Default [Range]: 4384 [1096 - 524 288]
Unit of Measure: Pages (4 KB)
When Allocated: When the RUNSTATS utility is started
When Freed: When the RUNSTATS utility is completed
This parameter indicates the maximum size of the heap used in collecting statistics using the RUNSTATS command.
Recommendation: The default value is appropriate when no distribution statistics are collected or when distribution statistics are only being collected for relatively narrow tables. The minimum value is not recommended when distribution statistics are being gathered, as only tables containing 1 or 2 columns will fit in the heap.
You should adjust this parameter based on the number of columns for which statistics are being collected. Narrow tables, with relatively few columns, require less memory for distribution statistics to be gathered. Wide tables, with many columns, require significantly more memory. If you are gathering distribution statistics for tables which are very wide and require a large statistics heap, you may wish to collect the statistics during a period of low system activity so you do not interfere with the memory requirements of other users.
Maximum Percent of Lock List Before Escalation config parameter - maxlocks
The information is:
Default [Range]
UNIX: 10 [1 - 100]
Windows: 22 [1 - 100]
Unit of Measure: Percentage
Lock escalation is the process of replacing row locks with table locks, reducing the number of locks in the list. This parameter defines a percentage of the lock list held by an application that must be filled before the database manager performs escalation. When the number of locks held by any one application reaches this percentage of the total lock list size, lock escalation will occur for the locks held by that application. Lock escalation also occurs if the lock list runs out of space.
The database manager determines which locks to escalate by looking through the lock list for the application and finding the table with the most row locks. If after replacing these with a single table lock, the maxlocks value is no longer exceeded, lock escalation will stop. If not, it will continue until the percentage of the lock list held is below the value of maxlocks. The maxlocks parameter multiplied by the maxappls parameter cannot be less than 100.
Recommendation: The following formula allows you to set maxlocks to allow an application to hold twice the average number of locks:
maxlocks = 2 * 100 / maxappls
Where 2 is used to achieve twice the average and 100 represents the largest percentage value allowed. If you have only a few applications that run concurrently, you could use the following formula as an alternative to the first formula:
maxlocks = 2 * 100 / (average number of applications running concurrently)
One of the considerations when setting maxlocks is to use it in conjunction with the size of the lock list (locklist). The actual limit of the number of locks held by an application before lock escalation occurs is:
maxlocks * locklist * 4096 / (100 * 36) on a 32-bit system
maxlocks * locklist * 4096 / (100 * 56) on a 64-bit system
Where 4096 is the number of bytes in a page, 100 is the largest percentage value allowed for maxlocks, and 36 is the number of bytes per lock on a 32-bit system, and 56 is the number of bytes per lock on a 64-bit system. If you know that one of your applications requires 1 000 locks, and you do not want lock escalation to occur, then you should choose values for maxlocks and locklist in this formula so that the result is greater than 1 000. (Using 10 for maxlocks and 100 for locklist, this formula results in greater than the 1 000 locks needed.)
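The escalation formula above can be evaluated directly. This sketch uses the 32-bit figures and the maxlocks and locklist values from the worked example (10 and 100):

```shell
# Locks an application can hold before escalation (32-bit system).
MAXLOCKS=10          # percent of the lock list per application
LOCKLIST=100         # lock list size in 4 KB pages
BYTES_PER_LOCK=36    # 36 bytes per lock on 32-bit (56 on 64-bit)
LOCK_LIMIT=$(( MAXLOCKS * LOCKLIST * 4096 / (100 * BYTES_PER_LOCK) ))
echo "$LOCK_LIMIT"   # comfortably above the 1 000 locks required
```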
If maxlocks is set too low, lock escalation happens when there is still enough lock space for other concurrent applications. If maxlocks is set too high, a few applications can consume most of the lock space, and other applications will have to perform lock escalation. The need for lock escalation in this case results in poor concurrency.
You may use the database system monitor to help you track and tune this configuration parameter.
SQL5135N: The settings of the maxlocks and maxappls configuration parameters do not use all of the locklist space.
Explanation: The number of active processes (maxappls) times the maximum percentage of lock list space for each application (maxlocks) must be greater than or equal to 100. That is:
maxappls * maxlocks >= 100
This ensures that all of the allocated locklist space can be used.
User Response: Increase the setting for maxappls, maxlocks, or both.
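The SQL5135N constraint is easy to verify before committing new values. The maxappls value of 40 is hypothetical; the maxlocks value of 5 follows from the formula earlier in this section (2 * 100 / 40):

```shell
# Check that maxappls * maxlocks covers the whole lock list (>= 100).
MAXAPPLS=40   # hypothetical number of active applications
MAXLOCKS=5    # from maxlocks = 2 * 100 / maxappls
if [ $(( MAXAPPLS * MAXLOCKS )) -ge 100 ]; then
    echo "locklist space fully usable"
else
    echo "increase maxappls or maxlocks"
fi
```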
Size of Log Files configuration parameter - logfilsiz
The information is:
Default [Range]
 – UNIX: 1000 [4 -- 262 144]
 – Windows: 250 [4 -- 262 144]
Unit of Measure: Pages (4 KB)
This parameter defines the size of each primary and secondary log file. The size of these log files limits the number of log records that can be written to them before they become full and a new log file is required.
The use of primary and secondary log files as well as the action taken when a log file becomes full are dependent on the type of logging that is being performed:
Circular logging: A primary log file can be reused when the changes recorded in it have been committed. If the log file size is small and applications have processed a large number of changes to the database without committing the changes, a primary log file can quickly become full. If all primary log files become full, the database manager will allocate secondary log files to hold the new log records.
Log retention logging: When a primary log file is full, the log is archived and a new primary log file is allocated.
Recommendation: You must balance the size of the log files with the number of primary and secondary log files.
The value of logfilsiz should be increased if the database has a large number of update, delete, or insert transactions running against it, which cause the log files to fill very quickly.
 
Note: The upper limit of log file size, combined with the upper limit of the number of log files (logprimary + logsecond), gives an upper limit of 256 GB of active log space.
A log file that is too small can affect system performance because of the overhead of archiving old log files, allocating new log files, and waiting for a usable log file.
The value of the logfilsiz should be reduced if disk space is scarce, since primary logs are preallocated at this size.
A log file that is too large can reduce your flexibility when managing archived log files and copies of log files, since some media may not be able to hold an entire log file.
If you are using log retention, the current active log file is closed and truncated when the last application disconnects from a database. When the next connection to the database occurs, the next log file is used. Therefore, if you understand the logging requirements of your concurrent applications you may be able to determine a log file size which will not allocate excessive amounts of wasted space.
Recommendation: For most enterprise operations the default is not enough; set this parameter to 10000. This can be done with the following command:
db2 update database configuration for ldapdb2 using LOGFILSIZ 10000
SQL1762N: Unable to connect to database because there is not enough space to allocate active log files.
Explanation: There is not enough disk space to allocate active log files. Possible reasons include the following.
There is insufficient space available on the device used to store the recovery logs.
If userexits are enabled, the userexit program may be failing due to an incorrect path, incorrect install directory, sharing violation, or other problem.
User Response: Based on the cause: Ensure that there is sufficient space on the device for the primary logs, as DB2 may require extra space to allocate new logs so that the database will start with at least LOGPRIMARY log files. Do not delete recovery logs to free space, even if they appear inactive.
Ensure the userexit program is operating correctly by manually invoking it. Review the instructions provided in the sample userexit source code for compiling and installing the userexit program. Ensure that the archive destination path exists.
As a last resort, try reducing the values for LOGPRIMARY and/or LOGFILSIZ database configuration parameters so that a smaller set of active log files are used. This will reduce the requirement for disk space.
Reissue the connect statement after determining and correcting the problem.
Number of Primary Log Files config parameter - logprimary
The information is:
Default [Range]: 3 [2 - 256]
Unit of Measure: Counter
When Allocated: When the database is created; when the log is moved to a different location (which occurs when the logpath parameter is updated); following an increase in the value of this parameter (logprimary), during the next database connection after all users have disconnected; and when a log file has been archived and a new log file is allocated (the logretain or userexit parameter must be enabled). If the logfilsiz parameter has been changed, the active log files are resized during the next database connection after all users have disconnected.
When Freed: Not freed unless this parameter is decreased. If it is decreased, unneeded log files are deleted during the next connection to the database.
The primary log files establish a fixed amount of storage allocated to the recovery log files. This parameter allows you to specify the number of primary log files to be preallocated.
Under circular logging, the primary logs are used repeatedly in sequence. That is, when a log is full, the next primary log in the sequence is used if it is available. A log is considered available if all units of work with log records in it have been committed or rolled-back. If the next primary log in sequence is not available, then a secondary log is allocated and used. Additional secondary logs are allocated and used until the next primary log in the sequence becomes available or the limit imposed by the logsecond parameter is reached. These secondary log files are dynamically deallocated as they are no longer needed by the database manager.
The number of primary and secondary log files must comply with the following:
If logsecond has a value of -1, logprimary <= 256.
If logsecond does not have a value of -1, (logprimary + logsecond) <= 256.
Recommendation: The value chosen for this parameter depends on a number of factors, including the type of logging being used, the size of the log files, and the type of processing environment (for example, length of transactions and frequency of commits).
Increasing this value will increase the disk requirements for the logs because the primary log files are preallocated during the very first connection to the database.
If you find that secondary log files are frequently being allocated, you may be able to improve system performance by increasing the log file size (logfilsiz) or by increasing the number of primary log files.
For databases that are not frequently accessed, in order to save disk storage, set the parameter to 2. For databases enabled for roll-forward recovery, set the parameter larger to avoid the overhead of allocating new logs almost immediately.
You may use the database system monitor to help you size the primary log files. Observation of the following monitor values over a period of time will aid in better tuning decisions, as average values may be more representative of your ongoing requirements.
sec_log_used_top (maximum secondary log space used)
tot_log_used_top (maximum total log space used)
sec_logs_allocated (secondary logs allocated currently)
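One way to read these monitor elements is from a database snapshot, for example with `db2 get snapshot for database on ldapdb2` piped through grep. The sketch below parses a sample of such output; the figures in the sample are illustrative only, not real measurements.

```shell
# Sketch: pull the log-related monitor elements out of a database snapshot.
# On a live system you would use the real command instead of this sample:
#   db2 get snapshot for database on ldapdb2 | grep -iE 'log space|logs allocated'
cat <<'EOF' > /tmp/snapshot_sample.txt
Secondary logs allocated currently        = 2
Maximum secondary log space used (Bytes)  = 8192000
Maximum total log space used (Bytes)      = 52428800
EOF
grep -iE 'log space|logs allocated' /tmp/snapshot_sample.txt
```

A consistently nonzero "Secondary logs allocated currently" over time suggests increasing logfilsiz or logprimary, as discussed above.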
Space requirements for log files. You will require 32 KB of space for log control files. You will also need at least enough space for your active log configuration, which you can calculate as:
(logprimary + logsecond) * (logfilsiz + 2) * 4096
Where logprimary is the number of primary log files, defined in the database configuration file; logsecond is the number of secondary log files, defined in the database configuration file (in this calculation, logsecond cannot be set to -1; when logsecond is set to -1, you are requesting infinite active log space); logfilsiz is the number of pages in each log file, defined in the database configuration file; 2 is the number of header pages required for each log file; and 4096 is the number of bytes in one page. If the database is enabled for circular logging, the result of this formula will provide a sufficient amount of disk space.
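The space requirement above can be sketched numerically. Using the logfilsiz of 10000 pages recommended earlier and the logprimary=5 and logsecond=50 values set elsewhere in this chapter (example values; adjust for your configuration):

```shell
# Active log space = (logprimary + logsecond) * (logfilsiz + 2) * 4096 bytes
LOGPRIMARY=5
LOGSECOND=50      # must not be -1 for this calculation
LOGFILSIZ=10000   # pages per log file
BYTES=$(( (LOGPRIMARY + LOGSECOND) * (LOGFILSIZ + 2) * 4096 ))
echo "active log space: $BYTES bytes (~$(( BYTES / 1024 / 1024 )) MB), plus 32 KB for log control files"
```

With these settings the active log configuration needs a little over 2 GB of disk space, before any archived logs.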
If the database is enabled for roll-forward recovery, special log space requirements should be taken into consideration.
With the logretain configuration parameter enabled, the log files will be archived in the log path directory. The online disk space will eventually fill up, unless you move the log files to a different location.
With the userexit configuration parameter enabled, a user exit program moves the archived log files to a different location. Extra log space is still required to allow for:
Online archived logs that are waiting to be moved by the user exit program
New log files being formatted for future use
If the database is enabled for infinite logging (that is, you set logsecond to -1), the userexit configuration parameter must be enabled, so you will have the same disk space considerations. DB2(R) will keep at least the number of active log files specified by logprimary in the log path, so you should not use the value of -1 for logsecond in the above formula. Ensure you provide extra disk space to allow the delay caused by archiving log files.
If you are mirroring the log path, you will need to double the estimated log file space requirements.
See logfilsiz above for any errors that might come up other than the one below.
SQL5101N: The entries in the database configuration file define log file parameters (logprimary and logsecond) that are not in the valid range.
Explanation: The requested change would cause the total number of logfiles to be out of range. The following condition must always be true:
logprimary + logsecond <= 128
The requested change is not made.
User response: Do one or both of the following:
Decrease the number of primary log files.
Decrease the number of secondary log files.
Number of Secondary Log Files config parameter - logsecond
The information is:
Default [Range]: 2 [-1; 0 - 254]
Unit of Measure: Counter
When Allocated: As needed when logprimary is insufficient (see detail below)
When Freed: Over time as the database manager determines they will no longer be required.
This parameter specifies the number of secondary log files that are created and used for recovery log files (only as needed). When the primary log files become full, the secondary log files (of size logfilsiz) are allocated one at a time as needed, up to a maximum number as controlled by this parameter. An error code will be returned to the application, and the database will be shut down, if more secondary log files are required than are allowed by this parameter.
If you set logsecond to -1, the database is configured with infinite active log space. There is no limit on the size or the number of in-flight transactions running on the database. If you set logsecond to -1, you still use the logprimary and logfilsiz configuration parameters to specify how many log files DB2 should keep in the active log path. If DB2 needs to read log data from a log file, but the file is not in the active log path, DB2 will invoke the userexit program to retrieve the log file from the archive to the active log path. (DB2 will retrieve the files to the overflow log path, if you have configured one.) Once the log file is retrieved, DB2 will cache this file in the active log path so that other reads of log data from the same file will be fast. DB2 will manage the retrieval, caching, and removal of these log files as required.
If your log path is a raw device, you must configure the overflowlogpath configuration parameter in order to set logsecond to -1.
By setting logsecond to -1, you will have no limit on the size of the unit of work or the number of concurrent units of work. However, rollback (both at the savepoint level and at the unit of work level) could be very slow due to the need to retrieve log files from the archive. Crash recovery could also be very slow for the same reason. DB2 will write a message to the administration notification log to warn you that the current set of active units of work has exceeded the primary log files. This is an indication that rollback or crash recovery could be extremely slow.
To set logsecond to -1 the userexit configuration parameter must be set to yes.
Recommendation: Use secondary log files for databases that have periodic needs for large amounts of log space. For example, an application that is run once a month may require log space beyond that provided by the primary log files. Since secondary log files do not require permanent file space, they are advantageous in this type of situation. It is best to set this parameter high, or to -1, to allow for growth when needed, such as when you are loading a large number of users or running large replication loads. One thing you do not want to do is run out of log space, or your database can be damaged with lost or corrupted data. You can change this setting with the following command:
db2 update database configuration for ldapdb2 using LOGSECOND 50
Change the Database Log Path config parameter - newlogpath
Default [Range]: Null [any valid path or device]
This parameter allows you to specify a string of up to 242 bytes to change the location where the log files are stored. The string can point to either a path name or to a raw device. If the string points to a path name, it must be a fully qualified path name, not a relative path name.
 
Note: In a partitioned database environment, the node number is automatically appended to the path. This is done to maintain the uniqueness of the path in multiple logical node configurations.
If you want to use replication, and your log path is a raw device, the overflowlogpath configuration parameter must be configured.
To specify a device, specify a string that the operating system identifies as a device. For example:
On Windows: \\.\d: or \\.\PhysicalDisk5
 
Note: You must have Windows NT Version 4.0 with Service Pack 3 or later installed to be able to write logs to a device.
On UNIX-based platforms: /dev/rdblog8
 
Note: You can only specify a device on AIX, Windows 2000, Windows NT, Solaris Operating Environment, HP-UX, and Linux platforms.
The new setting does not become the value of logpath until both of the following occur:
The database is in a consistent state, as indicated by the database_consistent parameter.
All users are disconnected from the database.
When the first new connection is made to the database, the database manager will move the logs to the new location specified by logpath. There might be log files in the old log path. These log files might not have been archived. You might need to archive these log files manually. Also, if you are running replication on this database, replication might still need the log files from before the log path change. If the database is configured with the User Exit Enable (userexit) database configuration parameter set to Yes, and if all the log files have been archived either by DB2 automatically or by yourself manually, then DB2 will be able to retrieve the log files to complete the replication process. Otherwise, you can copy the files from the old log path to the new log path.
If logpath or newlogpath specifies a raw device as the location where the log files are stored, mirror logging, as indicated by mirrorlogpath, is not allowed. If logpath or newlogpath specifies a file path as the location where the log files are stored, mirror logging is allowed and mirrorlogpath must also specify a file path.
Recommendation: Ideally, the log files should be on a physical disk that does not have high I/O activity. For instance, avoid putting the logs on the same disk as the operating system or high-volume databases. This allows for efficient logging activity with a minimum of overhead, such as waiting for I/O. By default the log files are placed on the same disk as the LDAP database instance, so you should move them to another disk.
You may use the database system monitor to track the number of I/Os related to database logging.
The monitor elements log_reads (number of log pages read) and log_writes (number of log pages written) return the amount of I/O activity related to database logging. You can use an operating system monitor tool to collect information about other disk I/O activity, then compare the two types of I/O activity.
With LDAP we highly recommend that you keep your log files on a volume or drive other than the one holding the LDAP database instance. Use the following command to set the path to the DB2 log file directory:
UPDATE DATABASE CONFIGURATION FOR database_alias USING NEWLOGPATH path
 
Note: Be sure the database instance owner has write access to the specified path, or the command fails.
SQL0993W: The new path to the log (newlogpath) in the database configuration file is not valid.
Explanation: The path to the log file is not valid for one of the following reasons:
The path does not exist.
A file with the correct name was found in the specified path, but it is not a log file for this database.
The database manager instance id does not have permission to access the path or a log file.
The requested change is not made.
User Response: To change the path to the log file, submit a database configuration command with a valid value.
16.5.10 Configuration script
This is a batch file for a Windows server; you can change it as needed:
# Restrictions: This must be run under a db2 command window in the
# sqllib\bin directory. This script must be run under the context of the
# ldapdb2 user. It does not require write authority to the current directory.
db2 get database configuration for ldapdb2 > c:\db2cfgbefore.out
db2 update database configuration for ldapdb2 using SORTHEAP 2500
db2 update database configuration for ldapdb2 using MAXLOCKS 80
db2 update database configuration for ldapdb2 using MINCOMMIT 1
db2 update database configuration for ldapdb2 using UTIL_HEAP_SZ 5000
db2 update database configuration for ldapdb2 using LOGFILSIZ 10000
db2 update database configuration for ldapdb2 using LOGPRIMARY 5
db2 update database configuration for ldapdb2 using LOGSECOND 50
db2 update database configuration for ldapdb2 using APPLHEAPSZ 2048
db2 connect to ldapdb2
# mem usage: 259.375 MB for servers that have 1 gig of memory
db2 alter bufferpool ibmdefaultbp size 49800
db2 alter bufferpool ldapbp size 2075
# mem usage: 1037.5 MB for servers that have 3 gig of memory
#db2 alter bufferpool ibmdefaultbp size 199200
#db2 alter bufferpool ldapbp size 8300
db2 terminate
db2 force applications all
sleep 1
db2stop
db2start
db2 connect to ldapdb2
db2 get database configuration for ldapdb2 > c:\db2cfgafter.out
db2 "select bpname,npages,pagesize from syscat.bufferpools"
db2 terminate
This file can be adapted to fit a UNIX system with some minor changes. It produces a report of how the database was configured before the changes were made (c:\db2cfgbefore.out) and another of how it looks after the changes were made (c:\db2cfgafter.out).
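A UNIX adaptation of the batch file might look like the following sketch. The /tmp output paths are assumptions, the bufferpool sizes are the 1 GB figures from the Windows example, and the script is written to a file here so it can be reviewed before running it as the ldapdb2 instance owner.

```shell
# Sketch of a UNIX equivalent of the Windows script above; adjust
# values and paths for your system before running.
cat > /tmp/tune_ldapdb2.sh <<'EOF'
#!/bin/sh
db2 get database configuration for ldapdb2 > /tmp/db2cfgbefore.out
db2 update database configuration for ldapdb2 using SORTHEAP 2500
db2 update database configuration for ldapdb2 using MAXLOCKS 80
db2 update database configuration for ldapdb2 using MINCOMMIT 1
db2 update database configuration for ldapdb2 using UTIL_HEAP_SZ 5000
db2 update database configuration for ldapdb2 using LOGFILSIZ 10000
db2 update database configuration for ldapdb2 using LOGPRIMARY 5
db2 update database configuration for ldapdb2 using LOGSECOND 50
db2 update database configuration for ldapdb2 using APPLHEAPSZ 2048
db2 connect to ldapdb2
# bufferpool sizes for a server with 1 GB of memory (see the Windows script)
db2 alter bufferpool ibmdefaultbp size 49800
db2 alter bufferpool ldapbp size 2075
db2 terminate
db2 force applications all
sleep 1
db2stop
db2start
db2 connect to ldapdb2
db2 get database configuration for ldapdb2 > /tmp/db2cfgafter.out
db2 "select bpname,npages,pagesize from syscat.bufferpools"
db2 terminate
EOF
chmod +x /tmp/tune_ldapdb2.sh
```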
16.6 Directory size
It is very important that you do your homework and look at what size your directory is now and what it will be in the future. Performance degrades significantly as your database grows, so it becomes necessary to readjust the sizes of the LDAP caches and DB2 buffer pools from time to time. There is also directory monitoring and maintenance to be done periodically, depending on how often you are adding or changing entries and whether you are using replication.
16.7 Optimization and organization
DB2 uses a sophisticated set of algorithms to optimize the access to data stored in a database. These algorithms depend upon many factors, including the organization of the data in the database, and the distribution of that data in each table. Distribution of data is represented by a set of statistics maintained by the database manager.
In addition, IBM Tivoli Directory Server creates a number of indexes for tables in the database. These indexes are used to minimize the data accessed in order to locate a particular row in a table.
In a read-only environment, the distribution of the data changes very little. However, with updates and additions to the database, it is not uncommon for the distribution of the data to change significantly. Similarly, it is quite possible for data in tables to become ordered in an inefficient manner.
To remedy these situations, DB2 provides tools to help optimize the access to data by updating the statistics and to reorganize the data within the tables of the database.
16.7.1 Optimization
Optimizing the database updates statistics related to the data tables, which improves performance and query speed. Optimize the database periodically or after heavy database updates (for example, after importing database entries). The Optimize database task in the IBM Tivoli Directory Server Configuration Tool uses the DB2 runstats command to update statistical information used by the query optimizer for all the LDAP tables.
 
Note: The reorgchk command also updates statistics. If you are planning to do a reorgchk, optimizing the database is unnecessary.
Optimize the database using the Configuration Tool
To do this:
1. Start the Configuration Tool by typing ldapxcfg on the command line.
2. Click Optimize database on the left side of the window.
3. On the Optimize database window, click Optimize.
After a message displays indicating the database was successfully optimized, you must restart the server for the changes to take effect.
Optimize the database using the command line
Run the following command from the db2 command line. Each command needs to be all on one line:
DB2 RUNSTATS ON TABLE table_name AND DETAILED INDEXES ALL SHRLEVEL REFERENCE
Run the following commands for more detailed runstats that might improve performance (remember that each of these commands must be on a single line):
DB2 RUNSTATS ON TABLE table_name WITH DISTRIBUTION AND DETAILED INDEXES ALL SHRLEVEL REFERENCE
 
DB2 RUNSTATS ON TABLE ldapdb2.objectclass WITH DISTRIBUTION AND DETAILED INDEXES ALL SHRLEVEL REFERENCE
Where table_name is the name of the table. You can use ALL for all tables.
16.7.2 reorgchk and reorg
Tuning the organization of the data in DB2 using the reorgchk and reorg commands is important for optimal performance. The reorgchk command updates statistical information to the DB2 optimizer to improve performance, and reports statistics on the organization of the database tables.
The reorg command, using the data generated by reorgchk, reorganizes table spaces to improve access performance and reorganizes indexes so that they are more efficiently clustered. The reorgchk and reorg commands can improve both search and update operation performance.
 
Note: Tuning organizes the data on disk in a sorted order. Sorting the data on disk is beneficial only when accesses occur in a sorted order, which is not typically the case. For this reason, organizing the table data on disk typically yields little change in performance.
Performing a reorgchk
After approximately 10,000 updates have been performed against DB2, table indexes can become sub-optimal and performance can degrade. The db2 reorgchk command is one of the most important and most often overlooked, because it is not a one-time tuning item. As updates are performed on the DB2 database, the statistical information about the tables becomes out of date. The db2 reorgchk command updates the important statistics that are used by the DB2 optimizer. To perform a DB2 reorgchk, do the following:
db2 connect to ldapdb2
db2 reorgchk update statistics on table all
Where ldapdb2 is the name of your database.
To generate a reorgchk output file (recommended if you plan to run the reorg command), add the name of the file to the end of the command on a UNIX OS, for example:
db2 reorgchk update statistics on table all > /tmp/reorgchk.out
For Windows you can create a batch file like the following one, called reorgchk.bat; change it as needed to fit your environment. Run it on the db2 command line:
db2 connect to ldapdb2
db2 reorgchk update statistics on table all > c:\reorgchk.out
db2 terminate
The output looks like the following:
E:\PROGRA~1\SQLLIB\BIN>reorgchk
E:\PROGRA~1\SQLLIB\BIN>db2 connect to ldapdb2
Database Connection Information
Database server = DB2/NT 7.2.8
SQL authorization ID = ADMINIST...
Local database alias = LDAPDB2
E:\PROGRA~1\SQLLIB\BIN>db2 reorgchk update statistics on table all 1>c:\reorgchk.out
E:\PROGRA~1\SQLLIB\BIN>db2 terminate
DB20000I The TERMINATE command completed successfully.
E:\PROGRA~1\SQLLIB\BIN>
Performing a reorg
After you have generated organizational information about the database using reorgchk, the next step in reorganization is finding the tables and indexes that need reorganizing and attempting to reorganize them. This can take a long time. The time it takes to perform the reorganization process increases as the DB2 database size increases.
In general, reorganizing a table takes more time than running statistics. Therefore, performance might be improved significantly by running statistics first.
Check the output of the reorgchk in the file c:\reorgchk.out if you ran the script above.
If you look at the output and see an asterisk (*) in the last column, you should do a reorg of that table or index. To tell what is a table and what is an index, just look at the output: it has two sections, the first covering tables and the second covering indexes.
Generally speaking, because most data in LDAP is accessed by index, reorganizing tables is usually not as beneficial as reorganizing indexes.
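Filtering for the flagged rows can be sketched with grep: any line whose REORG column contains an asterisk is a candidate. The sample file below stands in for real reorgchk output; on a live system you would grep c:\reorgchk.out (or /tmp/reorgchk.out on UNIX) instead.

```shell
# Sketch: list only the reorgchk rows flagged with '*' in the REORG column.
# Sample data; on a real system use:  grep '\*' /tmp/reorgchk.out
cat <<'EOF' > /tmp/reorgchk_sample.out
SYSIBM  SYSINDEXES          282  90 17 29 184710 31 100  58 *-*
LDAPDB2 DESCRIPTION       32516 216  3  6  32516 99  51 175 ---
LDAPDB2 RUSER_BOBCS_EMPLID 19430 148  3 14 19430  2  70 129 *-*
EOF
grep '\*' /tmp/reorgchk_sample.out
```

Here two of the three rows carry an asterisk and would be reorganized; the unflagged DESCRIPTION row would be left alone.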
Reorgchk output showing a table that needs to be reorganized
The following is example output from the reorgchk that shows a table that needs to be reorganized:
SYSIBM SYSINDEXES 282 90 17 29 184710 31 100 58 *-*
Reorgchk output showing an index that needs to be reorganized
The following is example output from the reorgchk command that shows an index that needs to be reorganized:
Table: LDAPDB2.ACLPROP
LDAPDB2 ACLPROP_INDEX 63982 231 3 5 63982 100 87 101 *--
Table: LDAPDB2.DESCRIPTION
LDAPDB2 DESCRIPTION 32516 216 3 6 32516 99 51 175 --*
Table: LDAPDB2.USER_BOBCS_EMPLID
LDAPDB2 RUSER_BOBCS_EMPLID 19430 148 3 14 19430 2 70 129 *-*
Procedure to perform a reorganization using the reorg command
Follow these steps to perform a reorganization using the reorg command.
Open up a db2 command window.
Enter the following commands using the examples from above output:
db2 connect to ldapdb2
db2 reorg table SYSIBM.SYSINDEXES
db2 reorg table LDAPDB2.ACLPROP index LDAPDB2.ACLPROP_INDEX
db2 reorg table LDAPDB2.DESCRIPTION index LDAPDB2.DESCRIPTION
db2 reorg table LDAPDB2.USER_BOBCS_EMPLID index LDAPDB2.RUSER_BOBCS_EMPLID
The output looks like this:
E:\PROGRA~1\SQLLIB\BIN>db2 connect to ldapdb2
Database Connection Information
Database server = DB2/NT 7.2.8
SQL authorization ID = ADMINIST...
Local database alias = LDAPDB2
E:\PROGRA~1\SQLLIB\BIN>db2 reorgchk update statistics on table all > e:\migration\reorgchk.out
E:\PROGRA~1\SQLLIB\BIN>db2 reorg table SYSIBM.SYSINDEXES
DB20000I The REORG TABLE command completed successfully.
E:\PROGRA~1\SQLLIB\BIN>db2 reorg table LDAPDB2.ACLPROP index LDAPDB2.ACLPROP_INDEX
DB20000I The REORG TABLE command completed successfully.
E:\PROGRA~1\SQLLIB\BIN>db2 reorg table LDAPDB2.DESCRIPTION index LDAPDB2.DESCRIPTION
DB20000I The REORG TABLE command completed successfully.
E:\PROGRA~1\SQLLIB\BIN>db2 reorg table LDAPDB2.USER_BOBCS_EMPLID index LDAPDB2.RUSER_BOBCS_EMPLID
DB20000I The REORG TABLE command completed successfully.
Keep in mind that reorgchk needs to be run periodically. For example, reorgchk might need to be run after a large number of updates have been performed.
 
Note: LDAP tools such as ldapadd, ldif2db, and bulkload can potentially do large numbers of updates that require a reorgchk. The performance of the database should be monitored and a reorgchk performed when performance starts to degrade.
A reorgchk must be performed on all LDAP replicas because each replica uses a separate database. The LDAP replication process does not include the propagation of database optimizations.
After you reorg all the tables and indexes that needed it, run reorgchk again. Its output can then be used to determine whether the reorganization worked and whether it flagged other tables and indexes that need reorganizing.
Some guidelines for performing a reorganization are:
If the number on the column that has an asterisk is close to the recommended value described in the header of each section and one reorganization attempt has already been done, you can probably skip a reorganization on that table or index.
In the table LDAPDB2.LDAP_ENTRY there exists a LDAP_ENTRY_TRUNC index and a SYSIBM.SQL index. Preference should be given to the SYSIBM.SQL index if attempts to reorganize them seem to alternate between one or the other needing reorganization.
Reorganize all the attributes that you want to use in searches. In most cases you will want to reorganize to the forward index, but in cases with searches beginning with ‘*’, reorganize to the reverse index. For example:
Table: LDAPDB2.SECUUID
LDAPDB2 RSECUUID <- This is a reverse index
LDAPDB2 SECUUID <- This is a forward index
LDAPDB2 SECUUIDI <- This is an update index
16.7.3 Indexes
Indexing results in a considerable reduction in the amount of time it takes to locate requested data. For this reason, it can be very beneficial from a performance standpoint to index all attributes used in searches. You can use the audit log to find out what attributes are being used in searches. Then you will need to use the following to find out if that attribute is indexed.
Use the following DB2 commands to verify that a particular index is defined. In the following example, the index being checked is for the attribute principalName:
db2 connect to database_name
db2 list tables for all | grep -i principalName
db2 describe indexes for table database_name.principalName
Where database_name is the name of your database.
If the second command fails or the last command does not return three entries, the index is not properly defined. The last command should return results similar to Example 16-2.
Example 16-2 Showing defined indexes
Index Schema Index Name Unique Rule Number of Columns
 
LDAPDB2 PRINCIPALNAMEI D 1
LDAPDB2 PRINCIPALNAME D 2
LDAPDB2 RPRINCIPALNAME D 2
 
3 record(s) selected
To have IBM Tivoli Directory Server create an index for an attribute the next time the server is started, do one of the following:
To create an index using the Web Administration Tool:
a. Expand Schema management in the navigation area, and click Manage attributes.
b. Click Edit attribute.
c. On the IBM extensions tab, select the Equality check box under Indexing rules, and you can check for a number of types of indexes from this screen.
To create an index from the command line, issue the following command:
ldapmodify -f /ldap/etc/addindex.ldif
The addindex.ldif file should look like this:
 
Note: Make sure that there is a “-” between each attribute that is being changed on the same DN, and that there is no blank line between lines of the same DN. Also, any objectclass or attributetypes value needs to be entirely on one line in the document. There should be only one blank line between DNs.
dn: cn=schema
changetype: modify
replace: attributetypes
attributetypes: ( 1.3.18.0.2.4.318 NAME ( 'principalName' 'principal' ) DESC
'A naming attribute that may be used to identify eUser object entries.' EQUALITY
1.3.6.1.4.1.1466.109.114.2 ORDERING 2.5.13.3 SUBSTR 2.5.13.4 SYNTAX
1.3.6.1.4.1.1466.115.121.1.15 USAGE userApplications )
-
replace: ibmattributetypes
ibmattributetypes: ( 1.3.18.0.2.4.318 DBNAME( 'principalName' 'principalName' )
ACCESS-CLASS normal LENGTH 256 EQUALITY ORDERING SUBSTR APPROX )
16.7.4 Distributing the database across multiple physical disks
As the database grows, it becomes necessary and desirable to distribute the database across multiple physical disk drives. You can achieve better performance by spreading entries across multiple disks. In terms of performance, one 20 GB disk is not as good as two 10 GB disks. The following sections describe how to configure DB2 to distribute the ldapdb2 database across multiple disks.
IBM Directory tablespaces
When IBM Directory creates a database for the directory, it uses the db2 create database command to create a database named ldapdb2. IBM Directory Server creates this database with four System Managed Space (SMS) tablespaces. You can view the tablespaces by using the following DB2 commands run under the context of the DB2 instance owner, typically the ldapdb2 user:
db2 connect to ldapdb2
db2 list tablespaces
The following examples show UNIX tablespace output for IBM Directory:
Tablespaces for Current Database
Tablespace ID = 0
Name = SYSCATSPACE
Type = System managed space
Contents = Any data
State = 0x0000
Detailed explanation:
Normal
Tablespace ID = 1
Name = TEMPSPACE1
Type = System managed space
Contents = Temporary data
State = 0x0000
Detailed explanation:
Normal
Tablespace ID = 2
Name = USERSPACE1
Type = System managed space
Contents = Any data
State = 0x0000
Detailed explanation:
Normal
Tablespace ID = 3
Name = LDAPSPACE1
Type = System managed space
Contents = Any data
State = 0x0000
Detailed explanation:
Normal
IBM Directory is stored in the user tablespace (USERSPACE1) and in the IBM Directory tablespace (LDAPSPACE1). By default, there is only one container or directory for the user tablespace. To view the details about the user tablespace, enter a DB2 command similar to the following:
db2 list tablespace containers for 2
Example output is as follows:
Tablespace Containers for Tablespace 2
Container ID = 0
Name = /ldapdb2/NODE0000/SQL00001/SQLT0002.0
Type = Path
The container or directory that DB2 uses for tablespace 2 is /ldapdb2/NODE0000/SQL00001/SQLT0002.0. It contains some of the ldapdb2 database tables. Tablespace 3 contains the remainder of the database tables, the biggest of which is the ldap_entry table. The ldap_entry table contains the majority of the IBM Directory data.
16.7.5 Create file systems and directories on the target disks
The first step in distributing the DB2 database across multiple disk drives is to create and format the file systems and directories on the physical disks that the database is to be distributed among.
Guidelines are as follows:
Because DB2 distributes the database equally across all directories, it is a good idea to make all of the file systems, directories, or both, the same size.
All directories to be used for the DB2 database must be completely empty. AIX and Solaris systems create a lost+found directory at the root of any file system. Instead of deleting the lost+found directory, create a subdirectory at the root of each file system to be used for distributing the database. For example, create a subdirectory named c in each filesystem where the DB2 database is to be stored.
Create two additional directories under the c directory: one for holding tablespace 2 and the other for tablespace 3. For example, these directories might be named 2 and 3. Then specify these directories on the set tablespace commands as discussed in “Perform a redirected restore of the database.”
The DB2 instance user must have Write permission on the created directories. For AIX and Solaris systems, the following command gives the proper permissions:
chown ldapdb2 directory_name
The following are platform-specific guidelines:
For the AIX operating system, create the file system with the Large File Enabled option. This option is one of the options on the Add a Journaled File System smit menu.
For AIX and Solaris systems, set the file size limit to unlimited or to a size large enough to allow for the creation of a file as large as the file system. On AIX systems, the /etc/security/limits file controls system limits and -1 means unlimited. On Solaris systems, the ulimit command controls system limits.
16.7.6 Backing up the existing database
To back up the existing database, follow these steps:
1. Stop the IBM Directory Server process (slapd or ibmslapd).
2. To close all DB2 connections, enter the following:
db2 force applications all
db2 list applications
A message similar to the following is displayed:
SQL1611W No data was returned by Database System Monitor.
3. To initiate the backup process, enter the following:
db2 backup db ldapdb2 to [file system | tape device]
When the database has been backed up successfully, a message similar to the following is displayed:
Backup successful. The timestamp for this backup image is : 20000420204056
 
Note: Ensure that the backup process was successful before proceeding. The next step destroys the existing database in order to recreate it. If the backup was not successful, the existing database is lost. You can verify the success of the backup by restoring it to a separate system.
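The timestamp in the backup message encodes the backup time as yyyymmddhhmmss, which is useful when checking how old a backup image is before restoring it. A small sketch of the decoding in Python:

```python
from datetime import datetime

# The DB2 backup image timestamp (e.g. 20000420204056) is
# yyyymmddhhmmss; parsing it makes the backup's age easy to verify.
ts = "20000420204056"
when = datetime.strptime(ts, "%Y%m%d%H%M%S")
print(when.isoformat())  # → 2000-04-20T20:40:56
```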
16.7.7 Perform a redirected restore of the database
A DB2 redirected restore restores the specified database tablespace to multiple containers or directories. In the following example, assume that the following directories for containing tablespace 2 were created, are empty, and have the correct permissions to allow write access by the DB2 instance owner, typically the ldapdb2 user:
/disks/1/c/2
/disks/2/c/2
/disks/3/c/2
/disks/4/c/2
/disks/5/c/2
In the following example, assume the following directories for tablespace 3 were created:
/disks/1/c/3
/disks/2/c/3
/disks/3/c/3
/disks/4/c/3
/disks/5/c/3
Redirected restore
To do a redirected restore:
1. To start the DB2 restore process, enter the following:
db2 restore db ldapdb2 from [location of backup] replace existing redirect
Messages similar to the following are displayed:
SQL2539W Warning! Restoring to an existing database that is the same as the backup image database. The database files will be deleted.
SQL1277N Restore has detected that one or more tablespace containers are inaccessible, or has set their state to 'storage must be defined'.
DB20000I The RESTORE DATABASE command completed successfully.
2. To define the containers for tablespace 2 and for tablespace 3, enter the following:
db2 "set tablespace containers for 2 using (path
’/disks/1/c/2’, path ’/disks/2/c/2’, path ’/disks/3/c/2’,
path ’/disks/4/c/2’, path ’/disks/5/c/2’)"
 
db2 "set tablespace containers for 3 using (path
’/disks/1/c/3’, path ’/disks/2/c/3’, path ’/disks/3/c/3’,
path ’/disks/4/c/3’, path ’/disks/5/c/3’)"
 
Note: If many containers are defined, these commands can become so long as to not fit within the limits of a shell command. In this case, you can put the command in a file and run within the current shell using the dot notation. For example, assume that the commands are in a file named set_containers.sh. The following command runs it in the current shell:
. set_containers.sh
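When many containers are involved, it is less error-prone to generate the set tablespace command than to type it. One way to build the command string, sketched in Python (the helper name is our own):

```python
def set_containers_cmd(tablespace_id, paths):
    """Build a 'db2 set tablespace containers' command string from a
    list of container directories (hypothetical helper)."""
    clauses = ", ".join("path '%s'" % p for p in paths)
    return 'db2 "set tablespace containers for %d using (%s)"' % (
        tablespace_id, clauses)

# Reproduce the five-disk layout used in the example above.
disks = ["/disks/%d/c/2" % i for i in range(1, 6)]
print(set_containers_cmd(2, disks))
```

Redirecting the output to a file such as set_containers.sh gives exactly the kind of script that the Note suggests running with dot notation.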
After completion of the DB2 set tablespace command, a message similar to the following is displayed:
DB20000I The SET TABLESPACE CONTAINERS command completed successfully.
If you receive the following message:
SQL0298N Bad container path. SQLSTATE=428B2
This message indicates that one of the containers is not empty, or that Write permission is not enabled for the DB2 instance owner, typically the ldapdb2 user.
 
Note: A newly created file system on AIX and Solaris contains a directory named lost+found. Create a directory at the same level as lost+found to hold the tablespace, and then reissue the set tablespace command. If you experience problems, see the DB2 documentation. The following files might also be of interest:
ldapdb2_home_dir/sqllib/Readme/en_US/Release.Notes
ldapdb2_home_dir/sqllib/db2dump/db2diag.log
The db2diag.log file contains some fairly low-level details that can be difficult to interpret.
3. Continue the restore to new tablespace containers. This step takes the most time to complete. The time varies depending on the size of the directory. To continue the restore to the new tablespace containers, enter the following:
db2 restore db ldapdb2 continue
If problems occur with the redirected restore and you want to restart the restore process, it might be necessary to enter the following command first:
db2 restore db ldapdb2 abort
16.8 DB2 backup and restore
The fastest way to back up and restore the database is to use DB2 backup and restore commands. LDAP alternatives, such as db2ldif and ldif2db, are generally much slower in comparison.
The only disadvantage to using the db2 backup and db2 restore commands is that the backed-up database cannot be restored across dissimilar hardware platforms. For example, you cannot back up an AIX database and restore the database to a Solaris system. An alternative to the db2 backup and db2 restore commands is an LDAP Data Interchange Format (LDIF) export and import. These commands work across dissimilar hardware platforms, but the process is slower. For more information about the use of these commands, see the DB2 documentation.
An important advantage of using db2 backup and db2 restore commands is the preservation of DB2 configuration parameters and db2 reorgchk database optimizations in the backed-up database. The restored database has the same performance tuning applied as the backed-up database. This is not the case with the LDAP db2ldif and ldif2db utilities.
Be aware that if you restore over an existing database, any performance tuning tasks on that existing database are lost. Check all DB2 configuration parameters after performing a restore. Also, if you do not know whether a db2 reorgchk was performed before the database was backed up, run db2 reorgchk after the restore. The DB2 commands to perform backup and restore operations are as follows:
db2 force applications all
db2 backup db ldapdb2 to directory_or_device
db2 restore db ldapdb2 from directory_or_device replace existing
Where directory_or_device is the name of a directory or device where the backup is stored.
The most common error that occurs on a restore is a file permission error. Following are some reasons why this error might occur:
The DB2 instance owner does not have permission to access the specified directory and file. One way to solve this is to change directory and file ownership to the DB2 instance owner. For example, enter the following:
chown ldapdb2 file_or_device
The backed-up database is distributed across multiple directories, and those directories do not exist on the target system of the restore. Distributing the database across multiple directories is accomplished with a redirected restore. To solve this problem, either create the same directories on the target system or perform a redirected restore to specify the proper directories on the new system. If creating the same directories, ensure that the owner of the directories is the DB2 instance owner typically the ldapdb2 user.
Backup and restore operations are required to initially synchronize an LDAP replica with an LDAP master or whenever the master and replica get out of sync. A replica can get out of sync if it is not defined to the master. In this case, the master does not know about the replica and does not save updates on a propagation queue for that replica.
If a newly configured master LDAP directory is to be loaded with initial data, you can use bulk-loading utilities to speed up the process. This is another case in which the replica is not informed of updates and a manual backup and restore is required to get the replica synchronized with the master.
16.9 Concurrent updates on Symmetric Multi-Processor systems
Better update performance, particularly on Symmetric Multi-Processor (SMP) systems, is typically achieved by making updates concurrently (for example, with multiple concurrent update clients). In some cases, update performance might not improve with concurrent updates, specifically when the LDAP propagation queue grows very large. This can happen when the LDAP master server does updates faster than those updates can be propagated to the LDAP replica servers. Because propagation is done serially, concurrent updates on the master are likely to result in a growth of the propagation queue. Some testing is required in a master/replica environment to determine how much performance improvement, if any, comes from concurrent updates.
16.10 AIX operating system tuning
In this section we discuss AIX operating system tuning.
16.10.1 Enabling large files
The underlying AIX operating system files that hold the contents of a large directory can grow beyond the default size limits imposed by the AIX operating system. If the size limits are reached, the directory ceases to function correctly. The following steps make it possible for files to grow beyond default limits on an AIX operating system:
1. When you create the file systems that are expected to hold the directory’s underlying files, you should create them as Large File Enabled Journaled File Systems. The file system containing the DB2 instance’s home directory, and, if bulkload is used, the file system containing the bulkload temporary directory, are file systems that can be created this way.
2. Set the soft file size limit for the root, ldap, and the DB2 instance owner users to -1. A soft file size limit of -1 for a user specifies the maximum file size for that user as unlimited. The soft file size limit can be changed using the smitty chuser command. Each user must log off and log back in for the new soft file size limit to take effect. You must also restart DB2.
Setting MALLOCMULTIHEAP
The MALLOCMULTIHEAP environment variable can improve LDAP performance on symmetric multi-processor (SMP) systems. To set this variable, run the following command just before starting ibmslapd, in the same environment where you will start ibmslapd:
export MALLOCMULTIHEAP=1
The disadvantage of using MALLOCMULTIHEAP is increased memory usage. You can reduce the memory cost, at the price of a smaller performance benefit, by limiting the number of heaps:
export MALLOCMULTIHEAP=heaps:n
Where n is the number of processors in the multiprocessor system plus one.
You can find more information about MALLOCMULTIHEAP in the AIX documentation.
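The heaps value for the current machine can be derived from the processor count, following the numprocs + 1 rule above. A sketch in Python that prints the export line to paste into the environment before starting ibmslapd:

```python
import os

# Sketch: derive the MALLOCMULTIHEAP heaps setting (numprocs + 1)
# for the current machine, per the rule quoted in the text above.
numprocs = os.cpu_count() or 1
setting = "MALLOCMULTIHEAP=heaps:%d" % (numprocs + 1)
print("export " + setting)
```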
Setting MALLOCTYPE
Set the MALLOCTYPE environment variable as follows, according to the version of AIX you are running:
AIX 5.1
Set MALLOCTYPE as follows:
export MALLOCTYPE=buckets
AIX 5.2
Set MALLOCTYPE to the default. If you have already set MALLOCTYPE to another value, you can set it to the default by typing the following:
export MALLOCTYPE=null
You can find more information about MALLOCTYPE in the AIX documentation.
Setting other environment variables
You might experience better performance by setting the AIXTHREAD_SCOPE and NODISCLAIM environment variables as shown in the following commands. Check the AIX documentation to see whether these settings might be right for your installation.
To set AIXTHREAD_SCOPE, use the following command:
export AIXTHREAD_SCOPE=S
To set NODISCLAIM, use the following command:
export NODISCLAIM=TRUE
16.10.2 Tuning process memory size limits
On UNIX platforms, some of the LDAP performance tuning tasks in this document result in process sizes that exceed the operating system default limits. This section describes how to increase the operating system limits so that the affected processes do not run out of memory. When a process runs out of memory, the process often ends. In some cases, it leaves a core dump file, an error message, or an error log entry. On AIX systems, the system error log might indicate that the process ended due to a memory allocation failure. Use the errpt -a | more command to display the error log.
Increasing the operating system process memory size limits
On UNIX systems, each user can either inherit resource limits from the root user or have specific limits defined. The most useful setting to use for the process size limits is unlimited. That way, the system process size limits are defined to allow the maximum process growth.
On AIX systems, the process size limits are defined in the /etc/security/limits file. A value of -1 indicates that there is either no limit or that it is unlimited. The names of the limits to increase are data and rss. For changes to the /etc/security/limits file to take effect, the user must log out of the current login session and log back in. On AIX, some limits may apply to the root user.
On Solaris systems, the process size limits are defined by the ulimit command. You can specify a value of unlimited on the command. The names of the limits to increase are data and vmemory. By default on Solaris systems, the root user has unlimited access to these resources.
When setting resource limits for a process, it is important to know that the limits that apply are those that are in effect for the parent process and not the limits for the user under which the process runs.
16.10.3 AIX-specific process size limits
On AIX systems, the number of data segments that a process is allowed to use also limits the process memory size. The default number of data segments is 1. The size of a data segment is 256 MB. Data segments are shared for both data and stack. The maximum number of data segments a process can use is 8.
Setting the maximum number of AIX data segments
On AIX, the number of segments that a process can use for data is controlled by the LDR_CNTRL environment variable. It is defined in the parent process of the process that is to be affected. For example, the following defines one additional data segment:
export LDR_CNTRL=MAXDATA=0x10000000
start_process
unset LDR_CNTRL
It is a good idea to unset the LDR_CNTRL environment variable, so that it does not unintentionally affect other processes.
Unlike other environment variables for the IBM Directory Server process (slapd or ibmslapd), the LDR_CNTRL environment variable cannot be set as a front-end variable in the slapd32.conf or ibmslapd.conf configuration file. It must be set as an environment variable.
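Because each AIX data segment is 256 MB (0x10000000 bytes), the MAXDATA value scales directly with the number of segments requested. A small sketch of the arithmetic in Python; the helper name is our own:

```python
SEGMENT = 0x10000000  # one AIX data segment = 256 MB

def maxdata_for(extra_segments):
    """Value for the LDR_CNTRL environment variable allowing the
    given number of additional 256 MB data segments (sketch; the
    text above notes a process can use at most 8 data segments)."""
    assert 1 <= extra_segments <= 8
    return "LDR_CNTRL=MAXDATA=0x%x" % (extra_segments * SEGMENT)

print(maxdata_for(1))  # → LDR_CNTRL=MAXDATA=0x10000000
print(maxdata_for(4))  # → LDR_CNTRL=MAXDATA=0x40000000
```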
16.10.4 AIX data segments and LDAP process DB2 connections
On AIX, process segments are used for increasing a process memory size and for shared memory. A segment can be used for one or the other of these purposes, but not for both. When possible, the IBM Directory Server uses shared memory to communicate between its server process (slapd or ibmslapd) and DB2 processes. Each shared segment used in this way is a connection to the DB2 database. If there are not enough process segments to satisfy the number of DB2 connections defined in the IBM Directory Server configuration file (slapd32.conf or ibmslapd.conf), the remaining connections are satisfied by using local TCP/IP sockets. For this reason, there is no conflict between increasing the process memory size of the IBM Directory Server process and increasing the number of DB2 connections defined for the IBM Directory Server to use.
16.10.5 Verifying process data segment usage
If the perfagent.tools are installed, /usr/bin/svmon -P pid shows the memory usage of a process. In the output, identify the segments labeled shmat/mmap. Segments with an Inuse column of zero (0) are for data segments that are available for process growth. Segments with an Inuse column greater than 1 are for data segments in which the process has already grown. Segments with an Inuse column of 1 are usually found in the slapd or the ibmslapd process and represent the shared memory segments being used for DB2 connections.
16.11 Adding memory after installation on Solaris systems
Memory added to a computer after the installation of a Solaris operating system does not automatically improve performance. To take advantage of added memory, you must:
1. Update the shared memory (shmem) value in the /etc/system file:
set shmsys:shminfo_shmmax = physical_memory
Where physical_memory is the size of the physical memory on the computer, in bytes. You must restart the computer for the new settings to take effect.
2. From the command line, set the ulimit values for increasing process memory and file size to unlimited:
ulimit -d unlimited
ulimit -v unlimited
ulimit -f unlimited
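Because shmmax is specified in bytes, it is easy to get the value wrong by a factor of 1024. A sketch of the conversion for a hypothetical machine with 4 GB of RAM:

```python
# Sketch: the shmmax value is the physical memory size in bytes.
# Example machine with 4 GB of RAM (assumed for illustration).
physical_memory = 4 * 1024 ** 3
print("set shmsys:shminfo_shmmax = %d" % physical_memory)
# → set shmsys:shminfo_shmmax = 4294967296
```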
16.12 SLAPD_OCHANDLERS variable on Windows
On Windows, if you have clients that are generating many connections to the server and the connections are being refused, set the SLAPD_OCHANDLERS environment variable to 5 before starting the server.
16.13 IBM Directory Change and Audit Log
In this section we discuss the IBM Directory Change and Audit Log.
16.13.1 When to configure the LDAP change log
IBM Tivoli Directory Server has a function called the change log that results in significantly slower LDAP update performance. The change log function should be configured only if needed.
The change log function causes all updates to LDAP to be recorded in a separate change log DB2 database (that is, a different database from the one used to hold the LDAP server Directory Information Tree). The change log database can be used by other applications to query and track LDAP updates. The change log function is disabled by default.
One way to check whether the change log function is enabled is to look for the suffix CN=CHANGELOG by issuing the following command:
ldapsearch -D cn=root -w <dir admin password> -s base -b "" objectclass=*
If the suffix exists, the change log function is enabled. To disable it after it is configured, run the following commands with the directory server shut down. If you try to do this with the directory server running, you get the following message:
Unable to unconfigure database while IBM Tivoli Directory Server is running.
For IBM Directory 4.1 and earlier:
ldapucfg -g
For IBM Directory 5.1 and later, run:
ldapxcfg
Click Configure/unconfigure changelog, clear the check box on the right, and then click Update to unconfigure the change log. A message warns that disabling the change log destroys the data currently in the change log database and asks whether you want to continue. If you want to keep the change log data, copy it elsewhere before answering yes; otherwise it is deleted.
16.13.2 When to configure the LDAP audit log
The audit log shows what searches are being performed and the parameters used in each search. The audit log also shows when a client binds and unbinds from the directory. Observing these measurements allows you to identify LDAP operations that take a long time to complete. Depending on which options you have turned on, this monitoring can significantly slow down many aspects of IBM Directory Server performance. It is recommended that all audit logging features be turned off unless you are troubleshooting searches or need to keep records for security reasons. To check the audit settings, run the following:
ldapsearch -D cn=root -w <passwd> -b "cn=audit,cn=localhost" objectclass=*
With 4.1 and earlier you will see the following included in the output:
cn=audit,cn=localhost
objectclass=ibm-auditConfig
objectclass=top
cn=audit
ibm-auditlog=C:\Program Files\IBM\LDAP\var\audit
ibm-audit=false
ibm-auditfailedoponly=true
ibm-auditbind=true
ibm-auditunbind=true
ibm-auditsearch=false
ibm-auditadd=false
ibm-auditmodify=false
ibm-auditdelete=false
ibm-auditmodifydn=false
ibm-auditextopevent=false
With 5.1 and later you will see the following included in the output:
CN=AUDIT,CN=LOCALHOST
objectclass=ibm-auditConfig
objectclass=ibm-slapdConfigEntry
objectclass=top
cn=audit
ibm-auditLog=C:\Program Files\IBM\LDAP\var\audit.log
ibm-auditVersion=2
ibm-audit=false
ibm-auditFailedOPonly=true
ibm-auditBind=true
ibm-auditUnbind=true
ibm-auditSearch=false
ibm-auditAdd=false
ibm-auditModify=false
ibm-auditDelete=false
ibm-auditModifyDN=false
ibm-auditExtOPEvent=false
ibm-auditExtOp=false
You can use the Web admin tool for each of the versions of the Directory to set these settings to what you want.
16.14 Hardware tuning
In this section we discuss hardware tuning.
16.14.1 Disk speed improvements
With millions of entries in LDAP server, it can become impossible to cache all of them in memory. Even if a smaller directory size is cacheable, update operations must go to disk. The speed of disk operations is important. Here are some considerations for helping to improve disk drive performance:
Use fast disk drives.
Use a hardware write cache.
Spread data across multiple disk drives.
Spread the disk drives across multiple I/O controllers.
Put log files and data on separate physical disk drives.
16.15 Monitoring performance
The ldapsearch command can be used to monitor performance, as shown in the following sections.
16.15.1 ldapsearch with "cn=monitor"
The following ldapsearch command uses "cn=monitor".
ldapsearch -h ldap_host -s base -b cn=monitor objectclass=*
Where ldap_host is the name of the LDAP host.
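The attribute=value pairs returned by the monitor search are convenient to parse into a dictionary before doing any arithmetic on them. A sketch in Python; note as an assumption that real ldapsearch output may use either attribute=value or attribute: value depending on client options:

```python
def parse_monitor(output):
    """Parse 'attribute=value' lines from a cn=monitor search into a
    dict (sketch; real ldapsearch output may instead use the
    'attribute: value' form depending on the client)."""
    stats = {}
    for line in output.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            stats[key.strip()] = value.strip()
    return stats

sample = "currentconnections=12\nopscompleted=88211"
print(parse_monitor(sample))
# → {'currentconnections': '12', 'opscompleted': '88211'}
```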
With 5.1 and earlier
The cn=monitor search returns the following limited set of server attributes:
version: Version of the LDAP server
totalconnections: Total number of connections to the server
currentconnections: Total number of current connections
maxconnections: Configured maximum number of connections
writewaiters: Number of threads waiting to write
readwaiters: Number of threads waiting to read
opsinitiated: Operations initiated against the server
opscompleted: Number of operations completed
entriessent: Number of entries sent from the server
searchesrequested: Number of searches requested
searchescompleted: Number of searches completed
filter_cache_size: Configured maximum size of the filter cache
filter_cache_current: Current size of the filter cache
filter_cache_hit: Number of searches that have hit the filter cache
filter_cache_miss: Number of searches that have missed the filter cache
entry_cache_size: Configured maximum size of the entry cache
entry_cache_current: Current size of the entry cache
entry_cache_hit: Number of entries returned from the entry cache
entry_cache_miss: Number of entries returned not from entry cache
currenttime: Current time of the search
starttime: Start time of the server
en_currentregs: Number of events currently registered
en_notificationssent: Number of event notifications sent
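The filter and entry cache counters above are most useful when combined into a hit ratio, which indicates whether the configured cache sizes are adequate for the workload. A sketch of the calculation:

```python
def hit_ratio(hits, misses):
    """Fraction of lookups served from the cache (sketch).
    A low ratio suggests increasing the configured cache size."""
    total = hits + misses
    return hits / total if total else 0.0

# e.g. filter_cache_hit=900 and filter_cache_miss=100
print(hit_ratio(900, 100))  # → 0.9
```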
With 5.2 and later
A number of upgrades to the cn=monitor search allow it to return more data, to better monitor how the LDAP server is performing. The monitor search returns some of the following attributes of the server:
cn=monitor
version=IBM Tivoli Directory, Version 5.2
total connections: The total number of connections since the server was started
current connections: The number of active connections
maxconnections: The maximum number of active connections allowed
writewaiters: The number of threads sending data back to the client
readwaiters: The number of threads reading data from the client
opsinitiated: The number of initiated requests since the server was started
livethreads: The number of worker threads being used by the server
opscompleted: The number of completed requests since the server was started
entriessent: The number of entries sent by the server since the server was started.
searchesrequested: The number of initiated searches since the server was started.
searchescompleted: The number of completed searches since the server was started.
filter_cache_size: The maximum number of filters allowed in the cache.
filter_cache_current: The number of filters currently in the cache.
filter_cache_hit: The number of filters retrieved from the cache rather than being resolved in DB2.
filter_cache_miss: The number of filters that were not found in the cache that then needed to be resolved by DB2.
filter_cache_bypass_limit: Search filters that return more entries than this limit are not cached.
entry_cache_size: The maximum number of entries allowed in the cache.
entry_cache_current: The number of entries currently in the cache.
entry_cache_hit: The number of entries that were retrieved from the cache.
entry_cache_miss: The number of entries that were not found in the cache that then needed to be retrieved from DB2.
acl_cache: A Boolean value indicating that the ACL cache is active (TRUE) or inactive (FALSE).
acl_cache_size: The maximum number of entries in the ACL cache.
currenttime: The current time on the server. The current time is in the format: year month day hour:minutes:seconds GMT.
 
Note: If expressed in local time the format is day month date hour:minutes:seconds timezone year.
starttime: The time the server was started. The start time is in the format: year month day hour:minutes:seconds GMT.
 
Note: If expressed in local time the format is day month date hour:minutes:seconds timezone year.
en_currentregs: The current number of client registrations for event notification.
en_notificationssent: The total number of event notifications sent to clients since the server was started.
The following attributes are for operation counts:
bindsrequested: The number of bind operations requested since the server was started
bindscompleted: The number of bind operations completed since the server was started
unbindsrequested: The number of unbind operations requested since the server was started
unbindscompleted: The number of unbind operations completed since the server was started
addsrequested: The number of add operations requested since the server was started
addscompleted: The number of add operations completed since the server was started
deletesrequested: The number of delete operations requested since the server was started
deletescompleted: The number of delete operations completed since the server was started
modrdnsrequested: The number of modify RDN operations requested since the server was started
modrdnscompleted: The number of modify RDN operations completed since the server was started
modifiesrequested: The number of modify operations requested since the server was started
modifiescompleted: The number of modify operations completed since the server was started
comparesrequested: The number of compare operations requested since the server was started
comparescompleted: The number of compare operations completed since the server was started
abandonsrequested: The number of abandon operations requested since the server was started
abandonscompleted: The number of abandon operations completed since the server was started
extopsrequested: The number of extended operations requested since the server was started
extopscompleted: The number of extended operations completed since the server was started
unknownopsrequested: The number of unknown operations requested since the server was started
unknownopscompleted: The number of unknown operations completed since the server was started
The following attributes are for server logging counts:
slapderrorlog_messages: The number of server error messages recorded since the server was started or since a reset was performed
slapdclierrors_messages: The number of DB2 error messages recorded since the server was started or since a reset was performed
auditlog_messages: The number of audit messages recorded since the server was started or since a reset was performed
auditlog_failedop_messages: The number of failed operation messages recorded since the server was started or since a reset was performed
The following attributes are for connection type counts:
total_ssl_connections: The total number of SSL connections since the server was started
total_tls_connections: The total number of TLS connections since the server was started
The following attributes are for tracing:
trace_enabled: The current trace value for the server: TRUE if collecting trace data, FALSE if not.
trace_message_level: The current ldap_debug value for the server. The value is in hexadecimal form, for example:
0x0=0
0xffff=65535
trace_message_log: The current LDAP_DEBUG_FILE environment variable setting for the server.
The following attributes are for denial of service prevention:
available_workers: The number of worker threads available for work.
current_workqueue_size: The current depth of the work queue.
largest_workqueue_size: The largest size that the work queue has ever reached.
idle_connections_closed: The number of idle connections closed by the Automatic Connection Cleaner.
auto_connection_cleaner_run: The number of times that the Automatic Connection Cleaner has run.
emergency_thread_running: The indicator of whether the emergency thread is running.
totaltimes_emergency_thread_run: The number of times the emergency thread has been activated.
lasttime_emergency_thread_run: The last time the emergency thread was activated.
The following attribute has been added for alias dereference processing:
bypass_deref_aliases: The server runtime value that indicates if alias processing can be bypassed. It displays TRUE if no alias object exists in the directory, and FALSE if at least one alias object exists in the directory.
The following attributes are for the attribute cache:
cached_attribute_total_size: The amount of memory used by the directory attribute cache, in kilobytes. This number includes additional memory used to manage the cache that is not charged to the individual attribute caches. Consequently, this total is larger than the sum of the memory used by all the individual attribute caches.
cached_attribute_configured_size: The maximum amount of memory, in kilobytes, assigned to the directory attribute cache.
cached_attribute_hit: The number of times the attribute has been used in a filter that could be processed by the attribute cache. The value is reported as follows:
cached_attribute_hit=attrname:#####
cached_attribute_size: The amount of memory used for this attribute in the attribute cache. This value is reported in kilobytes as follows:
cached_attribute_size=attrname:######
cached_attribute_candidate_hit: A list of up to ten of the most frequently used noncached attributes that have been used in a filter that could have been processed by the directory attribute cache if all of the attributes used in the filter had been cached. The value is reported as follows:
cached_attribute_candidate_hit=attrname:#####
You can use this list to help you decide which attributes you want to cache. Typically, you want to put a limited number of attributes into the attribute cache because of memory constraints.
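When deciding which attributes to add to the cache, it helps to rank the candidate entries by their reported counts. The following sketch parses monitor output lines in the documented attrname:count value format; the attribute names and counts in the sample are hypothetical:

```python
# Rank candidate attributes for the attribute cache from cn=monitor output.
# Lines follow the documented "cached_attribute_candidate_...=attrname:count"
# format; the sample values below are hypothetical.

def rank_cache_candidates(monitor_lines):
    """Return (attrname, count) pairs sorted by count, highest first."""
    candidates = []
    for line in monitor_lines:
        name, _, value = line.partition("=")
        if not name.strip().startswith("cached_attribute_candidate"):
            continue  # skip unrelated monitor attributes
        attr, _, count = value.partition(":")
        candidates.append((attr.strip(), int(count)))
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)

sample = [
    "cached_attribute_candidate_hit=uid:4821",
    "cached_attribute_candidate_hit=mail:1203",
    "cached_attribute_candidate_hit=sn:2977",
]
print(rank_cache_candidates(sample))
# [('uid', 4821), ('sn', 2977), ('mail', 1203)]
```

The highest-ranked attributes are the strongest candidates for caching, subject to your memory constraints.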
16.15.2 Monitor examples
The following sections show examples of using values returned by the ldapsearch command with cn=monitor to calculate the throughput of the server and the number of add operations completed on the server in a certain timeframe.
Throughput example
The following example shows how to calculate the throughput of the server by monitoring the server statistic called opscompleted, which is the number of operations completed since the LDAP server started.
Suppose the values for the opscompleted attribute obtained by issuing two ldapsearch commands to monitor the performance statistics, one at time t1 and the other at a later time t2, were opscompleted(t1) and opscompleted(t2). The average throughput at the server during the interval between t1 and t2 can be calculated as:
(opscompleted(t2) - opscompleted(t1) - 3)/(t2 - t1)
Three is subtracted to account for the number of operations performed by the ldapsearch command itself.
Workload example
The monitor attributes can be used to characterize the workload, similar to the throughput example but split out by type of operation.
For example, you can calculate the number of add operations that were completed in a certain amount of time.
Suppose the values for the addscompleted attribute obtained by issuing two ldapsearch commands to monitor the performance statistics, one at time t1 and the other at a later time t2, were addscompleted(t1) and addscompleted(t2). The number of add operations completed on the server during the interval between t1 and t2 is:
addscompleted(t2) - addscompleted(t1)
Unlike the throughput example, nothing is subtracted for the monitoring commands, because ldapsearch performs no add operations. Divide the result by (t2 - t1) to obtain the average rate of add operations per second.
Similar calculations can be done for other operations, such as searchescompleted, bindscompleted, deletescompleted, and modifiescompleted.
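The throughput and workload calculations above can be sketched as a single helper; the counter values used here are hypothetical, standing in for values read from ldapsearch output against cn=monitor at times t1 and t2:

```python
# Average operation rate from two cn=monitor counter samples.

def average_rate(count_t1, count_t2, t1, t2, monitor_ops=0):
    """Operations per second over the interval [t1, t2], in seconds.

    monitor_ops discounts the operations performed by the monitoring
    ldapsearch commands themselves (3 for opscompleted; 0 for counters
    such as addscompleted that ldapsearch does not affect).
    """
    return (count_t2 - count_t1 - monitor_ops) / (t2 - t1)

# Throughput: subtract 3 for the operations done by ldapsearch itself.
print(average_rate(100000, 118003, 0, 60, monitor_ops=3))  # 300.0

# Add workload: ldapsearch performs no adds, so nothing is subtracted.
print(average_rate(5000, 5600, 0, 60))  # 10.0
```

The same helper applies to searchescompleted, bindscompleted, deletescompleted, and modifiescompleted by swapping in the corresponding counter values.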
ldapsearch with "cn=workers,cn=monitor"
You can run a search using "cn=workers,cn=monitor" to get information about what worker threads are doing and when they started doing it.
ldapsearch -D <adminDN> -w <adminpw> -b cn=workers,cn=monitor -s base objectclass=*
This information is most useful when a server is performing poorly or not functioning as expected. It should be used only when needed to give insight into what the server is currently doing or not doing.
The "cn=workers, cn=monitor" search returns detailed activity information only if auditing is turned on. If auditing is not on, "cn=workers, cn=monitor" returns only thread information for each of the workers.
 
Attention: The cn=workers,cn=monitor search suspends all server activity until it completes. For this reason, any application should issue a warning before performing this search. The response time for this search increases as the number of server connections and active workers increases.
For more information, see the IBM Tivoli Directory Server Version 5.2 Administration Guide.
ldapsearch with "cn=connections,cn=monitor"
You can run a search using "cn=connections,cn=monitor" to get information about server connections:
ldapsearch -D <adminDN> -w <adminPW> -h <servername> -p <portnumber> -b cn=connections,cn=monitor -s base objectclass=*
This command returns information in the following format:
cn=connections,cn=monitor
connection=1632 : 9.41.21.31 : 2002-10-05 19:18:21 GMT : 1 : 1 : CN=ADMIN : :
connection=1487 : 127.0.0.1 : 2002-10-05 19:17:01 GMT : 1 : 1 : CN=ADMIN : :
 
Note: If appropriate, an SSL or a TLS indicator is added on each connection.
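When a server holds many connections, it can be useful to summarize this output, for example by counting connections per client IP address. The following sketch splits each entry on the " : " field separator shown in the sample output above (the second field is the client address); the meaning of the remaining fields is described in the Administration Guide:

```python
# Summarize cn=connections,cn=monitor entries by client IP address.
# Entries use the " : "-separated format shown in the sample output.

def connections_by_ip(entries):
    """Return a dict mapping client IP address to connection count."""
    counts = {}
    for entry in entries:
        fields = [f.strip() for f in entry.split(" : ")]
        ip = fields[1]  # client IP is the second field
        counts[ip] = counts.get(ip, 0) + 1
    return counts

sample = [
    "connection=1632 : 9.41.21.31 : 2002-10-05 19:18:21 GMT : 1 : 1 : CN=ADMIN : :",
    "connection=1487 : 127.0.0.1 : 2002-10-05 19:17:01 GMT : 1 : 1 : CN=ADMIN : :",
]
print(connections_by_ip(sample))
# {'9.41.21.31': 1, '127.0.0.1': 1}
```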
ldapsearch with "cn=changelog,cn=monitor"
You can run a search using "cn=changelog,cn=monitor" to obtain information about the changelog attribute cache. (See 16.13.1, “When to configure the LDAP change log” on page 533, for information about the change log.) The command returns the following information:
cached_attribute_total_size
The amount of memory used by the changelog attribute cache, in kilobytes. This number includes additional memory used to manage the cache that is not charged to the individual attribute caches. Consequently, this total is larger than the sum of the memory used by all the individual attribute caches.
cached_attribute_configured_size
The maximum amount of memory, in kilobytes, assigned to the changelog attribute cache.
cached_attribute_hit
The number of times the attribute has been used in a filter that could be processed by the changelog attribute cache. The value is reported as follows:
cached_attribute_hit=attrname:#####
cached_attribute_size
The amount of memory used for this attribute in the changelog attribute cache. This value is reported in kilobytes as follows:
cached_attribute_size=attrname:######
cached_attribute_candidate_hit
A list of up to ten of the most frequently used noncached attributes that have been used in a filter that could have been processed by the changelog attribute cache if all of the attributes used in the filter had been cached. The value is reported as follows:
cached_attribute_candidate_hit=attrname:#####
You can use this list to help you decide which attributes you want to cache. Typically, you want to put a limited number of attributes into the attribute cache because of memory constraints.
16.16 Troubleshooting error files
When a problem occurs that appears to be related to the IBM Directory Server, you should first check the following files for error messages.
For IBM Directory Server version 4.1 and older: The default location of these files is /var/ldap for Solaris and /tmp for AIX.
slapd.errors
cli.error
You can change the location of the slapd.errors file (but not the cli.error file) by updating the ibm-slapdErrorLog parameter in the slapd32.conf configuration file.
For IBM Directory Server version 5.1 or later: The default location is /var/ldap for both Solaris and AIX.
ibmslapd.log
db2cli.log
You can change the location of both of these files by modifying the ibm-slapdErrorLog and ibm-slapdCLIErrors parameters in the ibmslapd.conf file.
ibmslapd trace
An ibmslapd trace provides a list of the SQL commands issued to the DB2 database. These commands can help you identify operations that are taking a long time to complete. This information can in turn lead you to missing indexes, or unusual directory topology. To turn the ibmslapd trace on, run the following commands:
ldtrc on
ibmslapd -h 4096
After you have turned the trace on, run the commands that you think might be giving you trouble. Running a trace on several operations can slow performance, so remember to turn the trace off when you are finished using it:
ldtrc off
Changing the diagnostic level for error message log files
DB2 error log
On AIX systems or Solaris Operating Environments, the db2diag.log file is located, by default, in the /INSTHOME/sqllib/db2dump directory, where INSTHOME is the home directory of the instance owner.
On Windows NT and Windows 2000 systems, the db2diag.log file is located, by default, in the x:\sqllib\instance directory, where:
x: is the drive where DB2 Data Links Manager is installed.
instance is the name of the instance for which you want to change the diagnostic setting. The name of the instance in which Data Links Manager is running is DLFM.
The location of the db2diag.log file is controlled by the DB2 server configuration parameter DIAGPATH, so the directory paths on your system might be different from the default paths.
Procedure
You control the level of detailed information that is written to the db2diag.log file by using the DIAGLEVEL configuration parameter and the DLFM_LOG_LEVEL registry value.
DIAGLEVEL
Determines the severity of DB2 diagnostic information recorded in the db2diag.log error log file. Valid values are 1 through 4, where 1 records a minimal amount of information and 4 records the maximum amount. The default setting is 3. You can increase the amount of error information recorded using the following command:
db2 update dbm cfg using DIAGLEVEL 4
This setting should be changed only at the request of IBM service or development for debugging purposes.
DLFM_LOG_LEVEL
Determines the severity of DLFM diagnostic information recorded in the db2diag.log error log file. Its default setting is LOG_ERR. You can increase the amount of error information recorded using the following command:
db2set DLFM_LOG_LEVEL=LOG_DEBUG
 
Attention: Increasing the amount of diagnostic output can result in both performance degradation and insufficient storage conditions in your database instance file system. This procedure should only be used when troubleshooting problems requiring the additional diagnostics.
 