Active Directory is a standardized, central database for Windows Server systems that stores user accounts used for authentication, file shares, printers, computers, and other objects such as security groups. Its main purpose is to allow only authorized users to log on to the network and to act as a central point of management for network resources.
Once you have set up a Windows Server in your environment, you might have business requirements that are not supported by your server’s default settings. For instance, you may want to reduce power consumption, maximize your server’s throughput, and minimize latency. It’s for this reason that we must always ensure that our AD is running optimally, and one way to ensure that is performance tuning.
We are going to give you a few tips on how you can tweak your server settings and scale up your AD’s performance and energy efficiency, especially when you have varied workloads.
For performance tuning to have maximum impact, tuning should be centered around the server hardware, the workload, the energy budget, and the performance objectives of the server. We are going to describe crucial tuning considerations that can yield improved system performance coupled with optimal energy consumption.
We’ll break down each setting and outline its benefits to help you make an informed decision and achieve your goals as far as workload, system performance, and energy utilization are concerned.
This encompasses the RAM, processor, storage, and network card.
To increase the scalability of the server, the minimum amount of required RAM is calculated as follows:
Current size of database + Total size of SYSVOL + Recommended RAM by OS + Vendor Recommendations
Additional RAM can be added in anticipation of the database’s growth and the workload over the server’s lifetime. For remote sites with few users, these requirements can be relaxed, as such sites will not need to cache as much information to service requests.
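The sizing formula above can be sketched as a small calculation. The figures in the example are hypothetical, chosen only to illustrate the arithmetic:

```python
def minimum_ram_gb(database_gb, sysvol_gb, os_recommended_gb, vendor_recommended_gb):
    """Least required RAM: current database size + total SYSVOL size
    + the RAM recommended by the OS + vendor recommendations."""
    return database_gb + sysvol_gb + os_recommended_gb + vendor_recommended_gb

# Hypothetical example: 8 GB database, 1 GB SYSVOL, 2 GB OS baseline,
# 1 GB for vendor agents such as backup and monitoring software.
print(minimum_ram_gb(8, 1, 2, 1))  # 12
```

Any RAM beyond this minimum goes toward caching and future growth, as described above.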
In virtualization scenarios, avoid overcommitting the host machine’s memory. Memory overcommit happens when more memory is collectively allocated to the guest machines than the underlying host machine actually has. A modest overcommit is usually tolerable, but it becomes a serious problem once the total memory allocated to guests exceeds that of the host and the host begins paging. Remember, the objective of RAM optimization is to minimize the time spent going back to the disk.
16GB of RAM is a reasonable amount for a physical server. For virtual machines, an estimated 12GB would be considered decent, with future upgrades anticipated as the database and resources grow.
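The overcommit condition described above can be sketched as a simple check; all figures in the example are hypothetical:

```python
def host_is_overcommitted(host_ram_gb, guest_allocations_gb):
    """True when the RAM collectively allocated to guest machines exceeds
    the physical RAM of the host, which risks host paging."""
    return sum(guest_allocations_gb) > host_ram_gb

# A hypothetical 64 GB host promising 72 GB across four guests is overcommitted.
print(host_is_overcommitted(64, [16, 16, 16, 24]))  # True
print(host_is_overcommitted(64, [16, 16, 16]))      # False
```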
Cache is a type of memory that the microprocessor can access far more quickly than ordinary RAM. The cache performance of Active Directory depends on the memory space allocated for caching: data access at the memory level is faster than accessing data on physical volumes.
To make this processing highly efficient, more memory must be added to minimize disk input/output (I/O) requests. The viable option is to install enough RAM to handle all operations of the operating system and the installed applications. In addition, system logs and databases should be placed on separate volumes to offer more flexibility in the storage layout.
To improve I/O performance on hard disks, the Active Directory deployment should implement the following hardware configurations:
- Use of RAID controllers
- Increase the number of disks handling log files
- Enable write caching on disk controllers
The subsystem performance of each volume should be reviewed; the idea is to have enough headroom for sudden changes in load so that client requests remain responsive. Data consistency is only guaranteed when all changes are written to the logs.
Non-critical tasks such as system scans and backups should be scheduled for periods when the system is not heavily loaded. Backup procedures and scanning programs with low I/O requirements should be used, because they reduce competition with critical Active Directory services.
To estimate the amount of traffic that must be supported, it’s helpful to distinguish two broad categories of network capacity planning for Active Directory Domain Services.
First, there is replication traffic, which passes back and forth between domain controllers. Then there is client-to-server traffic, also known as intra-site traffic. Client-to-server traffic is simpler to plan for: individual client requests to Active Directory are small, although the volume of data Active Directory Domain Services sends back in response can be large.
A bandwidth of 100Mbps will be adequate in environments serving close to 5,000 users per server. A 1Gbps network card is recommended for environments where users exceed 5,000 per server.
In virtualized environments, the network adapter must be able to support the domain controller’s load as well as the other guests or virtual machines sharing the virtual switch attached to the physical network card.
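The rule of thumb above can be expressed as a simple helper, with the thresholds taken from the text:

```python
def recommended_nic_mbps(users_per_server):
    """100 Mbps is adequate up to roughly 5,000 users per server;
    a 1 Gbps network card is recommended beyond that."""
    return 100 if users_per_server <= 5000 else 1000

print(recommended_nic_mbps(4500))  # 100
print(recommended_nic_mbps(8000))  # 1000
```

In a virtualized environment, remember that this capacity is shared with the other guests on the same virtual switch, so the physical adapter may need to be sized larger.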
Planning storage on the server entails two things: storage size and performance.
For Active Directory, sizing is only a consideration for large environments: even on a 180GB hard drive, SYSVOL and NTDS.DIT fit quite easily, so there is little need to over-allocate disk space here.
However, you should ensure that 110% of the NTDS.DIT size is available for defragmentation. Beyond that, plan for growth over the 3-to-5-year lifespan of the hardware. An estimate of about 300% of the NTDS.DIT database file size will be satisfactory to accommodate growth over time and allow for offline defragmentation.
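Assuming the percentages above, the storage plan for the volume holding NTDS.DIT might be sketched as follows; the 40 GB database in the example is hypothetical:

```python
def ntds_volume_plan_gb(ntds_dit_gb):
    """Returns (defrag_minimum, growth_target): at least 110% of the current
    NTDS.DIT size must be available for offline defragmentation, and roughly
    300% covers 3-to-5 years of growth."""
    defrag_minimum = 1.1 * ntds_dit_gb
    growth_target = 3.0 * ntds_dit_gb
    return defrag_minimum, growth_target

# For a hypothetical 40 GB NTDS.DIT: keep at least ~44 GB available for
# defragmentation, and plan the volume at roughly 120 GB for long-term growth.
print(ntds_volume_plan_gb(40))
```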
Processors with few free cycles increase the time requests wait before execution. Server optimization should ensure that enough headroom is available to handle workload surges and, in the long run, minimize response time to client requests. Reducing the workload on the processors involves selecting the best processors, directing client requests to available processors, and using processor metrics to gauge system performance.
Performance tuning on the Active Directory has two objectives:
- The optimal configuration and performance of the Active Directory to balance the load efficiently
- All work sent to the Active Directory has to be efficient
For the objectives above to be met, three areas need attention:
This means having enough domain controllers to provide redundancy and to handle client requests within a short time. All the server hardware must be able to handle the existing load. Capacity planning involves scaling operations across multiple servers. Adding more resources, such as RAM, to the server is essential in preventing possible failures by ensuring that every aspect of the server works as intended.
Capacity planning typically takes place in three stages:
- Evaluating the existing environment by determining the current challenges.
- Determining the hardware needed according to the findings in the step above.
- Validating the employed system to ensure that it works within the defined specifications.
The domain controllers in the Active Directory are configured to handle loads efficiently. The system administrator is supposed to balance the demands of individual users against the available resources. Add-on products that manage bandwidth and port usage may be implemented to restrict the use of network resources.
Active Directory Client/Application Tuning
The Active Directory has to be set up so that client and application requests use it as efficiently as possible.
Domain Controllers and Site Considerations
Domain controller placement and site design revolve around optimization for referrals and optimization with trusts in mind.
Well-defined site definitions are central to server performance. Clients that do not get the services they request may report poor performance when querying Active Directory. Since client requests can arrive over IPv4 or IPv6, Active Directory should be configured to serve data over IPv6 as well. By default, the operating system prefers IPv6 over IPv4 when both are configured to send and receive data.
Most domain controllers use reverse name resolution when determining the client’s site. When these lookups stall, delays in the thread pool are inevitable, leaving the domain controller unresponsive. Optimizing the name-resolution infrastructure ensures quick responses from the domain controllers.
A related scenario is clients contacting writable domain controllers where read-only domain controllers would suffice. Optimizing this scenario means:
- Changing application code so that writable domain controllers are contacted only when a read-only domain controller would not be sufficient.
- Placing the read/write domain controllers centrally to reduce latency.
Optimization for Referrals
Referrals define how Lightweight Directory Access Protocol (LDAP) requests are processed when a domain controller does not have a copy of the requested partition. A referral response contains the name of the partition, a port number, and the DNS name of a server hosting the partition.
This information is used by the client to send requests to the server hosting the partition. The recommendation is to make sure that the site definitions and domain controller placement in Active Directory reflect the clients’ needs. Placing domain controllers from multiple domains in a single site and relocating applications may also help fine-tune referral handling.
Optimization with Trusts in Mind
In an environment with multiple forests, trusts have to be defined according to the domain hierarchy. The secure channels at the root of the forest may be overloaded by increasing authentication requests between domain controllers. This causes delays in far-flung Active Directory sites, and the overload is most pronounced in inter-forest and down-level trust scenarios. Some recommendations to help reduce forest trust overload:
- Tuning MaxConcurrentApi to help distribute load across a secure channel.
- Creating shortcut trusts as needed, depending on the load.
- All domain controllers within a domain should be able to handle name resolution and communicate with trusted domain controllers.
- All trusts should be based on locality considerations.
- Reduce the chances of running into MaxConcurrentApi bottlenecks by using Kerberos where possible and reducing the use of secure channels.
Name resolution taking place across firewalls takes a toll on the system and will, in turn, impact clients negatively. To overcome this, access to trusted domains needs to be optimized through the following steps:
- WINS and DNS should be able to resolve the names of the trusting domains’ domain controllers by listing those domains. This counters the problem of static records, which tend to cause connectivity problems over time. Forwarders and secondary copies of the zones in the resource environment that clients need must be maintained manually.
- Aligning the site names shared between trusted domains so that they reflect domain controllers in the same location, by ensuring IP addresses and subnets are linked to sites within the forest.
- Ensure the required ports are open and firewalls are configured to accommodate all trusts. Closed or restricted ports lead to repeated failed communication attempts, causing clients to experience timeouts and hung threads or applications.
- Domain controllers forming a trusting domain should be installed in the same physical location.
When no domain is specified, disabling the trust checks that verify domain availability is recommended.