How to Prevent Privilege Creep With FolderSecurityViewer

Ensuring that access privileges align with user roles is a perennial headache for IT departments.

A mismatch between a user’s responsibilities and their access privileges poses serious security risks, including data breaches, exfiltration of sensitive information, and the introduction of malware onto the company’s systems.

In this article, we look at how to prevent privilege creep using a versatile tool called FolderSecurityViewer.

What Is Privilege Creep?

Privilege creep refers to the gradual accumulation of unaudited access rights beyond what a person needs to complete their tasks.

If a user requires rights to access part of the IT infrastructure, and sufficient justification has been given, those rights should be granted.

However, when that same individual no longer needs those rights and nothing is done to remove them, they remain in place. Over time, with the addition of more roles, a person can accumulate unnecessary and insecure rights.

How Privilege Creep Occurs

Put simply, privilege creep takes place when users’ privileges are not cleaned up, especially after they change roles. Promotions, demotions, and transfers between departments are the major causes of access creep.

For example, a manager is hired and granted access rights to a company’s sensitive IT systems. After some months in the position, he is demoted and a new manager is hired to replace him. However, the old manager’s access rights are never revoked, so he retains them.

The same scenario can happen when an employee is transferred to another department or an employee is promoted to a higher position. Also, if an employee is granted temporary access permissions to cover for vacations or prolonged absences, and the rights are not rescinded, privilege creep can ensue.

Dangers of Privilege Creep

Privilege creep typically poses a two-fold security risk to organizations. The first risk arises when an employee who still holds stale privileges is tempted to gain unauthorized access to a sensitive system.

In most organizations, security incidents are caused by dissatisfied employees attempting to cause damage or just ‘make a point’. If such employees hold unnecessary privileges, they can maliciously enter systems unrelated to their immediate work, which makes them difficult to catch.

Second, if the account of an employee with excess privileges is hacked, a criminal can collect more information than they could from an account with properly scoped privileges. A compromised account effectively becomes the property of the attacker, and it is far more lucrative when it carries excess rights.

How to Avoid Privilege Creep

Carry out access reviews

The best way to avoid privilege creep is to carry out frequent, thorough access reviews. The IT department should regularly confirm every employee’s access rights and revoke any unnecessary accumulated privileges.

If a company has invested in a robust identity and access management (IAM) system, undertaking access reviews becomes less taxing and making decisions about employees’ continued access becomes easier. An IAM system ensures that granted access privileges are appropriately authenticated and audited.

Importantly, access reviews should apply the principle of least privilege: the permissions granted to users should be limited to the minimum that lets them carry out their tasks without difficulty. For instance, someone in the HR department should not be given privileges to access the organization’s customer database.
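As a lightweight complement to a full IAM review, you can list the explicitly assigned (non-inherited) NTFS permissions on a share with PowerShell. This is a sketch only; the share path below is a hypothetical example.

```powershell
# List access entries on a folder that were assigned directly
# (not inherited) - the usual breeding ground for privilege creep.
$path = "D:\Shares\Finance"   # hypothetical share path

(Get-Acl -Path $path).Access |
    Where-Object { -not $_.IsInherited } |
    Select-Object IdentityReference, FileSystemRights, AccessControlType
```

Any identity that shows up here but no longer maps to a current responsibility is a candidate for revocation.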

Access reviews should be maintained throughout the year, with a frequent rotation in every department within the company. Every employee, from the CEO to the lowest-ranked, should have their access permissions periodically reviewed, especially when there is a change in roles.

Communication of changes in roles

Whenever an employee changes roles, the change should be promptly communicated to the IT department. Without formal notification, the IT department may not revoke the employee’s access rights, which can have harmful consequences.

The HR department should therefore work with the IT department to avoid such lapses and enhance the security of the company’s infrastructure.

Ensure privileges are aligned

Ensuring the privileges of each employee are aligned with their specific roles and responsibilities makes it much easier to keep this creeping monster at bay.

The company’s employee lifecycle management policy should include a comprehensive, documented process that clearly outlines the required IT-related actions.

Whenever roles change, the IT department should be notified promptly so that privileges can be updated and redundant accounts closed.

How FolderSecurityViewer Can Help

Preventing privilege creep is a delicate and demanding task. Manually sifting through a large number of users’ privileges consumes time, drains resources, and invites mistakes and oversights.

Therefore, investing in an IAM system can greatly reduce the cost of tackling the security vulnerabilities that ensue from privilege creep and from misaligned or abused privileges.

For example, FolderSecurityViewer is a powerful free tool you can use to see all the permissions granted to users. After analyzing the permissions, you can clean them up and reduce the chances of privilege creep occurring.

First, you’ll need to download the tool from the FolderSecurityViewer website.

After launching the tool, select the folder whose permissions you want to review and click the Permissions Report entry in the context menu for the magic to start.


You’ll then be presented with a comprehensive permissions report that includes the names of users, their departments, and their respective permissions.


If you want more detail, click the “Access Control List” button to see the various privilege rights accorded to each user.

You can also export the permissions report in Excel, CSV, or HTML format for further analysis.

After carrying out access reviews with FolderSecurityViewer, you can audit identities and permissions to ensure role-based privileges are applied and excessive privileges are revoked.

Conclusion

FolderSecurityViewer is a wonderful tool that gives you visibility into the permissions and access rights across your IT infrastructure. With that visibility, you can easily prevent privilege creep and avert costly security breaches.

How To Upgrade Windows Server 2019

An in-place upgrade of a Windows Server operating system allows the administrator to upgrade an existing installation of Windows Server to a new version without losing existing settings and features.

The Windows Server 2019 in-place upgrade feature lets you upgrade an existing Long-Term Servicing Channel (LTSC) release, such as Windows Server 2012 R2 or Windows Server 2016, to Windows Server 2019. In-place upgrades let organizations move to newer versions in the shortest time possible, and a direct upgrade is possible even when your existing server installation requires some dependencies before an upgrade.

Customers who do not document server installations, or who lack the infrastructure or code for automated deployment, will find it hard to upgrade to new Windows Server versions. Without the Windows Server 2019 in-place upgrade feature, they would miss out on many improvements in WS2019.

How to Upgrade to Windows Server 2019

To move to Windows Server 2019 with an in-place upgrade, use the Windows Server 2019 media on a DVD, USB drive, or any other appropriate installation method, and start setup.exe.

The existing installation will be discovered, and you can perform the in-place upgrade. The installation should not take more than five minutes, though this depends on the speed of the server and the roles and features it is running.

The following example shows an in-place upgrade from Windows Server 2016 to Windows Server 2019 using an ISO file.

  1. Mount the ISO file and click setup.
  2. Accept the defaults and click Next (Download and install updates is the default option).
  3. On the next screen, specify the product key and click Next – the key can activate unlimited upgrades.
  4. Select the edition with the Desktop Experience option and click Next.
  5. Read the user license terms and click Accept.
  6. Select the option to keep personal files and programs, because we intend to upgrade the server, and click Next.
  7. Windows will take some time collecting updates (depending on the speed of your internet connection); click Next when it is done.
  8. A warning will pop up about upgrading to a new Windows version. Read the message and, if you are okay with it, click Confirm.
  9. The next step asks you to click FlightSigning to enable it. (FlightSigning lets Windows trust Windows Insider Preview builds that are signed with certificates not trusted by default.)
  10. Click Install to initiate the installation process.
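Before and after the upgrade, you can confirm which version is actually installed from an elevated PowerShell prompt. This is a quick sanity check rather than part of the official procedure.

```powershell
# Show the product name, release, and build of the running installation
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' |
    Select-Object ProductName, ReleaseId, CurrentBuild
```

After a successful upgrade, ProductName should report Windows Server 2019.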

Once the upgrade is finished, you will notice some new features:

  • PowerShell replaces CMD.
  • Apps and Features opens the Settings panel rather than the Programs and Features Control Panel applet that Windows Server 2016 uses for uninstalling or changing programs.
  • Windows Defender Security Center holds all the security settings.

Installing the Active Directory Domain System on Windows Server 2019

The experience differs little from installing Active Directory Domain Services on Windows Server 2016.

Run Server Manager:

  1. Click Manage
  2. Add Roles and Features
  3. Follow the wizard and install AD DS
  4. Click the link to promote the server to a Domain Controller
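The role installation in steps 1–3 can also be scripted. A minimal PowerShell sketch:

```powershell
# Install the AD DS role together with its management tools
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
```

The promotion to a Domain Controller (step 4) is a separate operation, covered below.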

Selecting Server Roles

  1. Click the Add Roles and Features Wizard
  2. In the resulting wizard, click the roles you want to add and click Next

Creating a New Forest

  1. Click the Active Directory Domain Services Configuration Wizard
  2. On the deployment configuration page, choose the option to add a new forest
  3. Specify the domain information for the forest
  4. Click next
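The same forest creation can be done in PowerShell with the ADDSDeployment module. A minimal sketch, assuming a hypothetical domain name corp.example.com:

```powershell
# Promote this server to the first DC of a new forest.
# The server reboots automatically when promotion completes.
Import-Module ADDSDeployment

Install-ADDSForest `
    -DomainName "corp.example.com" `
    -DomainNetbiosName "CORP" `
    -InstallDns `
    -SafeModeAdministratorPassword (Read-Host -AsSecureString "DSRM password")
```

The DSRM (Directory Services Restore Mode) password is required here just as it is in the wizard.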

In preview builds, the Forest Functional Level (FFL) and the Domain Functional Level (DFL) are simply labeled with the Windows Server version; use the Active Directory Domain Services Configuration Wizard to promote the server.

The Domain Controller options wizard will take you through the Server promotion wizard.

If you need more configuration options such as the Hyper-V installations, you can use the preview version for Windows Server 2019, which is 8.3

At the moment, most developers are still running tests on servers using the kind of hardware you would find in a professional environment. Testing in virtual machines can also give good results; however, a server operating system should be verified on hardware deployments.

Detect Permission Changes in Active Directory

This article describes how to track permission changes in Active Directory.

Overview

Let’s start with a small example:

Suppose an organization works in three shifts with different server administrators, and permissions on some Active Directory objects change overnight. It is good practice to know which admin changed them, and when.

To get that information, auditing of changes to permissions in Active Directory must be enabled, and in this article we explain how to do it successfully.

Enable auditing of Active Directory service changes

The first step is enabling auditing of Active Directory service changes. This has to be done on the domain controller by editing a Group Policy Object: the Default Domain Controllers Policy.

The operation should be done from a server, or from a workstation with Remote Server Administration Tools (RSAT) installed.

Open Group Policy Management and expand the Active Directory forest, Domains, and then the Domain Controllers organizational unit (OU) to reach the Default Domain Controllers Policy GPO. Right-click it and choose Edit to open the Group Policy Management Editor.

In the Group Policy Management Editor, navigate to Computer Configuration > Policies > Windows Settings > Security Settings > Advanced Audit Policy Configuration > Audit Policies, and click DS Access.

Among the other subcategories, there will be Audit Directory Service Changes.

In the properties of the Audit Directory Service Changes policy, tick both checkboxes (Success and Failure) under the Configure the following audit events option.
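The same subcategory can be set or verified from an elevated command prompt on the domain controller with the auditpol tool; a sketch:

```powershell
# Enable success and failure auditing for Directory Service Changes
auditpol /set /subcategory:"Directory Service Changes" /success:enable /failure:enable

# Verify the effective setting
auditpol /get /subcategory:"Directory Service Changes"
```

Note that a domain-level GPO setting will override a local auditpol change at the next policy refresh, so the GPO method above remains the authoritative one.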

Adding a system access control list (SACL)

The next step is adding a system access control list (SACL) to the domain to audit for modified permissions.

System access control lists (SACLs) establish security policies across the system for actions such as logging or auditing resource access.

A SACL specifies:

  • Which security principals (users, groups, computers) should be audited when accessing the object.
  • Which access events should be audited for those principals.

The SACL is added from Active Directory Users and Computers (ADUC): open the View menu and check Advanced Features (it has to be activated).

Right-click the Active Directory domain (on the left) and select Properties > Security > Advanced, switch to the Auditing tab, and click Add. The Auditing Entry dialog opens.

In the Auditing Entry dialog, click Select a Principal.

Enter “Everyone” as the object name in the Select User, Computer, Service Account, or Group dialog, and click OK.

Set the Auditing Entry type to “Success”, and set the Applies to option to “This object and all descendant objects”.

Under Permissions, the only option selected should be Modify Permissions.
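If you prefer scripting over clicking through ADUC, the same audit entry can be added via the ActiveDirectory module’s AD: drive. This is a sketch, assuming a hypothetical domain DC=contoso,DC=com; adjust the distinguished name to your environment.

```powershell
Import-Module ActiveDirectory

# Audit successful permission changes (WriteDacl) by Everyone,
# on the domain object and all descendant objects.
$everyone = New-Object System.Security.Principal.NTAccount("Everyone")
$rule = New-Object System.DirectoryServices.ActiveDirectoryAuditRule(
    $everyone,
    [System.DirectoryServices.ActiveDirectoryRights]::WriteDacl,
    [System.Security.AccessControl.AuditFlags]::Success,
    [System.DirectoryServices.ActiveDirectorySecurityInheritance]::All)

$acl = Get-Acl "AD:\DC=contoso,DC=com" -Audit
$acl.AddAuditRule($rule)
Set-Acl "AD:\DC=contoso,DC=com" -AclObject $acl
```

WriteDacl corresponds to the “Modify Permissions” entry selected in the GUI procedure above.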

Check

And that is it. The only thing left to do is check for permission changes.

In PowerShell, that can be done with the following command:

Get-EventLog Security -Newest 10 | Where-Object {$_.EventID -eq 5136} | Format-List

The output should be a formatted list of information about the changes: who made changes on which object, plus the new security descriptor.
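On newer systems, Get-WinEvent with a filter hash table is usually faster than Get-EventLog for the same query; an equivalent sketch:

```powershell
# Fetch the 10 most recent directory-service-change events (event ID 5136)
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 5136 } -MaxEvents 10 |
    Format-List TimeCreated, Id, Message
```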

Windows Server – How To Close Open Files

Here I will describe how to close open files and processes on a server.

Every system admin on Microsoft Windows Server systems will, at least once, run into a situation where a file is open on a server and needs to find out which process or user opened it.

These open files can cause trouble, such as upgrade errors or reboot hold-ups.

It can become a huge problem which, if not thought through, can delay updates or cause errors in server maintenance.

More common, but less extreme, issues come from users. When users leave shared files open under their accounts, other users opening the same file can get error messages and be unable to access it.

This article will show you how to deal with such issues: how to find and close open files and processes. The operations apply to Microsoft Windows Server 2008, 2012, and 2016, and to Windows 10 workstations.

There are several working methods for this kind of problem; the first we will describe uses Computer Management.

View open files on a shared folder

When users lock files on the server, this method can come in handy for troubleshooting.

Right-click the Start menu and select Computer Management (or type compmgmt.msc into the Start menu search box).

The procedure is very simple, and in most cases, it works with no problems.

Click Shared Folders, and after that, Open Files.

That opens a screen listing the files detected as open, the user who opened each one, any locks, and the mode in which it is open.

Right-click the file you want and choose the “Close open file” option to close it.

With processes and file details, the procedure is a bit different.

Usage of Windows Task Manager

Task Manager will not close opened shared files, but it can close processes on the system.

It can be opened with Ctrl+Alt+Del (then choose Task Manager), or by right-clicking the taskbar and choosing the Task Manager option.

Under the Processes tab, you can see all active processes and sort them by CPU, memory, and so on.

To terminate a process, simply right-click it and choose the End Process option.

Usage of Resource Monitor

For every system administrator, Resource Monitor is “the tool”: it provides control of and an overview of all system processes, and a lot more.

Resource Monitor can be opened by typing “resource monitor” in a start menu search box.

Another option is to open up the task manager, click the performance tab and then click Open Resource Monitor.

When Resource Monitor opens, it shows several tabs; the one needed for this operation is Disk.

It shows disk activity and processes, open files, PIDs, read and write bytes per second, and so on.

If the system is running many “live” processes, this can be confusing, so Resource Monitor offers a “stop live monitoring” option, which stops the on-screen churn and gives you an overview of all processes up to the moment you stopped.

Resource Monitor gives an overview of open file paths and processes on the system, and with that information it is easy to identify and close files or processes.

Powershell cmdlet approach

Of course, PowerShell can do everything GUI apps can, sometimes even better, and in this case there are several commands that can close all of your system’s open files and processes.

There is more than one PowerShell solution, and this approach is not recommended for administrators without scripting experience.

For this example, we will show some of the possible solutions using PowerShell.

The following examples apply to systems that support Server Message Block (SMB); for systems that do not, we later show how to close files with the NET file command.

When one, or a small number, of exactly known open files should be closed, this cmdlet can be used. As usual, it is run from an elevated PowerShell prompt and applies to a single file. (In all these examples, unsaved data in open files will not be saved.)

Close-SmbOpenFile -FileId <id of file>
Confirm
Are you sure you want to perform this action?
Performing operation 'Close-File' on Target '<id of file>'.
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"): N

A variation of the cmdlet closes open files for a specific session.

Close-SmbOpenFile -SessionId <session id>

This command does not close a single file; it applies to all files opened under the given session id.
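To find the session id in the first place, Get-SmbSession can be listed alongside the user and open-file count; a sketch (the session id passed to Close-SmbOpenFile is illustrative):

```powershell
# List active SMB sessions with their ids and open-file counts
Get-SmbSession | Select-Object SessionId, ClientComputerName, ClientUserName, NumOpens

# Then close every file held by the chosen session (id is illustrative)
Close-SmbOpenFile -SessionId 154618822713 -Force
```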

Another variation of the same cmdlet filters on a file name extension (in this example, DOCX).

The command checks for all open files with the DOCX extension across all system clients and force-closes them. As mentioned before, any unsaved data in those open files will not be saved.

Get-SmbOpenFile | Where-Object -Property ShareRelativePath -Match ".DOCX" | Close-SmbOpenFile -Force

The cmdlet has many more flags and variations, which allow many different filters and approaches to closing open files.

Powershell Script approach

With PowerShell scripts, the process of closing open files and processes can be automated.

$blok = {
    $adsi = [adsi]"WinNT://./LanmanServer"
    $resources = $adsi.psbase.Invoke("resources") | ForEach-Object {
        New-Object PSObject -Property @{
            ID        = $_.GetType().InvokeMember("Name", "GetProperty", $null, $_, $null)
            Path      = $_.GetType().InvokeMember("Path", "GetProperty", $null, $_, $null)
            OpenedBy  = $_.GetType().InvokeMember("User", "GetProperty", $null, $_, $null)
            LockCount = $_.GetType().InvokeMember("LockCount", "GetProperty", $null, $_, $null)
        }
    }
    $resources | Where-Object { $_.Path -like '*smbfile*' } | Format-Table -AutoSize
    $resources | Where-Object { $_.Path -like '*smbfile*' } | ForEach-Object { net files $_.ID /close }
}

Invoke-Command -ComputerName pc1 -ScriptBlock $blok

Our example script closes files whose path matches a pattern ('smbfile' here), which should be adjusted in the script.

This way of closing open files is not recommended for administrators without PowerShell scripting experience; if you are not 100% sure you are up to the task, do not use it.

Close A File On Remote Computer Using Command Line

There are two other ways to close open files: either Net File or PsFile (a Microsoft utility). The first can be run remotely by invoking the NET file command through PsExec.exe, since the NET command does not support any remote APIs.

The Net file command can list all open shared files and the number of locks per file. It can be used to close files and remove locks (similar to the SMB example above), typically when a user leaves a file open or locked.

It can be done with the following syntax:

C:\>net file [id [/close]]

In this syntax, the ID parameter is the identification number of the file we want to close, and the /close parameter is the action we want to apply to that ID (file).

Best practice with the NET file command is to first run Net File with no arguments, which lists all open files numbered 0, 1, and so on.

Once the files are listed, the command to close an open file is, for example:

C:\>net file 1 /close

This command closes the file listed with the number 1.

PsFile usage

PsFile is a third-party application, but I will not group it with the other third-party tools, since any good system administrator should treat it as a standard part of the toolkit.

Its commands are similar to the net file commands, with the difference that it does not truncate long file names and can show files opened on remote systems locally.

It uses the NET API, documented in platform tools, and becomes available by downloading the PsTools package.

psfile [\\RemoteComputer [-u Username [-p Password]]] [[Id | path] [-c]]

PsFile “calls” the remote computer with a valid username and password and, with the path supplied, closes the open files on the remote system.

For processes open on a remote system there is a similar command, PsKill, which “kills” processes on the same principle.

Release a File Lock

In some situations, a problem with closing files can be handled by releasing a file lock. There are many examples of users locking their files and leaving them open (for some reason, the most common type of locked file is an Excel file).

All other users then get an error message of the type “Excel is locked for editing by another user”, with no option to close or unlock it.

As an administrator, you should have elevated rights, and with the right procedure this can be fixed easily.

Press the Windows key and R to open the Run dialog.

In the Run dialog, type mmc (Microsoft Management Console).

Go to File > Add/Remove Snap-in and add the “Shared Folders” snap-in.

If you are already on the computer that has the issue, choose the Local Computer option; if not, choose the Another Computer option and enter the computer name.

Expand Shared Folders, then select the Open Files option.

Choose the locked/open file, right-click it, and select Close Open File.

The described procedure unlocks and closes the open file (as in the first example of this article), and users will be able to access it again.

Usage of Third-party apps

There are a lot of third-party apps on the market for handling open server files.

We will describe a few of the most used ones for this purpose.

Process Explorer – a freeware utility from Windows Sysinternals, initially created by Winternals and later acquired by Microsoft. It can be seen as Windows Task Manager with advanced features. One of its many features is closing open files, and it is highly recommended for server administrators and IT professionals.

Sysinternals can be accessed at the following link:

https://docs.microsoft.com/en-us/sysinternals/

OpenedFilesView – practically a single-executable application that displays the list of all open files on your system. For each open file, additional information is shown: handle value, read/write/delete access, file position, the process that opened the file, and more.

To close a file or kill a process, right-click any file and select the desired option from the context menu.

It can be downloaded at the following link:

https://www.nirsoft.net/utils/opened_files_view.html

LockHunter – primarily a tool for deleting blocked files (to the Recycle Bin). It can serve as a workaround for open files, since it can list and unlock locked files on your system. It is very powerful and helpful in situations where the system tools fail.

It can be downloaded at the following link: http://lockhunter.com/

Long Path Tool – a shareware program from KrojamSoft that, as its name suggests, helps you fix a dozen issues you’ll face when a file’s path is too long. Those issues include not being able to copy, cut, or delete the files in question. With its bundle of features it may be overkill for this purpose, but it is definitely a quality app for sysadmins.

It can be downloaded at the following link: https://longpathtool.com/

How To Generate all Domain Controllers in Active Directory

Here we describe how to generate all Domain Controllers in the Active Directory Sites and Services tool.

Active Directory Sites and Services is an administrative tool used to manage sites and related components on Microsoft server systems.

It contains a list of all Domain Controllers connected to the system, however many there are.

In some situations, admins will notice more than one DC listed under NTDS Settings.

What are these other DCs, and how can they be generated automatically?

KCC

Those connections come from the KCC (Knowledge Consistency Checker), which nominates a bridgehead server per site to handle replication between specific sites.

That bridgehead server is then responsible for replicating any changes to all remaining DCs in its site.

In simple words, the KCC takes care of replication by auto-generating the connections through which each DC communicates with the other DCs.

How to create automatically generated Domain Controllers

In situations like server moves or the addition of a new organizational Domain Controller, you can find that Active Directory is not creating ‘Automatically Generated’ connections with the root Domain Controller. The Domain Controller can be seen, but it is not on the “real” Domain Controller list.

There is more than one solution to this problem; we present the most used and tested ones.

Manually forcing auto generation

The first method, although it falls into the “workaround” category, is to manually force auto-generation. Right-click the NTDS Settings option, choose All Tasks, and then Check Replication Topology. That should trigger auto-generation of all Domain Controller connections, and your Domain Controllers should become visible in the list.

Repadmin

Repadmin is a command-line tool used to diagnose and repair replication problems.

It is run from an elevated command prompt.

By entering the command

repadmin /showrepl *

you get the replication state of all DCs in the system.

With the command

repadmin /replicate <destination DC> <source DC> <naming context>

forced replication is started, and, in the context of this article, forcing replication in this way can generate all Domain Controllers in the Sites and Services list.

Conclusion

It is usually not necessary to create manual connections when the KCC is generating automatic connections. If conditions change, the KCC automatically reconfigures the connections. Adding manual connections when the KCC is employed potentially increases replication traffic and can conflict with the optimal settings chosen by the KCC.

If a connection is not working due to a failed domain controller, the KCC automatically builds temporary connections to other replication sites (if the damage is not too big) to ensure that replication occurs. If all the domain controllers in a site are unavailable, the KCC automatically creates replication connections between domain controllers from another site.

Manually modifying this is not recommended unless you have a very specific use case. As long as these records are auto-generated, they can survive a Domain Controller failure, because the KCC/ISTG will automatically create a new connection. However, if you manually create a connection or specify a bridgehead server and that server goes offline, the KCC will not create a new connection, and replication between the affected sites will stall.

How to Set Accurate Time for Windows Server 2016

Accurate time for Windows Server 2016

Windows Server 2016 can maintain an accuracy of 1 ms in sync with UTC time, thanks to new algorithms and periodic time checks against a valid UTC server.

The Windows Time service is a component that uses a plug-in model for client and server synchronization. Windows has two built-in client time providers, which sit alongside third-party plugins. One provider uses the Network Time Protocol (NTP) or the Microsoft Network Time Protocol (MS-NTP) to synchronize to the nearest server; the other is used in Hyper-V environments and synchronizes virtual machines (VMs) to the Hyper-V host.

When both are available, Windows picks the better provider.

The article will discuss the three main elements that relate to accurate system time.

  • Measurements
  • Improvements
  • Best practices

Domain Hierarchy

Computers that are members of a domain use the NTP protocol, authenticating to a time reference for security and authenticity. Domain computers synchronize with a master clock determined by the domain hierarchy and a scoring system. A typical domain has hierarchical stratum layers, where each Domain Controller (DC) refers to a parent DC with accurate time. The hierarchy revolves around the Primary Domain Controller (PDC) of the forest root domain, or a DC with the Good Time Server for the Domain (GTIMESERV) flag.

Standalone computers use time.windows.com by default; the Domain Name Service resolves this name to a Microsoft-owned time resource. As with any remote time reference, network outages prevent synchronization, and asymmetric network paths reduce time accuracy.

Hyper-V guests have at least two Windows time providers, so it is possible to observe different behaviors in both domain-joined and standalone configurations.

NOTE: Stratum is a concept in both the NTP and Hyper-V providers. Each source carries a value indicating its clock’s location in the hierarchy: stratum 0 is the reference hardware clock, and stratum 1 is a server attached directly to it. Stratum 2 servers communicate with stratum 1 servers, stratum 3 with stratum 2, and so on. Lower stratum values indicate clocks closer to the source, which are usually more accurate, though errors are still possible. The Windows Time service (W32time, managed with the w32tm command-line tool) accepts time only from sources at stratum 15 and below.
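The current source, stratum, and offset of a machine can be inspected with the w32tm tool mentioned above; a sketch:

```powershell
# Show the clock source, stratum, and last sync details
w32tm /query /status

# Show just the configured time source
w32tm /query /source

# Compare the local clock against a reference server over five samples
w32tm /stripchart /computer:time.windows.com /samples:5 /dataonly
```

The stripchart output shows the measured offset per sample, which is a quick way to judge whether a machine is drifting.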

Factors Critical for Accurate Time

Solid Source Clock

The original clock source needs to be stable and accurate at all times. This implies that, when installing a Global Positioning System (GPS) device serving as stratum 1, you take #3 below (symmetrical NTP communication) into consideration. If the source clock is stable, the entire configuration keeps consistent time.

Securing the original time source also means that a malicious actor cannot expose the domain to time-based threats.

Stable Client Clock

A stable client clock ensures that the natural drift of its oscillator is containable. NTP uses multiple samples to condition the local clock of a standalone machine and keep it on course. If the oscillation of a client computer's clock is unstable, there will be fluctuations between adjustments, causing the clock to malfunction. Some machines may require hardware updates for proper functioning.

Symmetrical NTP Communication

The NTP connection should be symmetrical at all times, because the calculation NTP uses to adjust the time assumes a symmetrical path. If the NTP request takes longer on the return path than expected, time accuracy is affected. The path may change due to changes in topology, or to packets being routed through different interfaces.

Battery-powered devices may use different strategies, in some cases requiring the clock to update every second. Such a setting consumes more power and can interfere with power-saving modes. Some battery-powered devices have power settings that can interfere with the running of other applications and hence with W32time's functioning.

Given the various environmental factors that interfere with clock accuracy, mobile devices are never 100% accurate. Therefore, battery-operated devices should not be given high-accuracy time settings.

Why Is Time Important?

A typical case in a Windows environment is Kerberos, which requires clocks on clients and servers to agree within 5 minutes. Other instances that require accurate time include:

  • Government regulations; for example, the United States requires 50ms accuracy for FINRA, and the EU requires 1ms for ESMA (MiFID II)
  • Cryptography
  • Distributed systems such as databases
  • Blockchain frameworks such as Bitcoin
  • Distributed logs and threat analysis
  • AD replication
  • The Payment Card Industry (PCI)

Time Improvements for Windows Server 2016

Windows Time Service and NTP

The algorithms used in Windows Server 2016 greatly improve how the local clock is conditioned when synchronizing with UTC. NTP calculates the time offset from four timestamps: the client's request and response times, and the server's receive and transmit times.
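The offset computation from these four timestamps can be sketched in a few lines. This is a minimal illustration of the standard NTP formulas, not W32time's actual implementation, and the sample timestamps are invented:

```python
def ntp_offset(t1, t2, t3, t4):
    """Estimated offset of the server clock relative to the client,
    from the four NTP timestamps: t1 = client transmit, t2 = server
    receive, t3 = server transmit, t4 = client receive."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

def ntp_roundtrip_delay(t1, t2, t3, t4):
    """Network round-trip time, excluding server processing time."""
    return (t4 - t1) - (t3 - t2)

# Invented sample: the server clock is 50 ms ahead, the network path
# is symmetrical with a 100 ms round trip, and the server spends 1 ms
# processing the request.
t1, t2, t3, t4 = 10.000, 10.100, 10.101, 10.101
offset = ntp_offset(t1, t2, t3, t4)          # ~ +0.050 s
delay = ntp_roundtrip_delay(t1, t2, t3, t4)  # ~  0.100 s
```

Note that the offset formula is only exact when the outbound and return paths take equal time, which is why path symmetry matters so much for accuracy.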

Modern network environments suffer congestion and related disturbances that interfere with the free flow of communication; Windows Server 2016 uses improved algorithms to cancel out these disturbances. In addition, the time reference used in Windows now goes through an improved Application Programming Interface with better time resolution, giving an accuracy of 1ms.

Hyper-V

Windows Server 2016 also made improvements to Hyper-V, including accurate time at VM start and VM restore. The change gives an accuracy within 10µs of the host, with a root mean square (RMS) of 50µs on a machine carrying a 75% load.

Moreover, the host now reports its stratum level to guests more transparently. Previously, hosts would present themselves at a fixed stratum 2 regardless of their accuracy; with the changes in Windows Server 2016, the host reports at stratum 1, which gives better timing for the virtual machines.

Domains created on Windows Server 2016 will find time to be more accurate because time no longer defaults to the host; this is also the reason behind manually disabling the Hyper-V time provider on domain-joined guests running Windows 2012 R2 and below.

Monitoring

Windows Time service performance counters are now part of Windows Server 2016; they allow monitoring, troubleshooting, and baselining of time accuracy. The counters include:

Computed Time Offset

This counter indicates the absolute time offset between the system clock and the chosen time source, in microseconds. A new value is computed whenever a valid sample is available. Clock accuracy can be traced using this performance counter with a polling interval of 256 seconds or less.

Clock Frequency Adjustment

This counter indicates the adjustment made to the local clock by W32time, measured in parts per billion. It is useful for visualizing the actions taken by W32time.

NTP Roundtrip Delay

NTP Roundtrip Delay is the time elapsed between the transmission of a request to the NTP server and the receipt of a valid response. This counter helps characterize the delays experienced by the NTP client. A large or varying roundtrip introduces noise into NTP's time computation, thereby affecting time accuracy.

NTP Client Source Count

This counter holds the number of unique IP addresses of time servers that are responding to this client's requests. The number may be larger or smaller than the number of active peers.

NTP Server Incoming Requests

The number of requests received by the NTP server, expressed as requests per second.

NTP Server Outgoing Responses

The number of requests answered by the NTP server, expressed as responses per second.

The first three counters target scenarios for troubleshooting accuracy issues. The last three cover NTP server scenarios, helping determine the load and establish a baseline for current performance.

Configuration Updates per Environment

The following describes how the default configuration differs between Windows Server 2016 and earlier versions. Windows Server 2016 and Windows 10 build 14393 now have their own unique settings.

| Role | Setting | Windows Server 2016 | Windows 10 | Windows Server 2012, 2008, and earlier Windows 10 |
|---|---|---|---|---|
| Standalone or Nano Server | Time server | time.windows.com | N/A | time.windows.com |
| | Polling frequency | 64-1024 seconds | N/A | Once a week |
| | Clock update frequency | Once a second | N/A | Once an hour |
| Standalone Client | Time server | N/A | time.windows.com | time.windows.com |
| | Polling frequency | N/A | Once a day | Once a week |
| | Clock update frequency | N/A | Once a day | Once a week |
| Domain Controller | Time server | PDC/GTIMESERV | N/A | PDC/GTIMESERV |
| | Polling frequency | 64-1024 seconds | N/A | 1024-32768 seconds |
| | Clock update frequency | Once a day | N/A | Once a week |
| Domain Member Server | Time server | DC | N/A | DC |
| | Polling frequency | 64-1024 seconds | N/A | 1024-32768 seconds |
| | Clock update frequency | Once a second | N/A | Once every 5 minutes |
| Domain Member Client | Time server | N/A | DC | DC |
| | Polling frequency | N/A | 1024-32768 seconds | 1024-32768 seconds |
| | Clock update frequency | N/A | Once every 5 minutes | Once every 5 minutes |
| Hyper-V Guest | Time server | Chooses the best option based on host stratum and time server | Chooses the best option based on host stratum and time server | Defaults to host |
| | Polling frequency | Based on the role above | Based on the role above | Based on the role above |
| | Clock update frequency | Based on the role above | Based on the role above | Based on the role above |
Impact of Increased Polling and Clock Update Frequency

To get the most accurate time, the default polling and clock update frequencies are increased, allowing adjustments to be made more often. This produces more UDP/NTP traffic, though not enough to meaningfully affect broadband links.

Battery-powered devices do not keep time while turned off, which can lead to frequent time adjustments when they are turned back on. Increasing the polling frequency on such devices causes instability and uses more power.

Domain controllers should see little interference even with the combined effect of more frequent updates from NTP clients and the AD domain; NTP requires few resources compared to other protocols.

You may reach the limits of domain functionality before the increased settings in Windows Server 2016 trigger any warning. Where AD does not use secure NTP, time does not synchronize as accurately, and clients end up two strata further away from the PDC.

As a rule of thumb, reserve capacity for at least 100 NTP requests per second per core. For example, a domain served by four DCs with 4 cores each should be able to handle 1,600 NTP requests per second. This recommendation depends heavily on processor speeds and loads, so administrators should conduct baseline tests on site.
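That sizing guideline is simple multiplication, sketched below. The 100 requests/second/core figure is the rule of thumb from the text, and real capacity depends on processor speed and load:

```python
def ntp_capacity(dc_count, cores_per_dc, requests_per_core=100):
    """Rough NTP serving capacity for a set of domain controllers,
    using the ~100 NTP requests/second/core rule of thumb."""
    return dc_count * cores_per_dc * requests_per_core

# Four DCs with 4 cores each: 1,600 NTP requests per second.
capacity = ntp_capacity(4, 4)
```

Treat the result as a starting point for baseline testing, not a guarantee.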

If your DCs run at a sizeable CPU load (more than 40%), the system is likely to generate noise when NTP responds to requests, which may impair domain time accuracy.

Time Accuracy Measurements

Methodology

Different tools are used to gauge the time accuracy of Windows Server 2016. The techniques are applicable for taking measurements, for tuning the environment, and for determining whether the outcomes meet the set requirements.

The domain's source clock was two high-precision NTP servers with GPS hardware. Some of these tests need a highly accurate and reliable clock source as a reference point, in addition to your domain's clock source.

We use four different methods to measure accuracy on physical and virtual machines:

  • Read the local clock, conditioned by w32tm, and reference it against a test machine with separate GPS hardware.
  • Measure pings from the NTP server to its clients using the "stripchart" option of the w32tm utility.
  • Measure pings from the client to the NTP server using the "stripchart" option of the w32tm utility.
  • Measure Hyper-V output from the host to the guests using the Time Stamp Counter (TSC). After taking the difference of host time and client time in the VM, use the TSC to estimate the host time from the guest. We also use the TSC to factor out delays and API latency.

Topology

For comparison, it is sensible to test both Windows Server 2012 R2 and Windows Server 2016 on the same topology. Each topology has two physical Hyper-V hosts that point to a Windows Server 2016 machine with GPS hardware installed. Each host runs at least three domain-joined Windows guests, arranged as shown in the diagrams below.

Windows Server 2016 Forest Time Hierarchy between two 2016 Hyper-V Hosts

TOPOLOGY 1. Image Source

The lines on the diagram indicate time hierarchy and the transport or protocol used

Windows Server 2012R2 Forest Time Hierarchy hosted between two 2016 Hyper-V Hosts.

TOPOLOGY 2. Image Source

Graphical Results Overview

The following graphs represent the time accuracy between two members of a domain, with results for both Windows Server 2012 R2 and 2016. Accuracy was measured from the guest machine against the host. The data shown covers both the best- and worst-case scenarios.

TOPOLOGY 3. Image Source

Performance of the Root Domain PDC

The root PDC synchronizes with the Hyper-V host using the VMIC provider present in Windows Server 2016, backed by GPS hardware, and shows stability and accuracy. This is critical because 1ms accuracy is needed.

Performance of the Child Domain Client

The child domain client is attached to a child domain PDC, which communicates with the root PDC. Its timing should also be within 1ms accuracy.

Long Distance Test

A long-distance test could involve comparing a single virtual network hop to six physical network hops on Windows Server 2016. Increasing network hops means increased latency and wider time differences, and network asymmetry can push results past the 1ms accuracy target. Remember that every network is different, and measurements depend on varying environmental factors.

Best Practices for Accurate Timekeeping

Solid Source Clock

A machine's time is only as good as its source clock. To achieve 1ms accuracy, GPS hardware or a time appliance should be installed as the master source clock. The default time.windows.com may not provide an accurate, stable, or local time source. Also, the farther you move from the source clock, the more time you are bound to lose.

Hardware GPS Options

Various hardware solutions offering accurate time rely on GPS antennas; radio and dial-up modem solutions are also available. These hardware options connect through PCIe or USB ports.

Different options give varying time accuracy, and the final accuracy depends on the environment: GPS availability, network stability, PC hardware, and network load can all interfere.

Domain and Time Synchronization

Computers in a domain use the domain hierarchy to determine the machine to be used as a source for time synchronization. Every domain member will look for a machine to sync with and save it as its source. Every domain member will follow a different route that leads to its source time. The PDC in the Forest Root should be the default source clock for all machines in the domain.

Here is a list of how roles in the domain find their original time source.

Domain Controller with PDC role

This is the machine with authority over the time source for the domain. It must issue accurate time and synchronizes with a DC in the parent domain, except in cases where the GTIMESERV role is active.

Other Domain Controller

This will take the role of a time source for clients and member servers in the domain. A DC synchronizes with the PDC of its domain or any DC in the parent domain.

Clients or Member Servers

This type of machine synchronizes with any DC or the PDC within its own domain, or with any DC or the PDC in the parent domain.

When searching for a source clock, a scoring system is used to identify the best time source. Scoring takes into account the reliability of the time source and its relative location, and it happens only once, when the time service starts. To fine-tune time synchronization, add good time servers in specific locations for more redundancy.

Mixed Operating System Environments (Windows 2012 R2 and Windows 2008 R2)

A pure Windows Server 2016 domain environment gives the best time accuracy. Even deploying a Windows Server 2016 Hyper-V host in a Windows 2012 domain benefits the guests, because of the improvements made in Server 2016.

A Windows Server 2016 PDC delivers accurate time thanks to the improvements to its algorithms, and acts as a credible source. You may not have the option of replacing the PDC, but you can add a Windows Server 2016 DC with the GTIMESERV flag as a way of upgrading time accuracy for the domain.

A Windows Server 2016 DC delivers better time to down-level clients, and it is always good to use it as a source of NTP time.

As stated above, clock polling and refresh frequencies are modified in Windows Server 2016. You can change these settings manually on down-level DCs to match, or apply the changes using Group Policy.

Versions prior to Windows Server 2016 have trouble keeping accurate time: their system clocks drift soon after any adjustment. Obtaining samples from an accurate NTP source and conditioning the clock with them results in smaller adjustments to the system clock, and thus better timekeeping, on down-level OS versions.

In some cases involving guest domain controllers, samples from the Hyper-V TimeSync provider can disrupt domain time synchronization. For Server 2016 guests running on Server 2016 Hyper-V hosts, however, this should no longer be an issue.

You can use the following registry key to stop the Hyper-V TimeSync provider from giving samples to W32time:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider

“Enabled”=dword:00000000

Allow Linux to Use Hyper-V Host Time

For Linux guest machines running on Hyper-V, clients normally use the NTP daemon for time synchronization against NTP servers. If the Linux distribution supports the version 4 TimeSync protocol and the guest has the TimeSync integration service enabled, it will synchronize against the host time instead. Enabling both methods leads to inconsistency.

Administrators are advised to synchronize against the host time by disabling the NTP time synchronization by using any of the following:

  • Disabling NTP servers in the ntp.conf file
  • Disabling the NTP Daemon

In this configuration, the time server parameter is the host, and it polls at a frequency of 5 seconds, the same as the clock update frequency. To synchronize exclusively over NTP instead, disable the TimeSync integration service in the guest machine.

NOTE: Accurate timing support for Linux guests requires a feature found only in the latest upstream Linux kernels. As of now, it is not available across most Linux distributions.

Specify Local Reliable Time Service Using the GTIMESERV

The GTIMESERV flag allows you to designate one or more domain controllers as accurate source clocks. For example, you can equip a specific domain controller with GPS hardware and flag it as GTIMESERV to ensure that your domain references a clock based on GPS.

TIMESERV is a related Domain Services flag that indicates whether the machine is currently authoritative; it can change if the DC loses its connection. When the connection is lost, the DC returns an "Unknown Stratum" error when queried via NTP. After several failed attempts, the DC logs Time-Service Event 36 in the System event log.

To configure a DC as your GTIMESERV, use the following command:

w32tm /config /manualpeerlist:"master_clock1,0x8 master_clock2,0x8" /syncfromflags:manual /reliable:yes /update

If the DC has a GPS hardware, use the following steps to disable the NTP client and enable the NTP server.

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders\NtpClient /v Enabled /t REG_DWORD /d 0 /f

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders\NtpServer /v Enabled /t REG_DWORD /d 1 /f

Restart the Windows Time service:

net stop w32time && net start w32time

Finally, tell network hosts that this machine has a reliable time source, using the command:

w32tm /config /reliable:yes /update

To confirm the changes, run the following commands and check for the expected values shown below:

w32tm /query /configuration

| Value | Expected Setting |
|---|---|
| AnnounceFlags | 5 (Local) |
| NtpServer | (Local) |
| DllName | C:\WINDOWS\SYSTEM32\w32time.DLL (Local) |
| Enabled | 1 (Local) |
| NtpClient | (Local) |

w32tm /query /status /verbose

| Value | Expected Setting |
|---|---|
| Stratum | 1 (primary reference - syncd by radio clock) |
| ReferenceId | 0x4C4F434C (source name: "LOCL") |
| Source | Local CMOS Clock |
| Phase Offset | 0.0000000s |
| Server Role | 576 (Reliable Time Service) |

Windows Server 2016 on 3rd party Virtual Platforms

When Windows is virtualized, responsibility for time defaults to the hypervisor. However, domain members need to synchronize with a Domain Controller for AD to work effectively, so it is best to disable time virtualization between guests and third-party virtual platforms.

Discover the Hierarchy

The chain of time hierarchy to the master clock is dynamic and non-negotiated. You must query the status of a specific machine to get its time source. This analysis helps in troubleshooting issues relating to synchronizations.

If you are ready to troubleshoot, find the time source by using the w32tm command.

w32tm /query /status

The output includes the source. Finding the source is the first step in tracing the time hierarchy; the next step is to use that source entry with the /stripchart parameter to find its own time source in turn:

w32tm /stripchart /computer:MySourceEntry /packetinfo /samples:1

The command below lists the domain controllers found in a specific domain, including machines with manual configurations, and relays results you can use to determine each machine's time partner:

w32tm /monitor /domain:my_domain

You can use the list to trace results through the domain, learning the hierarchy and the time offset at each step. By noting the point where the time offset jumps, you can find the cause of the incorrect time.
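These queries lend themselves to scripting. As a sketch, a few lines can pull the Source line out of `w32tm /query /status` output so each hop can be queried in turn. The sample output and hostname below are hypothetical and abbreviated; the exact field labels can vary by Windows version and locale:

```python
import re

def time_source(status_output):
    """Extract the value of the 'Source:' line from the output of
    `w32tm /query /status`."""
    m = re.search(r"^Source:\s*(.+?)\s*$", status_output, re.MULTILINE)
    return m.group(1) if m else None

# Hypothetical, abbreviated output of `w32tm /query /status`:
sample = """Stratum: 2 (secondary reference - syncd by (S)NTP)
Source: DC01.corp.example.com
Poll Interval: 6 (64s)"""

# time_source(sample) -> "DC01.corp.example.com"
```

Feeding each extracted source back into another `/query /status` (or `/stripchart`) call walks the hierarchy toward the master clock.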

Using Group Policy

Group Policy can be used to achieve strict accuracy by ensuring clients are assigned specific NTP servers, and to control how down-level OS clients behave when virtualized.

Here are some possible scenarios and the relevant Group Policy settings:

Virtualized Domains

To gain control over virtualized domain controllers running Windows 2012 R2, disable the registry entry below on the virtual domain controllers. You may not want to disable the entry on the PDC, because in most cases the Hyper-V host delivers a stable time source. After changing the registry entry, restart the w32time service.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider]

“Enabled”=dword:00000000

Accuracy Sensitive Loads

For any workload that is sensitive to time accuracy, ensure the machines in the group are set to use specific NTP servers, along with related time settings such as update frequency and polling. This is normally handled by the domain, but for more control you can target specific machines to point directly at the master clock:

| Group Policy Setting | New Value |
|---|---|
| NtpServer | ClockMasterName,0x8 |
| MinPollInterval | 6 (64 seconds) |
| MaxPollInterval | 6 (64 seconds) |
| UpdateInterval | 100 (once per second) |
| EventLogFlags | 3 (all special time logging) |

NOTE: NtpServer and EventLogFlags are located under System\Windows Time Service\Time Providers, via Configure Windows NTP Client settings. The other three are under System\Windows Time Service, via Global Configuration Settings.

Remote Accuracy Sensitive Loads

For systems running in branch domains, such as retail and Payment Card Industry (PCI) environments, Windows uses current site data and the DC Locator to find the local DC, unless a manual NTP time source is configured.

Such an environment needs 1-second accuracy, with the option of allowing the w32time service to move the clock backwards. If this matches your requirements, use the table below to create a policy:

| Group Policy Setting | New Value |
|---|---|
| MaxAllowedPhaseOffset | 1 (if off by more than one second, set the clock directly to the correct time) |

MaxAllowedPhaseOffset is found under System\Windows Time Service, via Global Configuration Settings.

Azure and Windows IaaS Consideration

Azure Virtual Machine: Active Directory Domain Services

If your Azure VM runs Active Directory Domain Services as part of an existing domain forest, TimeSync (VMIC) should not be running. Disabling VMIC allows all DCs in the forest, physical and virtual, to use a single time sync hierarchy.

Azure Virtual Machine: Domain-Joined Machine

If the VM is joined to an existing Active Directory forest, virtual or physical, it is best to disable TimeSync for the guest and ensure W32Time is set to synchronize with its Domain Controller.

Azure Virtual Machine: Standalone Workgroup Machine

If your Azure VM is not part of a domain and is not a Domain Controller, you can keep the default time configuration and let the VM synchronize with the host.

Windows Application that Requires Accurate Time

Time Stamp API

Programs or applications that need UTC-aligned, high-accuracy time should use the GetSystemTimePreciseAsFileTime API, which returns the system time as conditioned by the Windows Time service.

UDP Performance

For applications that use UDP for network transactions, minimize latency. Registry options are available for configuring different ports; note that any registry changes should be restricted to system administrators.

Windows Server 2012 and Windows Server 2008 need a Hotfix to avoid datagram loss.

Update Network Drivers

Some network cards have updates that help improve performance and buffering of UDP packets.

Logging for System Auditors

Time-tracing regulations may require you to archive w32tm logs, performance monitor data, and event logs for compliance. Later, these records can be used to confirm your compliance at a specific time in the past.

The following factors can be used to indicate time accuracy:

  • Clock accuracy, using the Computed Time Offset counter
  • Clock source, looking for "peer response from" in the w32tm event logs
  • Clock condition status, using the w32tm logs to validate occurrences of "ClockDispln Discipline: *SKEW*TIME*"

Event Logging

The event log stores the information that gives you the complete story. By filtering for Time-Service events, you can discover any influences that changed the time. Note that Group Policy can affect which events are logged.

W32time Debug Logging

Use the w32tm command-line utility to enable debug logging. The logs record clock updates and show the source clock. Restart the service to enable the new logging.

Performance Monitor

The Windows Time service performance counters in Windows Server 2016 can collect the logging that auditors need. You can log the data locally or remotely by recording the machines' time offset and roundtrip delays. Like any other counters, they work with remote monitoring and alerts in System Center Operations Manager; for example, you can set an alert to fire whenever accuracy changes.

Windows Traceability Example

Using sample log files from the w32tm utility, you can validate two pieces of information: first, that the Windows Time service was conditioning the system clock at a given time.

151802 20:18:32.9821765s – ClockDispln Discipline: *SKEW*TIME* – PhCRR:223 CR:156250 UI:100 phcT:65 KPhO:14307

151802 20:18:33.9898460s – ClockDispln Discipline: *SKEW*TIME* – PhCRR:1 CR:156250 UI:100 phcT:64 KPhO:41

151802 20:18:44.1090410s – ClockDispln Discipline: *SKEW*TIME* – PhCRR:1 CR:156250 UI:100 phcT:65 KPhO:38

Any message starting with "ClockDispln Discipline" is proof that w32time is interacting with the system clock.

The next step is to find the last report before the time change in question; it names the source computer acting as the current reference clock. In the example below, the reference clock is the IPv4 address 10.197.216.105. A reference could also be a computer name or the VMIC provider.

151802 20:18:54.6531515s – Response from peer 10.197.216.105,0x8 (ntp.m|0x8|0.0.0.0:123->10.197.216.105:123), ofs: +00.0012218s

Now that the first machine is validated, investigate the log file on its reference time source using the same steps. Continue until you reach a physical clock, such as GPS hardware, or a known time source such as the National Institute of Standards and Technology (NIST). If the reference clock is GPS hardware, the manufacturer's logs may also be required.
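A small parser can automate extracting the reference clock from these debug-log entries. This sketch is built around the sample line format shown above; the regular expression assumes the "Response from peer … ofs:" shape and is not an official log schema:

```python
import re

# Matches lines like:
#   ... Response from peer 10.197.216.105,0x8 (...), ofs: +00.0012218s
PEER_RE = re.compile(r"Response from peer ([\d.]+).*ofs:\s*([+-][\d.]+)s")

def last_reference(log_lines):
    """Return (peer_ip, offset_seconds) from the most recent
    'Response from peer' entry in a w32tm debug log."""
    result = None
    for line in log_lines:
        m = PEER_RE.search(line)
        if m:
            result = (m.group(1), float(m.group(2)))
    return result

line = ("151802 20:18:54.6531515s - Response from peer 10.197.216.105,0x8 "
        "(ntp.m|0x8|0.0.0.0:123->10.197.216.105:123), ofs: +00.0012218s")
peer = last_reference([line])  # ("10.197.216.105", 0.0012218)
```

Running this over an archived log gives you the peer to investigate next, along with the offset observed at that step.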

Network Considerations

The NTP algorithm depends on network symmetry, which makes it difficult to predict what accuracy is achievable in a given environment. Use Performance Monitor and the new Windows Time counters in Windows Server 2016 to create baselines.

The Precision Time Protocol (PTP) and the Network Time Protocol (NTP) are the two protocols you can use for accurate time. For clients that are not part of a domain, Windows uses Simple NTP by default. Clients within a Windows domain use secure NTP, also referred to as MS-SNTP, which leverages domain communication and so has an advantage over authenticated NTP.

Reliable Hardware Clock (RTC)

Windows will not step the time unless conditions are far outside the norm. Instead, w32tm adjusts the clock frequency at regular intervals based on the clock update frequency setting, which is 1 second on Windows Server 2016: it raises the frequency if the clock is behind, and lowers it if the clock is ahead.

This is why acceptable baseline results matter: if the Computed Time Offset counter is not stable, you may need to verify the status of the machine's firmware.

Troubleshooting Time Accuracy and NTP

The Discover the Hierarchy section above explained how to find a machine's time source. When troubleshooting, examine the time offsets along that hierarchy to identify the point where divergence from the NTP source takes place. Once you have traced the hierarchy, focus on the divergent system and gather more information to determine what is causing the inconsistency.

Here are some tools that you can use:

  • System event logs
  • w32tm debug logs, enabled with: w32tm /debug /enable /file:C:\Windows\Temp\w32time-test.log /size:10000000 /entries:0-300
  • The W32time registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time
  • Local network interfaces
  • Performance counters
  • w32tm /stripchart /computer:UpstreamClockSource
  • ping UpstreamClockSource (to gauge latency and understand the number of hops to the source)
  • tracert UpstreamClockSource

| Problem | Symptoms | Resolution |
|---|---|---|
| Local TSC unstable | In Perfmon on the physical computer, the clock does not sync to a stable clock | Update the firmware, or try alternative hardware to confirm it does not display the same issue |
| Network latency | w32tm /stripchart shows RoundTripDelay exceeding 10ms; use tracert to find where the latency arises | Locate a nearer source clock: install one on the same network segment, or point to one that is geographically closer. In a domain environment, add a machine with the GTIMESERV role |
| Unable to reliably reach the NTP source | w32tm /stripchart gives "request timed out" | NTP source unresponsive |
| NTP source is not responsive | Check the Perfmon counters NTP Client Source Count, NTP Server Outgoing Responses, and NTP Server Incoming Requests, and compare with your baseline results | Use the server performance counters to determine changes in load, or check for network congestion |
| Domain Controller not using the most accurate clock | Changes in topology, or a recently added master clock | w32tm /resync /rediscover |
| Client clocks are drifting | Time-Service Event 36 in the System event log, or a text log entry showing "NTP Client Time Source Count" going from 1 to 0 | Identify errors in the upstream source and check whether it is experiencing performance issues |

Baselining Time

Baseline tests are important because they give you an understanding of the expected time accuracy of your network; use the figures to detect future problems on your Windows Server 2016. Baseline first the root PDC, or any machine with the GTIMESERV role, then every PDC in the forest, and eventually pick any DCs that are critical and baseline those too.

It is also worthwhile to baseline Windows Server 2016 against 2012 R2, using w32tm /stripchart as the comparison tool on two similar machines.

Using the performance counters, collect data for at least one week; this gives you enough reference to account for various network time issues. The more figures you have for comparison, the more confident you can be that your time accuracy is stable.

NTP Server Redundancy

With manual NTP server configuration on a non-domain network, good redundancy improves accuracy, provided the other components are also stable. Conversely, a poorly designed topology with unstable resources leads to poor accuracy. Take care, though, to limit the number of time servers w32time uses to 10.

Leap Seconds

Climatic and geological activity on Earth causes its rotation period to vary; typically, the accumulated variation amounts to about one second every two years. Whenever the gap between atomic time and Earth-rotation time grows, a one-second correction, up or down, called a leap second is applied, keeping the difference below 0.9 seconds. The correction is always announced six months in advance.

Before Windows Server 2016, the Windows Time service did not account for leap seconds, relying on the external time source to handle the adjustment. Following the changes in Windows Server 2016, Microsoft is working on a more suitable solution for handling leap seconds.

Secure Time Seeding

W32time in Windows Server 2016 includes the Secure Time Seeding feature, which determines the approximate current time from outgoing Secure Sockets Layer (SSL) connections. This value helps correct gross errors on the local system clock.

You can decide not to use the Secure Time Seeding feature and rely on the default configuration instead. If you intend to disable the feature, use the following steps:

Set the UtilizeSslTimeData registry value to 0 using the command below:

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\Config /v UtilizeSslTimeData /t REG_DWORD /d 0 /f

If the machine is up and running and you want to avoid a reboot, notify the W32time service about the change with the command below; the service will then stop enforcing time monitoring based on data coming from SSL connections.

W32tm.exe /config /update

Alternatively, rebooting the machine activates the setting immediately and directs the machine to stop collecting time data from SSL connections.

For the setting to take effect across an entire domain, set the UtilizeSslTimeData value in the W32time Group Policy setting to 0 and publish it. The moment the setting is picked up by a Group Policy client, the W32time service is notified and stops enforcing and monitoring SSL time data. If the domain contains portable laptops or tablets, consider excluding them from the policy change: when such devices lose battery power, they need the Secure Time Seeding feature to acquire the current time.

Conclusion

The latest developments in Microsoft Windows Server 2016 mean that you can now get highly accurate time on your network, provided you observe the conditions described above. Accuracy matters in almost everything we do, and as this document has shown, the question of time is highly relevant.

The main job of the Windows Time Service (W32Time) is to provide your machines with time, whether they are standalone or part of a network environment. The primary use of time in a Windows Server 2016 environment is to secure Kerberos authentication: W32Time makes replay attacks against an Active Directory, or against virtual machines running on Hyper-V hosts, almost impossible.

Active Directory Authoritative Restore with Windows Server Backup

Overview 

In short, an authoritative restore is a Windows Server process that returns a deleted Active Directory object, or a container of objects, to the state it was in at the time it was backed up.

An authoritative restore replicates the restored object across the organization's domain controllers. To make this possible, the restore process increases the Update Sequence Number (USN) of all attributes on the restored object.

Because the object has a much higher USN, it replicates to all of the organization's domain controllers and overwrites anything associated with the previous object.

In this article, our goal is to describe the procedure and walk through a test example of the process.

Procedure and Examples 

In our hypothetical scenario, a user deleted from Active Directory Users and Computers needs to be restored.

The first step in the scenario is a restoration from backup. To begin, restart the domain controller in Directory Services Restore Mode (Safe Mode) by rebooting and pressing F8 during startup.

Log in as the local administrator, using the username .\administrator and the Directory Services Restore Mode (DSRM) password set during domain controller installation.

After logging in, right-click the Start menu and choose the Command Prompt (Admin) option.

In the Command Prompt, the following command will show the available backups:

wbadmin get versions 

The following command, confirmed with the "yes" option, will start the restoration based on the chosen backup entry:

wbadmin start systemstaterecovery -version:<chosen version>

The user will then be prompted to reboot, again confirmed with "yes".

After the reboot, start Command Prompt (Admin) again and run the ntdsutil command, which is used to access and manage the Active Directory (AD) database. (Ntdsutil should only be used by experienced administrators, and always from an elevated command prompt.)

At the ntdsutil prompt, enter the following commands:

 activate instance ntds 

And after that:

authoritative restore 

At the authoritative restore prompt, enter the full distinguished name of the object you want to restore:

restore object "cn=(object name),OU=(organizational unit),DC=(domain name),DC=local"

Confirm with "yes", and the restoration will start.
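The distinguished name given to the restore object command is simply the comma-joined list of naming components. As an illustration, here is a small Python sketch that assembles the command for a hypothetical user (all names below are made up):

```python
def restore_object_command(cn, ou, domain_components):
    """Build the ntdsutil 'restore object' command for a deleted object.

    cn: common name of the object, ou: its organizational unit,
    domain_components: the domain's DNS labels, e.g. ["example", "local"].
    """
    # DNs read from the most specific component (cn) to the least (DC)
    dn = ",".join([f"cn={cn}", f"OU={ou}"] +
                  [f"DC={dc}" for dc in domain_components])
    # Quote the DN because common names often contain spaces
    return f'restore object "{dn}"'

# Hypothetical deleted user "John Doe" in OU "Staff" of domain example.local
print(restore_object_command("John Doe", "Staff", ["example", "local"]))
# restore object "cn=John Doe,OU=Staff,DC=example,DC=local"
```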

Exit the authoritative restore prompt with the quit command, then exit ntdsutil with quit as well.

From the Command Prompt, disable the server's safe boot sequence with the following command:

bcdedit /deletevalue safeboot 

After rebooting and logging in to the server, the object should be restored in Active Directory.

Do you want to prevent unauthorized deletion of directory objects and similar problems?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

Windows Server: Clean Up Orphaned Foreign Security Principals

This article shows different ways to clean up orphaned Foreign Security Principals (FSPs). Before describing any method, it should be said that, if possible, you should clean up orphaned FSPs with the GUI method. The PowerShell methods are not recommended for users without excellent knowledge of the console, due to the issues they can cause.

But first, an overview: what is an FSP?

Overview 

Foreign Security Principals (FSPs) are security principals created when an object (user, computer, or group) that originates from an external trusted domain is added to a group in the local domain.

An FSP is recognizable by its marking: a red curly arrow attached to the object's icon, indicating that it acts as a pointer.

Active Directory creates them automatically after a security principal from another forest is added to a group in the local domain.

When the security principal that an FSP points to is removed, the FSP becomes orphaned. "Orphan" is the term for an FSP whose target principal no longer exists.

Even if an identical principal is re-created, the old FSP remains orphaned: the new FSP will have a different SID (security identifier), no matter how alike the principals are.

The outcome is that an FSP that is once orphaned stays orphaned until it is removed by an administrator.

There are two ways to identify and clean up orphaned FSPs: via the GUI (Graphical User Interface) and via the PowerShell console.

Identification and clean up of orphaned FSP via GUI 

As mentioned before, this is the recommended way of cleaning up orphaned FSPs.

In the GUI, orphaned FSPs can be found in the Active Directory Users and Computers console when Advanced Features are enabled (without Advanced Features, FSPs are not visible). They are stored in the ForeignSecurityPrincipals container, and orphaned FSPs can be identified through the "Readable Name" column.

If an FSP is orphaned, its Readable Name column in the console shows up empty.

Such FSPs can be cleaned up by selecting them and deleting via right-click.

Cleaning FSP via PowerShell 

For PowerShell cleanup, all FSP objects first have to be listed, which can be done with the Get-ADObject cmdlet:

Get-ADObject -Filter {ObjectClass -eq 'foreignSecurityPrincipal'}

Once listed, they can be removed using the Translate method, but caution is advised: in the case of a network connectivity issue, FSPs can momentarily appear orphaned, the PowerShell method will delete those too, and that can cause problems due to the SID change.

$ForeignSecurityPrincipalList = Get-ADObject -Filter {ObjectClass -eq 'foreignSecurityPrincipal'}
foreach ($FSP in $ForeignSecurityPrincipalList)
{
    Try
    {
        # Translation succeeds only if the SID still resolves to a live account
        $null = (New-Object System.Security.Principal.SecurityIdentifier($FSP.objectSid)).Translate([System.Security.Principal.NTAccount])
    }
    Catch
    {
        # Translation failed: the FSP is orphaned, so remove it
        Remove-ADObject -Identity $FSP
    }
}
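The idea behind the loop above is to attempt a SID-to-name translation and treat a failure as evidence that the FSP is orphaned. The same decision logic, sketched in Python with a stand-in resolver instead of real directory calls (the SIDs and account names are invented):

```python
# Stand-in table of still-resolvable SIDs; a real resolver would query the
# trusted domain, as the .NET Translate method does
KNOWN_SIDS = {"S-1-5-21-100-200-300-1001": "OTHERDOM\\alice"}

def resolve_sid(sid):
    """Translate a SID to an account name; raise if the account is gone."""
    try:
        return KNOWN_SIDS[sid]
    except KeyError:
        raise LookupError(f"SID {sid} no longer resolves")

def orphaned_fsps(fsp_sids):
    """Return the SIDs whose translation fails, i.e. the orphaned FSPs."""
    orphans = []
    for sid in fsp_sids:
        try:
            resolve_sid(sid)          # resolvable: keep the FSP
        except LookupError:
            orphans.append(sid)       # unresolvable: candidate for removal
    return orphans

fsps = ["S-1-5-21-100-200-300-1001", "S-1-5-21-100-200-300-9999"]
print(orphaned_fsps(fsps))  # only the second, unknown SID is orphaned
```

Note that this sketch inherits the same caveat as the PowerShell loop: if the resolver fails for transient reasons, a healthy FSP is misclassified as orphaned.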

Scheduled removal of orphaned FSP 

A task can be scheduled to make the removal of orphaned FSPs automatic.

The best way to remove FSPs on a schedule is with a custom script. For example, suppose a fictive company has a monthly turnover of 50 employees.

A custom script can be made to delete orphaned FSPs found within a one-month time range:

Import-Module -Name OrphanForeignSecurityPrincipals

# Threshold matching the company's expected monthly turnover
$MyCompanyTurnover = 50
$OrphanFSPListFilePath = 'c:\temp\OFSP.txt'
$OrphanForeignSecurityPrincipalsList = Get-OrphanForeignSecurityPrincipal -TabDelimitedFile $OrphanFSPListFilePath

If ($OrphanForeignSecurityPrincipalsList)
{
    If ($OrphanForeignSecurityPrincipalsList.Count -gt $MyCompanyTurnover)
    {
        # Unexpectedly many orphans: alert the administrator instead of deleting
        $MailParameters = @{
            SmtpServer  = 'mail.mycompany.com'
            From        = 'NoReply@mycompany.com'
            To          = 'Administrator@mycompany.com'
            Subject     = 'Orphan Foreign Security Principals found'
            Body        = 'Please check attached file.'
            Attachments = $OrphanFSPListFilePath
        }
        Send-MailMessage @MailParameters
    }
    else
    {
        Remove-OrphanForeignSecurityPrincipal -TabDelimitedFile $OrphanFSPListFilePath
    }
}

Recovery of deleted FSP 

Deleted orphaned FSPs can be restored from the Active Directory Recycle Bin, provided the Recycle Bin feature was activated before the deletion was made.

FSPs can also be restored via PowerShell cmdlets. Deleted objects can be listed with the following cmdlet, and selected orphaned FSPs can then be restored:

Get-ADObject -Filter 'IsDeleted -eq $TRUE' -IncludeDeletedObjects | Where-Object {$_.DistinguishedName -like "CN=S-*"} 
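The Where-Object filter above relies on the fact that an FSP's common name is the SID itself, so deleted FSPs are exactly the deleted objects whose name starts with "S-". The same selection, sketched in Python over invented distinguished names:

```python
import re

# Invented distinguished names in the shape returned for deleted objects
deleted_dns = [
    "CN=S-1-5-21-100-200-300-9999\\0ADEL:6f9619ff-8b86-d011-b42d-00c04fc964ff,CN=Deleted Objects,DC=example,DC=local",
    "CN=John Doe\\0ADEL:720a5a42-1b3c-4d5e-9f00-aabbccddeeff,CN=Deleted Objects,DC=example,DC=local",
]

def deleted_fsps(dns):
    """Select deleted objects whose common name is a SID (i.e. FSPs)."""
    return [dn for dn in dns if re.match(r"CN=S-\d", dn)]

print(deleted_fsps(deleted_dns))  # only the SID-named entry matches
```

The deleted user "John Doe" is skipped because its common name is a person's name, not a SID.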

There is one more way of restoring orphaned Foreign Security Principals worth mentioning: follow the same steps that created the FSP in the first place, by adding the foreign user/computer/group account to the same groups it belonged to before it became orphaned. This creates an equivalent Foreign Security Principal, just with a different SID.

Avoid problems with FSPs, or Foreign Security Principals!

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

How to Configure NFS in Windows Server 2016

NFS (Network File System) is a client-server filesystem that allows users to access files across a network and handle them as if they were in a local directory. It was developed by Sun Microsystems, Inc., and is common on Linux/Unix systems.

Since Windows Server 2012 R2, it has been possible to configure NFS on Windows Server as a role and use it with Windows or Linux machines as clients. Read on to learn how to configure NFS in Windows Server 2016.

How to install NFS to Windows Server 2016 

Installing the NFS (Network File System) role is no different from installing any other role; it is done from the "Add Roles and Features" wizard.

On the "Select server roles" page, under File and Storage Services, expand File and iSCSI Services to reveal the "Server for NFS" checkbox. Installing that role enables the NFS server.

Configuring NFS on Windows Server 2016

After installation, the role needs to be configured properly. The first stage is choosing or creating a folder for the NFS share.

Right-click the folder and choose Properties to bring up the NFS Sharing tab, which contains the Manage NFS Sharing button.

This opens the NFS Advanced Sharing dialog box, with authentication and mapping options as well as a "Permissions" button.

Clicking the "Permissions" button opens the Type of Access drop-down list, where you can set the permission level and optionally allow root user access.

By default, any client can access the NFS shared folder, but access can be limited to specific clients by clicking the Add button and typing the client's IP address or hostname.

 Mount NFS Shared Folder on Windows Client 

The steps above make the NFS (Network File System) server ready for work. To test it successfully, mount the chosen NFS folder on a Windows or Linux client with the following steps:

  1. Activate the feature on the client under Control Panel / Programs and Features / Services for NFS / Client for NFS
  2. After installing the service, mount the folder with the following command:
mount \\<NFS-Server-IP>\<NFS-Shared-Folder> <Drive Letter>:

The command maps the shared folder as a drive and assigns the chosen letter to it.

Mount NFS Shared Folder on Linux Client  

Although NFS is native to Linux/Unix systems, the folder still needs to be mounted with a command, similar to Windows:

mount -t nfs <NFS-Server-IP>:/<NFS-Shared-Folder> /<Mount-Point>

Do you have unclear NTFS Permissions assignments?
Do you have too many special permissions set on your fileservers?
Or blocked NTFS Permission Inheritance?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

Overview: How to Troubleshoot Active Directory Replication Issues

Active Directory replication is at the center of all sorts of problems. It is a crucial service, and it becomes more complicated when dealing with more than one domain controller. Replication issues can range from authentication failures to problems accessing resources over the network.

All objects in the Active Directory are replicated between domain controllers so that all partitions are synchronized. A large company with multiple sites means that replication takes place at the local site as well as the other sites to keep all partitions synchronized. This article aims to show you how to troubleshoot Active Directory replication issues. 

Active Directory replication problems come from different sources, some of which are Domain Name System failures, network problems, or security issues. 

Resources Needed to Troubleshoot Active Directory Replication 

Replication failures lead to many inconsistencies between domain controllers, which in turn cause systemic failures or inconsistent output. Identifying the root cause of a replication failure helps system administrators eliminate the problem. One of the most commonly used interface-based replication-monitoring tools is the Active Directory Replication Status Tool.

Understanding Recommendations from the Tool

Red and yellow warning events in the system logs point to the specific cause of a replication failure and give the source and destination in Active Directory. Any steps suggested by the warnings should be tried as explained. Other tools, such as Repadmin, can give more information to help resolve replication issues.

  • Eliminating Disruptions or Hardware Failures 

Before troubleshooting replication failures, it is important to rule out any issues related to software updates or upgrades, intentional disruptions, software configurations, and hardware failures. 

  • Intentional Disruptions 

Disruptions caused by unavailability (offline state) of a remote domain controller can be corrected by adding the computer as a member server using the Install From Media (IFM) method to configure the Active Directory Domain Services. The Ntdsutil command-line tool can be used to create installation media. 

  • Software Upgrades and Hardware Failures 

Hardware failures can come from failing motherboards or hard drives. Once a hardware problem is identified, system administrators should take immediate action to replace the failing components. Active Directory Replication failures can take place after a planned upgrade. The best way to handle this is through an effective communication plan that prepares people in advance. 

  • Software Configurations 

Software settings can also interfere: the typical Windows Firewall configuration, for example, needs port 135 open alongside other advanced security settings, and some firewalls must be explicitly configured to allow replication.

Responding to Failures Reported on Windows 2000 Server 

Active Directory configured on a Windows 2000 Server that has failed for longer than the tombstone lifetime should be resolved by:

  • Moving the server from a corporate to a private network 
  • Removing the Active Directory or Reinstalling the Operating System 
  • Removing its metadata from the Active Directory to hide its objects 

Removing the server metadata ensures that the server cannot attempt to revive object settings after 14 days. It also averts further error logs caused by replication attempts with a missing domain controller.

What Are the Root Causes of Replication Failure?

Apart from the already discussed causes leading to replication failures, here are some other reasons. 

Network Connectivity: caused by unavailable network or wrong configurations 

Name Resolutions: Wrong DNS configurations 

Authentication and Authorizations: "access denied" errors every time a domain controller tries to connect for replication 

Directory Database: a data store too slow to process the transactions that take place within the replication timeouts 

Replication Engine: short replication schedules lead to long queues and heavy processing that may not complete within the outbound replication schedule 

Replication Topology: All domain controllers need to have links linking them to other sites within the Active Directory.  The links should map wide area networks or the virtual private network connections. All objects should be supported by the same site topology within the network to avoid replication failures. 

How Do We Fix Replication Problems?

Any of the following approaches can be used to fix Active Directory Replication Issues: 

  • Daily monitoring of the state of replication using the Repadmin.exe to extract daily status updates 
  • Resolving reported replication failures as soon as possible, using steps provided in the event logs. Replication failures resulting from software configurations require un-installation of the software before attempting any other solutions. 
  • If all attempts to resolve replication issues fail, remove Active Directory Domain Services (AD DS) from the server and reinstall it 

If the attempt to remove AD DS fails while the server is online, either of the following methods can resolve the issue:

  • Force the removal of AD DS from Directory Services Restore Mode (DSRM), clean up the server metadata, and reinstall AD DS 
  • Reinstall the operating system and reconfigure the domain controller 

Retrieving Replication Status Using Repadmin 

When replication in Active Directory works as intended and produces no errors, it means the following services are working correctly:

  • DNS 
  • Remote Procedure Call (RPC) 
  • Network Connectivity 
  • Windows Time Service (W32time) 
  • Kerberos Authentication Protocol 

The Repadmin tool is used to study daily replication activity; it can retrieve the replication status of every domain controller in the forest. The report is produced in CSV format and can be opened in any spreadsheet reader.

Generating a Repadmin Report for Domain Controllers in a Spreadsheet

Open the command prompt as an administrator and type the following:

Repadmin /showrepl * /csv > showrepl.csv

  • Open Microsoft Excel, navigate to the showrepl.csv, and click open 
  • Hide or delete column A and the Transport Type Column 
  • Select the row below the column heading and click freeze panes by clicking on Freeze Top Row 
  • Select the whole spreadsheet and click filter from the data tab 
  • Click the down arrow below the Source DC column, point to Text Filters, and select Custom Filter 
  • In the Custom AutoFilter box, under "show rows where", click "does not contain". In the box next to it, type Del to eliminate results from deleted domain controllers 
  • Repeat the previous step for the Last Failure Known column, this time using "does not equal" and typing 0 
  • Resolve replication issues. 
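The spreadsheet filtering steps above can also be scripted. Here is a Python sketch of the same selection using the standard csv module (the column names and sample rows are assumed and invented for illustration; a real showrepl.csv has more columns):

```python
import csv
import io

# Invented sample in the general shape of "repadmin /showrepl * /csv" output
sample_csv = """\
Source DSA,Naming Context,Number of Failures,Last Failure Status
DC01,"DC=example,DC=local",0,0
DC02,"DC=example,DC=local",3,8524
DC03Del,"DC=example,DC=local",5,8524
"""

def failing_rows(csv_text):
    """Keep rows with failures, skipping deleted DCs (names containing 'Del')."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader
            if "Del" not in row["Source DSA"]
            and int(row["Number of Failures"]) != 0]

for row in failing_rows(sample_csv):
    print(row["Source DSA"], row["Last Failure Status"])
```

In this invented sample, only DC02 survives the filter: DC01 has no failures, and DC03Del is a deleted domain controller.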

 Conclusion 

Smooth replication throughout Active Directory is critical; poor replication means all manner of problems, from authentication failures to inconsistent results. This article should help you check your system's replication status and learn how to resolve the most common replication errors.

Protect Yourself and discover all permissions owner on your Windows fileservers!

Pass your next security audit without worrying about security leaks!

Get your free trial of the easiest and fastest NTFS Permission Reporter now!