Windows Server Deduplication

Deduplication has been one of the most useful features of Windows Server since Windows Server 2012. It is a native feature, added through Server Manager, that helps system administrators plan server storage and manage network volumes.

Most server administrators rarely think about this feature until it is time to address the organization's storage crunch. Data deduplication identifies identical blocks of data and stores a single copy as the central source, reducing the spread of duplicate data across storage areas. Deduplication works at the file or block level, giving you more space on the server.

Block-level deduplication requires special, relatively expensive hardware because of its complex processing requirements. File-level deduplication is less complicated and needs no additional hardware, which is why most administrators implementing deduplication prefer the file-level approach.
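As a rough sketch of the file-level idea, the following Python example (purely illustrative, not Microsoft's implementation) keeps one stored copy per unique file content and lets every duplicate file point at it:

```python
import hashlib

def find_duplicate_files(files):
    """Group files by content hash; duplicates share one stored copy.

    `files` maps a file name to its content (bytes). This is an
    illustrative sketch of file-level deduplication, not the actual
    Windows Server implementation.
    """
    store = {}    # content hash -> single stored copy of the data
    catalog = {}  # file name -> content hash (a pointer, not data)
    for name, content in files.items():
        digest = hashlib.sha256(content).hexdigest()
        store.setdefault(digest, content)  # stored once per unique content
        catalog[name] = digest
    return store, catalog

files = {
    "report_v1.docx": b"quarterly numbers",
    "report_copy.docx": b"quarterly numbers",  # duplicate content
    "notes.txt": b"meeting notes",
}
store, catalog = find_duplicate_files(files)
print(len(files), "files,", len(store), "unique copies stored")  # 3 files, 2 unique copies stored
```

Both report files resolve to the same stored copy, which is the space saving the feature delivers.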

When to Apply Windows Server Deduplication

Windows Server file deduplication works at the file level; it operates at a higher level than block deduplication, trying to match whole chunks of data. File deduplication is implemented at the operating-system level, which means you can enable the feature inside a virtual guest in a hypervisor environment.

Industry growth keeps driving demand for deduplication even as storage hardware becomes larger and more affordable. Deduplication is all about keeping up with that growing demand.

Why Is the Deduplication Feature Found on Servers?

Servers are central to any organization's data, as users store their information in its repositories. Not all users embrace new ways of handling their work, and some feel safer keeping multiple copies of the same files. Much of a server administrator's job is managing and backing up user data, and the Windows dedupe feature makes that job easier.

Data deduplication is a straightforward feature and takes only a few minutes to activate. Deduplication is one of the server roles found on Windows Server, and it does not need a restart to work. However, restarting is a safe way to make sure the entire process is configured correctly.

Preparing for Windows Server Deduplication

  • Click Start
  • Open the Run command window
  • Enter the evaluation command and press Enter (DDPEval.exe followed by the volume letter runs against the selected volume and analyzes its potential space savings)
  • Right-click the volume in Server Manager to activate data deduplication
  • A wizard then guides you through the deduplication setup depending on the type of server in place (choose a VDI, Hyper-V, or File Server configuration)
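Conceptually, the space analysis the evaluation tool performs boils down to counting bytes beyond the first copy of each unique piece of content. A hedged Python sketch of that idea (not DDPEval.exe's actual algorithm):

```python
import hashlib

def estimate_savings(blobs):
    """Estimate reclaimable space: bytes beyond the first copy of each
    unique content blob. Illustrative of what an evaluation tool such
    as DDPEval.exe reports; not its actual algorithm."""
    seen = set()
    total = saved = 0
    for data in blobs:
        total += len(data)
        digest = hashlib.sha256(data).digest()
        if digest in seen:
            saved += len(data)  # duplicate: would not be stored again
        else:
            seen.add(digest)
    return total, saved

total, saved = estimate_savings([b"A" * 100, b"A" * 100, b"B" * 50])
print(f"total={total} bytes, reclaimable={saved} bytes")  # total=250 bytes, reclaimable=100 bytes
```

A real analysis works on chunks rather than whole blobs, but the accounting principle is the same.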

Set Up the Timing for Deduplication

Deduplication should run on a schedule to reduce the strain on existing resources. You should not aim to save storage space at the expense of server performance. Schedule jobs for times when there is little load on the server, allowing quick and effective deduplication.

Deduplication requires significant CPU time because of the many activities each job performs. Other deduplication tasks include optimization, integrity scrubbing, and garbage collection. All of these activities should run at off-peak hours unless the server has enough resources to withstand system slowdowns.

The capacity that deduplication reclaims varies depending on server use and available storage. General files, ISOs, Office application files, and virtual disks consume much of the storage.

Benefits of Windows Server Deduplication

Deduplication brings these direct benefits to an organization:

Reduced Storage Allocation

Deduplication can reduce the storage space needed for files and backups, so an enterprise gets more usable space and lowers the annual cost of storage hardware. With enough storage, operations become faster and more efficient, and the need for backup tapes is eliminated.

Efficient Volume Replication

Deduplication ensures that only unique data is written to disk, reducing network traffic during replication.

Saving Network Bandwidth

If deduplication is configured to run at the source, duplicate data never has to be transferred over the network.

Cost-Effective Solution

Power consumption drops, and less space is required for extra storage at both local and remote locations. The organization buys less hardware and spends less on storage maintenance, reducing overall storage costs.

File Recovery

Deduplication ensures faster file recoveries and restoration without straining the day’s business activities.

Features of Deduplication

Transparency and Ease of Use

Installation is straightforward on the target volume(s). Running applications and users will not notice when deduplication takes place, and the feature works within NTFS file-system requirements. Files encrypted with the Encrypting File System (EFS), files smaller than 32 KB, and files with Extended Attributes (EAs) are not processed during deduplication; in those cases, file interaction happens through NTFS alone. A file with an alternate data stream will have only its primary data stream deduplicated; the alternate stream is left on disk.

Works on Primary Data

Once enabled on primary data volumes, the feature operates without interfering with the server's primary workload. It ignores hot data (files active at the time of deduplication) until the files reach a given age in days. Skipping such files maintains the consistency of active files and shortens deduplication time.

The feature uses the following approach when deciding which files to process:

  • Post-processing: new files go directly to the NTFS volume, where they are evaluated on a regular schedule. By default, background processing confirms file eligibility for deduplication every hour, and the schedule is configurable.
  • File age: a deduplication setting called MinimumFileAgeDays controls how long a file must wait in the queue before it is processed. The default is 5 days; an administrator can set it to 0 to process all files regardless of age.
  • File-type and location exclusions: you can instruct deduplication not to process specific file types, such as CAB files, which do not benefit from the process, or already-compressed formats such as PNG that yield little space saving. You can also direct the tool to skip a particular folder.
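The selection rules above can be summarized as a single eligibility check. The sketch below is hypothetical Python; only the MinimumFileAgeDays name, its default of 5 days, the 32 KB minimum size, and the CAB/PNG examples come from the text, and the folder path is invented for illustration:

```python
# Illustrative policy check based on the rules described above; the
# constant values mirror the text, everything else is hypothetical.
MINIMUM_FILE_AGE_DAYS = 5          # default per the text; 0 processes everything
EXCLUDED_EXTENSIONS = {".cab", ".png"}
EXCLUDED_FOLDERS = {r"D:\Scratch"} # hypothetical excluded folder
MINIMUM_SIZE_BYTES = 32 * 1024     # files under 32 KB are not processed

def eligible(path, size_bytes, age_days):
    """Return True if the file would be considered for deduplication."""
    folder, _, name = path.rpartition("\\")
    ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
    if size_bytes < MINIMUM_SIZE_BYTES:
        return False               # too small to be worth chunking
    if age_days < MINIMUM_FILE_AGE_DAYS:
        return False               # "hot" data is skipped for now
    if ext in EXCLUDED_EXTENSIONS or folder in EXCLUDED_FOLDERS:
        return False               # administrator exclusions
    return True

print(eligible(r"D:\Data\archive.vhd", 10**9, age_days=30))  # True
print(eligible(r"D:\Data\icon.png", 10**6, age_days=30))     # False: excluded type
print(eligible(r"D:\Data\new.docx", 10**6, age_days=1))      # False: too recent
```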


Any deduplicated volume operates as a self-contained unit. The volume can be backed up and moved to a different location; moving it to another server means everything on the volume remains accessible at its new site. The only thing you need to reconfigure is the schedule, because the native Task Scheduler controls the timing. If the new server does not have the deduplication feature running, you can only access the files that have not yet been deduplicated.

Minimal Use of Resources

By default, the deduplication feature uses minimal resources on the primary server. If the process is active and resources run short, deduplication yields resources to the active workload and resumes when enough become available.

How Storage Resources Are Utilized

  • The hash index storage method uses low resources and reduces read/write operations, letting it scale to large datasets while delivering high insert/search performance. The index footprint is kept extremely low and uses a temporary partition.
  • Deduplication verifies the amount of free space before it executes. If no storage space is available, it keeps retrying at regular intervals. You can schedule deduplication tasks to run during off-peak hours or idle time.

Sub-file Segmentation

The process segments files into chunks of varying sizes, typically between 32 KB and 128 KB, using an algorithm based on Microsoft Research work. The segmentation splits each file into a sequence of chunks determined by the file's content; a Rabin fingerprint, a sliding-window hash, helps identify the chunk boundaries.

The average segment is about 64 KB; segments are compressed and placed in a chunk store hidden in the System Volume Information (SVI) folder. A reparse point, a pointer to the map of all the data streams, replaces the normal file and serves it when requested.
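The chunk-and-pointer mechanism described above can be sketched as follows. The toy rolling hash stands in for the Rabin fingerprint, and the dictionary-based chunk store and pointer map are illustrative stand-ins for the SVI chunk store and reparse points; only the 32-128 KB range and the roughly 64 KB average come from the text:

```python
import hashlib

MIN_CHUNK, MAX_CHUNK = 32 * 1024, 128 * 1024  # target range from the text
BOUNDARY_MASK = (1 << 16) - 1                  # ~64 KB average chunk (assumption)

def chunk_boundaries(data):
    """Content-defined chunking with a toy rolling hash standing in for
    the Rabin fingerprint. Returns the chunks as byte strings."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF     # toy rolling hash
        length = i - start + 1
        at_boundary = (h & BOUNDARY_MASK) == 0 and length >= MIN_CHUNK
        if at_boundary or length >= MAX_CHUNK:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])            # final partial chunk
    return chunks

def store_file(name, data, chunk_store, reparse_points):
    """Place unique chunks in the store; record a pointer map for the
    file (the reparse-point analogue)."""
    ids = []
    for chunk in chunk_boundaries(data):
        cid = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(cid, chunk)     # stored once, however many refs
        ids.append(cid)
    reparse_points[name] = ids

chunk_store, reparse_points = {}, {}
data = bytes(range(256)) * 1024                # 256 KB of repeating content
store_file("a.bin", data, chunk_store, reparse_points)
store_file("b.bin", data, chunk_store, reparse_points)  # identical file: no new chunks
print(len(reparse_points["a.bin"]), "chunks;", len(chunk_store), "unique in store")
```

Reading a file back means walking its pointer map and concatenating the referenced chunks, which is what the filter driver does transparently.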


Another benefit is that the sub-file segmentation and indexing engine is shared with the BranchCache feature. This sharing matters because, on a running Windows Server whose data segments are already indexed, segments can be sent quickly over the network as needed, saving a great deal of traffic within the office or branch.

How Does Deduplication Affect Data Access?

The fragments deduplication creates on disk are file segments spread across the volume, which increases seek time. As it processes each file, the filter driver works to maintain sequence by keeping related segments together. Deduplication also keeps a cache of file segments to avoid repeated reads and speed up file access; when multiple users access the same resource simultaneously, that access pattern speeds things up for every user.

  • Little difference is noticed when opening an Office document; users cannot tell whether the feature is running or not.
  • When copying one large file, deduplication performs an end-to-end copy that can be about 1.5 times faster than for a non-deduplicated file.
  • When transferring multiple large files simultaneously, the cache can make the transfer about 30% faster.
  • When the file-server load simulator (File Server Capacity Tool) is used to test multiple file-access scenarios, you will typically see a reduction of about 10% in the number of users supported.
  • Data optimization runs at 20-35 MB/s per job, which translates to roughly 100 GB/hour for a single 2 TB volume running on one CPU core with 1 GB of RAM. Multiple volumes can be processed in parallel if additional CPU, disk, and memory resources are available.
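The last figure is easy to sanity-check with a little arithmetic: a job running at the midpoint of the quoted 20-35 MB/s range covers roughly 100 GB per hour, so a full pass over a 2 TB volume takes on the order of a day:

```python
# Sanity check of the quoted optimization throughput figures.
rate_mb_per_s = 28                            # midpoint of the 20-35 MB/s range
gb_per_hour = rate_mb_per_s * 3600 / 1024     # MB/s -> GB/hour
hours_for_2tb = 2 * 1024 / gb_per_hour        # one pass over a 2 TB volume
print(f"{gb_per_hour:.0f} GB/hour, {hours_for_2tb:.0f} hours for a 2 TB volume")
```

This is why the schedule matters: a first full optimization pass on a large volume is a long-running job.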

Reliability and Risk Preparedness

Even when you configure the server environment with RAID, there is a risk of data corruption and loss from disk malfunctions, controller errors, and firmware bugs. Other environmental risks to stored data include radiation and disk vibration. Deduplication raises the stakes of disk corruption: if a single file segment referenced by thousands of files sits in a bad sector, all of those files are affected. Such a scenario risks losing large amounts of user data.


The Windows Server Backup tool uses a selective file-restore API that enables backup applications to pull individual files out of an optimized backup.

Detect and Report

When the deduplication filter encounters a corrupted file or disk section, it runs a quick checksum validation on the data and metadata. This validation recognizes data corruption during file access, reducing accumulated failures.


An extra copy of critical data is created: any file segment with more than 100 references is treated as a popular chunk and stored redundantly.
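That safeguard is essentially reference counting: once a chunk's reference count passes the threshold, a redundant physical copy is kept. A hedged Python sketch (only the 100-reference threshold comes from the text; the class and chunk names are invented for illustration):

```python
POPULARITY_THRESHOLD = 100  # from the text: >100 references get an extra copy

class ChunkStore:
    """Toy chunk store that mirrors 'popular' chunks for resiliency."""
    def __init__(self):
        self.refs = {}      # chunk id -> reference count
        self.copies = {}    # chunk id -> number of physical copies kept

    def add_reference(self, chunk_id):
        self.refs[chunk_id] = self.refs.get(chunk_id, 0) + 1
        self.copies.setdefault(chunk_id, 1)
        if self.refs[chunk_id] > POPULARITY_THRESHOLD:
            self.copies[chunk_id] = 2  # keep a redundant copy of hot chunks

store = ChunkStore()
for _ in range(150):                   # 150 files all share this chunk
    store.add_reference("common-header")
store.add_reference("rare-chunk")
print(store.copies["common-header"], store.copies["rare-chunk"])  # 2 1
```

Losing the sector holding "rare-chunk" affects one file; losing "common-header" would affect 150, which is why only the popular chunk earns a second copy.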


The deduplication process inspects host volumes weekly, scrubbing for logged errors and trying to fix them from alternate copies. An optional deep scrub walks the whole data set, identifying errors and fixing them where possible.

When disks are configured to mirror each other, deduplication looks for a good copy on the other side and uses it as a replacement. If there is no alternative, data is recovered from an existing backup. Scanning for and fixing errors is a continuous process once deduplication is active.

Verdict on Deduplication

Some of the features described above do not work in all Windows Server 2012 editions and may be subject to limitations. Deduplication was built for volumes with the NTFS data structure, so it cannot be used on root volumes and system drives, nor with Cluster Shared Volumes (CSVs). Live virtual machines (VMs) and active SQL databases are also not supported.

Deduplication Data Evaluation Tool

To help you understand your environment before deploying, Microsoft created a portable evaluation tool, DDPEval.exe, that installs into the \Windows\System32\ directory. The tool runs on Windows 7 and later operating systems and supports local drives as well as mapped and unmapped remote shares. If you are using Windows NAS or an EMC/NetApp NAS, you can test against a remote share.


The native Windows Server deduplication feature is becoming popular, and it mirrors the needs of a typical server administrator in production deployments. However, planning before implementation is necessary because there are situations in which deduplication is not applicable.

Windows Server 2016 – What's New in Data Deduplication

Deduplication eliminates repeated data, keeping a single instance. That single instance improves storage utilization and helps on networks with heavy transfer volumes.

Some confuse deduplication with data compression, which identifies repeated data within a single file and encodes the redundancy. In simple terms, deduplication is an ongoing process that eliminates excess copies of data across files, decreasing storage demands.
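The difference is easy to demonstrate: per-file compression cannot see that two files are identical, while deduplication collapses identical copies no matter what they contain. A small illustrative Python comparison:

```python
import hashlib
import os
import zlib

payload = os.urandom(4096)  # incompressible content
copies = [payload] * 10     # ten identical files

# Per-file compression finds no redundancy inside random data...
compressed_total = sum(len(zlib.compress(c)) for c in copies)

# ...but deduplication stores the identical content exactly once.
unique = {hashlib.sha256(c).digest(): c for c in copies}
dedup_total = sum(len(c) for c in unique.values())

print(f"raw={10 * 4096}, compressed={compressed_total}, deduplicated={dedup_total}")
```

In practice the two techniques compose: Windows Server compresses the unique chunks after deduplicating them.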

Data deduplication applies to Windows Server (Semi-Annual Channel) and Windows Server 2016. In Windows Server 2016 it is highly optimized, manageable, and flexible.

The new elements of data deduplication in Windows Server 2016 are:

The Updated Features

Support for Large Volumes

In earlier versions, volumes had to be partitioned to keep data sizes below 10 TB. In Windows Server 2016, data deduplication supports volume sizes up to 64 TB.

What is the Added Value?

Volumes in Windows Server 2012 R2 had to be sized carefully to ensure optimization could keep up with the rate of data churn. In practice, data deduplication only performed well on volumes of 10 TB or less, with the exact limit depending on the workload's write patterns.

What is Different?

Windows Server 2012 R2 uses a single thread and a single I/O queue for each volume. To keep optimization jobs from falling behind and hurting the volume's overall savings rate, large data sets had to be broken into smaller volumes. The practical maximum size was between 6 and 7 TB for high-churn volumes and 9 and 10 TB for low-churn volumes.

Windows Server 2016 works differently: data deduplication runs multiple threads and uses multiple I/O queues per volume, delivering performance that was previously only possible by dividing data across several smaller volumes.

Support for Large Files

In earlier versions, files approaching 1 TB in size were not good candidates for deduplication. Windows Server 2016 supports files of up to 1 TB.

What is the Added Value?

In Windows Server 2012 R2, large files were poor candidates for deduplication because of reduced performance in the deduplication processing queue. In Windows Server 2016, deduplicating files up to 1 TB is practical, letting you save space on workloads such as large backup files.

What is Different?

The Windows Server 2016 deduplication process uses new streaming and mapping structures to improve optimization throughput and access. In addition, the process can now resume optimization after a failure instead of restarting the entire job. Deduplication applies to files up to 1 TB in size.

The New Features

Support for Nano Server

Nano Server support is a new feature, available in any Nano Server deployment option of Windows Server 2016.

What is the Added Value?

Nano Server is a headless deployment option in Windows Server 2016 with a smaller system resource footprint; it starts up faster and needs fewer updates and restarts than the Windows Server Core deployment option.

Simple Backup Support

Windows Server 2012 R2 supported virtualized backup applications, such as Microsoft Data Protection Manager, after successful manual configuration. Windows Server 2016 adds a new default backup usage type that enables seamless data deduplication for virtualized backups.

What is the Added Value?

In earlier versions of Windows Server you had to tune deduplication settings manually, whereas Windows Server 2016 has a simplified process for virtualized backup applications: you enable deduplication for a volume the same way you would for a general-purpose file server.

Support Clusters Operating System Rolling Upgrade

Data deduplication supports the new Cluster OS Rolling Upgrade feature of Windows Server 2016.

What is the Added Value?

During an upgrade, a failover cluster can have a mix of nodes running the Windows Server 2012 R2 version of deduplication alongside nodes running the Windows Server 2016 version.

The improvement provides full access to deduplicated data during the rolling upgrade, allowing the gradual rollout of the new version of data deduplication on an existing Windows Server 2012 R2 cluster without incurring downtime during the upgrade process.

What is Different?

In earlier versions of Windows Server, a failover cluster required all nodes to run the same Windows Server version. In Windows Server 2016, rolling upgrades allow clusters to run in mixed mode.

Upgrade and Conversion Options for Windows Server 2016 / 2019

It is always a good idea to start a Windows Server 2016 / 2019 installation on a clean slate. However, in some instances you may be working on a site that forces you to upgrade from the current installation to the latest version.

The routines described here apply to Windows Server 2016 and 2019. The article describes moving to Windows Server 2016 / 2019 from older server platforms.

The path to the new operating system (OS) depends on the system and configuration you are currently running. With that in mind, the following terms define activities you are likely to encounter when deploying the 2016 server.


Clean Installation

The simplest way of getting a new operating system onto your hardware, a clean installation, requires deleting the previous operating system.


Migration

Moving system settings from an existing installation to a new Windows Server, often running on a virtual machine, is what we call migration. The process varies depending on the roles and system configurations already running.

Cluster OS Rolling Upgrade

This feature is new in Windows Server 2016. Its role is to let an administrator upgrade the operating system of all nodes running Windows Server 2012 R2 to Windows Server 2016 without interrupting Hyper-V or Scale-Out File Server workloads.

The feature also helps in reducing downtime, which may affect Service Level Agreements.

License Conversion

Some operating system releases allow the conversion of one edition to another without much struggle. All you need is a simple command issued with a license key, and the license conversion is done.


In-Place Upgrade

When you want the latest software that comes with a newer version, you perform an upgrade. An in-place upgrade means keeping the same hardware and installing the new operating system over the old one. You can also convert between license types; for example, from an evaluation to a retail version, or from a volume-licensed edition to an ordinary retail edition.

NOTE 1: An upgrade will work well in virtual machines if you do not need specific OEM hardware drivers.

NOTE 2: In the Windows Server 2016 release, you can only perform an in-place upgrade on a version installed with the Desktop Experience (not a Server Core installation).

NOTE 3: If you use NIC Teaming, disable it before you perform an upgrade, and re-enable it when the upgrade is complete.

Upgrade Retail Versions of Windows Server to Windows Server 2016 / 2019

Note the following general principles:

  • Upgrading from a 32-bit to a 64-bit architecture is not possible; all Windows Server 2016 versions are 64-bit only
  • You cannot upgrade from one language to another
  • If you are running a domain controller, make sure you can handle the task, or read "Upgrade Domain Controllers to Windows Server 2012 R2 and Windows Server 2012"
  • You cannot upgrade from a preview version
  • You cannot switch from a Server Core installation to a Server with Desktop Experience installation
  • You cannot upgrade from a previous Windows Server installation to an evaluation copy of Windows Server

The table below summarizes the Windows Server editions you can upgrade. If your current Windows version is not listed, upgrading to Windows Server 2016 is not possible.

Current Windows Edition → Possible Upgrade Edition

  • Windows Server 2012 Standard → Windows Server 2016 Standard or Datacenter
  • Windows Server 2012 Datacenter → Windows Server 2016 Datacenter
  • Windows Server 2012 R2 Standard → Windows Server 2016 Standard or Datacenter
  • Windows Server 2012 R2 Datacenter → Windows Server 2016 Datacenter
  • Windows Server 2012 R2 Essentials → Windows Server 2016 Essentials
  • Windows Storage Server 2012 Standard → Windows Storage Server 2016 Standard
  • Windows Storage Server 2012 Workgroup → Windows Storage Server 2016 Workgroup
  • Windows Storage Server 2012 R2 Standard → Windows Storage Server 2016 Standard
  • Windows Storage Server 2012 R2 Workgroup → Windows Storage Server 2016 Workgroup

Per-Server-Role Considerations for Upgrading

Even on supported upgrade paths from versions earlier than Windows Server 2016, some roles need additional preparation or actions to work as intended.

Converting Current Evaluation Version to a Current Retail Version

It is possible to convert the evaluation version of Windows Server 2016 Standard to the retail version of either Windows Server 2016 Standard or Windows Server 2016 Datacenter. Likewise, you can convert the evaluation version of Windows Server 2016 Datacenter to its retail version.

Before attempting any conversion to retail, confirm that your server is actually running an evaluation version. You can do this in either of the following ways:

  • From an administrator command prompt, run
slmgr.vbs /dlv
    Evaluation versions include “EVAL” in the output.
  • Alternatively, open Control Panel, click System and Security, then System, and check the activation status in the Windows activation area of the System page. Click “View details” for more information on your Windows status; if Windows is activated, you will see the time remaining in the evaluation period.

If you are running a retail version instead, see "Upgrading previous retail versions of Windows Server to Windows Server 2016" for upgrade guidance.

For Windows Server 2016 Essentials, conversion to the retail version is possible by providing a retail, volume-license, or OEM key to the slmgr.vbs command.

If you are running an evaluation version of Windows Server 2016 Standard or Windows Server 2016 Datacenter, convert it as follows:

  • If the server is a domain controller, it cannot be converted to the retail version directly. First install another domain controller on a server that runs a retail version, then remove AD DS from the domain controller that has the evaluation version.
  • Read the license terms.
  • From an administrator command prompt, run the following to determine the current edition:
DISM /online /Get-CurrentEdition

Note the edition ID, the abbreviated form of the edition name, and then run:

DISM /online /Set-Edition:<edition ID> /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula

Provide the edition ID and a valid product key; the server will restart twice.

You can also convert the evaluation version of Windows Server 2016 Standard directly to the retail version of Windows Server 2016 Datacenter using the same command with the appropriate product key.

Converting Current Retail Edition to a Different Current Retail Edition

After successfully installing Windows Server 2016, you can run Setup to repair the installation (a process sometimes called "repair in place") or to convert it to a different edition.

To convert Windows Server 2016 Standard to Windows Server 2016 Datacenter:

  • From an administrator command prompt, determine the current edition:
DISM /online /Get-CurrentEdition
  • List the edition IDs you can convert to:
DISM /online /Get-TargetEditions
  • Note the target edition ID, then run:
DISM /online /Set-Edition:<edition ID> /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula
  • Provide the edition ID and a valid product key; the server will restart twice.

Converting a Current Retail Version to a Current Volume-Licensed Version

Once you have Windows Server 2016 running, you can freely convert it among the retail, OEM, and volume-licensed versions; the edition stays the same. If you started from an evaluation version, first convert it to the retail version and then proceed as follows:

  • From the administrator’s command, run
slmgr /ipk <key>
  • Insert the appropriate volume-license, OEM or retail key instead of <key>


Upgrading Windows Server in place is a complicated process; Microsoft therefore suggests migrating roles and settings to a fresh Windows Server 2016 installation to avoid costly mistakes.

What's New in Storage in Windows Server 2019 and 2016

Windows Server 2016 and 2019 add new storage features, including storage migration. The migration service keeps an inventory when moving from one platform to another and carries essential details, such as security settings and other configuration, from the old systems to the new server installation.

This article explains the new and changed functionality in the storage systems of Windows Server 2016, Windows Server 2019, and the Semi-Annual Channel releases.

We will start by highlighting some of the features added by the two server systems.

Managing Storage with Windows Admin Center

Windows Admin Center is a central, browser-style app for managing servers, clusters, Windows 10 PCs, and hyper-converged infrastructure, including their storage. It is part of the new server management tooling.

Windows Admin Center is a separate download that runs on Windows Server 2019 and some versions of Windows 10; we cover it first because it is new and we did not want you to miss it.

Storage Migration Service

The Storage Migration Service is a new technology that makes it easy to move from older servers to a new server version. Everything happens through a graphical interface: it inventories data on existing servers, transfers data and configuration to the new servers, and then optionally moves the old servers' identities to the new ones so apps and user settings carry over.

Storage Spaces Direct (Available in Server 2019 only)

Several improvements have been made to Storage Spaces Direct in Server 2019, though they are not included in Windows Server, Semi-Annual Channel. Here are some of them:

Deduplication and Compression of ReFS Volume

You can store up to 10X more data on the same storage space using deduplication and compression for the ReFS file system. You only need to turn it on with a single click in Windows Admin Center.

Variable chunk sizes, with an option to compress, increase the savings rates, while multi-threaded post-processing keeps the performance impact low. It supports volumes of up to 64 TB and files of up to 1 TB.

Native Support for Persistent Memory

Unlock more performance with native Storage Spaces Direct support for persistent memory modules, including Intel Optane DC PM and NVDIMM-N. Use persistent memory as a cache to accelerate the active working set, or as capacity where low latency matters. Manage persistent memory the same way you would any other storage device, in Windows Admin Center or PowerShell.

Nested Resiliency for Two-Node Hyper-Converged Infrastructure on the Edges

The all-new software resiliency option, inspired by RAID 5+1, helps survive two simultaneous hardware failures. With nested resiliency, a two-node Storage Spaces Direct cluster can provide continuously accessible storage for programs and virtual machines even when a server node and a drive fail at the same time.

Two-Server Cluster Using USB Flash Drive as a Witness

Use a low-cost USB flash drive plugged into your router as a witness between the two servers in a cluster. If a server goes down and comes back, the USB witness knows which server has the most up-to-date data.

Windows Admin Center

Managing and monitoring Storage Spaces Direct with the newly built dashboard gives you the ability to create, open, expand, and delete volumes with a few clicks. Follow IOPS and IO latency from the overall cluster down to individual hard disks and SSDs.

Performance Log

You can see what your server was up to, in resource utilization and performance terms, using the built-in history feature. More than 50 counters covering memory, compute, storage, and network are collected automatically and kept in the cluster for up to a year.

There is nothing to install, configure, or start; the feature just works.

Scale up to 4 PB for Every Cluster

Reach multi-petabyte scale, which makes sense for media servers and for backup and archival purposes. In Windows Server 2019, Storage Spaces Direct supports up to 4 petabytes (PB), that is, 4,000 terabytes, per cluster.

Related capacity guidelines are increased as well; for instance, you can create 64 volumes rather than 32. Clusters can also be stitched together into a cluster set for scaling within one storage namespace.

Accelerated Parity is now 2X Faster

You can create Storage Spaces Direct volumes that are part mirror and part parity, mixing the equivalents of RAID-1 and RAID-5/6 to harness the advantages of both. In Windows Server 2019, the performance of mirror-accelerated parity is twice that of Windows Server 2016 thanks to optimizations.

Drive Latency Outlier Detection

Know which drives have abnormal latency through proactive monitoring and built-in outlier detection, an approach inspired by Microsoft Azure. Failing drives are labeled automatically in PowerShell and Windows Admin Center.

Manual Delimiting of the Allocation of Volumes to Increase Fault Tolerance

An administrator can manually delimit the allocation of volumes in Storage Spaces Direct. Delimiting can increase fault tolerance in specific circumstances, at the cost of added management considerations and complexity.

Storage Replica

The storage replica has the following improvements:

Storage Replica in Windows Server, Standard Edition

It is now possible to use Storage Replica with Windows Server Standard Edition as well as Datacenter Edition. Running Storage Replica on Standard Edition has the following limitations:

  • Storage Replica replicates a single volume rather than an unlimited number of volumes
  • Volumes can have a size of up to 2 TB rather than an unlimited size

Storage Replica Log Performance Improvements

Improvements to how the Storage Replica log tracks replication boost replication throughput and latency, particularly on Storage Spaces Direct clusters that replicate between each other.

To get the increased performance, all members of the replication group must run Windows Server 2019.

Test Failover

Mount a temporary snapshot of the replicated storage on the destination server for testing or backup purposes.
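A hedged sketch of what a test failover looks like with the Storage Replica cmdlets; the replication group name, server name, and path below are placeholders:

```powershell
# Mount a temporary, writable snapshot of the replicated volume on the
# destination server (names and paths here are placeholders).
Mount-SRDestination -Name "ReplicationGroup01" -ComputerName "srv-dr-01" -TemporaryPath "T:\TestFailover"

# ... run backup or validation jobs against the mounted snapshot ...

# Discard the snapshot and resume normal replication.
Dismount-SRDestination -Name "ReplicationGroup01" -ComputerName "srv-dr-01"
```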

Windows Admin Center Support

Graphical management of replication is now supported in Windows Admin Center. This covers server-to-server replication, cluster-to-cluster replication, and stretch cluster replication.

Miscellaneous Improvements

Storage Replica also has the following improvements:

  • Changes asynchronous stretch cluster behaviors so that automatic failover takes place
  • Multiple bug fixes


SMB1 and Guest Authentication Removal

Windows Server no longer installs the SMB1 client and server by default, and the ability to authenticate guests in SMB2 is off by default.
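On an existing server, you can check whether SMB1 is still present and turn it off with the standard SmbShare and ServerManager cmdlets, roughly like this:

```powershell
# Is the SMB1 server component still enabled?
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Disable SMB1 on the server side.
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force

# Remove the optional SMB1 feature entirely (Windows Server).
Uninstall-WindowsFeature -Name FS-SMB1
```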

SMB2/SMB3 Security and Compatibility

More options for security and application compatibility were added, including the ability to disable oplocks in SMB2+ for legacy applications, and to require signing or encryption on every connection from a client.

Data Deduplication

Data Deduplication Supports ReFS

You no longer have to choose between the advantages of a modern file system with ReFS and Data Deduplication: Data Deduplication can now be enabled on ReFS volumes.
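For example, on Windows Server 2019 you can format a volume with ReFS and then enable deduplication on it in the usual way (the drive letter here is a placeholder):

```powershell
# Format the volume with ReFS, then enable Data Deduplication on it.
Format-Volume -DriveLetter E -FileSystem ReFS
Enable-DedupVolume -Volume "E:" -UsageType Default

# Check savings once optimization jobs have run.
Get-DedupStatus -Volume "E:"
```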

Data Port API for Optimized Ingress/egress to Deduplicated Volumes

As a developer, you can now use the new Data Port API to move data into and out of deduplicated volumes efficiently, taking advantage of how Data Deduplication stores data.

File Server Resource Manager

Windows Server 2019 can prevent the File Server Resource Manager service from creating a change (USN) journal on storage volumes. This conserves space on each volume; however, it disables real-time file classification.

The same change took effect in Windows Server, version 1803.

What’s New in Storage in Windows Server, Version 1709

Windows Server, version 1709 is the first Windows Server release in the Semi-Annual Channel, a channel that is fully supported in production for 18 months, with a new version arriving every six months.

Storage Replica

Storage Replica's disaster recovery and protection capabilities are now expanded to include:

Test Failover

You now have the option of mounting the destination storage through a test failover. The snapshot can be mounted temporarily for both testing and backup purposes.

Windows Admin Center Support

Graphical management of replication is now supported in Windows Admin Center.

Storage Replica also has the following improvements:

  • Changes asynchronous stretch cluster behaviors so that automatic failover takes place
  • Multiple bug fixes

What’s New in Storage in Windows Server 2016

Storage Spaces Direct

Storage Spaces Direct enables highly available and scalable storage using servers with local storage. It simplifies the deployment and management of software-defined storage systems and unlocks the use of new classes of storage devices, including SATA SSD and NVMe disks, that were previously not possible with clustered Storage Spaces and shared disks.

What Value Does the Change Add?

Storage Spaces Direct allows service providers and enterprises to use industry-standard servers with local storage to build highly available and scalable software-defined storage.

Using servers with local storage decreases complexity while increasing scalability, and allows the use of storage devices such as SATA solid-state disks, which lower the cost of flash storage, or NVMe solid-state disks, which improve performance.

Storage Spaces Direct removes the need for a shared SAS fabric, which simplifies deployment and configuration. Instead, the servers use the network as the storage fabric, leveraging SMB3 and SMB Direct (RDMA) for high throughput and low latency with efficient use of the CPU.

Adding more servers to the configuration increases storage capacity and I/O performance. Storage Spaces Direct in Windows Server 2016 works differently, as explained below:
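In practice, once the failover cluster is formed, Storage Spaces Direct is enabled with a single cmdlet; a minimal sketch (cluster, pool, and volume names are placeholders):

```powershell
# Enable Storage Spaces Direct on an existing failover cluster; this claims
# the eligible local drives on every node and builds the storage pool.
Enable-ClusterStorageSpacesDirect -CimSession "Cluster01"

# Carve a mirrored, cluster-shared ReFS volume out of the pool.
New-Volume -FriendlyName "Volume01" -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName "S2D*" -Size 1TB
```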

Storage Replica

Storage Replica enables storage-agnostic, block-level stretching of failover clusters between sites, as well as synchronous replication between servers. Synchronous replication mirrors data across physical sites with crash-consistent volumes to ensure no data is lost at the file-system level. Asynchronous replication allows longer distances but may increase the possibility of data loss.

What Value Does the Change Add?

Provide a single-vendor disaster recovery solution for both planned and unplanned outages

Use SMB3 transport with proven performance, scalability, and reliability

  • Stretch Windows failover clusters further
  • Use Microsoft end-to-end software for storage and clustering, such as Hyper-V, Scale-Out File Server, Storage Replica, Storage Spaces, ReFS/NTFS, and Deduplication
  • Help reduce complexity and costs by:
  • Being hardware agnostic, with no specific requirement for a storage configuration like DAS or SAN
  • Allowing commodity storage and networking technologies
  • Featuring an easy graphical management interface for nodes and clusters through Failover Cluster Manager
  • Including comprehensive, large-scale scripting options through Windows PowerShell
  • Help reduce downtime and increase large-scale productivity
  • Provide supportability, performance metrics, and diagnostic capabilities

What Works Differently

This functionality is new in Windows Server 2016.

Storage Quality of Service

You can use Storage Quality of Service (QoS) to centrally monitor end-to-end storage performance and to create management policies using Hyper-V and CSV clusters in Windows Server 2016.

What Value Does the Change Add?

You can create QoS policies on a CSV and assign them to one or more virtual disks of Hyper-V virtual machines. Storage performance automatically adjusts to meet the policies as workloads fluctuate.

  • Each policy can specify a minimum reserve or a maximum cap to be applied when allocating performance. For example, a policy can cover a single virtual hard disk, a tenant, a service, or a virtual machine.
  • Use Windows PowerShell or WMI to perform the following:
  • Create policies on a CSV cluster
  • Assign a policy to a virtual hard disk and query status within the policies
  • Enumerate policies on the CSV cluster
  • Monitor flow performance and the status of each policy
  • If several virtual hard disks share the same policy, performance is distributed fairly to meet demand within the policy's minimum and maximum settings. This means one policy can manage a single virtual hard disk, multiple virtual hard disks, or the single or multiple virtual machines that constitute a service owned by a tenant.
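A minimal sketch of those steps with the Storage QoS cmdlets; the policy name, VM name, and IOPS figures are placeholder choices:

```powershell
# Create a policy with a floor of 100 IOPS and a cap of 500 IOPS.
$policy = New-StorageQosPolicy -Name "Silver" -MinimumIops 100 -MaximumIops 500

# Apply it to every virtual hard disk of one VM.
Get-VM -Name "VM01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Monitor per-flow performance and status against the policies.
Get-StorageQosFlow | Format-Table -AutoSize
```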

What Works Differently

This is a new feature in Windows Server 2016. Managing minimum reserves, monitoring the flows of all virtual disks across a cluster with a single command, and central policy-based management were not possible in previous Windows Server releases.

Data Deduplication


New or Updated


Support large volumes

Updated. Before Windows Server 2016, volume sizes had to be chosen carefully, and anything above 10 TB did not qualify for deduplication. Windows Server 2016 supports deduplication of volumes up to 64 TB.

Large file support

Updated. Before Windows Server 2016, files approaching 1 TB could not be deduplicated. Windows Server 2016 supports deduplication of files up to 1 TB.

Nano Server Support

New. Deduplication is available and fully supported on Nano Server in Windows Server 2016.

Simple Backup Support

New. Windows Server 2012 R2 supported virtualized backups using Microsoft's Data Protection Manager. In Windows Server 2016, simple backup is possible and seamless.

Cluster OS Rolling Upgrades Support

New. Deduplication supports Cluster OS Rolling Upgrade in Windows Server 2016.

SMB Hardening Improvements for SYSVOL and NETLOGON Connections

Windows 10 and Windows Server 2016 client connections to the Active Directory Domain Services default SYSVOL and NETLOGON shares on domain controllers now require SMB signing and mutual authentication via Kerberos.

What does this Change Add?

It reduces the possibility of man-in-the-middle attacks.

What Works Differently?

If SMB signing and mutual authentication are unavailable, a Windows 10 or Windows Server 2016 computer will not process domain-based Group Policy and scripts. Note that the registry values for these settings are not present by default, yet the hardening rules still apply until overridden through Group Policy or the relevant registry values.

Work Folders

Changes to notifications apply when the Work Folders server is running Windows Server 2016 and the Work Folders client is running Windows 10.

What Value Does this Change Add?

With Windows Server 2012 R2, when file changes are synchronized to the Work Folders server, clients are not notified of the changes and can wait up to 10 minutes to get the update.

When running Windows Server 2016, the Work Folders server immediately notifies Windows 10 clients, and the changes are synchronized immediately.

What Works Differently

This is a new feature in Windows Server 2016, and the client accessing the Work Folders must run Windows 10. If you use older clients, or if the Work Folders server runs Windows Server 2012 R2, the client will poll every 10 minutes for changes.


ReFS

The next area is ReFS, which offers support for large-scale storage deployments with varying workloads, delivering reliability, resilience, and scalability for your data.

What Values Does the Change Add?

ReFS brings in the following improvements:

  • Implementing new storage tiers that help deliver faster performance and increased capacity. This functionality further enables:
  • Multiple resiliency types on the same virtual disk, through a mirror tier and a parity tier
  • Enhanced responsiveness to drifting working sets
  • Introducing block cloning, which substantially improves VM operations such as .vhdx checkpoint merge operations
  • The ReFS scan tool, which enables the recovery of leaked storage and helps salvage data from corruption

What Works Differently?

These functionalities are new in Windows Server 2016.


With so many features available in Windows Server 2019, this article covered the fully supported ones. At the time of writing, some features were only partially supported in earlier versions but are getting full support in the latest Server releases. From this read, you can see that Windows Server 2019 is an upgrade worth experiencing.

Windows Server 2016 and GDPR

“As the world continues to change and business requirements evolve, some things are consistent: a customer’s demand for security and privacy.”
Satya Nadella, Microsoft’s CEO

An important topic in the European IT world these days is the GDPR (General Data Protection Regulation).

A new European data and privacy protection law comes into force on May 25, 2018. It applies to all citizens of the EU, with the purpose of protecting and enabling the privacy rights of individuals.

The GDPR regulates the protection of any individual's private data, no matter where that data is sent, processed, or stored.

The GDPR forms a complex set of rules that covers any organization that offers goods or services to EU citizens, or that collects and analyzes data about EU citizens in any form, regardless of where the business is located.

The key elements of the GDPR can be settled on three points:

  • Enhanced personal privacy rights
  • An increased duty of protecting personal data
  • Mandatory personal data breach reporting

Those points, in short, define the protection of EU residents by granting them access to their personal data and the right to manage it in any way (correct, erase, or move it); the awareness and responsibility of organisations that process personal data; and mandatory reporting of detected breaches to supervisory authorities no later than 72 hours after detection.

How does the GDPR define personal and sensitive data, and how those definitions relate to data held by organizations?

Personal data, as considered by the GDPR, is any information related to an identified or identifiable natural person: direct identification (a legal name, etc.), indirect identification (specific information that can identify you in data references), and online identifiers (IP addresses, mobile device IDs, and location data).

The GDPR sets specific definitions for genetic data (an individual's gene sequence) and biometric data. These types of data, along with other subcategories of personal data (data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership; data concerning health; or data concerning a person's sex life or sexual orientation), are treated as sensitive personal data and require the individual's consent where they are to be processed.

If any sensitive or personal data is processed on a physical or virtual server, the GDPR requires the implementation of technical and organizational security measures to protect personal data and processing systems from today's security risks, such as ransomware attacks or any other type of cyberterrorism.

An additional problem arises from ransomware attacks in light of the GDPR's estimated penalties, which make any company whose systems contain personal and sensitive data a potentially rich target. Depending on the kind of infringement, there can be monetary penalties from 2% up to 4% of total worldwide annual turnover, and not less than 10 to 20 million euros.

What does the GDPR mean for Windows Server security and protection, and how does Windows Server support GDPR compliance?

In Windows Server 2016, security is an architectural principle, and it can be seen as four major points:

  • Protect – Focus and innovation on preventive measures
  • Detect – Monitoring tools with the purpose to spot abnormalities and respond to attacks faster
  • Respond – Usage of response and recovery technologies and experts
  • Isolate – Isolation of operating system components and data secrets, limited administrator privileges, and rigorously measured host health.

Those points, implemented in Windows Server, greatly improve the defense against possible data breaches.

Key features within Windows Server help users efficiently and effectively implement the security and privacy mechanisms the GDPR requires for compliance.

Windows Server 2016 helps block the common attack vectors used to gain illegal access to user systems: stolen credentials, malware, and a compromised virtualization fabric.

In addition to reducing business risk, the security components built into Windows Server 2016 help address compliance requirements for key government and industry security regulations.

These identity, operating system, and virtualization protections enable better protection of datacenters running Windows Server as a VM in any cloud, and limit the ability of attackers to compromise credentials, launch malware, and remain undetected. Likewise, when deployed as a Hyper-V host, Windows Server 2016 offers security assurance for virtualization environments through Shielded Virtual Machines and distributed firewall capabilities. With Windows Server 2016, the server operating system becomes an active participant in data center security.

The GDPR specifically regulates control over access to personal data and the systems that process it, including administrator/privileged accounts. It defines privileged identities as any accounts that have elevated privileges, such as user accounts that are members of the Domain Admins, Enterprise Admins, local Administrators, or even Power Users groups.

Those kinds of accounts are protected from compromise by the following guidelines, which all organizations should implement:

  • Reasonable allocation of privileges – A user should not have more privileges than needed to complete the job successfully.
  • Limited sign-in time for privileged accounts, restricted to strictly work-related operations.
  • Social engineering awareness – with the goal of preventing email phishing and a security breach through seemingly harmless, lower-level accounts.
  • Every account with unnecessary domain admin-level privileges increases exposure to attackers seeking to compromise credentials. To minimize the attack surface, it is recommended to provide only the specific set of rights an admin needs to do the job, and only for the window of time needed to complete it. This way of administration is called Just Enough Administration and Just-in-Time Administration, and it is highly recommended.
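A hedged sketch of how a Just Enough Administration endpoint might be registered; the role name, AD group, allowed cmdlet, and file paths below are all placeholder choices:

```powershell
# Role capability: let helpdesk staff restart one specific service and nothing else.
New-PSRoleCapabilityFile -Path ".\Helpdesk.psrc" `
    -VisibleCmdlets @{ Name = 'Restart-Service'; Parameters = @{ Name = 'Name'; ValidateSet = 'Spooler' } }

# Session configuration that maps an AD group to that role and runs commands
# under a temporary virtual account instead of the user's own token.
New-PSSessionConfigurationFile -Path ".\Helpdesk.pssc" `
    -SessionType RestrictedRemoteServer -RunAsVirtualAccount `
    -RoleDefinitions @{ 'CONTOSO\Helpdesk' = @{ RoleCapabilityFiles = 'C:\JEA\Helpdesk.psrc' } }

# Register the constrained endpoint.
Register-PSSessionConfiguration -Name "Helpdesk" -Path ".\Helpdesk.pssc"
```

Users then connect with `Enter-PSSession -ComputerName <server> -ConfigurationName Helpdesk` and can only run what the role allows.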

Windows Server 2016 offers various types of prevention and protection tools and features for various types of user accounts, such as:

  • Microsoft Identity Manager 2016
  • Local Administrator Password Solution (LAPS)
  • Windows Defender Credential Guard
  • Windows Defender Device Guard
  • Control Flow Guard

which cover the areas of protecting the user/admin credentials, trusted software-only installation, breach notification, and jump-oriented programming (JOP) defense.

Windows Server 2016 also actively alerts administrators to potential breach attempts with enhanced security auditing that provides more detailed information, which can be used for faster attack detection and forensic analysis. It logs events from Control Flow Guard, Windows Defender Device Guard, and other security features in one location, making it easier for administrators to determine which systems may be at risk.

A newly introduced feature is Shielded VMs. They include a virtual TPM (Trusted Platform Module) device, which enables organizations to apply BitLocker encryption to virtual machines and ensure they run only on trusted hosts, protecting against compromised storage, network, and host administrators. Shielded VMs are created using Generation 2 VMs, which support Unified Extensible Firmware Interface (UEFI) firmware and have a virtual TPM.

The GDPR can have a significant impact on any business that uses any type of personal data. It should be taken seriously and implemented as soon as possible, no matter the time, funds, or planning required.

Windows Server – How To Close Open Files

Here I will describe how to close open files and processes on a server.

Every system admin on Microsoft Windows Server systems will, at least once, run into a situation where a file is open on a server and it is necessary to check which process or user opened it.

These open files can cause trouble, such as upgrade errors or reboot hold-ups.

It can become a huge problem which, if not thought through, can delay updates or cause errors in server maintenance.

More common, but less extreme, issues come from users. When users leave shared files open on their accounts, other users opening the same file can get error messages and cannot access it.

This article will show you how to deal with these kinds of issues and how to find and close open files and processes. The operations apply to Microsoft Windows Server 2008, 2012, and 2016, and to Windows 10 workstations.

There are several working methods to deal with these kinds of problems; the first one we will describe uses Computer Management:

View open files on a shared folder

In a situation of locked files on the server, made by users, this method could come in handy to troubleshoot it.

Right-click the Start menu and select Computer Management (or type compmgmt.msc in the Start menu search).

The procedure is very simple, and in most cases, it works with no problems.

Click on Shared Folders, and after that, on Open Files.

That opens a screen with a list of files detected as open, the user that opened each one, possible locks, and the mode each file is opened in.

Right-click the wanted file and choose the option Close Open File; that will close it.

With processes and file details, the procedure is a bit different.

Usage of Windows Task Manager

Task Manager will not close open shared files, but it can close processes on the system.

It can be opened with the key combination Ctrl+Alt+Del (then choose Task Manager), or by right-clicking the taskbar and choosing the Task Manager option.

Under the Processes tab, you can see all active processes and sort them by parameters such as CPU and memory.

If there is a process you want to terminate, simply right-click the process and choose the End Process option.

Usage of Resource Monitor

For every system administrator, Resource Monitor is "the tool" that allows control and an overview of all system processes and a lot more.

Resource Monitor can be opened by typing “resource monitor” in a start menu search box.

Another option is to open up the task manager, click the performance tab and then click Open Resource Monitor.

When Resource Monitor opens, it shows several tabs; the one needed for this operation is Disk.

It shows disk activity and processes: open files, PIDs, read and write bytes per second, and so on.

If the system is running a lot of "live" processes, it can be confusing, so Resource Monitor offers a "stop live monitoring" option, which stops the on-screen processes from running up and down and gives you an overview of all processes up to the "stop moment".

Resource Monitor offers an overview of open file paths and processes on the system, and with those pieces of information it is not a problem to identify and close files or processes.

Powershell cmdlet approach

Of course, PowerShell can do everything GUI apps can, maybe even better, and in this case there are several commands that can close your system's open files and processes.

There is more than one solution using PowerShell scripts, and scripting is not recommended for administrators without experience.

For this example, we will show some of the possible solutions with PowerShell usage.

The following examples apply to systems that support Server Message Block (SMB); for systems that do not support SMB, we will later show how to close files with the NET file command.

In situations where one, or a small number of, exactly known open files should be closed, this cmdlet can be used. It is, as usual, run from an elevated PowerShell prompt and applies to a single file (in all examples, unsaved data in open files will not be saved).

Close-SmbOpenFile -FileId ( id of file )
Are you sure you want to perform this action? 
Performing operation 'Close-File' on Target ‘( id of file)’. 
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"): N

There is a variation of cmdlet which allows closing open files for a specific session.

Close-SmbOpenFile -SessionId ( session id )

This command does not close a single file, it applies to all opened files under the id of the specific session.

The other variation of the same cmdlet is applying to a file name extension ( in this example DOCX).

The command will check for all open files with the DOCX extension across all system clients and force-close them. As mentioned before, any unsaved data in those open files will not be saved.

Get-SmbOpenFile | Where-Object -Property ShareRelativePath -Match ".DOCX" | Close-SmbOpenFile -Force

This cmdlet has many more flags and variations, which allow applying different filters and different approaches to closing open files.
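As one more hedged example of such filtering, you can narrow the matches by user before closing anything (the username is a placeholder):

```powershell
# Preview what would be closed: files opened by a specific user.
Get-SmbOpenFile | Where-Object ClientUserName -Match "jdoe" |
    Select-Object FileId, Path, ClientUserName

# Then close only those files, skipping the confirmation prompt.
Get-SmbOpenFile | Where-Object ClientUserName -Match "jdoe" | Close-SmbOpenFile -Force
```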

Powershell Script approach

With PowerShell scripts, the process of closing open files and processes can be automated.

$blok = {
    $adsi = [adsi]"WinNT://./LanmanServer"

    $resources = $adsi.psbase.Invoke("resources") | ForEach-Object {
        New-Object PSObject -Property @{
            ID        = $_.GetType().InvokeMember("Name", "GetProperty", $null, $_, $null)
            Path      = $_.GetType().InvokeMember("Path", "GetProperty", $null, $_, $null)
            OpenedBy  = $_.GetType().InvokeMember("User", "GetProperty", $null, $_, $null)
            LockCount = $_.GetType().InvokeMember("LockCount", "GetProperty", $null, $_, $null)
        }
    }

    # List the matching open files, then close them via "net files"
    $resources | Where-Object { $_.Path -like '*smbfile*' } | Format-Table -AutoSize
    $resources | Where-Object { $_.Path -like '*smbfile*' } | ForEach-Object { net files $_.ID /close }
}

Invoke-Command -ComputerName pc1 -ScriptBlock $blok

Our example script closes files whose path matches the pattern specified in the script.

This way of closing open files is not recommended for administrators without PowerShell scripting experience; if you are not 100% sure you are up to the task, do not use it.

Close A File On Remote Computer Using Command Line

There are two other ways to close open files: either NET file or PsFile (a Microsoft utility) can be used. The first command can be run remotely with NET file via PsExec.exe, because the NET command does not support any remote APIs.

The net file command can list all open shared files and the number of locks per file. It can be used to close files and remove locks (similar to the SMB example before), and it is used, as before, when a user leaves a file open or locked.

It can be done with the following syntax

C:\>net file [id [/close]]

In this syntax, the id parameter is the identification number of the file we want to close, and the /close parameter is, of course, the action we want to apply to that file.

Best practice for the NET file command is to first run net file on its own, which lists all open files and numbers them 0, 1, and so on.

Once the files are listed, the command that closes an open file is, for example:

C:\>net file 1 /close

This command closes the file numbered 1.

PsFile usage

PsFile is a third-party application, but I will not put it on the third-party list, as any good system administrator should treat it as standard.

Its commands are similar to the net file commands, with the difference that it does not truncate long file names, and it can locally show files opened on remote systems.

It uses the NET API, documented in the platform tools, and it becomes available by downloading the PsTools package.

 psfile [\\RemoteComputer [-u Username [-p Password]]] [[Id | path] [-c]]

PsFile "calls" the remote computer with a valid username and password, and with the path inserted it will close the open files on the remote system.

For processes opened on a remote system, there is a similar command called PsKill, which "kills" processes on the same principle.

Release a File Lock

In some situations, a problem with closing files can be handled by releasing a file lock. There are many examples of users locking their files and leaving them open (for some reason, the most commonly locked files are Excel files).

All other users then get an error message of the type "Excel is locked for editing by another user", with no option to close or unlock the file.

As an administrator, you should have elevated rights, and with the right procedure this can be fixed easily.

Press the Windows key and R to get the Windows Run dialog.

In the Run dialog, type mmc (Microsoft Management Console).

Go to File > Add/Remove Snap-in and add the Shared Folders snap-in.

If you are already on the operating system that has the issue, choose the Local computer option; if not, choose the Another computer option and enter the wanted computer name.

Expand Shared Folders, then select the Open Files option.

Choose the locked/open file, right-click it, and select Close Open File.

The described procedure will unlock and close an open file (similar to the first example in this article), and users will be able to access it again.

Usage of Third-party apps

There are a lot of third-party apps on the market for handling open server files.

We will describe a few of the most used ones for this purpose.

Process Explorer – a freeware utility from Windows Sysinternals, initially created by Winternals and later acquired by Microsoft. It can be seen as Windows Task Manager with advanced features. One of its many features is closing open files, and it is highly recommended for server administrators and IT professionals.

Sysinternals can be accessed on the following link :

OpenedFilesView – practically a single-executable application that displays the list of all opened files on your system. For each opened file, additional information is displayed: handle value, read/write/delete access, file position, the process that opened the file, and more.

To close a file or kill a process, right-click any file and select the desired option from the context menu.

It can be downloaded on the following link :

LockHunter – primarily a tool for deleting blocked files (to the Recycle Bin). It can be a workaround for open files, as it has a feature for listing and unlocking locked files on your system. It is very powerful, and helpful in situations when system tools fail.

It can be downloaded on the following link:

Long Path Tool – a shareware program provided by KrojamSoft that, as its name suggests, helps you fix a dozen issues you'll face when a file's path is too long. Those issues include not being able to copy, cut, or delete the files in question because the path is too long. With its bunch of features, it may be overkill for this purpose, but it is definitely a quality app for all sysadmins.

It can be downloaded on the following link:

How to Set Accurate Time for Windows Server 2016

Accurate time for Windows Server 2016

Windows Server 2016 can maintain accuracy within 1 ms of UTC time. This is because of new algorithms and periodic time checks against a valid UTC server.

The Windows Time service is a component that uses plug-ins for the client and server sides of synchronization. Windows has two built-in client time providers, and third-party plug-ins can link in as well.

One of the providers uses the Network Time Protocol (NTP) or the Microsoft Network Time Protocol (MS-NTP) to manage synchronization with the nearest server.

Windows picks the best provider if both are available.

This article will discuss the three main elements that relate to an accurate time system in Windows Server 2016.

  • Measurements
  • Improvements
  • Best practices

Domain Hierarchy

Computers that are members of a domain use the NTP protocol, authenticating to a time reference for security and authenticity.

Domain computers synchronize with a master clock determined by the domain hierarchy and a scoring system.

A typical domain has hierarchical stratum layers where each Domain Controller (DC) refers to the parent DC with accurate time.

The hierarchy revolves around the Primary Domain Controller (PDC) or a DC with the root forest, or a DC with a Good Time Server for the Domain (GTIMESERV) flag.

Standalone computers are configured to use time.windows.com by default. The name resolution takes place through DNS, resolving to a time resource owned by Microsoft.

As with any remotely located time reference, network outages can prevent synchronization from taking place. Asymmetrical network paths also reduce time accuracy.

Hyper-V guests have at least two Windows time providers, so it is possible to observe different behaviors in either domain-joined or standalone configurations.

NOTE: Stratum refers to a concept in both the NTP provider and the Hyper-V provider. Each carries a value indicating the clock's location in the hierarchy. Stratum 0 is reserved for the reference hardware, and stratum 1 for the high-level clocks attached to it. Stratum 2 servers communicate with stratum 1 servers, stratum 3 with stratum 2, and the cycle continues. The lower stratums indicate clocks that are more accurate, with the possibility of finding errors. The command-line tool w32tm (W32time) takes time from stratum 15 and below.
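To see where a machine currently sits in this hierarchy, w32tm can report its source and stratum directly:

```powershell
# Show the local clock's time source, stratum, and last successful sync.
w32tm /query /status

# Show the configured time providers and their settings.
w32tm /query /configuration
```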

Factors Critical for Accurate Time

Solid Source Clock

The original source clock needs to be stable and accurate at all times. This implies that when installing a Global Positioning Service (GPS) device pointing to stratum 1, you take factor #3 (symmetrical NTP communication) into consideration. If the source clock is stable, the entire configuration will keep constant time.

Securing the original source time means that a malicious person will not be able to expose the domain to time-based threats.

Stable Client Clock

A stable client clock ensures that the natural drift of its oscillator stays containable. The NTP uses multiple samples to condition the local clock on standalone machines so that it stays on course.

If the time oscillation on the client computers is not stable, there will be fluctuations between adjustments leading to malfunctioning of the clock. Some machines may require hardware updates for proper functioning.

Symmetrical NTP Communication

The NTP connection should be symmetrical at all times, because NTP's adjustment calculations assume the path is symmetrical.

If the NTP request takes longer than the expected time on its return, time accuracy is affected. You may note that the path could change due to changes in topology or routing of packets through different interfaces.

Battery-powered devices may use different strategies, which in some cases require that the device update every second.

Such a setting consumes more power and can interfere with power saving modes. Some battery run devices have some power settings that can interfere with the running of other applications and hence interfere with the W32time functions.

Mobile devices are never 100% accurate if you look at the various environmental factors that interfere with the clock accuracy. Therefore, battery-operated devices should not have high time accuracy settings.

Why is Time Important

A typical case in a Windows environment is the operation of Kerberos, which tolerates at most 5 minutes of clock difference between clients and servers. Other instances that require accurate time include:

  • Government regulations, for example, the United States uses 50ms for FINRA, and the EU uses 1ms for ESMA (MiFID II)
  • Cryptography
  • Distributed systems like databases
  • Blockchain frameworks for bitcoin
  • Distributed logs and threat analysis
  • AD replication
  • The Payment Card Industry (PCI)

Time Improvements for Windows Server 2016

Windows Time Service and NTP

The algorithm used in Windows Server 2016 greatly improves how the local clock synchronizes with UTC. NTP uses four timestamps, taken from the client's request and response and the server's receive and transmit times, to calculate the time offset.
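The four-timestamp calculation can be sketched with the standard NTP formulas. This is an illustrative computation of offset and round-trip delay, not the W32Time implementation:

```python
# Sketch of the standard NTP offset/delay calculation from four timestamps:
# t0 = client transmit, t1 = server receive, t2 = server transmit,
# t3 = client receive (all in seconds).

def ntp_offset_and_delay(t0, t1, t2, t3):
    offset = ((t1 - t0) + (t2 - t3)) / 2   # estimated client clock error
    delay = (t3 - t0) - (t2 - t1)          # round-trip network delay
    return offset, delay

# Example: client clock is 0.5 s behind the server, each network leg takes
# 0.1 s, and the server spends 0.02 s processing the request.
offset, delay = ntp_offset_and_delay(10.0, 10.6, 10.62, 10.22)
print(round(offset, 6), round(delay, 6))  # 0.5 0.2
```

Note that the delay term is where asymmetric network paths hurt accuracy: the formula assumes both legs took the same time.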

The modern network environment has too much congestion and related factors that affect the free flow of communication.

Windows Server 2016 uses different algorithms to cancel out these disturbances. In addition, the time references used in Windows rely on improved Application Programming Interfaces with better time resolution, giving an accuracy of 1ms.


Windows Server 2016 made improvements that include accurate VM start and VM restore. The change gives an accuracy of 10µs relative to the host, with a root mean square (RMS) of 50µs on a machine carrying a 75% load.


Moreover, the host now reports its stratum level to guests more transparently. Earlier, hosts were fixed at stratum 2 regardless of their accuracy; with the changes in Windows Server 2016, the host reports at stratum 1, which gives better timing for the virtual machines.


Domains created in Windows Server 2016 will find time to be more accurate because the time does not default to the host; that is the reason for manually disabling the Hyper-V time provider settings on Windows guests joined to a Windows 2012 R2 or earlier domain.


Performance counters for time tracking are now part of Windows Server 2016; they allow monitoring, troubleshooting, and baselining of time accuracy. The counters include:

Computed Time Offset

This counter indicates the absolute time offset between the system clock and the chosen time source, in microseconds. The value updates when a new valid sample is available. Clock accuracy is traced using this performance counter at an interval of 256 seconds or less.

Clock Frequency Adjustment

This adjustment indicates the time set by the local W32Time measured in parts per billion. The counter is important when it comes to visualizing actions taken by W32time.

NTP Roundtrip Delay

NTP Roundtrip Delay is the elapsed time between transmitting a request to the NTP server and receiving a valid response.

This counter helps in characterizing the delays experienced by the NTP client. A roundtrip that is large or varies introduces noise into the NTP time computation, thereby affecting time accuracy.

NTP Client Source Count

The source count parameter holds the number of unique IP addresses of time servers that are responding to this client's requests. The number may be larger or smaller than the number of active peers.

NTP Server Incoming Requests

A representation of the number of requests received by the NTP server indicated as request per second.

NTP Server Outgoing Responses

A representation of the number of answered requests by the NTP server indicated as responses per second.

The first three show the target scenarios for troubleshooting accuracy issues. The last three cover NTP server scenarios, which help determine the load and setting a base for the current performance.

Configuration Updates per Environment

The following table describes the changes in default configuration between Windows Server 2016 and earlier versions. Windows Server 2016 and Windows 10 build 14393 now take unique settings.



| Role | Setting | Windows Server 2016 and Windows 10 | Windows Server 2012, 2008, and earlier Windows 10 |
| --- | --- | --- | --- |
| Standalone or Nano Server | Polling frequency | 64-1024 seconds | Once a week |
| | Clock update frequency | Once a second | Once an hour |
| Standalone Client | Polling frequency | Once a day | Once a week |
| | Clock update frequency | Once a day | Once a week |
| Domain Controller | Polling frequency | 64 to 1024 seconds | 1024 to 32768 seconds |
| | Clock update frequency | Once a day | Once a week |
| Domain Member Server | Polling frequency | 64 to 1024 seconds | 1024 to 32768 seconds |
| | Clock update frequency | Once a second | Once every 5 minutes |
| Domain Member Client | Polling frequency | 1024 to 32768 seconds | 1024 to 32768 seconds |
| | Clock update frequency | Once every 5 minutes | Once every 5 minutes |
| Hyper-V Guest | Time server | Chooses the best option based on host stratum and time server | Defaults to host |
| | Polling frequency | Based on the role above | Based on the role above |
| | Clock update frequency | Based on the role above | Based on the role above |

Impact of Increased Polling and Clock Update Frequency

To get the most accurate time, the defaults for polling frequencies and clock updates were increased, which gives the service the ability to make adjustments more frequently.

The adjustments lead to more UDP and NTP traffic, but these are small packets that will not meaningfully affect broadband links.

Battery-powered devices do not keep the time when turned off, so turning them on may lead to frequent time adjustments. Increasing the polling frequency on such devices will lead to instability, and the device will use more power.

Domain controllers should see little interference from the combined effect of increased updates from NTP clients and the AD domain. NTP does not require many resources compared to other protocols.

You are likely to reach the limits of the domain's other functionality before the increased settings in Windows Server 2016 become a problem.

The AD does not use secure NTP, which does not synchronize time as accurately, but it keeps clients within two stratums of the PDC.

As a guide, reserve capacity for at least 100 NTP requests per second per core. For example, a domain with 4 DCs of 4 cores each should be able to serve 1,600 NTP requests per second in total.
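The sizing guideline above reduces to simple arithmetic. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope capacity check based on the guideline above:
# plan for roughly 100 NTP requests per second per core.

REQUESTS_PER_SECOND_PER_CORE = 100

def ntp_capacity(num_dcs: int, cores_per_dc: int) -> int:
    """Approximate NTP requests/second a set of domain controllers can serve."""
    return num_dcs * cores_per_dc * REQUESTS_PER_SECOND_PER_CORE

print(ntp_capacity(4, 4))  # 1600 requests per second
```

Treat the result as a planning ceiling, not a guarantee; as the article notes, it depends heavily on processor speed and load.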

These recommendations depend heavily on processor speeds and loads. Administrators should conduct all baseline tests on site.

If your DCs are running at a sizeable CPU load, more than 40%, the system is likely to generate noise when NTP responds to requests; this may impair domain time accuracy.

Time Accuracy Measurements


Different tools are used to gauge the time and accuracy of Windows Server 2016.

The techniques are applicable when taking measurements and tuning the environment to determine if the test outcome meet the set requirements.

The domain source clock in these tests consists of two precision NTP servers with GPS hardware.

Some of these tests need a highly accurate and reliable clock source as a reference point adding to your domain clock source.

We use four different methods to measure accuracy in physical and virtual machines:

  • Take the reading of the local clock conditioned by w32tm and reference it against a test machine with separate GPS hardware.
  • Measure pings coming from the NTP server to its clients using the "stripchart" option of the w32tm utility.
  • Measure pings from the client to the NTP server using the "stripchart" option of the w32tm utility.
  • Measure the Hyper-V output from the host to the guests using the Time Stamp Counter (TSC). After getting the difference of the host and client time in the VM, use the TSC to estimate the host time from the guest. We also use the TSC clock to factor out delays and API latency.
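The fourth method is essentially linear extrapolation from a calibration pair. The sketch below is illustrative only; the function name and numbers are hypothetical, and the real Hyper-V mechanism is not exposed like this:

```python
# Hedged sketch of the TSC method: given one calibration pair (a TSC reading
# taken at a known host time) and the TSC frequency, later host times can be
# extrapolated from later TSC readings. Names and numbers are illustrative,
# not a real Hyper-V interface.

def estimate_host_time(tsc_now, tsc_ref, host_time_ref, tsc_hz):
    """Extrapolate host time (seconds) from elapsed TSC ticks."""
    elapsed = (tsc_now - tsc_ref) / tsc_hz
    return host_time_ref + elapsed

# 2.5 GHz TSC; 5 billion ticks after calibration = 2 seconds later.
t = estimate_host_time(tsc_now=5_000_000_000, tsc_ref=0,
                       host_time_ref=100.0, tsc_hz=2_500_000_000)
print(t)  # 102.0
```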


For comparison purposes, testing both the Windows Server 2012R2 and Windows Server 2016 based on topology is sensible.

The topologies have two physical Hyper-V hosts that point to a Windows Server 2016 machine with GPS hardware installed. Each of these hosts runs at least three domain-joined Windows guests, in the arrangement shown in the diagrams below.

Windows Server 2016 Forest Time Hierarchy between two 2016 Hyper-V Hosts

TOPOLOGY 1.

The lines on the diagram indicate time hierarchy and the transport or protocol used

Windows Server 2012R2 Forest Time Hierarchy hosted between two 2016 Hyper-V Hosts.

TOPOLOGY 2.

Graphical Results Overview

The following graph is a representation of the time accuracy between two members of a domain. Every graph shows both Windows Server 2012R2 and 2016 outcome.

The accuracy was a measurement taken from the guest machine in comparison to the host. The graphical data shown indicate both the best and worst case scenarios.

TOPOLOGY 3.

Performance of the Root Domain PDC

The root PDC synchronizes with the Hyper-V host using the VMIC provider present in Windows Server 2016; the host's GPS hardware provides stability and accuracy. This is critical because 1ms accuracy is needed.

Performance of the Child Domain Client

The child domain client is attached to a Child Domain PDC sending communication to the Root PDC. Its timing should also be within the 1ms accuracy.

Long Distance Test

A long distance test could involve comparing a single virtual network hop to 6 physical network hops on Windows Server 2016.

Increasing network hops means increased latency and larger time differences. The 1ms accuracy may degrade unless the network is symmetrical.

Do not forget that every network is different and measurements taken depend on varying environmental factors.

Best Practices for Accurate Timekeeping

Solid Source Clock

The machine timing is as good as its source clock. To achieve the 1ms accuracy, a GPS hardware or time appliance should be installed to refer to the master source clock.

The default may not give an accurate or stable local time source. Also, as you move away from the source clock, you are bound to lose accuracy.

Hardware GPS Options

The different hardware solutions that offer accurate time depend on GPS antennas. Use of radio and dial-up modem solutions is also accepted. The hardware options connect through PCIe or USB ports.

Different options give varying time accuracy and the final time depends on the environment.

Environmental factors that interfere with accuracy include GPS availability, network stability, the PC hardware, and network load.

Domain and Time Synchronization

Computers in a domain use the domain hierarchy to determine the machine to be used as a source for time synchronization.

Every domain member will look for a machine to sync with and save it as its source. Every domain member will follow a different route that leads to its source time. The PDC in the Forest Root should be the default source clock for all machines in the domain.

Here is a list of how roles in the domain find their original time source.

Domain Controller with PDC role

This is the machine with authority over the time source for the domain. Most of the time, the time it issues is accurate. It must synchronize with a DC in the parent domain, except where the GTIMESERV role is active.

Other Domain Controller

This will take the role of a time source for clients and member servers in the domain. A DC synchronizes with the PDC of its domain or any DC in the parent domain.

Clients or Member Servers

This type of machine will synchronize with any DC or PDC within its domain or picks any DC or PDC in the parent domain.

When sourcing for the original clock, the scoring system is used to identify the best time source. Scoring takes into account the reliable time source based on the relative location, which happens only once when the time service starts.

To fine-tune time synchronization, add good timeservers in a specific location and more redundancy.

Mixed Operating System Environments (Windows 2012 R2 and Windows 2008 R2)

In a pure Windows Server 2016 domain environment, you get the best time accuracy.

Deploying a Windows Server 2016 Hyper-V in a Windows 2012 domain will be more beneficial to the guests because of the improvements made in Server 2016.

A Windows Server 2016 PDC delivers accurate time due to the positive changes to its algorithms, which also acts as a credible source.

You may not have an option of replacing the PDC but can add a Windows Server 2016 DC with the GTIMESERV flag as one way of upgrading time accuracy for the domain.

A Windows Server 2016 DC delivers better time to lower-level clients, so it is always good to use it as a source of NTP time.

As already stated above, clock polling and refresh frequencies are modified in Windows Server 2016.

You can also change the settings manually to match the down-level DCs, or make the changes using Group Policy.

Versions prior to Windows Server 2016 have a problem keeping accurate time; the system clock starts drifting immediately after you make a change.

Obtaining samples from accurate NTP sources and conditioning the clock leads to small changes to the system clock, resulting in better timekeeping on down-level OS versions.

In some cases involving guest domain controllers, samples from the Hyper-V TimeSync provider can disrupt time synchronization. However, this should no longer be an issue when the guest machines run on Server 2016 Hyper-V hosts.

You can use the following registry keys to disable the Hyper-V TimeSync service from giving samples to w32time:



Allow Linux to Use Hyper-V Host Time

For Linux guest machines running on Hyper-V, it is normal for clients to use the NTP daemon for time synchronization against NTP servers.

If the Linux distribution supports version 4 TimeSync protocol with an enabled TimeSync integration on the guest, then synchronization will take place against the host time. Enabling both methods will lead to inconsistency.

Administrators are advised to synchronize against the host time by disabling the NTP time synchronization by using any of the following:

  • Disabling NTP servers in the ntp.conf file
  • Disabling the NTP Daemon

In this particular configuration, the Time Server Parameter is usually the host, and it should poll at a frequency of 5 seconds the same as the Clock Update Frequency.

Exclusive synchronization over NTP demands that you disable the TimeSync integration service in the guest machine.

NOTE: Linux accurate timing support requires a feature found only in the latest upstream Linux kernels. As of now, it is not available across most Linux distros.

Specify Local Reliable Time Service Using the GTIMESERV

The GTIMESERV allows you to specify one or more domain controllers as the accurate source clocks.

For example, you can use a specific domain controller with a GPS hardware and flag it as GTIMESERV to make sure that your domain references to a clock based on a GPS hardware.

TIMESERV is a Domain Services Flag that indicates whether the machine is authoritative and can be changed if the DC loses connection.

When the connection is lost, the DC returns the “Unknown Stratum” error when you query via the NTP. After several attempts, the DC will log System Event Time Service Event 36.

To configure a DC as a GTIMESERV, use the following command:

w32tm /config /manualpeerlist:"master_clock1,0x8 master_clock2,0x8" /syncfromflags:manual /reliable:yes /update

If the DC has a GPS hardware, use the following steps to disable the NTP client and enable the NTP server.

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders\NtpClient /v Enabled /t REG_DWORD /d 0 /f

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders\NtpServer /v Enabled /t REG_DWORD /d 1 /f

Restart Windows Time Service

net stop w32time && net start w32time

Finally, tell network hosts that this machine has a reliable time source using the command

w32tm /config /reliable:yes /update

To confirm the changes, run the following commands and compare the output against the expected values:

w32tm /query /configuration

| Setting | Expected value |
| --- | --- |
| AnnounceFlags | 5 (Local) |
| DllName | C:\WINDOWS\SYSTEM32\w32time.DLL (Local) |
| Enabled | 1 (Local) |

w32tm /query /status /verbose

| Setting | Expected value |
| --- | --- |
| Stratum | 1 (primary reference – syncd by radio clock) |
| ReferenceId | 0x4C4F434C (source name: "LOCL") |
| Source | Local CMOS Clock |
| Phase Offset | |
| Server Role | 576 (Reliable Time Service) |

Windows Server 2016 on 3rd party Virtual Platforms

Virtualizing Windows means that timekeeping responsibility defaults to the hypervisor.

However, new members of the domain need to synchronize with the Domain Controller for AD to work effectively. The best you can do is disable time virtualization between the guests and 3rd party virtual platforms.

Discover the Hierarchy

The chain of time hierarchy to the master clock is dynamic and non-negotiated. You must query the status of a specific machine to get its time source. This analysis helps in troubleshooting issues relating to synchronizations.

If you are ready to troubleshoot, find the time source by using the w32tm command.

w32tm /query /status

The output will show the source. Finding the source is the initial step in tracing the time hierarchy. The next step is to use the source entry with the /stripchart parameter to find the next time source in the chain.

w32tm /stripchart /computer:MySourceEntry /packetinfo /samples:1

The command below gives a list of the domain controllers found in a specific domain and relays results you can use to determine each partner. The command also includes machines with manual configurations.

w32tm /monitor /domain:my_domain

You can use the list to trace the results through the domain and learn the hierarchy and time offset at each step. By marking the point where the time offset increases, you can identify the source of the incorrect time.
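The hierarchy walk can be scripted. As a hedged sketch (the exact w32tm output format varies by OS version and locale), a helper like this could pull the Source line out of `w32tm /query /status` output so it can be fed to the next /stripchart call; the hostname below is hypothetical:

```python
# Parse the "Source:" line out of w32tm /query /status output so the next
# hop in the time hierarchy can be queried. Output format may vary by
# version and locale, so treat this as a starting point.

import re
from typing import Optional

def parse_time_source(status_output: str) -> Optional[str]:
    """Return the machine named on the Source: line, or None if absent."""
    match = re.search(r"^Source:\s*(.+?)\s*$", status_output, re.MULTILINE)
    return match.group(1) if match else None

# Illustrative output fragment; dc01.example.com is a made-up hostname.
sample = """\
Stratum: 4 (secondary reference - syncd by (S)NTP)
Source: dc01.example.com
Poll Interval: 6 (64s)
"""
print(parse_time_source(sample))  # dc01.example.com
```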

Using Group Policy

Group Policy can be used to accomplish strict accuracy by making sure clients are assigned specific NTP servers, and to control how down-level operating systems behave when virtualized.

Look at the list of all possible scenarios and relevant Group Policy settings:

Virtualized Domains

To gain control over virtualized domain controllers in Windows 2012 R2, disable the registry entry corresponding to the virtual domain controllers. You may not want to disable the entry on the PDC because, in most cases, the Hyper-V host delivers a stable time source. Changing the registry entry requires a restart of the w32time service.



Accuracy Sensitive Loads

For any workload that is sensitive to time accuracy, ensure the group of machines is set to use specific NTP servers and related time settings, such as update frequency and polling.

This is a task normally handled by the domain, but if you want more control, target specific machines to point to the master clock:

| Group Policy Setting | New Value |
| --- | --- |
| NtpServer | |
| MinPollInterval | 6 – 64 seconds |
| MaxPollInterval | 6 seconds |
| UpdateInterval | 100 – once per second |
| EventLogFlags | 3 – All special time logging |

NOTE: The NtpServer and EventLogFlags settings are located under System\Windows Time Service\Time Providers if you follow the Configure Windows NTP Client settings. The other three are under System\Windows Time Service if you follow the Global Configuration Settings.

Remote Accuracy Sensitive Loads

For systems running in branch domains, such as retail and Payment Card Industry (PCI) environments, Windows uses the current site data and DC Locator to find the local DC, unless a manual NTP time source is configured.

In such an environment, you need 1 second accuracy, with the option of allowing the w32time service to move the clock backwards. If you can meet these requirements, use the table below to create a policy:

| Group Policy Setting | New Value |
| --- | --- |
| MaxAllowedPhaseOffset | 1; if the offset is more than one second, set the clock to the correct time |

The MaxAllowedPhaseOffset setting is found under System\Windows Time Service in the Global Configuration Settings.

Azure and Windows IaaS Consideration

Azure Virtual Machine; Active Directory Domain Services

If you have an Azure VM running Active Directory Domain Services as part of an existing Domain Forest, then TimeSync (VMIC) should not be running. Disabling VMIC allows all DCs in the forest, both physical and virtual, to use a single time sync hierarchy.

Azure Virtual Machine: Domain-Joined Machine

If you have a machine joined to an existing Active Directory Forest, virtual or physical, the best thing to do is disable TimeSync for the guest and make sure W32Time is set to synchronize with its Domain Controller.

Azure Virtual Machine: Standalone Workgroup Machine

If your Azure VM is not part of a domain and is not a Domain Controller, you can keep the default time configuration and let the VM synchronize with the host.

Windows Application that Requires Accurate Time

Time Stamp API

Programs or applications that need time accuracy in line with the UTC should use the GetSystemTimePreciseAsFileTime API to get the time as defined by Windows Time Service.

UDP Performance

For an application that uses UDP to communicate during network transactions, you should minimize latency. There are registry options for configuring different ports; note that any changes to the registry should be restricted to system administrators.

Windows Server 2012 and Windows Server 2008 need a Hotfix to avoid datagram loss.

Update Network Drivers

Some network cards have updates that help improve performance and buffering of UDP packets.

Logging for System Auditors

Time tracing regulations may force you to comply by archiving the w32tm logs, performance monitor data, and event logs. Later, these records can be used to confirm your compliance at a specific time in the past.

Use the following factors to indicate time accuracy:

  • Clock accuracy, using the Computed Time Offset counter
  • Clock source, looking for "peer response from" in the w32tm event logs
  • Clock condition status, using the w32tm logs to validate the occurrence of "ClockDispln Discipline: *SKEW*TIME*"

Event Logging

The event log gives you the complete story through the information it stores. By filtering on the Time-Service logs, you can discover any influences that have changed the time. Group Policy changes can also affect the logged events.

W32time Debug Logging

Use the command utility w32tm to enable audit logs. The logs will have clock updates as well as showing the source clock. Restarting the service enables new logging.

Performance Monitor

The Windows Time service counters in Windows Server 2016 can collect the logging that auditors need. You can log the data locally or remotely by recording each machine's Computed Time Offset and NTP Roundtrip Delay counters.

Like any other counter, you can create remote monitors and alerts using the System Center Operations Manager. You can set an alert for any change of accuracy when it happens.

Windows Traceability Example

Using sample log files from the w32tm utility, you can validate two pieces of information. The first log file shows the points at which the Windows Time Service conditioned the clock at a given time.

151802 20:18:32.9821765s – ClockDispln Discipline: *SKEW*TIME* – PhCRR:223 CR:156250 UI:100 phcT:65 KPhO:14307

151802 20:18:33.9898460s – ClockDispln Discipline: *SKEW*TIME* – PhCRR:1 CR:156250 UI:100 phcT:64 KPhO:41

151802 20:18:44.1090410s – ClockDispln Discipline: *SKEW*TIME* – PhCRR:1 CR:156250 UI:100 phcT:65 KPhO:38

All the messages that start with "ClockDispln Discipline" are proof that your system clock is being conditioned by w32time.

The next step is to find the last report before the time change to get the source computer, which is the current reference clock. The example below refers to the reference clock by its IPv4 address. Another reference could point to a computer name or the VMIC provider.

151802 20:18:54.6531515s – Response from peer,0x8 (ntp.m|0x8|>, ofs: +00.0012218s

Now that the first section is validated, investigate the log file on the reference time source using the same steps. This will lead you to a physical clock, such as GPS, or a known time source like the National Institute of Standards and Technology (NIST). If the clock is GPS hardware, the manufacturer's logs may also be required.
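For auditors who need to do this at scale, the log lines above lend themselves to scripted extraction. A hedged sketch (the regex matches the sample lines in this article; real logs may differ in format):

```python
# Pull the timestamp and the KPhO field out of w32tm "ClockDispln Discipline"
# log lines, so a script can confirm the clock was being conditioned during
# a given window. Matches the sample lines shown in this article only.

import re

LINE = re.compile(
    r"(?P<date>\d{6}) (?P<time>[\d:.]+)s - ClockDispln Discipline: "
    r"\*SKEW\*TIME\* - .*KPhO:(?P<kpho>\d+)"
)

def parse_discipline_line(line: str):
    """Return (date, time, KPhO) from a discipline log line, or None."""
    m = LINE.search(line)
    return (m.group("date"), m.group("time"), int(m.group("kpho"))) if m else None

sample = ("151802 20:18:33.9898460s - ClockDispln Discipline: *SKEW*TIME* - "
          "PhCRR:1 CR:156250 UI:100 phcT:64 KPhO:41")
print(parse_discipline_line(sample))  # ('151802', '20:18:33.9898460', 41)
```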

Network Considerations

The NTP protocol algorithm depends on the network symmetry making it difficult to predict the type of accuracies needed for certain environments. Use the Performance Monitor and new Windows Time Counters for Windows Server 2016 to create baselines.

The Precision Time Protocol (PTP) and the Network Time Protocol (NTP) are the two that you can use to gauge accurate time.

For clients that are not part of a domain, Windows uses Simple NTP by default. Clients within a Windows domain use secure NTP, also referred to as MS-SNTP, which leverages domain communication and gives an advantage over authenticated NTP.

Reliable Hardware Clock (RTC)

Windows will not step the time unless conditions are far beyond the norm. Instead, w32tm adjusts the clock frequency at regular intervals based on the Clock Update Frequency setting, which defaults to 1 second on Windows Server 2016. It speeds the clock up when it is behind and slows it down when it is ahead of time.

This reason explains why you need to have acceptable results during the baseline test. If what you get for the “Computed Time Offset” is not stable, then you may have to verify the status of the firmware.
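The slewing behavior described above can be reduced to simple arithmetic. This is a minimal sketch of the idea, with arbitrary numbers, not W32Time's actual control loop:

```python
# Instead of stepping the time, a slewing clock nudges its rate each update
# interval until the offset is worked off. Rate and interval values here are
# arbitrary, for illustration only.

def intervals_to_correct(offset_us, rate_ppm=10000, interval_s=1):
    """Number of update intervals needed to slew away an offset (microseconds)
    when the clock rate is adjusted by rate_ppm parts per million.
    One ppm of one second is one microsecond, so each interval removes
    rate_ppm * interval_s microseconds of offset."""
    removed_per_interval_us = rate_ppm * interval_s
    return -(-abs(offset_us) // removed_per_interval_us)  # ceiling division

# A 100,000 µs (0.1 s) offset at a 1% (10,000 ppm) rate adjustment, 1 s updates:
print(intervals_to_correct(100_000))  # 10
```

The same arithmetic shows why a slower update frequency (once a week on older versions) leaves offsets in place far longer.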

Troubleshooting Time Accuracy and NTP

The Discover the Hierarchy section gave us a way to understand the time source chain. Look at the time offsets along that chain to identify the point where the divergence from the NTP source takes place. Once you can trace the hierarchy of time, focus on the divergent system and gather more information to determine the issues causing the inconsistency.

Here are some tools that you can use:

  • System event logs
  • Enable logging: w32tm /debug /enable /file:C:\Windows\Temp\w32time-test.log /size:10000000 /entries:0-300
  • The w32time registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time
  • Local network interfaces
  • Performance counters
  • w32tm /stripchart /computer:UpstreamClockSource
  • ping UpstreamClockSource (to gauge latency and the number of hops to the source)
  • tracert UpstreamClockSource




| Problem | Symptoms | Resolution |
| --- | --- | --- |
| Local TSC unstable | Perfmon on the physical computer shows the synchronized clock is not stable | Update the firmware, or try alternative hardware to confirm that it does not display the same issue |
| Network latency | The w32tm stripchart displays a RoundTripDelay exceeding 10ms; use Tracert to find where the latency occurs | Locate a nearby source clock for time; install a source clock on the same network segment, or point to one that is geographically closer. A domain environment needs a machine with the GTIMESERV role |
| Unable to reliably reach the NTP source | w32tm /stripchart gives "request timed out" | The NTP source is unresponsive; investigate it directly |
| NTP source not responsive | Check Perfmon counters for NTP Client Source Count, NTP Server Outgoing Responses, and NTP Server Incoming Requests, and compare them with your baseline test results | Use server performance counters to determine any change in load or network congestion |
| Domain Controller not using the most accurate clock | Changes in topology or a recently added master clock | Run w32tm /resync /rediscover |
| Client clocks are drifting | Time-Service event 36 in the System event log, or a text log entry showing the "NTP Client Time Source Count" counter going from 1 to 0 | Identify errors in the upstream source and check whether it is experiencing performance issues |

Baselining Time

Baseline tests are important because they give you an understanding of the expected performance accuracy of the network.

Use the figures to detect problems on your Windows Server 2016 deployment in the future. The first things to baseline are the root PDC and any machine with the GTIMESERV role.

Every PDC in the forest should have baseline test results; eventually, pick the DCs that are critical and baseline them too.

It is useful to baseline Windows Server 2016 against 2012 R2 using w32tm /stripchart as a comparison tool: run two similar machines and compare the results.

Using the performance counters, collect all information for at least one week to give you enough reference when accounting for various network time issues.

The more figures you have for comparison, the more confidence you can have that your time accuracy is stable.

NTP Server Redundancy

A manual NTP server configuration in a non-domain network means you should have good redundancy measures; better accuracy follows when the other components are also stable.

On the other hand, if your topology is not well designed and other resources are not stable, accuracy suffers. Take caution to limit the number of timeservers w32time uses to 10.

Leap Seconds

The climatic and geological activities on planet Earth lead to varying rotation periods. In a typical scenario, the rotation varies by about one second every two years.

When the difference from atomic time grows, a correction of a second, up or down, called a leap second is applied; the residual difference never exceeds 0.9 seconds. The correction is always announced six months in advance.

Before Windows Server 2016, the Microsoft Time Service did not account for leap seconds and relied on the external time service to handle the adjustment.

With the changes made to Windows Server 2016, Microsoft is working on a suitable solution to handle leap seconds.

Secure Time Seeding

W32time in Windows Server 2016 includes the Secure Time Seeding feature, which determines the approximate current time from outgoing Secure Sockets Layer (SSL) connections. The value helps in correcting gross errors on the local system clock.

You can decide not to use the Secure Time Seeding feature and use the default configurations instead. If you intend to disable the feature, use the following steps:

Set the UtilizeSslTimeData registry value to 0 using the command below:

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\Config /v UtilizeSslTimeData /t REG_DWORD /d 0 /f

If you do not want to reboot the machine, notify the W32time service about the change; this stops it from enforcing time monitoring based on data coming from SSL connections.

W32tm.exe /config /update

Rebooting the machine activates the setting immediately and directs the machine to stop collecting data from SSL connections.

For the setting to take effect on the entire domain, set the UtilizeSSLTimeData value to 0 through a Group Policy setting and publish it.

The moment the setting is picked up by a Group Policy client, the W32time service is notified and stops enforcing and monitoring SSL time data.

If the domain includes portable laptops or tablets, you may want to exclude them from the policy change: when they lose battery power, they will need the Secure Time Seeding feature to reacquire the current time.


The latest developments in Microsoft Windows Server 2016 mean that you can now get the most accurate time on your network, provided you observe certain conditions.

Accuracy matters in almost everything we do, and the relevance of time should be clear from what we have already covered in this document.

The main job of the Windows Time Service (W32Time) is to provide time to your machine, regardless of whether it is standalone or part of a network environment.

The primary use of time in a Windows Server 2016 environment is to keep Kerberos authentication secure.

W32Time makes replay attacks nearly impossible in an Active Directory environment or when running virtual machines on Hyper-V hosts.

How to Optimize Your Active Directory for Windows Server 2016

Microsoft Windows Server 2016 is still a valid choice in the market, and organizations are already asking their IT experts to evaluate its added value and the challenges they may encounter when moving from their current systems to the new server platform. In addition to the features found in Windows Server 2012 and 2012 R2, Windows Server 2016 presents possibilities and capabilities that are missing from previous Windows Server platforms. Any new Windows Server operating system that breaks into the market gets attention, and Windows Server 2016 has made tremendous improvements to its Active Directory.

The best approach before implementing Windows Server 2016 is to test its readiness and look for ways of minimizing the likely impact of migration. Another way to look at it is to identify organizational needs and how they can be integrated into future implementations. Administrators try out the Windows Server 2016 Active Directory because it provides an opportunity for growth, offers flexibility, and enhances the security setup of the organization.

Why Does Windows Server 2016 Matter?

Windows Server 2016 combines principles that define computation, identity, management and automation, security and assurance, and storage. These break down into the core elements of the server operating system: virtualization, system administration, network management and Software Defined Networking (SDN) technologies, cloud integration and management, and disk management and availability. Together they are meant to bring organizations into the future of technology without discarding the infrastructure already in use in the current environment.

Windows Server 2016 is a full-featured server operating system boasting solid performance and modern advancements. It shares many similarities with the Datacenter edition, which incorporates support for Hyper-V containers, new storage features, and enhanced security designed to protect virtual machines and network communications that have no trust configured between them.

This article should help you learn more about Windows Server 2016 features, the factors to consider before moving from an old to a new setup, and how to optimize your Active Directory. It also offers details on how to prepare for the move and migrate efficiently by managing the new environment effectively.

Windows Server 2016 New Features

Several features and enhancements form part of this server operating system. Here are some of the highlights:

Temporary Group Membership

This form of membership gives administrators a way of adding users to a security group for a limited time. For this feature to work, the Windows Server 2016 Active Directory must be operating at the Windows Server 2016 functional level. System administrators need to know all the installation requirements beforehand, both during and after the transition.
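
The idea behind time-bound membership can be modeled in a few lines. The Python below is a conceptual sketch only, not how Active Directory implements it: each member carries an expiry timestamp, and expired entries are simply ignored when the group is evaluated.

```python
# Sketch of the idea behind temporary group membership: every member
# carries an optional expiry, and expired entries no longer count.
import time

class Group:
    def __init__(self, name):
        self.name = name
        self._members = {}          # user -> expiry timestamp (None = permanent)

    def add_member(self, user, ttl_seconds=None):
        expiry = time.time() + ttl_seconds if ttl_seconds else None
        self._members[user] = expiry

    def members(self):
        now = time.time()
        return [u for u, exp in self._members.items()
                if exp is None or exp > now]

admins = Group("Domain Admins")
admins.add_member("alice")                  # permanent member
admins.add_member("bob", ttl_seconds=0.1)   # temporary member
print(admins.members())                     # both members initially
time.sleep(0.2)
print(admins.members())                     # bob's membership has expired
```

The real feature ties the expiry to the Kerberos ticket lifetime, so elevated access genuinely disappears when the time-to-live runs out.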

Active Directory Federation Service

There are essential changes that come with the Microsoft Windows Server 2016 Federation Service:

Conditional Access Control

Active Directory in previous installations had straightforward access controls, because the assumption had always been that all users would log in from a computer joined to a domain with proper Group Policy security settings. Conditional access gives users access to the resources that have been assigned to them.

In today's technological setup, users access resources from many types of devices that are not joined to the domain and often operate outside the organization's norms. This is a direct call for improved security, answered by the Conditional Access Control feature, which enables administrators to exercise better control over users by handling requests on a per-application basis. For example, administrators may enforce multi-factor authentication when non-compliant devices try to access business applications.

Support for Lightweight Directory Access Protocol (LDAP) v3

Another change introduced in the Active Directory Federation Services is support for Lightweight Directory Access Protocol (LDAP) v3. The capability makes it easier to centralize identities across different directories. For example, an organization that uses a non-Microsoft directory for identification and access control can centralize identities in the Azure cloud or Office 365. LDAP v3 also makes it easier to configure single sign-on for SaaS applications.

Domain Naming Service (DNS)

Active Directory and DNS go hand in hand because of the dependency of Windows Server systems on DNS. There had been no significant changes in the Windows Server DNS service until the arrival of Windows Server 2016. The following are the new DNS features:

DNS Policies

The ability to create DNS policies is arguably the most significant change. Policies enable administrators to control the way DNS responds to different queries; examples include load balancing and blocking DNS requests coming from IP addresses whose domains have been listed as malicious.
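
The decision flow a policy engine applies can be sketched as follows. This Python is purely illustrative and is not the Windows DNS policy API (which is configured through PowerShell); the subnet, zone data, and policy order are invented for the example.

```python
# Illustrative sketch of policy-based DNS resolution: the server
# consults an ordered list of policies before answering a query.

BLOCKED_PREFIXES = {"203.0.113."}          # example "malicious" range (TEST-NET-3)

def resolve(name, client_ip, zone):
    # Policy 1: refuse queries from blocked address ranges.
    if any(client_ip.startswith(p) for p in BLOCKED_PREFIXES):
        return None                        # drop / refuse the query
    # Policy 2: simple location-aware load balancing -- hand out a
    # different answer depending on the client's subnet.
    if client_ip.startswith("10.1."):
        return zone.get(name, {}).get("eu")
    return zone.get(name, {}).get("us")

zone = {"www.example.com": {"eu": "192.0.2.10", "us": "192.0.2.20"}}
print(resolve("www.example.com", "10.1.5.9", zone))     # EU answer
print(resolve("www.example.com", "198.51.100.7", zone)) # US answer
print(resolve("www.example.com", "203.0.113.5", zone))  # blocked -> None
```

The point is the ordering: a block policy is evaluated before any traffic-steering policy, so a listed address never receives an answer at all.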

Response Rate Limit

The rate at which the server responds to DNS queries can now be controlled. This control is designed to help defend against external attacks, such as denial of service, by limiting the number of times per second a DNS server will respond to a client.
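
The mechanism amounts to counting responses per client per second and dropping the excess. The Python below is a simplified sketch of that idea (real response-rate limiting also tracks subnets, error responses, and "leak" rates; the limit of 5 here is arbitrary).

```python
# Simplified sketch of DNS response-rate limiting: cap how many
# answers per second a given client may receive.

class RateLimiter:
    def __init__(self, max_per_second):
        self.max_per_second = max_per_second
        self.window = {}                  # client -> (second, count)

    def allow(self, client, now):
        sec = int(now)
        last_sec, count = self.window.get(client, (sec, 0))
        if last_sec != sec:               # a new one-second window begins
            last_sec, count = sec, 0
        if count >= self.max_per_second:
            self.window[client] = (last_sec, count)
            return False                  # drop the response
        self.window[client] = (last_sec, count + 1)
        return True

rl = RateLimiter(max_per_second=5)
answered = sum(rl.allow("198.51.100.7", now=42.0) for _ in range(20))
print(answered)                           # only 5 of 20 queries answered
```

Capping per-client responses like this blunts amplification attacks, where an attacker spoofs a victim's address to flood it with DNS answers.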

Microsoft IP Address Management (Microsoft IPAM)

The most significant improvement around DNS is in the IP Address Management (IPAM) system, which helps track IP address usage. The integration of Microsoft IPAM with DHCP has long been robust, while the DNS side was minimal; Windows Server 2016 adds new DNS management capabilities, including inventory recording. Support for multiple Active Directory forests by IPAM is also a welcome feature. It is only possible if a trust already exists between the forests and IPAM is installed in each one.

Migration Considerations

Planning is critical when moving from an earlier Windows Server version to Server 2016. The goal of any migration should be minimizing its impact on business operations. Going ahead with the migration should be an opportunity for administrators to set up a scalable, flexible, compliant, and secure platform.

Understanding the Existing Server Environment

It is a rookie mistake to jump into implementation without a proper analysis of the current server environment. Assessment at this stage should look at users, groups, distribution lists, applications, folders, and Active Directory. On the business side, workflows, email, programs, and any infrastructure in use should be assessed before making the big move.

It is also vital that you:

  • Understand what needs to be moved and what can be left as it is. For example, there is no need to move inactive accounts and old data that is no longer relevant, but all active data stores, mailboxes, and users should not be left behind.
  • Analyze the applications, users, and processes that need access and should be migrated, to ensure that the relevant resources are available during and after the transfer.

Improving Active Directory Security and Compliance Settings

Another critical factor to consider during migration is security and delegation: controlling who makes changes to Active Directory objects and policies. Most organizations grant access to Active Directory objects to solve an immediate problem and never clear the permissions afterwards. Proper controls should be in place to manage what can be added to the AD and who is responsible for making such changes.

Continuous monitoring of activities in the Active Directory, to ascertain that they comply with both internal and external regulations, should be ongoing. Microsoft Windows Server and AD can audit events with visible output, and auditing can be implemented quickly even in a busy setup. A coherent AD audit cluster with analytical capabilities is critical for flagging unauthorized changes, spotting inappropriate use of the AD and related resources, tracking users across the entire infrastructure, and giving compliance reports to the auditors.

Ensuring Application Compatibility

Before initiating migration, make sure that all software and third-party applications used in your organization are compatible with Windows Server 2016. All in-house applications should also be tested to make sure they work correctly in the new environment.

Minimizing Impact on Business

Ensuring in-house software compatibility is one aspect of reducing the cost of migration to the business. As an administrator, you need to know how downtime will be handled when moving from the legacy to the new system. One thing to avoid is underestimating the impact of migration on users and operations by failing to analyze all access points. Many such challenges can be avoided by scheduling resource-intensive migration tasks during off-peak hours.

Failure to achieve a smooth transition between the legacy and the new system can lead to service disruptions, lost productivity, and an increased cost of doing business. The coexistence of the old and the new system is essential in any Active Directory migration, because users still need access to resources to ensure continuity. Directory synchronization is important at this stage to make sure that users can reach their data.

Restructure the Active Directory

Moving from your legacy system to Windows Server 2016 should be taken seriously and not treated like any other routine IT task. It is an opportunity to restructure your Active Directory to meet current and future needs. A significant system upgrade is often prompted by changes in organizational models and requirements, and changes in IT technology are another major force behind Active Directory restructuring.

Determine the number of domains and forests needed. Examine the need to merge some forests or create new ones. You can also take the opportunity to extend the new infrastructure to remote offices that may not have existed under the legacy system.

Active Directory Management and Recovery

Every IT team faces challenges when managing the Active Directory on a daily basis. The configuration of user properties is time-consuming and error-prone when dealing with a large and complex Windows network. Some of these duties have to be performed manually, leading to repetitive and mundane tasks that end up taking most of the administrator's time. Accomplishing them with Windows native tools or PowerShell requires a deeper understanding of how the Active Directory and its features work.

Using software to manage repetitive Active Directory tasks simplifies the process, and you also get detailed reports on tasks and their status. Such software helps in planning and executing an efficient AD restructuring, which in turn helps you implement a secure system. Managing AD through software gives management a common console to view and manage Active Directory users, computers, and groups. Some products also let administrators delegate repetitive tasks securely and perform controlled automation of the Active Directory structure.

Software Implementation

Two popular software packages used to manage Active Directory optimization tasks are:

  1. ADManager Plus
  2. Quest Software

They both can help in the restructuring and consolidation of Windows Server 2016 in a new environment.

ADManager Plus

ADManager Plus offers additional features such as sending and receiving customized notifications via SMS or email. Its search options make it easy for IT managers to search the directory through the software's interface panel. Using ADManager Plus, the IT department can execute Windows optimization tasks with ease, in addition to integrating utilities such as ServiceNow, ServiceDesk, and ADSelfService Plus.

Active Directory User Management

ADManager Plus manages thousands of Active Directory objects through its interface. It helps you create and modify users by configuring general attributes, Exchange server attributes and policies, terminal service attributes, and remote logon permissions. You can set up new users in Office 365 and G Suite when creating accounts in the Active Directory, and you can design templates that let the help desk team modify and configure user accounts and properties in a single action.

Active Directory Computer Management

This solution allows for the management of all computers in the environment from any location. You can create objects in bulk using CSV templates, modify the group and general attributes of computers, move them between organizational units, and enable or disable them.

Active Directory Group Management

Group management becomes more flexible with software modules that create and modify groups using templates and configure all attributes in an instant.

Active Directory Contact Management

You can use this tool to import and update Active Directory contacts in a single process, which means you do not have to select individual contacts for an update.

Active Directory Help Desk Delegation

The ADManager Plus delegation feature helps administrators create help desk technicians and delegate the desired tasks related to user attributes. The various repetitive management tasks for users, groups, computers, and contacts can be delegated using customized account creation templates. Help desk users can share the administrators' workload, freeing them up for core duties.

Active Directory Reports and Management

ADManager Plus provides information on the different objects within the AD, allowing you to view and analyze it in the web interface. For example, you can see a list of all inactive users and modify the accounts accordingly.


Quest Software takes a different approach, covering preparation, recovery, security and compliance, migration, consolidation, and restructuring.


During preparation, Quest helps assess the existing environment: the Enterprise Reporter gives a detailed evaluation of the current setup, including the Active Directory, Windows Server, and SQL Server. During this assessment, Quest can report the number of accounts in the Active Directory and isolate the active from the disabled ones. Knowing the exact status of your environment is paramount before the migration begins.

Quest also helps discover identities and inventories on application servers that depend on the Active Directory domains being moved, enabling you to fix or redirect them on the new server.

Migration, Consolidation, and Restructuring

The Migration Manager for Active Directory provides ZeroIMPACT AD restructuring and consolidation. It offers peaceful coexistence to both migrated and yet-to-be-migrated users by maintaining secure access to workstations and resources.

Secure Copy offers an automated solution for quickly migrating and restructuring files on the data server while maintaining security and access points. Its robustness makes the tool well suited for planning and verifying successful file transfers.

Migrator for Novell Directory Service (NDS) helps administrators move from Novell eDirectory to Active Directory. The tool also moves all data within Novell and reassigns permissions to the new identities on the new server.

Security and Compliance

The Change Auditor for Active Directory gives a complete evaluation of all the changes that have taken place in the Active Directory. The evaluation report includes who made the change, what kind of change was made, the values before and after the adjustment, and the workstation where the change occurred. The tool can also prevent changes; for example, you can disable the deletion or transfer of organizational units and changes to Group Policy settings.

Access Control

The Active Roles modules keep AD security compliant by letting you control access and delegate tasks with least privilege. This creates an opportunity to generate access rules based on defined administrative policies and access rights. You can use Active Roles to bring together user groups and mailboxes, as well as change or remove access rights as roles change.

Centralized Permission Management

The Security Explorer facilitates the management of Microsoft Dynamic Access Control (DAC) by enabling administrators to add, remove, restore, back up, and copy permissions, all from a single console. The tool can make targeted or bulk changes to server permissions, made possible by DAC management features such as the ability to grant, revoke, clone, and modify permissions.

Monitoring Users

InTrust enables the secure collection, storage, and reporting of log data and alerts in compliance with internal and external regulations on policy and security best practice. Using InTrust, you gain insight into user activity by auditing access to critical systems, and you can see suspicious logins in real time.

Management and Recovery

The easiest way for an IT administrator to manage user accounts, computers, and objects is via Group Policy. Poor management of Group Policy Objects (GPOs) can cause a lot of damage; consider, for example, a GPO that assigns proxy settings with the wrong proxy values.

GPOADmin automates Group Policy management and provides a workflow in which changes are checked before GPOs are approved. When GPOs are used in production, the reduced workload and improved security quickly become apparent.

Recovery is a critical process in any organization that runs its systems on Windows Server 2016. You can also recover wrong entries and accounts that were removed: the Recovery Manager for Active Directory provides features that report on the differences and help restore objects that were changed.

It is important to be prepared for disaster and data recovery. If your domain falls into the wrong hands, or the entire network setup is corrupted, use the Recovery Manager for Active Directory utility.


Windows Server 2016 has a wealth of new features and capabilities to streamline management and provide a better user experience. A successful implementation depends on a sound Active Directory consolidation process. Administrators who have already tested this server operating system should take advantage of its new capabilities.

The benefits of Active Directory tools and utilities are numerous: they help in setting up a flexible and secure Windows Server 2016 and Active Directory that will work for your current and future environment. These utilities also help managers who are not conversant with Active Directory management tools but need to switch to the new server to comply with regional and international standards.

Windows Server Disk Quota – Overview

The Windows Server system comes with a very handy feature that allows the creation of many user accounts on a shared system. Users can log in and have their own disk space and other custom settings. The drawback is that users get unlimited disk space, and over time the space fills up, leading to a slow or malfunctioning system, which is a real mess. Have you ever wondered how you can avert this situation and set user limits on disk volume usage?

Worry no more. To overcome the scenario described above, Windows came up with the disk quota functionality. This feature allows you to set limits on hard disk utilization so that users are restricted in the amount of disk space they can use for their files. The functionality is available both for Windows and for Unix-like systems such as Linux that are shared by many users. On Linux, it supports the ext2, ext3, ext4, and XFS filesystems. On Windows, it is supported in Windows 2000 and later versions, but it can only be configured on NTFS file systems. So, if you are starting out with a Windows server or client system, you may want to consider formatting the volumes with NTFS to avert complications later on. Quotas can be applied to both client and server systems, such as Windows Server 2008, 2012, and 2016. Note that quotas cannot be configured on individual files or folders; they can only be set on volumes, and the restrictions apply to those volumes only. To administer a disk quota, you must be an administrator or have administrative privileges, that is, be a member of the Administrators group.

The idea behind setting limits is to prevent the hard disk from filling up and thereby causing the system or server to freeze or behave abnormally. When a quota is surpassed, the user receives an “insufficient disk space” error and cannot create or save any more files. A quota is a limit, normally set by the administrator, that restricts disk space utilization. It prevents careless or unmindful users from filling up the disk space and causing a host of other problems, including slowing down or freezing of the system. Quotas are ideal in enterprise environments where many users access the server to save or upload documents. An administrator will want to assign a maximum disk space limit so that end users are confined to uploading work files only, such as Word, PowerPoint, and Excel documents. The idea is to prevent them from filling the disk with non-essential personal files such as images, videos, and music, which take up a significant amount of space. A disk quota can be configured on a per-user or per-group basis. A perfect example of disk quota usage is in web hosting platforms such as cPanel or Vesta CP, where users are allocated a fixed amount of disk space according to their subscription.

When a disk quota system is implemented, users cannot save or upload files beyond the limit threshold. For instance, if an administrator sets a 10 GB disk space limit for all logon users, a user cannot save files exceeding that 10 GB. If the limit is reached, the only way out is to delete existing files, ask another user to take ownership of some files, or ask the administrator, who is the god of the system, for more space. Note that you cannot gain space by compressing files: quotas are based on uncompressed sizes, and Windows charges compressed files against the quota at their original uncompressed size. There are two types of limits: hard limits and soft limits. A hard limit is the maximum space the system can grant an end user. If, for instance, a hard limit of 10 GB is set on a hard drive, the end user can no longer create and save files once the 10 GB is reached. This restriction forces them to look for an alternative storage location elsewhere or delete existing files.

A soft limit, on the other hand, can temporarily be exceeded by an end user but should not go beyond the hard limit. As a user approaches the hard limit, they receive a string of email notifications warning them about it. In a nutshell, a soft limit gives you a grace period; a hard limit does not. A soft limit is set slightly below the hard limit: if a hard limit of, say, 20 GB is set, a soft limit of 19 GB would be appropriate. It is also worth mentioning that end users can scale their soft limits up to the hard limit, and down to zero. As for hard limits, end users can scale them down but cannot increase them. As a courtesy, soft limits are usually configured for C-level executives so that they get friendly reminders when they are about to hit the hard limit.
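
The interplay of the two limits can be summarized in a few lines of logic. The Python below is a conceptual sketch, not Windows' actual quota code: crossing the soft limit only produces a warning, crossing the hard limit refuses the write, and sizes are counted uncompressed, matching how Windows charges quotas for compressed files.

```python
# Sketch of soft/hard quota logic: soft limit warns, hard limit refuses.

GB = 1024 ** 3

def check_write(used, new_file_size, soft_limit, hard_limit):
    """Return (allowed, warning) for a proposed write of new_file_size bytes."""
    after = used + new_file_size          # uncompressed sizes throughout
    if after > hard_limit:
        return False, "insufficient disk space"
    if after > soft_limit:
        return True, "approaching your disk quota"
    return True, None

# Hard limit 20 GB with a soft limit of 19 GB, as in the example above:
print(check_write(18 * GB, 2 * GB, 19 * GB, 20 * GB))   # allowed, but warned
print(check_write(19 * GB, 2 * GB, 19 * GB, 20 * GB))   # refused outright
```

This is also why compressing files does not buy a user any headroom: the `used` figure is based on uncompressed sizes, so the arithmetic above is unchanged.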

In summary, we have seen how handy disk quotas are, especially on a PC or server shared by many users. The ability to limit disk space utilization ensures that the disk is not filled up by users, which would lead to malfunctioning or ‘freezing’ of the server. In our next topic, we will elaborate in detail on how to apply and implement quotas.

File System Attacks on Microsoft Windows Server

The most common attacks on Microsoft Windows Server systems target Active Directory, based on the fact that AD is the “heart” of any Windows-based system. A bit less common, but still very dangerous (and interesting), are file system attacks.

In this article, we investigate the most common file system attacks and the protections against them.

The goal of file system attacks is always the data: the information stored on a server that is important, for whatever reason, to whoever planned the attack. To get to the data, the first thing an attacker needs is credentials, and the more elevated the account, the better.

In this article, we will not cover credential theft, which is a topic of its own. Instead, we will assume the attacker has already breached the organization and obtained Domain Administrator credentials.

Finding File Shares

The first step is finding the data, the place where it “lives”.

This is where the tools come to the fore. Most of the tools attackers use are penetration testing tools, such as smbmap in our example, or PowerShell (we will show both ways).

SMBMap, as its GitHub page says, “allows users to enumerate samba share drives across an entire domain. List share drives, drive permissions, share contents, upload/download functionality, file name auto-download pattern matching, and even execute remote commands. This tool was designed with pen testing in mind, and is intended to simplify searching for potentially sensitive data across large networks.”

Using smbmap's features, attackers can find all the file shares on the reachable hosts and determine what sort of access they have, along with the permissions and more detailed information about every file share on the system.
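
What the attacker, or equally an auditing administrator, ends up with is a flat inventory of shares and effective permissions. The Python sketch below shows how such an inventory would be scanned for the prime targets, shares writable by everyone; the hosts, share names, and entries are invented for illustration and are not smbmap's actual output format.

```python
# Sketch: scan a pre-collected share inventory for open, writable
# shares -- the first things both attackers and auditors look for.

shares = [
    {"host": "FS01", "share": "Public",  "principal": "Everyone",
     "access": "READ, WRITE"},
    {"host": "FS01", "share": "Finance", "principal": "FinanceGroup",
     "access": "READ"},
    {"host": "DC01", "share": "SYSVOL",  "principal": "Authenticated Users",
     "access": "READ"},
]

def open_writable_shares(inventory):
    """Shares any account can write to: prime targets and prime audit findings."""
    return [s for s in inventory
            if s["principal"] == "Everyone" and "WRITE" in s["access"]]

for s in open_writable_shares(shares):
    print(f'{s["host"]}\\{s["share"]} is writable by Everyone')
```

Running the same scan yourself, before an attacker does, is essentially what the "removing open shares" recommendation below boils down to.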

Another common way of determining the data location is PowerShell based.

By definition – PowerSploit is a collection of Microsoft PowerShell modules that can be used to aid penetration testers during all phases of an assessment.

Like smbmap, PowerSploit has a huge number of features. For finding data shares, attackers use the Invoke-ShareFinder cmdlet, which, in combination with other PowerSploit features, shows exactly the same things as smbmap: all the information necessary to access and use the data.


Of course, the examples above are just a brief description of the attacks that can list your data shares for a potential attacker, but it is clear that listing your data is the first step toward taking it.

So here are some recommended actions to protect your system:

Removing open shares: Reduce open shares as much as possible. It is fine to have some if a job explicitly requires them, but often open shares are simply the result of sloppily assigned permissions. Check your default permissions (default permissions are equivalent to open), change them appropriately, and deny the potential attacker an easy listing.

Monitor first-time access activity: This is more an admin tip than a protection method, but it can be important. If a user has rights to a share but has never used it, and all of a sudden the activity on that account changes and steps outside of “normal”, it could be a sign that the account credentials have been hijacked.

Check for potentially harmful software: Treat it not as malware but as a hint. SmbMap is built in Python, so a sudden installation of Python software or of a PowerSploit module on your system could be an early alarm that something suspicious is going on on your servers.

Finding Interesting Data

Now the potential attacker knows where the data on our hypothetical server “lives”. The next step is narrowing the data down to the “interesting”. There can be huge numbers of files in even the smallest organization; how can the attacker know which data he or she needs?

With PowerSploit, the functionality used is called Invoke-FileFinder. It has a lot of filtering options to narrow the data down to the “interesting” and can export the results to CSV files, which allows the attacker to explore them on his own system at his own pace. After identifying the targets, the attacker can mount a targeted attack, move the needed files to a staging area, and transport them out of the network (via FTP, or even a Dropbox trial account).

The same goes for smbmap. Just like PowerSploit, it can filter the data with the options the tool provides and show the data the attacker is interested in, with the same outcome: obtaining the information.
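
The filtering step itself is unsophisticated, which is part of what makes it effective. The sketch below mirrors the idea, not Invoke-FileFinder's actual implementation: match file names against patterns that suggest sensitive content and export the hits to CSV. The patterns and UNC paths are invented for the example.

```python
# Sketch of the "interesting data" filter: pattern-match file names
# and export the hits as CSV, as an attacker's triage step would.
import csv
import fnmatch
import io

PATTERNS = ["*password*", "*secret*", "*.kdbx", "*payroll*"]

def interesting(paths):
    return [p for p in paths
            if any(fnmatch.fnmatch(p.lower(), pat) for pat in PATTERNS)]

found = interesting([
    r"\\FS01\Public\holiday-photos.zip",
    r"\\FS01\Public\passwords.xlsx",
    r"\\FS01\Finance\payroll-2016.xls",
])

buf = io.StringIO()
csv.writer(buf).writerows([[p] for p in found])
print(buf.getvalue().strip())
```

Defenders can run exactly the same pattern search over their own shares: any hit is a file that should be relocated, encrypted, or have its permissions tightened before someone else finds it.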


At this point, the hypothetical attack is in its second phase. The attacker has successfully listed the files and found the most interesting ones; only the easy part is left, taking the data. How do you protect against that? Together with the methods mentioned earlier, the following can help administrators fortify the system and its files.

Password rotation – This can be a very important measure, especially for services and applications that store passwords in the filesystem. Constantly rotating passwords and checking file content presents a very large obstacle to the attacker and makes your system more secure.

Tagging and encryption – In combination with Data Loss Prevention, these will highlight and encrypt important data, which stops the simpler types of attack from getting at the important data, at least.


Now for the final part of the file system attack. In our hypothetical scenario, the attacker has listed and accessed data on the penetrated system. Here we describe how attackers persist in the system even after they get kicked out the first time.

Attackers hide some of their data in the NTFS file system, more precisely in Alternate Data Streams (ADS). The data of a file is stored in that file's $DATA attribute, which NTFS tracks. Malware vendors and “bad guys” use ADS as a way in, but they still need credentials.
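
The naming convention is what makes ADS easy to abuse: a file's default content lives in the unnamed stream, while an alternate stream rides along as `file.txt:streamname`, invisible to an ordinary directory listing. The Python below only models that naming on plain strings, it does not touch a real NTFS volume, and it deliberately ignores drive-letter colons like `C:\`; on a real system, PowerShell's `Get-Item -Stream *` lists a file's streams.

```python
# Sketch of NTFS ADS naming: "report.docx:evil.exe" is an alternate
# stream named "evil.exe" attached to report.docx.  String model only;
# drive letters (C:\...) would need extra handling in real paths.

def split_ads(name):
    """Split 'file.txt:payload.exe' into (file, stream_or_None)."""
    base, sep, stream = name.partition(":")
    return (base, stream if sep else None)

print(split_ads("report.docx"))            # ('report.docx', None)
print(split_ads("report.docx:evil.exe"))   # ('report.docx', 'evil.exe')
```

Because a plain `dir` shows only `report.docx`, data parked in the alternate stream hides in plain sight, which is exactly why attackers favor it for persistence.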

As usual, they can be stopped by the correct use of permissions: do not allow “write” permission to any account that is not specifically assigned for write operations.

File system attacks are tricky, but they leave traces, and in general most attacks can be prevented by the system administrator's behavior and foresight. In this field we can truly say that it is better to prevent than to heal; only knowing your system fully, combined with full-time administration and monitoring, can make your system safe.

Do you want to avoid Unwanted File System Attacks on Microsoft Windows Server?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!