Windows Server Core vs. Graphical User Interface (GUI) Debate

Windows Server offers two installation options in most environments: Server Core and the Desktop Experience, also known as the Graphical User Interface (GUI).

The main difference between the two options is that Server Core omits the GUI shell packages; it is the same Windows Server operating system stripped down to the essentials, without the desktop shell.

Because the shell was packaged as a removable feature in Windows Server 2012 and 2012 R2, a few lines of PowerShell could switch a single installation between the two forms; from Windows Server 2016 onward, the choice has to be made at installation time.
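
As a historical illustration, on Windows Server 2012 and 2012 R2 the switch really was a couple of cmdlets; a minimal sketch (these feature names apply to 2012 / 2012 R2 only, not to 2016 and later):

# Convert a full installation to Server Core by removing the GUI shell features
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

# Convert a Server Core installation back to the full Desktop Experience
Install-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart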

Some IT professionals argue that the introduction of the Windows Admin Center is a move in the right direction and a step closer to mass adoption of Server Core.

However, it does not spare IT administrators from every challenge.

Are We Ready for Server Core to Take Center Stage?

Windows Server admins who are unsure where to stand in the Server Core vs. GUI debate should reconsider their positions, because Microsoft keeps adding new ways of running Windows Server in lightweight mode.

Even as Microsoft steers towards Server Core, many administrators remain comfortable with the full Windows Server installation because it gives access to easy point-and-click GUI menus and tools.

In theory, managing many servers with one or a few lines of PowerShell sounds impressive, until workload and pressure pile up. Under stress, even experienced IT administrators run back to what they have always used: the GUI.

It is this perceived lack of configuration options, and the more limited tooling available from the shell, that keeps server administrators from choosing Server Core as their preferred platform when working on network issues.

The Windows Admin Center arrived alongside the Windows Server 2019 Long-Term Servicing Channel release. The new tool grew out of customer feedback: lower the barriers to Server Core deployment.

Windows Server 2019 also debuted the Server Core App Compatibility Feature on Demand, which adds the binaries and functionality that some applications need in order to run.
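
The Feature on Demand ships as a Windows capability; a minimal sketch of adding it to a running Server Core installation (the capability name below is the one documented for Windows Server 2019, but verify it against your installation media):

# Add the Server Core App Compatibility Feature on Demand, then reboot
Add-WindowsCapability -Online -Name ServerCore.AppCompatibility~~~~0.0.1.0
Restart-Computer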

During this time, Microsoft added support for Server Core as a deployment option for Exchange Server 2019.

What are the Effects of These Changes on the Server Core vs. GUI Debate?

For administrators who are still in doubt: Server Core is fully capable of handling infrastructure roles such as Active Directory domain controllers and Domain Name System (DNS) servers.
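
Those roles install on Server Core the same way they do anywhere else; a minimal sketch using the role cmdlets from an elevated PowerShell session (the domain name is a placeholder):

# Install AD DS and DNS on a Server Core machine
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools

# Promote the server to a domain controller in an existing domain (prompts for credentials and the DSRM password)
Install-ADDSDomainController -DomainName "corp.example.com" -InstallDns -Credential (Get-Credential)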

In the next section, we feature a seasoned IT director at the Canadian Museum for Human Rights in an interview that highlights some Server Core advantages and disadvantages.

He also explains why he thinks containers might tip the balance towards Server Core in some organisations.

  • What deployment method do you use?

Server Core 2016 is the base system for the three clusters we run; they are on Windows Server 2019 right now. The 2019 release is still new to the market, so the department decided to keep the GUI available as a last-resort option in a crisis. Within six months to a year, we will move the nodes running 2019 with the GUI over to 2019 Server Core.

  • Why did your organisation decide to use Server Core?

The advantage is the smaller footprint for patches and resources. Server Core demands fewer resources to run, so more of the hardware is left for the workloads themselves.

Server Core also presents a smaller attack surface, and that is something we needed to consider.

  • Why is Microsoft pushing Server Core when administrators prefer the GUI? Do you have any challenges using Server Core?

Server Core will make an administrator’s life easier. Windows Admin Center is a step up from the Remote Server Administration Tools (RSAT), giving a single pane of glass for server management.

However, we still do not have a complete toolset. As things stand, we may have to drop into the GUI for one or two tasks. Microsoft needs to get creative here and make Server Core a reality for all administrators.

We are using Server Core 2016 and falling back on the GUI as necessary. There are some management challenges, but we treat them as a learning curve.

Windows Admin Center is an excellent place to start but still far from the complete replacement Microsoft was hoping for. Some of the admin tools are not arriving as fast as I would like, which means we still use the GUI more than we would like.

When you are in a production server environment that needs quick troubleshooting, you can struggle with the PowerShell tools or remote management tools like Windows Admin Center; your managers may not give you enough time to solve the problem that way, so you end up going back to the GUI. PowerShell offers no comfort zone in that situation.

The IT community is adapting to the methodology of not relying on a GUI on the server itself: you either use a GUI tool like Windows Admin Center from a management machine or use a remote PowerShell session. The catch is that PowerShell has a steep learning curve.

  • Is Server Core easier to manage with patches and security?

Yes. In a typical patch cycle, Windows Server with the GUI might end up needing something like 15 patches where Server Core needs 2, and it is easier to stay consistent with Server Core’s patch releases than to chase patches across the full set of applications.

  • Which applications did you use with Server Core that never worked? What functions require PowerShell skills, even when using the Windows Admin Center?

When we sought proposals for the purchase of a ticketing system, our specification called for Server Core; unfortunately, the vendors could not support it.

Windows Admin Center still does not support many features, even though it has come a long way and new features are added in every release.

For instance, there are some things in Failover Cluster Manager that you still need to do on the console using the GUI, simply because the feature is absent from Windows Admin Center. This is why system administrators keep pushing Microsoft to bring all the tools into Windows Admin Center first and treat the GUI as the fallback.

  • Do you think Server Core is ready for the market?

The answer differs from one organisation to another. Ours is not huge, but we have made a lot of progress on the learning curve: we managed to put Server Core into production on the new 2019 clusters within a year. Embracing Server Core pays off in the long term, but most organisations are slow to take on the new technology.

The biggest obstacle to Server Core mass adoption is that players in the industry fear taking risks, or simply move slower than Microsoft would like. The majority of the workforce does not have the day-to-day experience needed to set up and run Server Core.

Most IT practitioners who take on Server Core installation and administration work in smaller organisations. A large organisation may look ready and aggressive about new technology, but taking the first step is its biggest challenge.

That explains why it is important to encourage people to try out beta programs as much as possible. In our case, we have a corporate plan of testing production workloads in a beta environment. Very few places have such a testing environment, and that is exactly what Server Core needs to go mainstream.

  • Do you think containers might tip the balance towards the Server Core in some organisations?

Another thing that will push people towards Server Core is containerisation. A container has no room for a GUI, so Windows Server containers are the most likely place for Server Core to flourish.

Server Core is the middle step between full-blown physical servers and containerised virtual servers. The Server Core App Compatibility Feature on Demand brings the ability to add functionality without installing everything that normally comes with it.

For example, if an application needs the .NET Framework, you do not have to install the entire framework; the container brings up only the features you need.

Windows Server Deduplication: An Essential Introduction

Deduplication has been one of the more useful features of Windows Server since it was introduced as a native role in Windows Server 2012.

It is a native feature, added through Server Manager, that helps system administrators stay ahead of server storage planning and volume management.

Most server administrators rarely talk about this feature until it is time to address the organization’s storage crunch.

Data deduplication works by identifying duplicate data blocks and saving a single copy as the central source, reducing the spread of identical data all over the storage areas. Deduplication takes place at the file or block level, giving you more space on the server.

Block-level deduplication requires special, relatively expensive hardware components because of the complex processing involved.

File-level deduplication is less complicated and does not require additional hardware; as a result, most administrators implementing deduplication prefer the file approach.

When to Apply Windows Server Deduplication

Since Windows Server file deduplication works at the file level, it operates at a higher level than block deduplication, matching whole chunks of data rather than raw blocks.

File deduplication operates at the operating-system level, which means you can enable the feature inside a virtual guest in a hypervisor environment.

Industry growth keeps driving demand for deduplication, even as storage hardware becomes bigger and more affordable.

Deduplication is all about fulfilling this growing demand.

Why Is the Deduplication Feature Found on Servers?

Servers are central to any organization’s data, as users store their information on central repositories. Not all users embrace new ways of handling their work, and some feel safer making multiple copies of the same files.

Since most server administrators manage and back up users’ data, the Windows deduplication feature greatly enhances their productivity.

Data deduplication is a straightforward feature and takes only a few minutes to activate.

Deduplication is one of the server roles found on Windows Server, and you do not need a restart for it to work.

However, restarting is a safe way to make sure the entire process is configured correctly.

Preparing for Windows Server Deduplication

  • Click Start
  • Open a command prompt (Run)
  • Run the following command against a selected volume to analyze the potential storage savings: DDPEval.exe <volume>
  • Right-click the volume in Server Manager to activate data deduplication

A wizard will then guide you through the deduplication setup depending on the type of server in place (choose a general-purpose file server, VDI, or Hyper-V configuration).
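
If you prefer PowerShell over Server Manager, the same result can be achieved with the deduplication cmdlets on Windows Server 2016 or later; a minimal sketch (the drive letter is a placeholder):

# Install the deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on volume E: for a general-purpose file server workload
Enable-DedupVolume -Volume "E:" -UsageType Default

# Check the savings once optimization jobs have run
Get-DedupStatus -Volume "E:"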

Setting Up the Timing for Deduplication

Deduplication should run on a schedule to reduce the strain on existing resources; saving storage space should not come at the expense of overworking the server.

Set the timing for periods when there is little strain on the server, allowing quick and effective deduplication.

Deduplication is a CPU-intensive task because of the numerous activities and processes involved in each job.

Other deduplication jobs include optimization, integrity scrubbing, and garbage collection. All of these should run at off-peak hours unless the server has enough resources to withstand system slowdowns; a scheduling sketch follows below.
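
A minimal sketch of pushing the heavy jobs into a quiet window with the scheduling cmdlet (days and times are placeholders):

# Run optimization nightly at 01:00 for at most four hours
New-DedupSchedule -Name "NightlyOptimization" -Type Optimization -Start 01:00 -DurationHours 4 -Days Monday,Tuesday,Wednesday,Thursday,Friday

# Run garbage collection early on Saturdays
New-DedupSchedule -Name "WeekendGC" -Type GarbageCollection -Start 02:00 -Days Saturday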

The capacity that deduplication reclaims varies depending on server use and storage available.

General files, ISOs, office application files, and virtual disks usually consume most of the storage allocation.

Benefits of Windows Server Deduplication

Windows Server deduplication brings several benefits to an organization, including the following:

  • Reduced storage allocation

Deduplication can reduce the storage space consumed by files and backups, so an enterprise gets more usable space and lowers its annual spend on storage hardware. The gains in efficiency and speed can even eliminate the need for backup tapes.

  • Efficient volume replication

Deduplication ensures that only unique data is written to the disk, which reduces network traffic.

  • Increasing network bandwidth

If deduplication is configured to run at the source, duplicate data never has to be transferred over the network.

  • Cost-effective solution

Since less hardware is needed, power consumption and the space required for extra storage drop at both local and remote locations. The organization buys less and spends less on storage maintenance, reducing overall storage costs.

  • Fast file recovery process

Deduplication ensures faster file recoveries and restorations without straining the day’s business activities.

Features of Deduplication

1. Transparency and Ease of Use

Installation is straightforward on the target volume(s), and running applications and users will not notice when deduplication is taking place.

The feature works within NTFS file-system requirements. However, files encrypted with the Encrypting File System (EFS), files smaller than 32 KB, and files with Extended Attributes (EAs) are not processed during deduplication.

In such cases, file interaction happens purely through NTFS, not deduplication. A file with an alternate data stream will have only its primary data stream deduplicated; the alternate stream is left on the disk as-is.

2. Works on Primary Data

Once enabled on primary data volumes, the feature operates without interfering with the server’s primary workload.

The feature ignores hot data (files active at the time of deduplication) until the files reach a given age in days. Skipping such files maintains consistency of the active files and shortens the deduplication time.

This feature uses the following approach when processing special files:

  • Post-processing: new files go directly to the NTFS volume, where they are evaluated on a regular schedule. A background job confirms file eligibility for deduplication every hour by default, and the confirmation schedule is configurable.
  • File age: a deduplication setting called MinimumFileAgeDays controls how long a file must wait in the queue before it is processed. The default is 5 days; an administrator can set it to 0 to process all files regardless of age.
  • File type and location exclusions: you can instruct the deduplication feature not to process specific file types. You might ignore CAB files, which do not benefit the process, as well as file types that are already heavily compressed, such as PNG files. There is also an option to exclude a particular folder; a configuration sketch follows below.
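
A minimal sketch of adjusting these settings with the volume cmdlet (the drive letter, file types, and folder are placeholders):

# Process files immediately, skip already-compressed types, and exclude a temp folder
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 0 -ExcludeFileType png,cab -ExcludeFolder "E:\Temp"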

3. Portability

Any volume under deduplication runs as a self-contained unit: the volume can be backed up and moved to a different location.

Moving it to another server means everything in that volume remains accessible at its new site.

The only thing you need to recreate is the schedule timings, because schedules live in the local Task Scheduler rather than on the volume.

If the new server location does not have the deduplication feature running, you can only access the files that have not yet been deduplicated.

4. Minimal Use of Resources

The default operations of the deduplication feature use minimal resources on the primary server.

If the process is active and resources run short, deduplication surrenders resources to the active workload and resumes when enough are available.

Here’s how storage resources are utilized:

  • The hash index storage method uses low resources and reduces read/write operations so it can scale to large datasets while delivering high edit/search performance. The index footprint is exceptionally low and uses a temporary partition.
  • Deduplication verifies that enough space is available before it executes; if not, it keeps retrying at regular intervals. You can schedule deduplication tasks for off-peak hours or idle time.

5. Sub-file Segmentation

The process segments files into variable-sized chunks, between 32 KB and 128 KB, using an innovative algorithm developed by Microsoft and other researchers.

Segmentation splits each file into a sequence of chunks based on its content. A Rabin fingerprint, a sliding-window hash, helps identify the chunk boundaries.

The average segment is 64 KB; each is compressed and placed into a chunk store hidden inside the System Volume Information (SVI) folder.

A reparse point, which is a pointer to the map of all the data streams, replaces the normal file and serves requests for it.

6. BranchCache

Another benefit of deduplication is that its sub-file segmentation and indexing engine is shared with the BranchCache feature.

This sharing matters because when a Windows Server has already indexed all the data segments, those segments can be sent quickly over the network as needed, saving a lot of traffic to the office or branch.

How Does Deduplication Affect Data Access?

Deduplication stores files on disk as segments that may be spread around, which can increase seek time.

As each file is processed, the filter driver works to preserve sequential access by keeping related segments together rather than letting them scatter at random.

Deduplication also keeps a cache of file segments to avoid re-reading them, helping with quick access. When multiple users access similar resources simultaneously, that access pattern speeds things up for each user.

Here are some important points to note:

  • There is no noticeable difference when opening an Office document; users cannot tell whether the feature is running or not
  • When copying one bulky file, deduplication can make the end-to-end copy about 1.5 times faster than for a non-deduplicated file
  • When transferring multiple bulky files simultaneously, the cache can make the transfer about 30% faster
  • When the file-server load simulator (File Server Capacity Tool) is used to test multiple file access scenarios, a reduction of about 10% in the number of users supported is noticed
  • Data optimization runs at 20-35 MB/s per job, which translates to roughly 100 GB/hour for a single 2 TB volume using one CPU core and 1 GB of RAM. This indicates that multiple volumes can be processed if additional CPU, disk, and memory resources are available

Reliability and Risk Preparedness

Even when you configure the server environment with RAID, there is still a risk of data corruption and loss from disk malfunctions, controller errors, and firmware bugs.

Other environmental risks to stored data include radiation or disk vibrations.

Deduplication raises the stakes of disk corruption: if a file segment referenced by thousands of files lands in a bad sector, the damage is magnified.

Such a scenario could mean losing thousands of users’ files at once.

Backups

Windows Server Backup is supported, and a selective file restore API enables backup applications to pull individual files out of an optimized backup.

Detect and Report

When the deduplication filter encounters a corrupted file or disk section, it performs a quick checksum validation on the data and metadata.

This validation recognizes data corruption during file access, reducing accumulated failures.

Redundancy

An extra copy of critical data is created: any file segment with more than 100 references is treated as a popular chunk and duplicated.

Repair

Once deduplication is active, scanning for and fixing errors becomes a continuous process.

The deduplication process and host volumes are inspected on a regular basis to scrub any logged errors and repair them from alternate copies.

An optional deep scrubber can walk through the whole data set, identifying errors and fixing them where possible.

When the disks are configured as mirrors, deduplication looks for a good copy on the other side and uses it as a replacement.

If there are no other alternatives, data will be recovered from an existing backup.

Verdict on Deduplication

Some of the features described above do not work in all Windows Server 2012 editions and may be subject to limitations.

Deduplication was built for volumes that support the NTFS data structure.

Therefore, it cannot be used with Cluster Shared Volumes (CSV).

Also, Live Virtual Machines (VMs) and active SQL databases are not supported by deduplication.

Deduplication Data Evaluation Tool

To help administrators evaluate a deduplication environment, Microsoft created a portable evaluation tool that installs into the \Windows\System32\ directory.

The tool runs on Windows 7 and later Windows operating systems.

The executable is DDPEval.exe, and it supports local drives as well as mapped and unmapped remote shares.

If you are using a Windows NAS or an EMC/NetApp NAS, you can run it against a remote share.
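
A minimal usage sketch (the paths are placeholders):

# Estimate potential savings on a local volume
DDPEval.exe E:\

# Estimate potential savings on a remote share
DDPEval.exe \\nas01\archive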

Conclusion

The native Windows Server deduplication feature is becoming increasingly popular.

It mirrors the needs of a typical server administrator working on production deployments.

However, planning before implementation is necessary because of the various situations in which deduplication may not be applicable.

Windows Server 2016: What’s New in Data Deduplication

Deduplication stores repeated data as a single instance, improving storage utilization and efficiency on networks with heavy data transfers.

Do not confuse deduplication with data compression, which finds repeated data within a single file and encodes the redundancy.

In simple terms, deduplication is a continuous process that eliminates excess copies of data, thereby decreasing storage demands.

Data deduplication applies to Windows Server (Semi-Annual Channel) and Windows Server 2016.

Data deduplication in Windows Server 2016 is a highly optimized, manageable, and flexible process.

Here are the updated and new data deduplication features in Windows Server 2016.

The Updated Features

Here are two of the updated features.

1. Support for Large Volumes

In earlier versions, volumes holding more than about 10 TB of data had to be partitioned into smaller volumes.

However, in Windows Server 2016, data deduplication supports volume sizes of up to 64TB.

  • What is the Added Value?

Volumes in Windows Server 2012 R2 had to be sized appropriately so that optimization could keep up with the rate of incoming data.

The implication was that data deduplication worked well only on volumes of 10 TB or less, with the practical limit depending on the workload’s write patterns.

  • What is Different?

Windows Server 2012 R2 uses a single thread and a single I/O queue for each volume.

To maximize optimization and keep jobs from falling behind, which would hurt the volume’s overall savings rate, large data sets had to be broken into smaller volumes.

The right volume size depends on the expected churn; the practical maximum was 6-7 TB for high-churn volumes and 9-10 TB for low-churn volumes.

Windows Server 2016 works differently: data deduplication runs multiple threads and uses multiple I/O queues for each volume.

This delivers throughput that previously was possible only by dividing data across several small volumes.

2. Support for Large Files

In earlier versions, any file approaching 1 TB in size was not eligible for deduplication.

However, Windows Server 2016 supports files up to 1 TB in size.

  • What is the Added Value?

In Windows Server 2012 R2, large files were poor candidates for deduplication because performance of the deduplication processing queue degraded.

In Windows Server 2016, deduplication of files of up to 1 TB works well.

Consequently, you can save a large amount of space by, for example, deduplicating large backup files.

  • What is Different?

The Windows Server 2016 deduplication process uses new streaming and mapping structures to improve optimization throughput and access performance.

In addition, optimization can now resume after a failure instead of restarting from scratch, which matters for files approaching the 1 TB limit.

The New Features

Here are three of the new features.

1. Support for Nano Servers

Nano Server support is a new feature: data deduplication is available in any Nano Server deployment option of Windows Server 2016.

  • What is the Added Value?

Nano Server is a headless deployment option of Windows Server 2016 with a much smaller resource footprint. It starts up quickly and needs fewer updates and restarts than the Server Core deployment option.

2. Simple Backup Support

Windows Server 2012 R2 supported virtualized backup applications, such as Microsoft Data Protection Manager, only after careful manual configuration.

Windows Server 2016 adds a new default usage type that enables seamless data deduplication for virtualized backups.

  • What is the Added Value?

In earlier versions of Windows Server you had to tune deduplication settings manually, whereas Windows Server 2016 has a simplified switch for virtualized backup applications.

Server 2016 enables it per volume, the same way as for general-purpose file servers; see the sketch below.
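
A minimal sketch of the simplified setting (the drive letter is a placeholder):

# Enable deduplication tuned for a virtualized backup application's volume
Enable-DedupVolume -Volume "E:" -UsageType Backup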

3. Support for Clusters Operating System Rolling Upgrade

Data deduplication is capable of supporting the new Cluster OS Rolling Upgrade feature in Windows Server 2016.

  • What is the Added Value?

Failover clusters can now mix nodes running the Windows Server 2012 R2 version of deduplication with nodes running the Windows Server 2016 version.

This improvement provides full access to deduplicated data throughout a rolling upgrade.

Consequently, you can gradually roll out the new version of data deduplication onto an existing Windows Server 2012 R2 cluster without downtime during the upgrade.

  • What is Different?

In earlier versions of Windows Server, all nodes in a failover cluster had to run exactly the same Windows Server version.

In Windows Server 2016, rolling upgrades allow a cluster to run in mixed mode.

Upgrade and Conversion Options for Windows Server 2016 / 2019

It is always a good idea to start a Windows Server 2016 / 2019 installation from a clean slate. In some instances, however, you may be working on a site that forces you to upgrade the current installation to the latest version.

The routines described here apply to Windows Server 2016 and 2019. This article describes moving to Windows Server 2016 / 2019 from older server platforms.

The path to the new Operating System (OS) depends on the current system and configurations that you are running.

That being the case, the following terms define activities you are likely to encounter when deploying the 2016 or 2019 server.

Installation

A clean installation is the simplest way to get the new OS onto your hardware; it demands that you delete the previous operating system.

Migration

Migration means moving roles, settings, and data from an existing server to a new physical or virtual machine running the new Windows Server. The process varies depending on the roles and system configurations involved.

Cluster OS Rolling Upgrade

This feature is new in Windows Server 2016. Its role is to let an administrator upgrade the operating system of all the nodes in a cluster from Windows Server 2012 R2 to Windows Server 2016 without stopping the Hyper-V or Scale-Out File Server workloads.

The feature also helps in reducing downtime, which may affect Service Level Agreements.

License Conversion

Some releases allow the conversion of one edition to another without much trouble.

All you need is a simple command issued alongside a license key, and the license conversion is done.

Upgrade

When you want the latest software that ships with the newer version, you perform an upgrade.

An in-place upgrade means installing the new operating system on the same hardware. You can also move, for example, from an evaluation to a retail version, or from a volume license to an ordinary retail edition.

NOTE 1: An upgrade will work well in virtual machines if you do not need specific OEM hardware drivers.

NOTE 2: Following the Windows Server 2016 release, you can only perform an upgrade on a version installed using the Desktop Experience (not a server core option).

NOTE 3: If you use NIC teaming, disable it before you perform an upgrade; and when the upgrade is complete, re-enable it.

Upgrade Retail Versions of Windows Server to Windows Server 2016 / 2019

Note the following general principles:

  • Upgrading from a 32-bit to a 64-bit architecture is not possible; all Windows Server 2016 versions are 64-bit only.
  • You cannot upgrade from one language to another.
  • If you are running a domain controller, make sure you can handle the task, or read the following article: Upgrade Domain Controllers to Windows Server 2012 R2 and Windows Server 2012.
  • You cannot upgrade from a preview version.
  • You cannot switch from Server Core installation to a Server with a Desktop installation.
  • You cannot upgrade from a Previous Windows Server installation to an evaluation copy of Windows Server.

The table below summarizes which Windows editions can be upgraded. If your current Windows version is not listed, upgrading to Windows Server 2016 is not possible.

Current Windows edition → Possible upgrade edition
  • Windows Server 2012 Standard → Windows Server 2016 Standard or Datacenter
  • Windows Server 2012 Datacenter → Windows Server 2016 Datacenter
  • Windows Server 2012 R2 Standard → Windows Server 2016 Standard or Datacenter
  • Windows Server 2012 R2 Datacenter → Windows Server 2016 Datacenter
  • Windows Server 2012 R2 Essentials → Windows Server 2016 Essentials
  • Windows Storage Server 2012 Standard → Windows Storage Server 2016 Standard
  • Windows Storage Server 2012 Workgroup → Windows Storage Server 2016 Workgroup
  • Windows Storage Server 2012 R2 Standard → Windows Storage Server 2016 Standard
  • Windows Storage Server 2012 R2 Workgroup → Windows Storage Server 2016 Workgroup

Per-Server-Role Considerations for Upgrading

It’s important to consider server roles before performing an upgrade.

For example, some server roles that are part of the newer Windows version may need additional preparation or actions before the upgrade to work as intended.

Converting Current Evaluation Version to Current Retail Version

You can convert the evaluation version of Windows Server 2016 Standard to the retail version of either Windows Server 2016 Standard or Windows Server 2016 Datacenter. Likewise, you can convert the evaluation version of Windows Server 2016 Datacenter to its retail version.

Before attempting any conversion to the retail version, make sure your server is really running an evaluation version; you can confirm this by following these steps:

  • From an elevated command prompt, run:
slmgr.vbs /dlv
  • Evaluation versions include “EVAL” in the output
  • Alternatively, open Control Panel
  • Click System and Security
  • Click System
  • View the activation status in the Windows activation area of the System page
  • Click View details to see more information about your Windows status
  • If Windows is activated as an evaluation, you will see the time remaining in the evaluation period.

If it turns out you are running a retail version, follow the “Upgrade Retail Versions of Windows Server to Windows Server 2016 / 2019” guidance above instead.

For Windows Server 2016 Essentials, conversion to the retail version is possible by supplying a retail, volume-license, or OEM key to the slmgr.vbs command.

If you are running an evaluation version of Windows Server 2016 Standard or Windows Server 2016 Datacenter, convert it as follows:

  • If the server is a domain controller, it cannot be converted to the retail version directly. First install another domain controller on a server running a retail version, then remove AD DS from the domain controller running the evaluation version.
  • Read the license terms
  • From the administrator’s command prompt, enter this command to get the current edition:
DISM /online /Get-CurrentEdition

Note the edition ID, the abbreviated form of the edition name, and then run the following command:

DISM /online /Set-Edition:<edition ID> /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula

Provide the edition ID and a valid retail product key; the server will restart twice.

The same command and key approach converts the evaluation version of Windows Server 2016 Standard directly to the retail version of Windows Server 2016 Datacenter.

Converting Current Retail Edition to a Different Current Retail Edition

After a successful installation of Windows Server 2016, you can run setup again to repair the installation (“repair in place”) or to convert it to a different edition.

For Windows Server 2016 Standard, you can convert the system to Windows Server 2016 Datacenter as follows:

  • From the administrator’s command prompt, use the following command to determine the existing edition:
DISM /online /Get-CurrentEdition
  • Run this command to get the ID of the edition you want to upgrade to:
DISM /online /Get-TargetEditions
  • Note the edition ID of the target edition, and then run this command:
DISM /online /Set-Edition:<edition ID> /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula
  • Provide the edition ID and a valid product key; the server will restart twice.

Converting Current Retail Version to Current Volume Licensed Version

Once you have Windows Server 2016 running, you can freely convert between the retail, OEM, and volume-licensed versions; the edition stays the same.

If your starting point is an evaluation version, convert it to the retail version first, then do the following:

  • From the administrator’s command, run this command:
slmgr /ipk <key>
  • Replace <key> with the appropriate volume-license, OEM, or retail key

Conclusion

Upgrading Windows Server in place is a complicated process; for that reason, Microsoft suggests migrating roles and settings to a fresh Windows Server 2016 installation to avoid costly mistakes.

What’s New in Storage in Windows Server 2019 and 2016

Windows Server 2016 and 2019 introduce new storage features, including storage migration capabilities.

The Storage Migration Service inventories existing servers and their data when you move from one platform to another.

This article explains what is new in the storage systems of Windows Server 2016, Windows Server 2019, and the Semi-Annual Channel releases.

We will start by highlighting some of the key features added in the two server systems.

Managing Storage with Windows Admin Center

The Windows Admin Center is a new, locally deployed, browser-based app for managing Windows Server 2019 and other recent versions of Windows.

It is the central location for handling server functions, clusters, and hyper-converged infrastructure, including their storage.

The Admin Center handles this as part of the new server configuration workflow.

Storage Migration Service

The Storage Migration Service is a new technology that makes it easier to move from older servers to new server versions.

Everything happens through a graphical interface: it inventories data on the existing servers, transfers data and configuration to the new servers, and can optionally move the old servers’ identities across so that apps and users do not have to change their settings.

Storage Spaces Direct Improvements (Available in Server 2019 only)

Several improvements have been made to Storage Spaces Direct in Server 2019; they are not available in the Windows Server Semi-Annual Channel.

Here are some of the improvements:

1. Deduplication and Compression of ReFS Volume

You can store up to ten times more data on the same storage space using deduplication and compression of the ReFS file system.

You only need to turn this feature on, with a single click, in the Windows Admin Center.

Variable-size chunking with optional compression amplifies the savings rates.

Meanwhile, the multi-threaded post-processing architecture keeps the performance impact low.

It supports volumes of up to 64 TB, with files of up to 1 TB each.

2. Native Support for Persistent Memory

Windows Server 2019 comes with native support for persistent memory modules, including Intel Optane DC PM and NVDIMM-N, unlocking a new tier of extremely fast storage.

You can use persistent memory as a cache to accelerate the active working set, or as capacity wherever low latency matters.

Of course, you can manage persistent memory the same way you manage any other storage device, in Windows Admin Center or PowerShell; a sketch follows below.
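
A minimal sketch using the persistent-memory cmdlets that ship with Windows Server 2019 (device IDs and regions will differ per system):

# List physical persistent memory devices and any unused regions
Get-PmemPhysicalDevice
Get-PmemUnusedRegion

# Create a persistent memory disk from an unused region so it appears as a normal disk
Get-PmemUnusedRegion | New-PmemDisk
Get-PmemDisk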

3. Nested Resiliency for Two-Node Hyper-Converged Infrastructure on the Edges

The all-new software resiliency option, inspired by RAID 5+1, helps a system survive two simultaneous hardware failures.

Nested resiliency on a two-node Storage Spaces Direct cluster keeps storage continuously accessible to programs and virtual machines even when one server node fails.

4. Two-Server Cluster Using USB Flash Drive as a Witness

You can use a low-cost USB flash drive plugged into your router to act as a witness between the two servers in a cluster.

If one server goes down and comes back up, the USB witness lets the cluster know which server holds the most up-to-date data.

5. Improved Windows Admin Center

The ability to manage and monitor Storage Spaces Direct with the newly built dashboard lets you create, delete, open, and expand volumes with a few clicks.

You can follow the performance of IOPS and I/O latency from the whole cluster down to individual hard disks and SSDs.

6. Increased Performance Logs Visibility

You can use the built-in history feature to see your servers’ resource utilization and performance over time.

More than 50 counters automatically collect memory, compute, storage, and network data and store it on the cluster for a full year.

This feature works without the need to install or configure anything.

7. Scale up to 4PB for Every Cluster

The Windows Server 2019 Storage Spaces Direct feature supports up to 4 petabytes (PB), or 4,000 terabytes, per cluster.

This multi-petabyte scale makes sense for media servers and for backup and archiving purposes.

Other capacity guidelines increase as well; for instance, you can create 64 volumes per cluster instead of 32.

Moreover, clusters can be stitched together into a cluster set to scale out within one storage namespace.

8. Accelerated Parity is now 2X Faster

You can now create Storage Spaces Direct Volumes that are part mirror and part parity.

For example, you can mix RAID-1 and RAID-5/6 to harness the advantages of both.

In Windows Server 2019, optimizations make mirror-accelerated parity perform twice as fast as in Windows Server 2016.

9. Drive Latency Outlier Detection

Using proactive monitoring and built-in outlier detection, inspired by Microsoft Azure, you can identify drives with abnormal latency.

Misbehaving drives are labeled automatically in PowerShell and Windows Admin Center.

10. Manual Delimiting of Volume Allocations to Increase Fault Tolerance

In Storage Spaces Direct, an administrator can now manually delimit the allocation of volumes.

Delimiting volume allocation can increase fault tolerance in specific circumstances, though it imposes some added management complexity.

Storage Replica

The Storage Replica has the following improvements:

1. Introduction of Storage Replica in Windows Server, Standard Edition

It is now possible to use Storage Replica with Windows Server, Standard Edition, as well as the Datacenter editions.

Running Storage Replica on Windows Server, Standard Edition has the following limitations:

  • Storage Replica replicates a single volume rather than an unlimited number of volumes
  • Volumes can have a size of up to 2 TB rather than an unlimited size

2. Storage Replica Log Performance Improvements

Storage Replica’s replication log has been improved for markedly better performance.

To get the increased performance, all members of the replication group must run Windows Server 2019.

3. Test Failover Improvements

You can mount a temporary snapshot of the replicated storage on the destination server for testing or backup purposes; see the sketch below.
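
A minimal sketch of a test failover using the Storage Replica cmdlets in Windows Server 2019 (server, group, and path names are placeholders):

# Mount a writable snapshot of the replicated volume on the destination server
Mount-SRDestination -ComputerName "SRV2" -Name "ReplicationGroup01" -TemporaryPath "T:\"

# ...run tests or backups against the mounted copy, then dismount
Dismount-SRDestination -ComputerName "SRV2" -Name "ReplicationGroup01"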

4. Windows Admin Center Support

Graphical management of replication is now supported through the Server Manager tool in Windows Admin Center.

This covers server-to-server, cluster-to-cluster, and stretch cluster replication.

5. Miscellaneous Improvements

Storage Replica also has the following improvements:

  • Changes to asynchronous stretch cluster behaviors so that automatic failover takes place
  • Multiple bug fixes

SMB

SMB1 and Guest Authentication Removal

Windows Server no longer installs the SMB1 client and server by default, and the ability to authenticate guests in SMB2 is off by default.
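
A minimal sketch of checking for, and removing, the SMB1 feature on an existing server (removal may require a restart):

# Is the SMB1 feature still installed?
Get-WindowsFeature FS-SMB1

# Remove it if so
Uninstall-WindowsFeature FS-SMB1

# Belt and braces: make sure the SMB server will not speak SMB1
Set-SmbServerConfiguration -EnableSMB1Protocol $false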

SMB2/SMB3 Security and Compatibility

More options for security and application compatibility were added, including the ability to disable oplocks in SMB2+ for legacy applications.

This also covers the ability to require signing or encryption on every connection from a client.

Data Deduplication

Data Deduplication Supports ReFS

You no longer need to choose between the advantages of a modern file system (ReFS) and Data Deduplication.

Data Deduplication can now be enabled on ReFS volumes.

Data Port API for Optimized Ingress/egress to Deduplicated Volumes

As a developer, you can now take advantage of data deduplication and store data efficiently through a dedicated API for optimized ingress and egress.

File Server Resource Manager

Windows Server 2019 can prevent the File Server Resource Manager service from creating a change (USN) journal on storage volumes.

This conserves space on every volume, but it disables real-time file classification.

The same behavior is available in Windows Server, version 1803.

What’s New in Storage in Windows Server, Version 1709

Windows Server, version 1709 is the first Windows Server release in the Semi-Annual Channel, a channel that is fully supported in production for 18 months, with a new version arriving every six months.

Storage Replica

Storage Replica’s disaster recovery and protection capabilities have been expanded to include:

  • Test Failover

You now have the option of mounting the destination storage through a test failover.

You can temporarily mount a snapshot of the replicated storage for testing or backup purposes.

  • Windows Admin Center Support

There is support for graphically managing replication, accessible via the Server Manager tool in Windows Admin Center.

Storage Replica also has the following improvements:

  • Changes to asynchronous cluster behaviors to enable automatic failover
  • Multiple bug fixes

What’s New in Storage in Windows Server 2016

1. Storage Spaces Direct

The Storage Spaces Direct feature provides highly available and scalable storage using servers with only local storage.

This means it is now possible to deploy and manage software-defined storage systems, unlocking the use of new classes of storage devices.

These include SATA SSDs and NVMe disks, which were previously impossible to use with clustered Storage Spaces and shared disks.

What Value Does this Change Add?

Storage Spaces Direct allows service providers and enterprises to use industry standard servers with local storage.

The idea is to build highly available and scalable software-defined storage.

Using servers with local storage decreases complexity while increasing scalability, and it allows the use of storage devices that were not possible before, such as SATA solid-state disks to lower the cost of flash storage, or NVMe solid-state disks for better performance.

Storage Spaces Direct removes the need for a shared SAS fabric, simplifying deployment and configuration.

Instead, the servers use the network as the storage fabric, leveraging SMB3 and SMB Direct (RDMA) for high-speed, low-latency storage with efficient use of the CPU.

Adding more servers to the configuration increases storage capacity and input and output performance.

Storage Spaces Direct is entirely new functionality in Windows Server 2016; a minimal enablement sketch follows below.
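
A minimal sketch of standing up Storage Spaces Direct on a set of nodes with local drives (server names are placeholders, and real deployments need validated hardware):

# Validate and create the cluster without assigning shared storage
Test-Cluster -Node "Node1","Node2","Node3","Node4" -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"
New-Cluster -Name "S2D-Cluster" -Node "Node1","Node2","Node3","Node4" -NoStorage

# Enable Storage Spaces Direct, which claims the local drives into a storage pool
Enable-ClusterStorageSpacesDirect

# Carve a mirrored, cluster-shared volume out of the pool
New-Volume -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D*" -Size 1TB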

2. Storage Replica

It enables storage-agnostic, block-level replication: synchronous replication between servers and stretching of failover clusters between sites.

Synchronous replication mirrors data across physical sites with consistent volumes, ensuring no data is lost at the file-system level.

Asynchronous replication may increase the possibility of data loss.

What Value Does this Change Add?

It provides a single-vendor disaster recovery solution for both planned and unplanned outages.

You can use SMB3 transport and gain from proven performance, scalability, and reliability.

It will help you to:

  • Stretch Windows failover clusters further
  • Use Microsoft end-to-end software for storage and clustering, such as Hyper-V, Scale-Out File Server, Storage Replica, Storage Spaces, ReFS/NTFS, and deduplication

It helps in reducing complexity costs by:

  • Being hardware agnostic, with no specific requirements for storage configurations like DAS or SAN
  • Allowing the use of commodity storage and networking technologies
  • Featuring an easy graphical management interface for nodes and clusters through Failover Cluster Manager
  • Including comprehensive, large-scale scripting options through Windows PowerShell
  • Helping to reduce downtime and enhance large-scale productivity
  • Providing supportability, performance metrics, and diagnostic capabilities

What Works Differently

This functionality is new in Windows Server 2016.

3. Storage Quality of Service

In Windows Server 2016, the Storage Quality of Service (QoS) feature centrally monitors end-to-end storage performance and lets you create management policies on Hyper-V hosts and CSV clusters.

What Value Does this Change Add?

You can create QoS policies on a CSV cluster and assign them to one or more virtual disks of Hyper-V virtual machines.

Storage performance automatically adjusts to meet the policies as workloads fluctuate.

Each policy can set a minimum reserve or a maximum limit to be applied to the data flows it governs.

For example, a policy can cover a single virtual hard disk, a virtual machine, a service, or an entire tenant.

You can use Windows PowerShell or WMI to perform the following (a sketch follows the list):

  • Create policies on a CSV cluster
  • Assign the policies to virtual hard disks
  • Enumerate policies on the CSV cluster
  • Monitor flow performance and the status of the policies
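
A minimal sketch using the Storage QoS cmdlets on a CSV cluster (names and limits are placeholders):

# Create a policy reserving 100 IOPS and capping at 500 IOPS
$policy = New-StorageQosPolicy -Name "Gold" -MinimumIops 100 -MaximumIops 500

# Apply the policy to every virtual hard disk of a VM
Get-VM -Name "Tenant01-VM" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Watch the flows governed by policies
Get-StorageQosFlow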

Several virtual hard disks can share the same policy, in which case performance is distributed fairly to meet demand within the policy’s minimum and maximum settings. In other words, a policy can manage a single virtual hard disk, multiple disks, a single virtual machine, or the multiple virtual machines that make up a tenant’s service.

What Works Differently

This is a new feature in Windows Server 2016.

Managing minimum reserves, monitoring the flows of all virtual disks across a cluster with a single command, and central policy-based management were not possible in previous Windows Server releases.

4. Data Deduplication

Here is a summary of the deduplication changes:

  • Support for large volumes (updated): Before Windows Server 2016, volumes had to be sized carefully, and anything above 10 TB did not qualify for deduplication. Windows Server 2016 supports deduplication on volumes of up to 64 TB.
  • Large file support (updated): Before Windows Server 2016, files approaching 1 TB could not be deduplicated. Windows Server 2016 supports deduplication of files of up to 1 TB.
  • Nano Server support (new): Deduplication is available and fully supported in Nano Server deployments of Windows Server 2016.
  • Simple backup support (new): Windows Server 2012 R2 supported virtualized backups, such as Microsoft Data Protection Manager, only after manual configuration; in Windows Server 2016, simple, seamless backup support is built in.
  • Cluster OS Rolling Upgrade support (new): Deduplication supports the new Cluster OS Rolling Upgrade feature available in Windows Server 2016.

5. SMB Hardening Improvements for SYSVOL and NETLOGON Connections

Client connections from Windows 10 and Windows Server 2016 to the default SYSVOL and NETLOGON shares on Active Directory domain controllers now require SMB signing and mutual authentication via Kerberos.

What Value Does this Change Add?

It reduces the possibility of man-in-the-middle attacks.

What Works Differently?

If SMB signing and mutual authentication are unavailable, a Windows 10 or Windows Server 2016 computer will not process domain-based Group Policy and scripts.

Note that the registry values for these settings are not present by default, but the hardening rules still apply until they are overridden by Group Policy or the relevant registry values.

6. Work Folders Improvements

Improved change notification is available when the Work Folders server runs Windows Server 2016 and the Work Folders client runs Windows 10.

What Value Does this Change Add?

In Windows Server 2012 R2, when file changes are synchronized to the Work Folders server, clients are not notified and can wait up to 10 minutes for the update to materialize.

When the server runs Windows Server 2016, it immediately notifies Windows 10 clients, and the synchronization changes take effect right away.

What Works Differently

This is a new feature in Windows Server 2016.

For this feature to work, the client accessing the Work Folders must run Windows 10.

If you are using older clients, or if the Work Folders server runs Windows Server 2012 R2, the client will keep polling every 10 minutes for new changes.

7. ReFS Improvements

ReFS (Resilient File System) offers support for large-scale storage deployments with diverse workloads, delivering reliability, resiliency, and scalability for your data.

What Value Does this Change Add?

ReFS brings in the following improvements:

  • Implementing new storage tiers, helping deliver faster performance and increased capacity
  • Enabling multiple resiliency types on the same virtual disk, through mirroring and parity tiers
  • Enhancing responsiveness to drifting working sets
  • Introducing block cloning and improvements to VM operations such as .vhdx checkpoint merge operations
  • Recovering leaked storage and keeping data safe from corruption

What Works Differently?

These functionalities are new in Windows Server 2016.

Conclusion

Windows Server 2019 offers a great many features; this article covered the fully supported ones.

At the time of writing, some features were only partially supported in earlier versions but are receiving full support in the latest Server versions.

From this read, you can see that Windows Server 2019 makes for a worthwhile upgrade.

Windows Server 2016 and GDPR

“As the world continues to change and business requirements evolve, some things are consistent: a customer’s demand for security and privacy.”
Satya Nadella, Microsoft’s CEO

An important topic in the European IT world these days is the GDPR (General Data Protection Regulation).

This new European data and privacy protection law takes effect on May 25, 2018. It applies to all citizens of the EU, with the purpose of protecting and enabling the privacy rights of individuals.

The GDPR regulates the protection and handling of any individual’s private data, no matter where that data is sent, processed, or stored.

The GDPR forms a complex set of rules covering any organization that offers goods or services to EU citizens, or that collects and analyzes data about EU citizens in any form, regardless of where the business is located.

The key elements of the GDPR can be summed up in three points:

  • Enhanced personal privacy rights
  • An increased duty of protecting personal data
  • Mandatory personal data breach reporting

In short, these points grant EU residents access to their personal data and the right to manage it (correct, erase, or move it); they impose awareness and responsibility on organisations that process personal data; and they mandate reporting of detected breaches to supervisory authorities no later than 72 hours after detection.

How does the GDPR define personal and sensitive data, and how those definitions relate to data held by organizations?

Personal data, as considered by the GDPR, is any information related to an identified or identifiable natural person. That covers direct identification (a legal name, for example), indirect identification (specific information that can identify a person through data references), and online identifiers (IP addresses, mobile device IDs, and location data).

The GDPR sets specific definitions for genetic data (such as an individual’s gene sequence) and biometric data. These, along with other subcategories of personal data (data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership; data concerning health; or data concerning a person’s sex life or sexual orientation), are treated as sensitive personal data, and processing them requires the individual’s consent.

If any sensitive or personal data is processed on a physical or virtual server, the GDPR requires the implementation of technical and organizational security measures to protect the personal data and the processing systems from today’s security risks, such as ransomware attacks or any other type of cyberterrorism.

Ransomware attacks pose an additional problem in light of the GDPR’s penalties, which make any company system holding personal and sensitive data a potentially rich target. Depending on the kind of infringement, monetary penalties can reach 2% to 4% of total worldwide annual turnover, or 10 to 20 million euros, whichever is greater.

What does the GDPR mean for Windows Server security and protection, and how does Windows Server support GDPR compliance?

In Windows Server 2016, security is an architectural principle, visible in four major points:

  • Protect – Focus and innovation on preventive measures
  • Detect – Monitoring tools with the purpose to spot abnormalities and respond to attacks faster
  • Respond – Usage of response and recovery technologies and experts
  • Isolate – Isolation of operating system components and data secrets, limited administrator privileges, and rigorously measured host health.

These points, as implemented in Windows Server, greatly improve defenses against possible data breaches.

Key features within Windows Server are designed to help users efficiently and effectively implement the security and privacy mechanisms the GDPR requires for compliance.

Windows Server 2016 helps block the common attack vectors used to gain illegal access to user systems: stolen credentials, malware, and a compromised virtualization fabric.

In addition to reducing business risk, the security components built into Windows Server 2016 help address compliance requirements for key government and industry security regulations.

These identity, operating system, and virtualization protections enable better defense of a datacenter running Windows Server as a VM in any cloud, and they limit the ability of attackers to compromise credentials, launch malware, and remain undetected. Likewise, when deployed as a Hyper-V host, Windows Server 2016 offers security assurance for virtualization environments through Shielded Virtual Machines and distributed firewall capabilities. With Windows Server 2016, the server operating system becomes an active participant in data center security.

The GDPR specifically regulates control over access to personal data, and system that process it, including administrator/privileged accounts. It defines privileged identities as any accounts that have elevated privileges, such as user accounts that are members of the Domain Administrators, Enterprise Administrators, local Administrators, or even Power Users groups.

Those kinds of accounts are protected from compromising with protecting guidelines, all organizations should implement:

  • Reasonable allocation of privileges – Users should not have more privileges than they need to complete their jobs successfully.
  • Limited sign-in time for privileged accounts – Restrict privileged sessions to strictly work-related operations.
  • Social engineering awareness – Train against email phishing and the possibility of a security breach through seemingly “harmless” lower-level accounts.
  • Just Enough and Just-in-Time Administration – Every account with unnecessary domain admin-level privileges increases exposure to attackers seeking to compromise credentials. To minimize the attack surface, it is recommended to grant only the specific set of rights an admin needs to do the job, and only for the window of time needed to complete it. This highly recommended way of administration is called Just Enough Administration (JEA) and Just-in-Time (JIT) Administration; a brief sketch follows this list.
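As a rough, hedged sketch (all names here are hypothetical, and this is not an official procedure), JEA on Windows Server 2016 can expose only a couple of cmdlets to a support group instead of full administrator rights:

# The .psrc role capability file must live in a module's RoleCapabilities folder so it can be found by name.
New-PSRoleCapabilityFile -Path 'C:\Program Files\WindowsPowerShell\Modules\FileSupport\RoleCapabilities\FileSupport.psrc' -VisibleCmdlets 'Get-SmbOpenFile', 'Close-SmbOpenFile'

# Map a (hypothetical) AD group to that role and restrict the session type.
New-PSSessionConfigurationFile -Path 'C:\JEA\FileSupport.pssc' -SessionType RestrictedRemoteServer -RoleDefinitions @{ 'CONTOSO\FileSupport' = @{ RoleCapabilities = 'FileSupport' } }

# Register the endpoint; group members can then connect with Enter-PSSession -ConfigurationName FileSupport and run only the two visible cmdlets.
Register-PSSessionConfiguration -Name 'FileSupport' -Path 'C:\JEA\FileSupport.pssc'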

Windows Server 2016 offers various types of prevention and protection tools and features for various types of user accounts, such as:

  • Microsoft Identity Manager 2016
  • Local Administrator Password Solution (LAPS)
  • Windows Defender Credential Guard
  • Windows Defender Device Guard
  • Control Flow Guard

which cover the areas of protecting user/admin credentials, trusted-software-only installation, breach notification, and jump-oriented programming (JOP) defense.

Windows Server 2016 also actively alerts administrators to potential breach attempts with enhanced security auditing that provides more detailed information, which can be used for faster attack detection and forensic analysis. It logs events from Control Flow Guard, Windows Defender Device Guard, and other security features in one location, making it easier for administrators to determine which systems may be at risk.
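As a minimal sketch (the log name and time window are just illustrative assumptions), recent audit events can be pulled from one location with PowerShell for a quick review:

# Show a compact summary of the 50 most recent Security-log events from the last day.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; StartTime = (Get-Date).AddDays(-1) } -MaxEvents 50 | Select-Object TimeCreated, Id, ProviderName, LevelDisplayName | Format-Table -AutoSize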

A newly introduced feature is Shielded VMs. They include a virtual TPM (Trusted Platform Module) device, which enables organizations to apply BitLocker encryption to virtual machines and ensure they run only on trusted hosts, helping protect against compromised storage, network, and host administrators. Shielded VMs are created as Generation 2 VMs, which support Unified Extensible Firmware Interface (UEFI) firmware and have a virtual TPM.

The GDPR can have a significant impact on any business that uses any type of personal data. It should be taken seriously and implemented as soon as possible, whatever time, funds, or planning it requires.

Windows Server – How To Close Open Files

Here I will describe how to close open server files and processes.

Every system admin on Microsoft Windows Server systems will, at least once, run into a situation where some file is open on a server, and it becomes necessary to check what process or user opened it.

These open files can cause trouble, such as upgrade errors or reboot holdups.

They can become a huge problem which, if not thought through, can delay updates or cause errors in server maintenance.

More common, but less extreme, issues come from users: when users leave shared files open on their accounts, other users opening the same file can get error messages and be unable to access it.

This article will show you how to deal with these kinds of issues and how to find and close open files and processes. The operations apply to Microsoft Windows Server 2008, 2012, and 2016, as well as Windows 10 workstations.

There are several working methods for dealing with these problems; the first one we will describe uses Computer Management:

View open files on a shared folder

When users leave files locked on the server, this method comes in handy for troubleshooting.

Right-click on the Start menu and select Computer Management (or type compmgmt.msc in the Start menu search).

The procedure is very simple, and in most cases it works with no problems.

Click on Shared Folders, and after that, on Open Files.

That opens a screen listing the files detected as open, the user that opened each one, possible locks, and the mode each is opened in.

Right-click on the wanted file and choose the “Close open file” option; that will close it.

With processes and file details, the procedure is a bit different.

Usage of Windows Task Manager

Task Manager will not close opened shared files, but it can close processes on the system.

It can be opened with the Ctrl+Alt+Del key combination (then choose Task Manager), or by right-clicking on the taskbar and choosing the Task Manager option.

Under the Processes tab, you can see all active processes, which you can sort by parameters such as CPU and memory.

If there is a process you want to terminate, simply right-click on it and choose the End Process option.

Usage of Resource Monitor

For every system administrator, Resource Monitor is “the tool” that allows control and an overview of all system processes, and a lot more.

Resource Monitor can be opened by typing “resource monitor” in the Start menu search box.

Another option is to open Task Manager, click the Performance tab, and then click Open Resource Monitor.

When Resource Monitor opens, it shows several tabs; the one needed for this operation is Disk.

It shows disk activity and processes, open files, PIDs, read and write bytes per second, and so on.

If the system is running a lot of “live” processes, the view can be confusing, so Resource Monitor offers a “stop live monitoring” option, which stops the on-screen list from scrolling and gives you an overview of all processes up to the stop moment.

Resource Monitor offers an overview of open file paths and processes on the system, and with those pieces of information it is not a problem to identify and close files or processes.

PowerShell cmdlet approach

Of course, PowerShell can do everything GUI apps can, sometimes even better, and in this case there are several commands that can close any of your system's open files and processes.

There is more than one PowerShell solution, and this approach is not recommended for administrators without scripting experience.

Here we will show some of the possible solutions using PowerShell.

The following examples apply to systems that support Server Message Block (SMB); for systems that do not, a later section shows how to close files with the NET file command instead.

In situations where one, or a small number of, exactly known open files should be closed, the following cmdlet can be used. It is run, as usual, from an elevated PowerShell prompt and applies to a single file (in all these examples, unsaved data in open files will not be saved).

Close-SmbOpenFile -FileId ( id of file )

Confirm
Are you sure you want to perform this action?
Performing operation 'Close-File' on Target '( id of file )'.
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"): N

There is a variation of the cmdlet which allows closing all open files for a specific session.

Close-SmbOpenFile -SessionId ( session id )

This command does not close a single file; it applies to all files opened under the given session ID.

Another variation of the same cmdlet applies to a file name extension (in this example, DOCX).

The command checks for all open files with the DOCX extension across all system clients and force-closes them. As mentioned before, any unsaved data in open files will not be saved.

Get-SmbOpenFile | Where-Object -Property ShareRelativePath -Match ".DOCX" | Close-SmbOpenFile -Force

This cmdlet has many more flags and variations, which allow applying different filters and approaches to closing open files.
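To illustrate the typical flow (the FileId below is just a made-up placeholder), you would list the open files first and then close one by its ID:

# List open files with their IDs, owners, and paths.
Get-SmbOpenFile | Select-Object FileId, SessionId, Path, ClientUserName | Format-Table -AutoSize

# Close one of the listed files by its FileId (placeholder value shown); -Force skips the confirmation prompt.
Close-SmbOpenFile -FileId 4415226383589 -Force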

PowerShell script approach

With PowerShell scripts, the process of closing open files and processes can be automated.

$blok = {
    # Bind to the local Server service (LanmanServer) via ADSI.
    $adsi = [adsi]"WinNT://./LanmanServer"

    # Enumerate open resources and project them into objects with the
    # properties we care about: ID, path, owner, and lock count.
    $resources = $adsi.psbase.Invoke("resources") | Foreach-Object {
        New-Object PSObject -Property @{
            ID        = $_.GetType().InvokeMember("Name", "GetProperty", $null, $_, $null)
            Path      = $_.GetType().InvokeMember("Path", "GetProperty", $null, $_, $null)
            OpenedBy  = $_.GetType().InvokeMember("User", "GetProperty", $null, $_, $null)
            LockCount = $_.GetType().InvokeMember("LockCount", "GetProperty", $null, $_, $null)
        }
    }

    # Show the matching open files, then close each one with 'net files'.
    $resources | Where-Object { $_.Path -like '*smbfile*' } | Format-Table -AutoSize
    $resources | Where-Object { $_.Path -like '*smbfile*' } | Foreach-Object { net files $_.ID /close }
}

Invoke-Command -ComputerName pc1 -ScriptBlock $blok

Our example script closes files whose path matches a pattern ('*smbfile*' above), which should be replaced with the path you are after.

This way of closing open files is not recommended for administrators without PowerShell scripting experience; if you are not 100% sure you are up to the task, do not use it.

Close A File On Remote Computer Using Command Line

There are two other ways to close open files: the NET file command or PsFile (a Microsoft utility). The first can be run remotely by using the NET file command through Psexec.exe, since the NET command does not support any remote APIs.

The net file command can list all open shared files and the number of locks per file. It can be used to close files and remove locks (similar to the SMB example above), typically when a user leaves a file open or locked.

It is done with the following syntax:

C:\>net file [id [/close]]

In this syntax, the id parameter is the identification number of the file we want to close, and the /close parameter is the action applied to that file.

Best practice with the NET file command is to first run net file on its own, which lists all open files and labels them with ID numbers 0, 1, and so on.

Once the files are listed, the command that closes an open file is, for example:

C:\>net file 1 /close

This closes the file labeled with the number 1.

PsFile usage

PsFile is technically a third-party application, but I will not put it on the third-party list, as any good system administrator should treat it as standard kit.

Its commands are similar to the net file commands, with the differences that it does not truncate long file names and that it can show files opened on remote systems locally.

It uses the NET API, documented in the Platform SDK, and becomes available by downloading the PsTools package.

 psfile [\\RemoteComputer [-u Username [-p Password]]] [[Id | path] [-c]]

PsFile “calls” the remote computer with a valid username and password and, given a path or ID, closes the open files on the remote system.
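For example (the computer name, account, and file ID here are hypothetical), closing the file with ID 42 on a remote file server would look like:

psfile \\fileserver1 -u CONTOSO\admin 42 -c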

For processes opened on a remote system, there is a similar tool called PsKill, which “kills” processes on the same principle.

Release a File Lock

In some situations, a problem with closing files can be handled by releasing a file lock. There are many cases of users locking files and leaving them open (for some reason, the most commonly locked files are Excel files).

All other users then get an error message like “Excel is locked for editing by another user” and have no option to close or unlock it.

As an administrator, you should have elevated rights, and with the right procedure this can be fixed easily.

Press the Windows key and R to open the Run dialog.

In the Run dialog, type mmc (Microsoft Management Console).

Go to File > Add/Remove Snap-in and add the “Shared Folders” snap-in.

If you are already on the operating system that has the issue, choose the Local Computer option; if not, choose the Another Computer option and find the wanted computer name.

Expand Shared Folders, then select Open Files.

Choose the locked/open file, right-click it, and select Close open file.

The described procedure will unlock and close an open file (similar to the first example in this article), and users will be able to access it again.

Usage of Third-party apps

There are a lot of third-party apps on the market for handling open server files.

We will describe a few of the most used ones for this purpose.

Process Explorer – a freeware utility from Windows Sysinternals, initially created by Winternals and later acquired by Microsoft. It can be seen as Windows Task Manager with advanced features. One of its many features is closing open files, and it is highly recommended for server administrators and IT professionals.

Sysinternals can be accessed at the following link:

https://docs.microsoft.com/en-us/sysinternals/

OpenedFilesView – practically a single-executable application that displays the list of all opened files on your system. For each opened file, additional information is shown: handle value, read/write/delete access, file position, the process that opened the file, and more.

To close a file or kill a process, right-click any file and select the desired option from the context menu.

It can be downloaded at the following link:

https://www.nirsoft.net/utils/opened_files_view.html

LockHunter – primarily a tool for deleting blocked files (to the Recycle Bin). It can serve as a workaround for open files, since it can list and unlock locked files on your system. It is very powerful and helpful in situations where the system tools fail.

It can be downloaded at the following link: http://lockhunter.com/

Long Path Tool – a shareware program provided by KrojamSoft that, as its name suggests, helps you fix a dozen issues you will face when a file's path is too long, including not being able to copy, cut, or delete the files in question. With its bundle of features it may be overkill for this purpose, but it is definitely a quality app for any sysadmin.

It can be downloaded at the following link: https://longpathtool.com/

How to Set Accurate Time for Windows Server 2016

Accurate Time For Windows Server 2016

It is important for Windows Server 2016 to maintain 1ms accuracy in sync with UTC. This is possible because of new algorithms and because periodic time checks are obtained from a valid UTC server.

The Windows Time service is a component that uses a plugin model for client and server synchronization providers.

Windows has two built-in client time providers, which sit alongside any third-party plugins.

One provider uses the Network Time Protocol (NTP) or the Microsoft Network Time Protocol (MS-NTP) to synchronize to the nearest available server.

Windows picks the better provider when both are available.

This article will discuss the three main elements that relate to an accurate time system in Windows Server 2016:

  • Measurements
  • Improvements
  • Best practices

Domain Hierarchy

Computers that are members of a domain use the NTP protocol, which authenticates to a time reference for security and authenticity.

The domain computers synchronize with a master clock that is determined by the domain hierarchy and a scoring system.

A typical domain has hierarchical stratum layers where each Domain Controller (DC) refers to a parent DC with accurate time.

The hierarchy revolves around the Primary Domain Controller (PDC), a DC in the forest root, or a DC with the Good Time Server for the Domain (GTIMESERV) flag.

Standalone computers use the time.windows.com service; name resolution takes place when the Domain Name Service resolves it to a time resource owned by Microsoft.

As with any remotely located time reference, network outages prevent synchronization from taking place, and asymmetrical network paths reduce time accuracy.

Hyper-V guests have at least two Windows time providers, so it is possible to observe different behaviors in either the domain or the standalone configuration.

NOTE: Stratum is a concept used by both the NTP provider and the Hyper-V provider; each carries a value indicating a clock's location in the hierarchy. Stratum 0 is reserved for the hardware reference clock, and stratum 1 is a server attached directly to it. Stratum 2 servers communicate with stratum 1 servers, stratum 3 with stratum 2, and so the cycle continues. Lower strata indicate clocks that are closer to the reference and usually more accurate, though errors are still possible. The command line tool w32tm (W32time) only accepts time from stratum 15 and below.

Factors Critical For Accurate Time

1. Solid Source Clock

The original source clock needs to be stable and accurate at all times. This implies that when installing a Global Positioning System (GPS) device pointing to stratum 1, you should take factor #3 (symmetrical NTP communication) into consideration.

Therefore, if the source clock shows stability, then the entire configuration will have a constant time.

Securing the original source time means that a malicious person will not be able to expose the domain to time-based threats.

2. Stable Client Clock

A stable client clock keeps the natural drift of its oscillator containable. NTP uses multiple samples to condition the local clock on standalone machines and keep it on course.

If the time oscillation on the client computers is not stable, there will be fluctuations between adjustments leading to malfunctioning of the clock.

Some machines may require hardware updates for proper functioning.

3. Symmetrical NTP Communication

The NTP connection should be symmetrical at all times, because the calculations NTP uses to adjust the time assume a symmetrical path.

If the NTP request takes longer than the expected time on its return, time accuracy is affected. You may note that the path could change due to changes in topology or routing of packets through different interfaces.

Battery-powered devices may use different strategies, which in some cases require the device to update every second.

Such a setting consumes more power and can interfere with power saving modes. Some battery-run devices have power settings that can interfere with the running of other applications and hence with W32time functions.

Mobile devices are never 100% accurate, especially if you look at the various environmental factors that interfere with the clock accuracy. Therefore, battery-operated devices should not have high time accuracy settings.

Why is Time Important?

A typical case in a Windows environment is Kerberos, which requires clocks to be within at least 5 minutes of each other between clients and servers.

Other instances that require time include:

  • Government regulations; for example, the United States of America uses 50ms for FINRA, and the EU uses 1ms for ESMA/MiFID II
  • Cryptography
  • Distributed systems like databases
  • Blockchain frameworks for bitcoin
  • Distributed logs and threat analysis
  • AD replication
  • The Payment Card Industry (PCI)

Time Improvements for Windows Server 2016

Windows Time Service and NTP

The algorithm used in Windows Server 2016 greatly improves how the local clock synchronizes with UTC. NTP uses four timestamps to calculate the time offset, taken from the client's request and the server's response.
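For reference, the standard NTP calculations over those four timestamps (T1 = client transmit, T2 = server receive, T3 = server transmit, T4 = client receive, as defined in RFC 5905) are:

offset = ((T2 - T1) + (T3 - T4)) / 2
delay  = (T4 - T1) - (T3 - T2)

The offset formula is only exact when the outbound and return paths take the same time, which is why network symmetry matters so much for accuracy.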

The modern network environment has too much congestion and related factors that affect the free flow of communication.

Windows Server 2016 uses different algorithms to cancel out these disturbances. In addition, the time reference source in Windows uses an improved Application Programming Interface (API) with better time resolution, giving an accuracy of 1ms.

Hyper-V

Windows Server 2016 made improvements that include accurate VM start and VM restore. The change gives an accuracy of 10µs relative to the host, with a root mean square (RMS) of 50µs for a machine carrying a 75% load.

Moreover, the host now reports its stratum level to guests more transparently. Earlier, a host would present a fixed stratum of 2 regardless of its accuracy; with the changes in Windows Server 2016, the host reports a stratum that reflects its actual accuracy, which gives better timing for the virtual machines.

Domains created on Windows Server 2016 will find time to be more accurate, because time no longer defaults to the host; that is also the reason for manually disabling the Hyper-V time provider on machines joined to Windows 2012 R2 and older domains.

Monitoring

Time service performance counters are now part of Windows Server 2016; they allow for monitoring, troubleshooting, and baselining time accuracy.

The counters include:

a. Computed Time Offset

This counter indicates the absolute time difference between the system clock and the chosen time source, in microseconds. The value updates whenever a new valid sample is available. Clock accuracy can be traced by using this performance counter with an interval of 256 seconds or less.

b. Clock Frequency Adjustment

This counter indicates the adjustment made to the local system clock by W32Time, measured in parts per billion. It is useful for visualizing the actions taken by W32time.

c. NTP Roundtrip Delay

NTP Roundtrip Delay is the time taken during the transmission of a request to the NTP server and when the response is valid.

This counter helps in characterizing the delays experienced by the NTP client. If the roundtrip is large or varies, it can lead to noise, especially when the NTP computes time, thereby affecting time accuracy.

d. NTP Client Source Count

This counter holds the number of unique IP addresses of time servers that are responding to this client's requests. The number may be larger or smaller than the number of active peers.

e. NTP Server Incoming Requests

A representation of the number of requests received by the NTP server, indicated as requests per second.

f. NTP Server Outgoing Responses

A representation of the number of requests answered by the NTP server, indicated as responses per second.

The first three counters cover the target scenarios for troubleshooting accuracy issues. The last three cover NTP server scenarios and help determine the load and set a baseline for current performance.

Configuration Updates per Environment

The following table describes the changes to the default configuration between Windows Server 2016 and earlier versions.

Windows Server 2016 and Windows 10 build 14393 now use settings distinct from earlier versions.

Role | Setting | Server 2016 | Windows 10 | Servers 2012 and 2008, and older Windows 10
--- | --- | --- | --- | ---
Standalone or Nano Server | Time server | time.windows.com | N/A | time.windows.com
 | Polling frequency | 64-1024 seconds | N/A | Once a week
 | Clock update frequency | Once a second | N/A | Once an hour
Standalone Client | Time server | N/A | time.windows.com | time.windows.com
 | Polling frequency | N/A | Once a day | Once a week
 | Clock update frequency | N/A | Once a day | Once a week
Domain Controller | Time server | PDC/GTIMESERV | N/A | PDC/GTIMESERV
 | Polling frequency | 64-1024 seconds | N/A | 1024-32768 seconds
 | Clock update frequency | Once a day | N/A | Once a week
Domain Member Server | Time server | DC | N/A | DC
 | Polling frequency | 64-1024 seconds | N/A | 1024-32768 seconds
 | Clock update frequency | Once a second | N/A | Once every 5 minutes
Domain Member Client | Time server | N/A | DC | DC
 | Polling frequency | N/A | 1024-32768 seconds | 1024-32768 seconds
 | Clock update frequency | N/A | Once every 5 minutes | Once every 5 minutes
Hyper-V Guest | Time server | Chooses the best alternative based on host stratum and time server | Chooses the best alternative based on host stratum and time server | Defaults to host
 | Polling frequency | Based on the role above | Based on the role above | Based on the role above
 | Clock update frequency | Based on the role above | Based on the role above | Based on the role above

Impact of Increased Polling and Clock Update Frequency

To get the most accurate time, the Windows Server 2016 defaults for polling frequency and clock update frequency allow adjustments to be made more often.

The adjustments lead to more UDP/NTP traffic, but this will not noticeably affect broadband links.

Battery devices do not store the time when turned off, and when turned on again this may lead to frequent time adjustments; increasing the polling frequency on such devices will lead to instability and higher power use.

Domain controllers should see little impact even with the increased update frequency from NTP clients in an AD domain, since NTP does not require many resources compared to other protocols.

You are likely to reach the limits of the domain's functionality before seeing a warning caused by the increased settings in Windows Server 2016.

Machines that do not use secure NTP will not synchronize time as accurately, and they end up two strata further away from the PDC.

As a guideline, reserve capacity for at least 100 NTP requests per second per core. For example, a domain with 4 DCs of 4 cores each should be able to serve 100 × 4 × 4 = 1,600 NTP requests per second.

As you apply these recommendations, remember that they depend heavily on processor speeds and loads; administrators should conduct all baseline tests onsite.

If your DCs are running at a sizeable CPU load of more than 40%, the system is likely to generate noise when NTP responds to requests, which may impair domain time accuracy.

Time Accuracy Measurements

Methodology

Different tools can be used to gauge the time and accuracy of Windows Server 2016.

The techniques apply when taking measurements and tuning the environment to determine whether the test outcome meets the set requirements.

The domain source clock has two precision NTP servers and GPS hardware.

Some of these tests need a highly accurate and reliable clock source as a reference point adding to your domain clock source.

Here are four different methods for measuring accuracy in physical and virtual machines:

  • Take the reading of the local clock conditioned by w32tm and reference it against a test machine with separate GPS hardware.
  • Measure pings from the NTP server to its clients using the “stripchart” option of the w32tm utility.
  • Measure pings from the client to the NTP server using the “stripchart” option of the w32tm utility.
  • Measure the Hyper-V output from the host to the guests using the Time Stamp Counter (TSC). After getting the difference between the host and client time in the VM, use the TSC to estimate the host time from the guest. The TSC clock is also used to factor out delays and API latency.

Topology

For comparison purposes, it makes sense to test both Windows Server 2012 R2 and Windows Server 2016 on the same topology.

The topologies have two physical Hyper-V hosts that point to a Windows Server 2016 machine with GPS hardware installed. Each host runs at least three domain-joined Windows guests, arranged as shown in the diagrams below.

TOPOLOGY 1 (diagram)

The lines on the diagram indicate time hierarchy and the transport or protocol used.

TOPOLOGY 2 (diagram)

Graphical Results Overview

The following graphs represent the time accuracy between two members of a domain. Each graph shows both the Windows Server 2012 R2 and 2016 outcomes.

The accuracy measurement was taken from the guest machine in comparison to the host. The graphical data shows both the best and worst case scenarios.

TOPOLOGY 3 (graph)

Performance of the Root Domain PDC

The root PDC synchronizes with the Hyper-V host using the VMIC provider present in Windows Server 2016; the host has GPS hardware, which provides stability and accuracy. This is critical because 1ms accuracy is needed.

Performance of the Child Domain Client

The child domain client is attached to a child domain PDC, which communicates with the root PDC. Its timing should also be within the 1ms accuracy.

Long Distance Test

A long distance test could involve comparing a single virtual network hop to six physical network hops on Windows Server 2016.

Increasing network hops means increasing latency and growing time differences, and the 1ms accuracy can degrade unless the network path is symmetrical.

Do not forget that every network is different and measurements taken depend on varying environmental factors.

Best Practices for Accurate Timekeeping

1. Solid Source Clock

A machine's timing is only as good as its source clock. To achieve 1ms accuracy, GPS hardware or a time appliance should be installed as the reference for the master source clock.

The default time.windows.com may not give an accurate or stable local time source. Also, as you move away from the source clock, you are bound to lose time.

2. Hardware GPS Options

The various hardware solutions that offer accurate time rely on GPS antennas; radio and dial-up modem solutions are also accepted. The hardware options connect through PCIe or USB ports.

Different options give varying time accuracy, and the final accuracy depends on the environment.

Environmental factors that interfere with accuracy include GPS availability, network stability, the PC hardware, and network load.

3. Domain and Time Synchronization

Computers in a domain use the domain hierarchy to determine the machine to be used as a source for time synchronization.

Every domain member will look for a machine to sync with and save it as its source. Every domain member will follow a different route that leads to its source time. The PDC in the Forest Root should be the default source clock for all machines in the domain.

Here is a list of how roles in the domain find their original time source.

  • Domain Controller with PDC role

This is the machine with authority over the time source for the domain. The time it issues is mostly accurate, and it must synchronize with a DC in the parent domain, except where the GTIMESERV role is active.

  • Other Domain Controller

This will take the role of a time source for clients and member servers in the domain. A DC synchronizes with the PDC of its domain or any DC in the parent domain.

  • Clients or Member Servers

This type of machine will synchronize with any DC or PDC within its domain or picks any DC or PDC in the parent domain.

When sourcing the original clock, a scoring system is used to identify the best time source. Scoring takes into account the reliability of the time source and its relative location, and it happens only once, when the time service starts.

To fine-tune time synchronization, add good timeservers in a specific location and avoid redundancy.

Mixed Operating System Environments (Windows 2012 R2 and Windows 2008 R2)

A pure Windows Server 2016 domain environment gives you the best time accuracy.

Deploying a Windows Server 2016 Hyper-V in a Windows 2012 domain will be more beneficial to the guests because of the improvements made in Server 2016.

A Windows Server 2016 PDC delivers accurate time due to the positive changes to its algorithms, which also acts as a credible source.

You may not have the option of replacing the PDC, but you can add a Windows Server 2016 DC with the GTIMESERV flag as one way of upgrading time accuracy for the domain.

A Windows Server 2016 DC delivers better time to down-level clients, and it is always good to use it as the source of NTP time.

As already stated above, clock polling and refresh frequencies are modified in Windows Server 2016.

You can also change the settings manually to match the down-level DCs or make the changes using the group policy.

Versions prior to Windows Server 2016 have problems keeping accurate time, since their system clocks drift immediately after a change is made.

Obtaining samples from accurate NTP sources and conditioning the clock leads to small, gradual changes in the system clock, ensuring better timekeeping on down-level OS versions.

In some cases involving guest domain controllers, samples from the Hyper-V TimeSync provider can disrupt time synchronization. For Server 2016, this should no longer be an issue when the guest machines run on Server 2016 Hyper-V hosts.

You can use the following registry keys to disable the Hyper-V TimeSync service from giving samples to w32time:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider

“Enabled”=dword:00000000
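One way to apply this, mirroring the reg add commands used later in this article, is from an elevated prompt, followed by a time service restart:

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider /v Enabled /t REG_DWORD /d 0 /f

net stop w32time && net start w32time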

Allow Linux to Use Hyper-V Host Time

For Linux guest machines running on Hyper-V, clients are normally configured to use the NTP daemon for time synchronization against NTP servers.

If the Linux distribution supports version 4 TimeSync protocol with an enabled TimeSync integration on the guest, then synchronization will take place against the host time. Enabling both methods will lead to inconsistency.

Administrators are advised to synchronize against the host time by disabling the NTP time synchronization by using any of the following methods:

  • Disabling NTP servers in the ntp.conf file
  • Disabling the NTP Daemon

In this particular configuration, the time server parameter is the host, and it polls at a frequency of 5 seconds, which is also the clock update frequency.

Exclusive synchronization over NTP demands that you disable the TimeSync integration service in the guest machine.

NOTE: Accurate time support on Linux requires a feature found only in the latest upstream Linux kernels. As of now, it is not available across most Linux distros.

Specify Local Reliable Time Service Using the GTIMESERV

The GTIMESERV allows you to specify one or more domain controllers as the accurate source clocks.

For example, you can use a specific domain controller with GPS hardware and flag it as GTIMESERV to make sure that your domain references a clock based on GPS hardware.

TIMESERV is a related Domain Services flag that indicates whether the machine is currently authoritative; it can change if the DC loses its connection.

When the connection is lost, the DC returns the “Unknown Stratum” error when you query via the NTP. After several attempts, the DC will log System Event Time Service Event 36.

When configuring a DC as your GTIMESERV, use the following command:

w32tm /config /manualpeerlist:"master_clock1,0x8 master_clock2,0x8" /syncfromflags:manual /reliable:yes /update

If the DC has a GPS hardware, use the following steps to disable the NTP client and enable the NTP server:

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders\NtpClient /v Enabled /t REG_DWORD /d 0 /f

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders\NtpServer /v Enabled /t REG_DWORD /d 1 /f

Then, restart Windows Time Service

net stop w32time && net start w32time

Finally, tell network hosts that this machine has a reliable time source using this command:

w32tm /config /reliable:yes /update

To confirm the changes, run the following commands, which should show the results listed below each one:

w32tm /query /configuration

Value | Expected Setting
--- | ---
AnnounceFlags | 5 (Local)
NtpServer | (Local)
DllName | C:\WINDOWS\SYSTEM32\w32time.DLL (Local)
Enabled | 1 (Local)
NtpClient | (Local)

w32tm /query /status /verbose

Value | Expected Setting
--- | ---
Stratum | 1 (primary reference – syncd by radio clock)
ReferenceId | 0x4C4F434C (source name: "LOCL")
Source | Local CMOS Clock
Phase Offset | 0.0000000s
Server Role | 576 (Reliable Time Service)

Windows Server 2016 on 3rd party Virtual Platforms

Virtualizing Windows means that responsibility for time defaults to the hypervisor.

However, members of a domain need to synchronize with a domain controller for AD to work effectively, so the best you can do is disable time virtualization between guests and third-party virtual platforms.

Discover the Hierarchy

The chain of the time hierarchy up to the master clock is dynamic and not negotiated, so you must query the status of a specific machine to get its time source. This analysis helps in troubleshooting synchronization issues.

If you are ready to troubleshoot, find the time source by using the w32tm command:

w32tm /query /status

The output includes the source; finding the source is the first step in tracing the time hierarchy.

The next step is to use the source entry with the /stripchart parameter to find the next time source in the chain.

w32tm /stripchart /computer:MySourceEntry /packetinfo /samples:1

The command below gives a list of domain controllers found in a specific domain and relays the results that you can use to determine each partner. The command also includes machines with manual configurations.

w32tm /monitor /domain:my_domain

You can use the list to trace the results through the domain and know their hierarchy and time offset at each step.

If you mark the point where the time offset increases, you can identify the cause of the incorrect time.

Using Group Policy

Group Policy can be used to enforce strict accuracy by making sure clients are assigned specific NTP servers, and to control how down-level OSes behave when virtualized.

Look at the following list of all possible scenarios and relevant Group Policy settings:

  • Virtualized Domains

To gain control over the Virtualized Domain Controllers in Windows 2012 R2, disable the registry entry corresponding to the virtual domain controllers.

You may not want to disable the PDC entry, because in most cases the Hyper-V host delivers a stable time source. Changing the registry entry requires restarting the w32time service.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider]

“Enabled”=dword:00000000

  • Accuracy Sensitive Loads

For any workload that is sensitive to time accuracy, ensure that the machine groups are set to use the NTP servers and the related time settings, such as update frequency and polling.

This is a task normally handled by the domain, but if you want more control, target specific machines to point at the master clock.

Group Policy Setting | New Value
--- | ---
NtpServer | ClockMasterName,0x8
MinPollInterval | 6 (64 seconds)
MaxPollInterval | 6 (64 seconds)
UpdateInterval | 100 (once per second)
EventLogFlags | 3 (all special time logging)

NOTE: The NtpServer and EventLogFlags settings are located under System\Windows Time Service\Time Providers, following the Configure Windows NTP Client settings. The other three are under System\Windows Time Service, following the Global Configuration Settings.

Remote Accuracy Sensitive Loads

For systems running in branch domains, such as retail and Payment Card Industry (PCI) environments, Windows will use the current site data and DC Locator to find the local DC, unless a manual NTP time source is configured.

In such an environment, you need 1 second accuracy, with the option of letting the w32time service move the clock backwards.

If you can meet the requirements, use the table below to create a policy.

Group Policy Setting | New Value
--- | ---
MaxAllowedPhaseOffset | 1 (if the offset is more than one second, set the clock to the correct time)

MaxAllowedPhaseOffset is a setting you will find under System\Windows Time Service in the Global Configuration Settings.

Azure and Windows IaaS Considerations

  • Azure Virtual Machine: Active Directory Domain Services

If you have an Azure VM running Active Directory Domain Services as part of an existing domain forest, TimeSync (VMIC) should not be running.

Disabling VMIC allows all DCs in both physical and virtual forests to use a single time sync hierarchy.

  • Azure Virtual Machine: Domain-Joined Machine

If you have a host whose domain links to an existing Active Directory forest, whether virtual or physical, the best you can do is disable TimeSync for the guest and make sure W32Time is set to synchronize with the domain controller.

  • Azure Virtual Machine: Standalone Workgroup Machine

If your Azure VM is not part of a domain and is not a domain controller, you can keep the default time configuration and let the VM synchronize with the host.

Windows Applications that Require Accurate Time

Time Stamp API

Programs or applications that need time accuracy in line with UTC should use the GetSystemTimePreciseAsFileTime API to get the time as conditioned by the Windows Time Service.
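As a quick, hedged way to see what this API returns (a sketch using P/Invoke from PowerShell, in keeping with the rest of this article; the namespace and variable names are arbitrary):

# Expose the kernel32 function to PowerShell through a small inline C# shim.
$k32 = Add-Type -Namespace Win32 -Name PreciseTime -PassThru -MemberDefinition '[DllImport("kernel32.dll")] public static extern void GetSystemTimePreciseAsFileTime(out long fileTime);'

# Call it and convert the FILETIME value to a readable UTC timestamp.
[long]$ft = 0
$k32::GetSystemTimePreciseAsFileTime([ref]$ft)
[DateTime]::FromFileTimeUtc($ft)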

UDP Performance

An application that uses UDP to communicate during network transactions should minimize latency. You have the registry options to use when configuring different ports. Note that any changes to the registry should be restricted to system administrators.

Windows Server 2012 and Windows Server 2008 need a Hotfix to avoid datagram losses.

Update Network Drivers

Some network cards have updates that help improve performance and buffering of UDP packets.

Logging for System Auditors

Time tracing regulations may require you to archive w32tm logs, performance monitor data, and event logs. Later, these records can be used to confirm your compliance at a specific time in the past.

You can use the following to indicate time accuracy:

  • Clock accuracy using the computed time offset counter
  • Clock source looking for “peer response from” in the w32tm event logs
  • Clock condition status, using the w32tm logs to validate the occurrence of “ClockDispln Discipline: *SKEW*TIME*”

Event Logging

An event log can give you the complete story held in the information it stores. If you filter on the Time-Service logs, you will discover the influences that have changed the time. Group Policy can affect which events are logged.

W32time Debug Logging

Use the command utility w32tm to enable audit logs. The logs will show clock updates as well as the source clock.

Restarting the service enables new logging.
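The enable command looks like the following (the same invocation appears in the troubleshooting section later; the file path and size are adjustable), and /debug /disable turns logging off again:

w32tm /debug /enable /file:C:\Windows\Temp\w32time-test.log /size:10000000 /entries:0-300

w32tm /debug /disable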

Performance Monitor

The Windows Server 2016 time service counters can collect the logging information that auditors need. You can log the data locally or remotely by recording the machine's Computed Time Offset and NTP Roundtrip Delay counters.

Like any other counter, you can create remote monitors and alerts using the System Center Operations Manager. You can set an alert for any change of accuracy when it happens.

Windows Traceability Example

Using sample log files from the w32tm utility, you can validate two pieces of information. The first is that the Windows Time Service is conditioning the system clock, as the log file shows at a given time.

151802 20:18:32.9821765s – ClockDispln Discipline: *SKEW*TIME* – PhCRR:223 CR:156250 UI:100 phcT:65 KPhO:14307

151802 20:18:33.9898460s – ClockDispln Discipline: *SKEW*TIME* – PhCRR:1 CR:156250 UI:100 phcT:64 KPhO:41

151802 20:18:44.1090410s – ClockDispln Discipline: *SKEW*TIME* – PhCRR:1 CR:156250 UI:100 phcT:65 KPhO:38

All the messages that start with “ClockDispln Discipline” are proof enough that your system clock is being conditioned by w32time.

The next step is to find the last report before the time change to get the source computer that is the current reference clock.

As in the example below, we have the IPv4 address 10.197.216.105 as the reference clock. Another reference could point to a computer name or the VMIC provider.

151802 20:18:54.6531515s – Response from peer 10.197.216.105,0x8 (ntp.m|0x8|0.0.0.0:123->10.197.216.105:123), ofs: +00.0012218s

Now that the first section is valid, investigate the log file on the reference time source using the same steps.

This will lead you to a physical clock such as a GPS device, or a known time source like the National Institute of Standards and Technology (NIST). If the clock is GPS hardware, the manufacturer's logs may be required.

Network Considerations

The NTP protocol algorithm depends on the network symmetry, making it difficult to predict the type of accuracies needed for certain environments.

You can use Performance Monitor and the new Windows Time counters in Windows Server 2016 to create baselines.

The Precision Time Protocol (PTP) and the Network Time Protocol (NTP) are the two that you can use to gauge accurate time.

If clients are not part of a domain, Windows uses Simple NTP by default. Clients within a Windows domain use the secure NTP protocol, also referred to as MS-SNTP, which leverages domain negotiation and consequently has an advantage over plain authenticated NTP.

Reliable Hardware Clock (RTC)

Windows will not step the time unless conditions are far beyond the norm. Instead, w32tm adjusts the clock frequency at regular intervals, governed by the clock update frequency setting, which is 1 second on Windows Server 2016.

It speeds the clock up if it is behind, and slows it down when it is ahead.

This explains why you need acceptable results during the baseline test: if the “Computed Time Offset” you get is not stable, you may have to verify the status of the firmware.

Troubleshooting Time Accuracy and NTP

The Discover the Hierarchy section gave us an understanding of how to trace the source of inaccurate time.

You need to look at the time offset to identify the point where divergence from the NTP sources takes place. Once you can trace the hierarchy of time, focus on the divergent system and gather more information to determine the issues causing the inconsistencies.

Here are some tools that you can use:

  • System event logs
  • Enable logging: w32tm /debug /enable /file:C:\Windows\Temp\w32time-test.log /size:10000000 /entries:0-300
  • The W32time registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time
  • Local network interfaces
  • Performance counters
  • w32tm /stripchart /computer:UpstreamClockSource
  • ping UpstreamClockSource (gauging latency and understanding the number of hops to the source)
  • tracert UpstreamClockSource

Problem | Symptoms | Resolution
--- | --- | ---
Local TSC unstable | Check with perfmon on the physical computer whether the sync clock is a stable clock | Update the firmware, or try alternative hardware to confirm that it displays the same issue
Network latency | w32tm stripchart displays a RoundTripDelay exceeding 10ms; use tracert to find where the latency thrives | Locate a nearby source clock for time. Install a source clock on the same network segment, or point to one that is geographically closer. A domain environment needs a machine with the GTIMESERV role
Unable to reliably reach the NTP source | w32tm /stripchart gives "request time out" | The NTP source is unresponsive; check its availability
NTP source is not responsive | Check perfmon counters for NTP Client Source Count, NTP Server Outgoing Responses, and NTP Server Incoming Requests, and compare the outcome with your baseline test results | Use the server performance counters to determine a change in load or any network congestion
Domain controller not using the most accurate clock | Changes in topology or a recently added master clock | w32tm /resync /rediscover
Client clocks are drifting | Time-Service event 36 in the System event log, or a text log with the description "NTP Client Time Source Count" going from 1 to 0 | Identify errors in the upstream source and query whether it may be experiencing performance issues

Baselining Time

Baseline tests are important because they give you an understanding of the expected performance accuracy of the network.

You can use the output to detect problems on your Windows Server 2016 in the future. The first thing to baseline is the root PDC, or any machine with the GTIMESERV role.

Every PDC in the forest should have baseline test results, and eventually you should pick the DCs that are critical and get their baseline results too.

It is useful to baseline Windows Server 2016 against 2012 R2 using w32tm /stripchart as a comparison tool; if you use two similar machines, you can compare their results and make a comprehensive analysis.

Using the performance counters, you can collect all information for at least one week to give you enough references when accounting for various network time issues.

If you have more figures for comparison, you’ll gain enough confidence that your time accuracy is stable.

NTP Server Redundancy

A manual NTP server configuration in a non-domain network needs a good redundancy measure to achieve better accuracy, provided the other components are stable.

On the other hand, if your topology is not well designed and other resources are unstable, accuracy will suffer. Take care to limit the number of time servers configured in w32time to 10.

Leap Seconds

Climatic and geological activity on planet Earth leads to varying rotation periods; typically, the rotation drifts by about one second every two years.

When the difference from atomic time grows, a correction of one second, up or down, called a leap second is applied. The correction never exceeds 0.9 seconds and is always announced six months in advance.

Before Windows Server 2016, the Microsoft time service did not account for leap seconds and relied on the external time service to handle the adjustments.

Beyond the changes made in Windows Server 2016, Microsoft is working on a more suitable solution for handling leap seconds.

Secure Time Seeding

W32time in Windows Server 2016 includes the Secure Time Seeding feature, which determines the approximate current time from outgoing Secure Sockets Layer (SSL) connections. This value helps correct gross errors in the local system clock.

You can also decide not to use the Secure Time Seeding feature, rather than keeping the default configuration in which it is active.

If you intend to disable the feature, use the following steps:

  • Set the UtilizeSSLTimeData registry value to 0 using the command below:

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\Config /v UtilizeSslTimeData /t REG_DWORD /d 0 /f

  • If the machine does not prompt for a reboot after the change, notify the W32time service about the change; it will then stop enforcing time monitoring based on data coming from SSL connections.

W32tm.exe /config /update

  • Rebooting the machine activates the settings immediately and directs the machine to stop collecting data from SSL connections.

For the above setting to take effect across the entire domain, set the UtilizeSSLTimeData value in the W32time Group Policy setting to 0 and publish the setting.

The moment the setting is picked up by a Group Policy client, the W32time service gets the notification and stops enforcing and monitoring SSL time data.

If the domain has portable laptops or tablets, you can exclude them from the policy change, because when they lose battery power they will need the Secure Time Seeding feature to reacquire the current time.

Conclusion

The latest developments in Microsoft Windows Server 2016 mean that you can now get the most accurate time on your network, provided you observe some conditions.

The main job of the Windows Time Service (W32Time) is to give your machine the time, regardless of whether it is standalone or part of a network environment.

The primary use of time in a Windows Server 2016 environment is to keep Kerberos authentication secure.

W32Time makes replay attacks almost impossible in an Active Directory environment or when running virtual machines on Hyper-V hosts.

How to Optimize Your Active Directory for Windows Server 2016

Microsoft Windows Server 2016 is still a valid choice in the market, and organizations are already asking their IT experts to evaluate its added value and the challenges they may encounter when moving from current systems to the new server platform. In addition to the features found in Windows Server 2012 and 2012 R2, Windows Server 2016 presents new possibilities and capabilities missing from previous Windows Server platforms. Any new Windows Server operating system that breaks into the market gets attention, and Windows Server 2016 has made tremendous improvements to its Active Directory.

The best approach before implementing Windows Server 2016 is to test its readiness by looking for ways to minimize the likely impact of migration. Another is to identify organizational needs and how they can be integrated into future implementations. The reason administrators would want to try the Windows Server 2016 Active Directory is to provide an opportunity for growth, offer flexibility, and enhance the security setup of the organization.

Why Does Windows Server 2016 Matter

Windows Server 2016 represents a combination of principles that define computation; identity; management and automation; security and assurance; and storage. These break down into the core elements of the server operating system: virtualization, system administration, network management and Software Defined Networking (SDN) technologies, cloud integration and management, and disk management and availability. All of these are meant to bring organizations into the future of technology without discarding the infrastructure used in the current environment.

Windows Server 2016 is a full-featured server operating system boasting solid performance and modern advancements. It shares many similarities with the Datacenter edition, which incorporates support for Hyper-V containers, new storage features, and enhanced security designed to protect virtual machines and network communications that have no trust configured between them.

This article should help you, the reader, learn more about Windows Server 2016 features, the factors to consider before moving from an old to a new setup, and how to optimize your Active Directory, with details on how to prepare for the move and migrate efficiently while managing the new environment effectively.

Windows Server 2016 New Features

Several features and enhancements form part of this server operating system. Here are some of the highlights:

Temporary Group Membership

This form of membership gives administrators a way of adding users to a security group for a limited time. For this feature to work, the Windows Server 2016 Active Directory must be operating at the Windows Server 2016 functional level, so system administrators need to know all the installation requirements beforehand, during and after the transition. A sketch of the PowerShell side follows.
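As a hedged sketch (the group and user names are hypothetical, and the Privileged Access Management optional feature must be enabled in the forest), a time-bound membership could look like this:

# Add a user to a privileged group for eight hours; AD removes the membership automatically when the TTL expires.
Add-ADGroupMember -Identity 'Domain Admins' -Members 'jdoe' -MemberTimeToLive (New-TimeSpan -Hours 8)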

Active Directory Federation Service

There are essential changes that come with Microsoft Windows 2016 Server Federation Service:

Conditional Access Control

Active Directory in previous installations had straightforward access controls, because the assumption had always been that all users would log in from a computer joined to a domain with proper Group Policy security settings. Conditional access gives users access to the resources that have been assigned to them.

In the current technological setup, users access resources from different types of devices that are not joined to the domain and often operate outside the organization's norms. This is a direct call for improved security, and the Conditional Access Control feature enables administrators to gain better control by handling users' requests on a per-application basis. For example, administrators may enforce multi-factor authentication when such devices try to access business applications.

Support for Lightweight Directory Access Protocol (LDAP) v3

Another change introduced with the Active Directory Federation Services is support for the Lightweight Directory Access Protocol (LDAP) v3. This capability makes it easier to centralize identities across different directories. For example, an organization that uses a non-Microsoft directory for identification and access control can centralize identities to the Azure cloud or Office 365. LDAP v3 also makes it easier to configure single sign-on for SaaS applications.

Domain Name System (DNS)

Active Directory and DNS go hand in hand because of the dependency of Windows Server systems on DNS. There were no significant changes to the Windows Server DNS service until the arrival of Windows Server 2016. The following are the new DNS features:

DNS Policies

The ability to create DNS policies is said to be the most significant change. These policies enable administrators to control the way DNS responds to different queries; examples include load balancing and blocking DNS requests coming from IP addresses whose domains have been listed as malicious. A sketch of the blocking example follows.
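A hedged sketch of the blocking example (the subnet name and address range are hypothetical), using the DnsServer PowerShell module on Windows Server 2016:

# Define a client subnet to treat as malicious, then ignore queries coming from it.
Add-DnsServerClientSubnet -Name 'MaliciousSubnet' -IPv4Subnet '172.0.33.0/24'
Add-DnsServerQueryResolutionPolicy -Name 'BlockMalicious' -Action IGNORE -ClientSubnet 'EQ,MaliciousSubnet'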

Response Rate Limit

The rate at which the server responds to DNS queries can now be controlled. This control is designed to help defend against external attacks, such as denial of service, by limiting the number of times per second the DNS server will respond to a given client; an example follows.
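For instance (the numeric limits are illustrative, not recommendations), response rate limiting can be switched on with:

# Allow at most 10 identical responses and 10 errors per second per client before the server starts dropping responses.
Set-DnsServerResponseRateLimiting -Mode Enable -ResponsesPerSec 10 -ErrorsPerSec 10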

Microsoft IP Address Management (Microsoft IPAM)

Another significant improvement related to DNS is the IP Address Management (IPAM) system, which helps track IP address usage. IPAM's integration with DHCP has long been robust, while its DNS integration was minimal; Windows Server 2016 adds new DNS management capabilities such as DNS record inventory. Support for multiple Active Directory forests is a welcome feature, though it is only possible if a trust already exists between the forests and IPAM is installed in each one.
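
Provisioning IPAM's access to a managed domain is typically done through Group Policy; a hedged one-liner (the domain, GPO prefix, and server names are placeholders) looks like this:

    # Create the IPAM provisioning GPOs for one managed domain
    Invoke-IpamGpoProvisioning -Domain 'contoso.com' -GpoPrefixName 'IPAM' -IpamServerFqdn 'ipam1.contoso.com'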

Migration Considerations

Planning is critical when moving from an earlier Windows Server version to Server 2016. The goal of any migration should be minimizing its impact on business operations. The migration is also an opportunity for administrators to set up a scalable, flexible, compliant, and secure platform.

Understanding the Existing Server Environment

It is a rookie mistake to jump into implementation without a proper analysis of the current server environment. Assessment at this stage should look at users, groups, distribution lists, applications, folders, and Active Directory. On the business side, workflows, email, programs, and any supporting infrastructure should be assessed before making the big move.

It is also vital that you:

  • Understand what needs to be moved and what should be left as it is. For example, there is no need to move inactive accounts or old data that is no longer relevant, whereas all active data stores, mailboxes, and users should not be left behind (a sketch for finding inactive accounts follows this list).
  • Analyze the applications, users, and processes that need access and should be migrated, to ensure the relevant resources are available during and after the transfer.
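
For the first point, a simple sketch of flagging accounts you may not need to carry over (the 90-day window is an arbitrary example):

    # List user accounts with no logon in the last 90 days -- candidates to leave behind
    Import-Module ActiveDirectory
    Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly | Select-Object Name, SamAccountName, LastLogonDate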

Improving Active Directory Security and Compliance Settings

Another critical factor to consider during migration is security and delegation: controlling who makes changes to Active Directory objects and policies. Many organizations grant access to Active Directory objects to solve an immediate problem and never clear the permissions afterwards. Proper controls should be in place to manage what can be added to the AD and who is responsible for making such changes.

Activities in the Active Directory should be monitored continuously to confirm they comply with both internal and external regulations. Windows Server and AD can audit events with visible output, and auditing can be implemented quickly even in a busy environment. A coherent AD audit trail with analytical capabilities is critical for flagging unauthorized changes, spotting inappropriate use of the AD and related resources, tracking users across the entire infrastructure, and giving compliance reports to the auditors.
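
Even without third-party tooling, directory-object changes can be pulled straight from a domain controller's Security log, assuming Directory Service Changes auditing is enabled (the server name 'DC01' is a placeholder):

    # Event ID 5136 = "A directory service object was modified"
    Get-WinEvent -ComputerName 'DC01' -FilterHashtable @{ LogName = 'Security'; Id = 5136 } -MaxEvents 50 | Select-Object TimeCreated, Message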

Ensuring Application Compatibility

Before initiating the migration, make sure that all software and third-party applications used in your organization are compatible with Windows Server 2016. All in-house applications should also be tested to confirm they work correctly in the new environment.

Minimizing Impact on Business

Verifying in-house software compatibility is one aspect of reducing the migration's cost to the business. As an administrator, you need to know how downtime will be handled when moving from the legacy system to the new one. One thing to avoid is underestimating the impact of migration on users and operations by failing to analyze all access points. Many such challenges can be sidestepped by scheduling resource-intensive migration tasks during off-peak hours.

A rough transition between the legacy and new systems can lead to service disruptions, lost productivity, and an increased cost of doing business. Coexistence of the old and new systems is essential in any Active Directory migration because users still need access to resources to ensure continuity, and directory synchronization at this stage makes sure users can reach their data.

Restructuring the Active Directory

Moving from your legacy system to Windows Server 2016 should be taken seriously and not treated like any other routine IT task. It is an opportunity to restructure your Active Directory to meet current and future needs. Significant system upgrades are often prompted by changes in organizational models and requirements, and changes in IT technology are another major force behind Active Directory restructuring.

Determine the number of domains and forests needed. Examine the need to merge some forests or create new ones. You can also take the opportunity to extend the new infrastructure to remote offices that did not exist when the legacy system was built.
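
A starting point for that analysis is simply inventorying what you have today; a minimal sketch:

    # Snapshot the current forest and domain before deciding what to merge or retire
    Import-Module ActiveDirectory
    Get-ADForest | Select-Object Name, ForestMode, Domains, GlobalCatalogs
    Get-ADDomain | Select-Object DNSRoot, DomainMode, PDCEmulator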

Active Directory Management and Recovery

Every IT team faces challenges when managing Active Directory on a daily basis. Configuring user properties is time-consuming and error-prone in a large, complex Windows network. Some duties must be performed manually, producing repetitive, mundane tasks that end up taking most of an administrator's time. Accomplishing these tasks with native Windows tools or PowerShell also requires a deeper understanding of how Active Directory and its features work.
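
To illustrate what the native route involves, here is a hedged sketch of bulk user creation from a CSV file (the file name and its Name, SamAccountName, and OU columns are assumptions):

    # Create one disabled user per CSV row; enable the accounts after review
    Import-Module ActiveDirectory
    Import-Csv .\new-users.csv | ForEach-Object {
        New-ADUser -Name $_.Name -SamAccountName $_.SamAccountName -Path $_.OU -Enabled $false
    }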

Purpose-built software simplifies repetitive Active Directory tasks and provides detailed reports on tasks and their status. Such tools help in planning and executing an efficient AD restructuring, which ultimately helps you implement a secure system. They give a common console from which management can view and administer Active Directory users, computers, and groups, and some let administrators delegate repetitive tasks securely and perform controlled automation of the Active Directory structure.

Software Implementation

Two popular software packages used for Active Directory management and optimization tasks are:

  1. ADManager Plus
  2. Quest Software

Both can help with restructuring and consolidating Active Directory when moving to a new Windows Server 2016 environment.

ADManager Plus

ADManager Plus offers additional features such as sending customized notifications via SMS or email. Its search options let IT managers query the directory easily through the interface panel, and the IT department can execute Windows optimization tasks with ease while integrating with utilities such as ServiceNow, ServiceDesk Plus, and ADSelfService Plus.

Active Directory User Management

ADManager Plus can manage thousands of Active Directory accounts through its interface. It helps you create and modify users by configuring general attributes, Exchange Server attributes and policies, terminal service attributes, and remote logon permissions. You can provision new users in Office 365 and G Suite while creating their accounts in Active Directory, and design templates that let the help desk team modify and configure user accounts and properties in a single action.

Active Directory Computer Management

This solution allows all computers in the environment to be managed from any location. You can create computer objects in bulk using CSV templates, modify their group and general attributes, move them between organizational units, and enable or disable them (a native-PowerShell comparison follows below).
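
For comparison, the same kind of bulk move can be scripted natively; a sketch assuming a CSV with ComputerName and TargetOU columns:

    # Move each listed computer into its target organizational unit
    Import-Module ActiveDirectory
    Import-Csv .\computers.csv | ForEach-Object {
        Get-ADComputer -Identity $_.ComputerName | Move-ADObject -TargetPath $_.TargetOU
    }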

Active Directory Group Management

Group management is made more flexible by software modules that create and modify groups from templates and apply all the configured attributes in an instant.

Active Directory Contact Management

You can use the tool to import and update Active Directory contacts in a single process, so you do not have to select individual contacts for every update.

Active Directory Help Desk Delegation

The ADManager Plus delegation feature helps administrators create help desk technicians and delegate tasks related to user attributes to them. Repetitive management tasks for users, groups, computers, and contacts can be delegated using customized account creation templates, letting help desk users share the administrators' workload and freeing administrators for core duties.

Active Directory Reports and Management

ADManager Plus also reports on different objects within the AD, allowing information to be viewed and analyzed in its web interface. For example, you can list all inactive users and modify those accounts accordingly.

Quest

Quest takes a different approach, with tools covering preparation, migration, consolidation, restructuring, security and compliance, and recovery.

Preparation

During preparation, Quest helps assess the existing environment: its Enterprise Reporter gives a detailed evaluation of the current setup, including Active Directory, Windows Server, and SQL Server. The assessment can report the number of accounts in the Active Directory and separate the active ones from the disabled. Knowing the exact status of your environment is paramount before the migration begins.

Quest also discovers identities and inventories on application servers that depend on the domains being moved, so you can fix or redirect them on the new server.

Migration, Consolidation, and Restructuring

Migration Manager for Active Directory provides ZeroIMPACT AD restructuring and consolidation. It keeps migrated and yet-to-be-migrated users coexisting peacefully by maintaining secure access to workstations and resources.

Secure Copy offers an automated solution for quickly migrating and restructuring files on the data server while preserving security and access settings. Its robustness makes it well suited to planning and verifying successful file transfers.

Migrator for Novell Directory Services (NDS) helps administrators move from Novell eDirectory to Active Directory. The tool also moves all data held in Novell and reassigns permissions to the new identities on the new server.

Security and Compliance

Change Auditor for Active Directory gives a complete evaluation of all changes that have taken place in the Active Directory. The report includes who made each change, what kind of change it was, the values before and after the adjustment, and the workstation where the change originated. The tool can also prevent changes; for example, you can block the deletion or transfer of organizational units and changes to Group Policy settings.

Access Control

Active Roles helps keep AD security compliant by letting you control access and delegate tasks on a least-privilege basis. You can generate access rules from defined administrative policies and access rights, and use Active Roles to consolidate user groups and mailboxes as well as change or remove access rights as roles change.

Centralized Permission Management

Security Explorer facilitates the management of Microsoft Dynamic Access Control (DAC) by letting administrators add, remove, restore, back up, and copy permissions from a single console. The tool can make targeted or bulk changes to server permissions through its DAC management features, such as the ability to grant, revoke, clone, and modify permissions.

Monitoring Users

InTrust enables secure collection, storage, and reporting of event log data in line with internal and external regulations surrounding policies and security best practice. Using InTrust, you gain insight into user activity by auditing access to critical systems, and you can see suspicious logins in real time.

Management and Recovery

Group Policy is the easiest way for an IT administrator to manage user accounts, computers, and other objects. Poor management of Group Policy Objects (GPOs) can cause real damage; consider, for example, a GPO that assigns proxy settings with the wrong proxy values.

GPOADmin automates Group Policy management and provides a workflow so that GPO changes are checked before they are approved and deployed. Used in production, it reduces routine workload while improving security.
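
Whichever tool manages the workflow, it is prudent to keep restorable copies of every GPO; a minimal native sketch (the backup path is a placeholder):

    # Back up all GPOs and export a readable report for change review
    Import-Module GroupPolicy
    Backup-GPO -All -Path 'C:\GpoBackups'
    Get-GPOReport -All -ReportType Html -Path 'C:\GpoBackups\gpo-report.html'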

Recovery is a critical process in any organization that runs on Windows Server 2016, covering both wrong entries and accounts that were removed. Recovery Manager for Active Directory can report on the differences and help restore objects that were changed.

It is important to be prepared for disaster and data recovery. Should your domain fall into the wrong hands, or the entire network setup become corrupted, Recovery Manager for Active Directory is the utility to reach for.
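
As a native fallback alongside such tools, the AD Recycle Bin can bring back deleted objects; a rough sketch (forest and account names are placeholders, and enabling the feature is irreversible):

    # One-time step: enable the Recycle Bin for the forest
    Enable-ADOptionalFeature 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'contoso.com'

    # Find the deleted account and restore it
    Get-ADObject -Filter 'isDeleted -eq $true -and Name -like "jsmith*"' -IncludeDeletedObjects | Restore-ADObject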

Conclusion

Windows Server 2016 has a wealth of new features and capabilities that streamline management and improve the user experience. A successful implementation depends on a sound Active Directory consolidation process, and administrators who have already tested this server operating system should take advantage of the new capabilities.

Active Directory tools and utilities offer numerous benefits, helping you set up a flexible, secure Windows Server 2016 and Active Directory that will serve both your current and future environment. They are especially valuable to managers who are less conversant with Active Directory administration but need to move to the new server and comply with regional and international standards.

Windows Server Disk Quota – Overview

Windows Server comes with a very handy feature that allows many user accounts to share one system, each user logging in to their own disk space and custom settings. The drawback is that, by default, users have unlimited disk usage, and over time the disk fills up, leading to a slow or malfunctioning system. Have you ever wondered how to avert this situation and limit each user's share of a disk volume?

Worry no more: to overcome the scenario described above, Windows provides disk quota functionality. This feature lets you set limits on hard disk utilization so that users are restricted in how much space their files can consume. The functionality exists both on Windows and on Unix-like systems such as Linux, where it supports the ext2, ext3, ext4, and XFS filesystems. On Windows, quotas are supported from Windows 2000 onward but can be configured only on NTFS volumes, so if you are starting out with a Windows server or client system, consider formatting your volumes as NTFS to avert complications later on. Quotas can be applied to both client and server systems, including Windows Server 2008, 2012, and 2016. Note that quotas cannot be configured on individual files or folders; they are set per volume, and the restrictions apply to that volume only. To administer a disk quota, you must be an administrator or have administrative privileges, that is, be a member of the Administrators group.
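
From an elevated prompt, quota handling on an NTFS volume can be switched on with the built-in fsutil tool (C: is just an example volume):

    fsutil quota track C:     # track per-user usage without enforcing limits
    fsutil quota enforce C:   # deny writes once a user's hard limit is reached
    fsutil quota query C:     # display the volume's current quota settings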

The idea behind setting limits is to prevent the hard disk from filling up and thereby causing the system or server to freeze or behave abnormally. When a quota is surpassed, the user receives an "insufficient disk space" warning and cannot create or save any more files. A quota is a limit, normally set by the administrator, that restricts disk space utilization and stops careless or unmindful users from filling up the disk and causing a host of other problems, including slowing or freezing of the system. Quotas are ideal in enterprise environments where many users access the server to save or upload documents: an administrator can assign a maximum disk space limit so that end users are confined to work files such as Word, PowerPoint, and Excel documents, rather than filling the disk with non-essential personal files like images, videos, and music, which take up a significant amount of space. A disk quota can be configured on a per-user or per-group basis. A familiar example of disk quota usage is web hosting platforms such as cPanel or Vesta CP, where users are allocated a fixed amount of disk space according to their subscription plan.

When a disk quota system is implemented, users cannot save or upload files to the system beyond the limit threshold. For instance, if an administrator sets a limit of 10 GB of disk space for all logon users, no user can save files exceeding that 10 GB. If a limit is reached, the only ways out are to delete existing files, ask another user to take ownership of some of them, or request more space from the administrator. Note that you cannot reclaim quota by compressing files: quotas are based on uncompressed sizes, and Windows charges compressed files against the quota at their original uncompressed size. There are two types of limits: hard limits and soft limits. A hard limit is the maximum space the system will grant an end user. If, for instance, a hard limit of 10 GB is set on a drive, the user can no longer create or save files once those 10 GB are reached, forcing them to look for alternative storage elsewhere or delete existing files.

A soft limit, on the other hand, can be exceeded temporarily by an end user, but usage should never go beyond the hard limit. As usage approaches the hard limit, the end user receives a string of notifications warning them of the fact. In a nutshell, a soft limit gives you a grace period; a hard limit does not. A soft limit is set slightly below the hard limit: with a hard limit of, say, 20 GB, a soft limit of 19 GB would be appropriate. It is also worth mentioning that end users can scale their soft limits up to the hard limit or down to zero, while hard limits can be scaled down but not increased. As a courtesy, soft limits are often configured for C-level executives so that they get friendly reminders as they approach the hard limit.
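
Carrying the 19 GB/20 GB example through, the pair of limits can be set for a single user with fsutil; values are given in bytes, and the account name is a placeholder:

    # 19 GB warning threshold (soft) and 20 GB hard limit for one user on C:
    fsutil quota modify C: 20401094656 21474836480 CONTOSO\jsmith

    # Quota violations are written to the System event log; list them with:
    fsutil quota violations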

In summary, we have seen how handy disk quotas are, especially on a PC or server shared by many users. By limiting disk space utilization, they ensure the disk is not filled up by users, which would otherwise lead to malfunctioning or 'freezing' of the server. In our next topic, we'll look in detail at how to apply and implement the quotas.