Managing Disk Quotas on Windows Server 2019

Disk quota management provides a way of controlling the disk space available to users accessing the server and storing files.

When a user exceeds their quota, they can no longer add data.

The File Server Resource Manager (FSRM) feature in Windows Server 2019 gives system administrators the ability to set the storage quota as well as determine the type of files that can be saved on the file server.

What is FSRM in Windows Server?

FSRM is a Windows Server feature that allows you to classify and manage data on file servers. FSRM gives you extra control over the data on your server.

The history of FSRM dates back to 2005, when Microsoft launched the feature as part of Windows Server 2003 R2. It provides an all-in-one solution, especially for volumes that keep growing.

Disk quotas have been around for a long time. Without them, people could upload large volumes of data (mostly pirated videos, movies, or MP3 songs) to your servers.

Large-volume uploads can slow down your server and increase your operating costs. They also present a challenge with copyrighted content.

Some tools, like Windows Explorer, allow you to assign quotas to volumes, but with some limitations.

Administrators can easily switch to Windows Server 2019 via the Storage Migration Service. With a myriad of useful features, Windows Server 2019 is the ultimate server for modern businesses.

You can integrate your server with existing applications and utilize every storage feature of FSRM.

The Storage Migration service identifies your old server’s data and moves it to your new server.

Whether you want to move to the cloud or to the latest Azure servers, this migration service is your best tool.

The migration service works with all servers from Windows Server 2003 to Windows Server 2019.

The main requirement is that you domain-join your old and new servers. However, the service only supports file transfers, not applications; you’ll have to reinstall your applications manually afterwards.

FSRM Features

FSRM has the following five main features that help in accomplishing its tasks:

i. File classification structure

This disk quota feature automates all the data arranging processes. It helps administrators to access and use meaningful data.

In Windows Server 2019, the file classification structure comes with additional features for organizing server data more logically.

Examples of classification structures in Windows Servers include dynamic access control, file expiration, and file encryption. The dynamic access control policy limits users from accessing some files. Windows Server 2019 has complex file encryption techniques that protect your data from unauthorized users.

ii. File management tasks

This feature is available in most Windows Server versions. It helps administrators apply policies or conditions to data depending on how it is classified.

Such conditions include file properties such as the modification date, file location, and last access date. Files can be classified automatically by following the stipulated classification rules.

Alternatively, you can manually classify them by altering the file properties.

iii. Quota management

This feature enables an administrator to limit the size of folders or volumes. It is a useful feature, especially for new volumes and folders.

Besides, you can use this feature to create quota templates, which you can apply later to new folders and volumes.

iv. Storage reports

This component of FSRM is useful in identifying disk usage trends. It also helps administrators to understand how FSRM carries out data classification.

You can use the feature to monitor whether users are uploading unauthorized files.

v. File screening management

End users may upload gigabytes of data to your servers, resulting in a slow server with high management costs. You may also encounter compliance challenges due to pirated movies and other content.

The file screening management feature helps you to tackle this problem by allowing you to regulate what end users can upload to your server.

File screening also lets you limit the file extensions users can store in your shared folders. For instance, you can create a rule that prevents users from adding .m4a files to personal folders on the shared server.

NB: FSRM only supports NTFS-formatted volumes. It doesn’t support the Resilient File System (ReFS).

As we’ll demonstrate later, you can configure and manage these features that come with the File Server Resource Manager by using the FSRM app or the Windows PowerShell utility.

What you can do with FSRM

  • Create a policy that allows access to folders and files depending on their organization in the file server
  • Expire a file that has not been modified for a certain period of time
  • Create a 200-megabyte (MB) quota for each user and notify them when their storage usage exceeds 180 MB
  • Schedule a reporting day, such as Sunday. From the report, you can identify the most accessed files during the two days before the report.
  • Prevent users from adding music to personal shared folders
  • Create a file classification rule that flags files containing more than ten pieces of personal information as having personally identifiable information.

Benefits of FSRM

1. Supports advanced quota management capabilities

Some tools, like Windows Explorer, have fewer capabilities for arranging and managing quotas. FSRM gives you a centralized console for managing quotas not only per volume, but also per folder and per file. It also provides notifications that help you manage your quotas effectively.

With FSRM quota tools, you can apply quotas to different paths on the same volume. NTFS quotas, by contrast, can only be applied per volume, which makes them less useful.

To apply a quota to different paths, you’ll need to set up a quota template. You can then modify these quotas simply by updating the template. You can also create both soft and hard quotas.

2. Regulates server content

Managing quotas is not enough. You may need to keep potentially sensitive data off your servers. Piracy is a major concern and requires compliance from all stakeholders. Keep pirated movies, MP3s, and MP4s off your server, as hosting them could infringe copyright regulations.

Copyright infringement is a serious offense and could lead to consequences such as server closure. So, you need to stay on the safe side by enabling the file screening capability. You can introduce file screening on a volume, folder, or file. For instance, you can stop users from saving files with the .mp3 extension into C:\Personal finance.

Since the folder is meant only for finance-related files, other items like MP3 songs shouldn’t be allowed in it. If a user tries to save such a file into the folder, the system generates an error.

The error will affect all users who try to upload such content to the server. Those users will be unable to upload unwanted content, which improves your data management strategy. Moreover, you can monitor users who violate your regulations by posting unauthorized content.

To implement file screening, you use file groups. You can also apply file exclusions when configuring the file groups.

For example, you may want to block video files except those with .mp4 patterns. This way, users will be unable to save files unless they meet your defined pattern requirements. File exclusion is also possible with certain naming formats.

As such, users will be unable to save files unless they match your stipulated naming format.

3. Generates storage utilization reports

FSRM was one of the first tools to provide data and statistics on volume usage. Microsoft initially intended the tool for use on departmental servers.

You can use FSRM to generate the following data reports:

  • File location
  • Duplicate files
  • Last modified date
  • Last access date
  • File type
  • Files and folders by property
  • Folder by property
  • Least and most recently accessed files
  • Quota usage

These properties make FSRM an effective storage resource manager.

4. Locates files easily

FSRM allows you to locate files by sorting them. You can locate files easily by using file properties or performing tasks against these files.

Furthermore, you can search your files by sorting them using the last modified, name, or creation time property. Alternatively, you can use its location or file type property to locate a file easily among many other files.

How to install FSRM step-by-step

FSRM is the best tool for managing quotas and creating file screens. The installation techniques differ slightly between Windows Server editions, but the steps below apply to Windows Server 2019.

FSRM ships with Windows Server as a role service, so there is nothing to download before starting the installation.

There are two main methods for installing FSRM in Windows Server 2019:

  • PowerShell installation
  • Graphical User Interface (GUI) installation

1. PowerShell installation

To install the FSRM role feature, you can follow the steps below:

  1. Press Windows key + R to open the Run dialog
  2. Type PowerShell and click OK
  3. In the PowerShell window that appears, type the command below and press Enter:

Install-WindowsFeature -Name FS-Resource-Manager, RSAT-FSRM-Mgmt

Running this command initiates the installation of the FSRM role. You don’t have to restart the server after the installation completes.
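
If you want to confirm the installation from the same PowerShell session, you can query the feature state. This is a minimal sketch; the feature names match the install command above:

```powershell
# Check that the role service and its management tools are installed
Get-WindowsFeature -Name FS-Resource-Manager, RSAT-FSRM-Mgmt |
    Select-Object Name, InstallState

# An InstallState of "Installed" means the feature is ready to use;
# "Available" means the install did not run or did not complete.
```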

FS-Resource-Manager installs the FSRM role service itself, while RSAT-FSRM-Mgmt installs the management tools, including the GUI console used to manage FSRM. On a server without a GUI, you can omit the management tools and instead install the RSAT tools on a remote machine, from which you can manage FSRM remotely.

2. GUI installation

If you want to install FSRM using the GUI instead of PowerShell, you’ll use Server Manager. With the GUI, you’ll have to perform more steps than with the PowerShell installation.

Here are the steps for installing FSRM using GUI.

Open Server Manager and click on ‘Add Roles and Features’:

On the next Window, click ‘Next’:

On the next screen, choose the “Role-based or feature-based installation” option:

Choose the server or virtual hard disk to which you want to add roles and features. Then, click “Next”:

On the “Select server roles” page, expand the File and Storage services tab:

After checking the “File Server Resource Manager” box, click the “Next” button.

The next page will prompt you to install FSRM RSAT tools. Click “Add Features” and proceed:

Confirm the FSRM installation.

After waiting for the installation to complete, you can launch the FSRM from the server manager:

To access FSRM, go to Tools > File Server Resource Manager:

The FSRM window will open:

Configuring quotas

As mentioned earlier, quotas put a limit on the disk space allowed on a drive or a folder. Importantly, quotas come in handy when sharing a personal drive among many users. For example, you can put a limit of 10 GB on a drive or a folder.

There are two types of quotas:

  • Quota on the path—the quota is applied to the main folder only
  • Auto apply template and create quota—the main folder and all of its subfolders fall under the defined quota. That is, if you apply a quota of 10 GB to the parent folder, each of the subfolders will also have a quota of 10 GB

Creating quota templates

Before embarking on creating quotas, let’s talk about how to create quota templates or predefined templates. Here are the steps.

First, right-click on Quota Templates and select the ‘Create Quota Template’ option:

Specify the quota name and the quota limit, as shown below. A hard quota means that you cannot exceed the limit. With a soft quota, you get notified when you exceed the limit.

When all is done, click ‘OK’:

If you wish to send notifications via email, click the ‘Add’ button:

Then, fill out all the necessary details, such as the threshold for getting notifications and the administrator’s email.

Then click “OK”:
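
The same template can be created from PowerShell using the FSRM cmdlets. The template name, threshold percentage, and email address below are illustrative examples, not values required by FSRM:

```powershell
# Email the administrator when usage reaches 85% of the limit.
# [Source Io Owner] and [Quota Path] are FSRM notification variables.
$action = New-FsrmAction -Type Email -MailTo "admin@example.com" `
    -Subject "Quota threshold reached" `
    -Body "User [Source Io Owner] is near the quota limit on [Quota Path]."

$threshold = New-FsrmQuotaThreshold -Percentage 85 -Action $action

# A 200 MB hard quota template; add -SoftLimit for a soft quota instead
New-FsrmQuotaTemplate -Name "200 MB Limit" -Size 200MB -Threshold $threshold
```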

How to configure FSRM

Configuring FSRM is straightforward and consists of three main areas:

  • Quota management
  • File screening
  • Storage reporting

1. How to create and manage quotas

In a shared server system, users can add their files to the server. To prevent overwhelming the server, the administrator can use quotas to allocate each user a storage portion.

There are two main types of quotas:

  • Soft quotas, whereby users can exceed the set limit. The administrator monitors the users on a specific path and their current storage usage; once the soft limit is passed, the server sends a usage warning or generates event logs.
  • Hard quotas, whereby a user cannot exceed the set limit. Once a user reaches the limit, they will be unable to store more data in the file path.

How to create a quota

First, go to the FSRM management interface and select “Quotas”. Then, select the “Create Quota” option:

Next, you can create a new quota by specifying various options, such as:

  • The path for the new quota—the folder your new quota applies to. You can apply the quota to the path itself or auto-apply a template to its existing and future subfolders.
  • Custom quota properties—you can choose a soft or hard quota limit. For a soft limit, you’ll also want to configure a warning notification.

Lastly, click the Create button.

The quota is now created, and you can see it in the console, as shown below:
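
If you prefer PowerShell, the equivalent cmdlets are sketched below; the paths and template name are examples only:

```powershell
# A hard 200 MB quota on a single folder
New-FsrmQuota -Path "D:\Shares\Users\Alice" -Size 200MB

# Auto-apply a previously created template to every existing
# and future subfolder of the parent (the second quota type above)
New-FsrmAutoQuota -Path "D:\Shares\Users" -Template "200 MB Limit"

# Review the quotas that now exist under the share
Get-FsrmQuota | Where-Object { $_.Path -like "D:\Shares\Users*" }
```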

2. Configuring file screening

File screening restricts certain file formats from being stored in a path. It works on file names only, not on file contents.

FSRM comes with two categories of file screening:

  • Active screening—does not allow users to save restricted files
  • Passive screening—allows users to save restricted files, but monitors them

How to configure a file screening template

If you want to create a file screening template, right-click on the ‘File Screen Templates’ option:

In the pull-down menu, click on “Create File Screen Template”. Then, fill in all the required details in the template window:
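
File screens can also be configured from PowerShell. The sketch below, with example paths and group names, blocks audio and video files except .mp4, matching the exclusion scenario described earlier:

```powershell
# A file group defines the patterns to block; exclusions carve out allowed ones
New-FsrmFileGroup -Name "Blocked Media" `
    -IncludePattern @("*.mp3", "*.m4a", "*.avi") `
    -ExcludePattern @("*.mp4")

# -Active rejects the save outright (active screening);
# use -Active:$false for passive screening with monitoring only
New-FsrmFileScreen -Path "D:\Shares\Personal" -IncludeGroup "Blocked Media" -Active
```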

3. Generating reports

FSRM allows you to generate various reports that assist with your file server management tasks. You can schedule these reports over a certain period to monitor disk usage trends.

The reports can help you to monitor a user or groups of users who may attempt to store unauthorized files on your server. The FSRM tool allows you to generate such reports instantly.

To access your reports, open the file report tasks, click “Generate report”, and press “OK”. The system will generate a DHTML report and prompt you to open it.
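
The same reports are available from PowerShell. This sketch uses example names and paths; DuplicateFiles and QuotaUsage are two of the built-in report types:

```powershell
# Run a one-off interactive report against a share
New-FsrmStorageReport -Name "UsageNow" -Namespace @("D:\Shares") `
    -ReportType @("DuplicateFiles", "QuotaUsage") -Interactive

# Or schedule the report for Sundays at 02:00 instead
$schedule = New-FsrmScheduledTask -Time (Get-Date "02:00") -Weekly Sunday
New-FsrmStorageReport -Name "WeeklyUsage" -Namespace @("D:\Shares") `
    -ReportType "QuotaUsage" -Schedule $schedule
```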

Conclusion

Those are the everyday tasks you can perform with the File Server Resource Manager in Windows Server 2019. You can also perform more complex tasks, such as file server classification.

Do you have any comments or questions?

Please post them below.

Windows Server Deduplication: An Essential Introduction

Deduplication has been one of the most useful features of Windows Server since its introduction in Windows Server 2012.

It is a native feature, added through Server Manager, that helps system administrators plan server storage and manage network volumes.

Most server administrators rarely talk about this feature until it is time to address the organization’s storage crunch.

Data deduplication works by identifying similar data blocks and saving a single copy as the central source, thus reducing the spread of duplicate data across the storage areas. Deduplication takes place at the file or block level, giving you more space on the server.

Block-level deduplication requires special, relatively expensive hardware components because of the complex processing involved.

File-level deduplication is not complicated and thus does not require additional hardware. As such, administrators implementing deduplication usually prefer the file-level approach.

When to Apply Windows Server Deduplication

Since Windows Server file deduplication works at the file level, its operations run at a higher level than block deduplication while still trying to match chunks of data.

File deduplication operates at the operating system level, meaning that you can enable this feature within a virtual guest in a hypervisor environment.

Industry growth is also driving demand for deduplication, even though storage hardware keeps getting bigger and more affordable.

Deduplication is all about fulfilling this growing demand.

Why is Deduplication Feature Found on Servers?

Servers are central to any organization’s data, as users store their information in shared repositories. Not all users embrace new ways of handling their work, and some feel safer making multiple copies of the same work.

Since most server administrators handle managing and backing up users’ data, using the Windows deduplication feature greatly enhances their productivity.

Data deduplication is a straightforward feature and takes only a few minutes to activate.

Deduplication is one of the server roles found on Windows Servers, and you do not need a restart for it to work.

However, restarting is a safe way to make sure the entire process is configured correctly.

Preparing for Windows Server Deduplication

  • Click on Start
  • Open the Run command window
  • Enter the following command and press Enter (it runs against a selected volume to estimate the storage space you could reclaim): DDPEval.exe
  • Right-click on the volume in Server Manager to activate data deduplication

The wizard will then guide you through the deduplication setup depending on the type of server in place (choose a VDI, Hyper-V, or File Server configuration).
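
The whole preparation can also be done from PowerShell. The drive letter below is an example; UsageType picks sensible defaults for your server type:

```powershell
# Install the deduplication role service (no restart required)
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on a data volume; UsageType can be
# Default (general file server), HyperV, or Backup
Enable-DedupVolume -Volume "E:" -UsageType Default

# Start an optimization job immediately rather than waiting for the schedule
Start-DedupJob -Volume "E:" -Type Optimization
```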

Set up The Timing for Deduplication

Deduplication should run at scheduled times to reduce the strain on existing resources. You should not aim to save storage space at the expense of overworking the server.

The timing should be set when there is little strain on the server to allow for quick and effective deduplication.

Deduplication requires significant CPU time because of the numerous activities and processes involved in each job.

Other deduplication jobs include optimization, integrity scrubbing, and garbage collection. All these activities should run at off-peak hours unless the server has enough resources to withstand system slowdowns.
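
Schedules for these job types can be set from PowerShell. The names and times below are examples of running the heavy jobs outside business hours:

```powershell
# Optimize nightly at 02:00 on weekdays, for at most 5 hours
New-DedupSchedule -Name "NightlyOptimization" -Type Optimization `
    -Start (Get-Date "02:00") -DurationHours 5 `
    -Days @("Monday", "Tuesday", "Wednesday", "Thursday", "Friday")

# Garbage collection and scrubbing are heavier; weekly is usually enough
New-DedupSchedule -Name "WeeklyGC" -Type GarbageCollection `
    -Start (Get-Date "03:00") -Days Saturday
```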

The capacity that deduplication reclaims varies depending on server use and storage available.

General files, ISOs, office applications files, and virtual disks usually consume much of the storage allocations.

Benefits of Windows Server Deduplication

Windows Server deduplication brings several benefits to an organization, including the following:

  • Reduced storage allocation

Deduplication can reduce the storage space needed for files and backups. An enterprise therefore gets more storage space, reducing the annual cost of storage hardware. With enough storage, there is greater efficiency and speed, which can eliminate the need for backup tapes.

  • Efficient volume replication

Deduplication ensures that only unique data is written to the disk, which reduces network traffic.

  • Increased effective network bandwidth

If deduplication is configured to run at the source, then there is no need to transfer files over the network.

  • Cost-effective solution

Since power consumption is reduced, less space is required for extra storage at both local and remote locations. The organization buys less hardware and spends less on storage maintenance, thus reducing overall storage costs.

  • Fast file recovery process

Deduplication ensures faster file recoveries and restorations without straining the day’s business activities.

Features of Deduplication

1. Transparency and Ease of Use

Installation is straightforward on the target volume(s). Running applications and users will not notice when deduplication takes place.

The feature works within NTFS file system requirements. However, files using Encrypted File System (EFS) encryption, files smaller than 32KB, and files with Extended Attributes (EAs) cannot be processed during deduplication.

In such cases, file interaction takes place through NTFS, not deduplication. A file with an alternate data stream will only have its primary data stream deduplicated; the alternate stream is left on the disk.

2. Works on Primary Data

This feature, once installed on the primary data volumes, will operate without interfering with the server’s primary objective.

The feature ignores hot data (files active at the time of deduplication) until they reach a given age in days. Skipping such files maintains the consistency of active files and shortens the deduplication time.

This feature uses the following approach when processing special files:

  • Post-processing: when new files are created, they go directly to the NTFS volume, where they are evaluated on a regular schedule. Background processing confirms file eligibility for deduplication every hour by default, and the confirmation schedule is configurable.
  • File age: a deduplication setting called MinimumFileAgeDays controls how long a file must stay in the queue before it is processed. The default is 5 days; the administrator can set it to 0 to process all files regardless of age.
  • File type and location exclusions: you can instruct the deduplication feature not to process specific file types. You can choose to ignore CAB files, which do not benefit from the process, as well as any file type that is already heavily compressed, such as PNG files. You can also direct the feature to skip a particular folder.
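
These three settings map directly onto PowerShell parameters. The volume, file types, and folder below are illustrative:

```powershell
# MinimumFileAgeDays 0 processes files regardless of age;
# ExcludeFileType skips extensions that gain little from deduplication;
# ExcludeFolder leaves a directory out of processing entirely
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 0 `
    -ExcludeFileType @("cab", "png") `
    -ExcludeFolder @("E:\Scratch")
```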

3. Portability

Any volume under deduplication runs as an atomic unit. The volume can be backed up and moved to a different location.

Moving it to another server means that everything in that volume is accessible at its new site.

The only thing you need to change is the schedule timings, because the native task scheduler controls the schedule.

If the new server location does not have the deduplication feature running, you can only access the files that have not yet been deduplicated.

4. Minimal Use of Resources

The default operations of the deduplication feature use minimal resources on the primary server.

If the process is active and there is a shortage of resources, deduplication surrenders the resources to the active workload and resumes when enough are available.

Here’s how storage resources are utilized:

  • The hash index storage method uses few resources and reduces read/write operations, so it can scale to large datasets while delivering high edit/search performance. The index footprint it leaves behind is extremely low, and it uses a temporary partition.
  • Deduplication verifies the amount of space before it executes. If no storage space is available, it will keep trying at regular intervals. You can schedule and run any deduplication tasks during off-peak hours or during idle time.

5. Sub-file Segmentation

The process segments files into chunks of varying sizes, between 32 and 128 KB, using an innovative algorithm developed by Microsoft and other researchers.

The segmentation splits each file into a sequence of chunks depending on its content. A Rabin fingerprint, a technique based on a sliding-window hash, helps identify the chunk boundaries.

The average segment size is 64KB. Each segment is compressed and placed into a chunk store hidden in a folder inside the System Volume Information (SVI) folder.

A reparse point, which is a pointer to the map of all data streams, replaces the normal file and serves requests for it.

6. BranchCache

Another benefit of deduplication is that its sub-file segmentation and indexing engine is shared with the BranchCache feature.

This sharing is important because, when a Windows Server is running and all the data segments are already indexed, they can be quickly sent over the network as needed, saving a lot of network traffic within the office or branch.

How Does Deduplication Affect Data Access?

The fragments created by deduplication are stored on the disk as file segments spread across the volume, which increases seek time.

As each file is processed, the filter driver works to maintain the sequence by keeping a file’s segments together rather than scattered at random.

Deduplication keeps a file cache to avoid re-reading repeated file segments, helping speed up access. When multiple users access the same resource simultaneously, that access pattern speeds up deduplication for each user.

Here are some important points to note:

  • Not much difference is noticeable when opening an Office document; users cannot tell whether the feature is running or not
  • When copying one bulky file, deduplication performs an end-to-end copy that is likely to be about 1.5 times faster than for a non-deduplicated file
  • When transferring multiple bulky files simultaneously, the cache can help make the transfer about 30% faster
  • When the file-server load simulator (File Server Capacity Tool) is used to test multiple file access scenarios, a reduction of about 10% in the number of users supported is observed
  • Data optimization runs at about 20-35 MB/sec per job, which translates to roughly 100GB/hour for a single 2TB volume running on one CPU core with 1GB of RAM. Multiple volumes can be processed in parallel if additional CPU, disk, and memory resources are available
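
You can check what deduplication is actually achieving on a volume with the built-in status cmdlets; the drive letter is an example:

```powershell
# Per-volume job status, including files optimized and space saved
Get-DedupStatus -Volume "E:" | Format-List

# Savings summary across all deduplicated volumes
Get-DedupVolume | Select-Object Volume, Capacity, SavedSpace, SavingsRate
```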

Reliability and Risk Preparedness

Even when you configure the server environment using RAID, there is still a risk of data corruption and loss attributed to disk malfunctions, controller errors, and firmware bugs.

Other environmental risks to stored data include radiation or disk vibrations.

Deduplication raises the risk of disk corruption: a single file segment referenced by thousands of other files could be located in a bad sector.

Such a scenario could mean losing the data of thousands of users.

Backups

The Windows Server Backup tool uses a selective file restore API that enables backup applications to pull files out of an optimized backup.

Detect and Report

When the deduplication filter comes across a corrupted file or section of the disk, it performs a quick checksum validation on the data and metadata.

This validation helps to recognize any data corruption during file access, hence reducing accumulated failures.

Redundancy

An extra copy of critical data is created, and any file segment with more than 100 references is treated as a most-popular chunk.

Repair

Once the deduplication process is active, scanning and fixing of errors becomes a continuous process.

The deduplication process and host volumes are inspected on a regular basis to scrub any logged errors and fix them from alternative copies.

An optional deep scrubber walks through the whole data set, identifying errors and fixing them where possible.

When the disks are configured to mirror each other, deduplication looks for a good copy on the other side and uses it as a replacement.

If there are no other alternatives, data will be recovered from an existing backup.

Verdict on Deduplication

Some of the features described above do not work in all Windows Server 2012 editions and may be subject to limitations.

Deduplication was built for volumes that support the NTFS data structure.

Therefore, it cannot be used with Cluster Shared Volumes (CSV).

Also, Live Virtual Machines (VMs) and active SQL databases are not supported by deduplication.

Deduplication Data Evaluation Tool

To get a better understanding of the deduplication environment, Microsoft created a portable evaluation tool that installs into the \Windows\System32\ directory.

The tool can be run on Windows 7 and later Windows operating systems.

It runs as DDPEval.exe and supports local drives as well as mapped, unmapped, and remote shares.

If you are using Windows NAS or an EMC /NetApp NAS, you can test it on a remote share.
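
Running the evaluation tool is a single command per path; the folder and share names here are examples:

```powershell
# Estimate potential savings on a local folder and on a remote share.
# DDPEval.exe only reports an estimate; it changes nothing on disk.
& "$env:SystemRoot\System32\DDPEval.exe" "E:\Shares"
& "$env:SystemRoot\System32\DDPEval.exe" "\\NAS01\Archive"
```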

Conclusion

The Windows Server native deduplication feature is becoming increasingly popular.

It mirrors the needs of a typical server administrator working in production deployments.

However, planning for deduplication before implementation is necessary because of the various situations in which its use may not be applicable.

Windows Server 2016 —What’s New in Data Deduplication

Deduplication eliminates repeated data, keeping a single instance. Creating a single instance improves storage utilization and efficiency on networks with heavy transfers.

Some may confuse deduplication with data compression, which identifies repeat data within single files and encodes the redundancy.

In simple terms, deduplication is a continuous process that eliminates excess copies of data; therefore, decreasing storage demands.

Data deduplication applies to Windows Server (Semi-Annual Channel) and Windows Server 2016.

Data deduplication in Windows Server 2016 is a highly optimized, manageable, and flexible process.

Here are the updated and new data deduplication features in Windows Server 2016.

The Updated Features

Here are two of the updated features.

1. Support for Large Volumes

In earlier versions, volumes had to be partitioned so that no single volume held more than about 10TB of data.

However, in Windows Server 2016, data deduplication supports volume sizes of up to 64TB.

  • What is the Added Value?

Volumes in Windows Server 2012 R2 had to be sized appropriately so that optimization could keep up with the rate of data churn.

The implication was that data deduplication only worked well on volumes holding 10TB or less; actual performance also depended on the workload’s write patterns.

  • What is Different?

Windows Server 2012 R2 uses a single thread and a single input/output queue for each volume.

This maximizes optimization and ensures jobs do not fall behind, which would reduce the volume’s overall savings rate. As a result, large data sets had to be broken into smaller volumes.

The recommended volume size depends on the expected churn; the maximum is between 6 and 7TB for high-churn workloads and between 9 and 10TB for low-churn workloads.

Windows Server 2016 handles data deduplication differently: it runs multiple threads and uses multiple input/output queues for every volume.

This delivers performance that was previously only possible by dividing data into smaller volumes.

2. Support for Large Files

In earlier versions, any file approaching 1TB in size was not a good candidate for deduplication.

However, Windows Server 2016 supports files with a maximum size of 1TB.

  • What is the Added Value?

In Windows Server 2012 R2, you could not effectively deduplicate large files due to reduced performance in the deduplication processing queue.

In Windows Server 2016, deduplication of files of up to 1TB is possible.

Consequently, this lets you handle large workloads; for example, deduplicating large backup files.

  • What is Different?

The Windows Server 2016 deduplication process uses new streaming and mapping structures to improve deduplication throughput and access.

Besides, the process can now resume optimization after a failure instead of restarting the entire process. Deduplication now handles files of up to 1TB.

The New Features

Here are three of the new features.

1. Support for Nano Servers

Nano Server support is a new feature that is available in any Nano Server deployment option in Windows Server 2016.

  • What is the Added Value?

Nano Server is a headless deployment option for Windows Server 2016 with a much smaller resource footprint. It starts up quickly and requires fewer updates and restarts than the Server Core deployment option.

2. Simple Backup Support

Windows Server 2012 R2 supported virtualized backup applications, such as Microsoft Data Protection Manager, but only after manual configuration.

Windows Server 2016 adds new defaults that enable seamless data deduplication for virtualized backup applications.

  • What is the Added Value?

In earlier versions of Windows Server, you had to manually tune deduplication settings for this to work; Windows Server 2016 provides a simplified, predefined configuration for virtualized backup applications.

As with General Purpose File Servers, you simply enable deduplication per volume.

3. Support for Clusters Operating System Rolling Upgrade

Data deduplication is capable of supporting the new Cluster OS Rolling Upgrade feature in Windows Server 2016.

  • What is the Added Value?

Failover clusters can now mix nodes running the Windows Server 2012 R2 version of deduplication with nodes running the Windows Server 2016 version.

This improvement provides full access to deduplicated data throughout the rolling upgrade.

Consequently, you can gradually roll out the new version of data deduplication on an existing Windows Server 2012 R2 cluster without downtime during the upgrade.

  • What is Different?

In earlier versions of Windows Server, a failover cluster required all nodes to run the same Windows Server version.

In Windows Server 2016, rolling upgrades allow clusters to run in mixed mode.

What’s New in Storage in Windows Server 2019 and 2016

Windows Server 2016 and 2019 introduce new storage features, including storage migration capabilities.

The Storage Migration Service inventories your data when moving from one platform to another.

This article explains what is new in the storage systems of Windows Server 2016, Windows Server 2019, and the Semi-Annual Channel releases.

We will start by highlighting some of the key features added in the two server systems.

Managing Storage with Windows Admin Center

The Windows Admin Center is a new, locally deployed, browser-based app that works with Windows Server 2019 and other recent versions of Windows.

It provides a central location for managing servers, clusters, and hyper-converged infrastructure, including their storage.

Storage management is built into the Admin Center as part of new server configurations.

Storage Migration Service

The Storage Migration Service is a new technology that simplifies moving from older servers to newer server versions.

A graphical interface displays the data on your existing servers, transfers the data and configurations to the new servers, and then optionally moves the old server identities to the new ones so that settings for apps and users carry over.

Storage Spaces Direct Improvements (Available in Server 2019 only)

Several improvements have been made to Storage Spaces Direct in Windows Server 2019; they are not available in the Windows Server Semi-Annual Channel.

Here are some of the improvements:

1. Deduplication and Compression of ReFS Volume

You can store up to 10X more data on the same storage space by using deduplication and compression for the ReFS file system.

You only need to turn the feature on, with a single click, in Windows Admin Center.

Larger supported sizes, with the option to compress data, amplify the savings rates.

Furthermore, multi-threaded post-processing keeps the performance impact low.

Volumes of up to 64TB are supported, and individual files can be up to 1TB.

2. Native Support for Persistent Memory

Windows Server 2019 comes with native support for persistent memory. This lets you unlock high performance from persistent memory modules such as Intel Optane DC PM and NVDIMM-N.

You can use persistent memory as a cache to accelerate the active working set, or as capacity where consistently low latency is needed.

Of course, you can manage persistent memory the same way you can manage any other storage device in Windows Admin Center or PowerShell.
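As a hedged sketch of that PowerShell management (assuming persistent memory modules are installed, and using the PersistentMemory cmdlets that ship with Windows Server 2019; the region ID below is a placeholder):

```powershell
# List the physical persistent memory modules the OS can see.
Get-PmemPhysicalDevice

# Show regions of persistent memory not yet exposed as disks.
Get-PmemUnusedRegion

# Carve an unused region into a persistent memory disk (region ID 1
# is an assumption; use the ID reported by Get-PmemUnusedRegion).
New-PmemDisk -RegionId 1

# The resulting disk can then be managed like any other disk.
Get-PmemDisk
```

Once created, the persistent memory disk appears alongside ordinary disks in Windows Admin Center and the standard Storage cmdlets.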

3. Nested Resiliency for Two-Node Hyper-Converged Infrastructure on the Edges

The new software resiliency option, inspired by RAID 5+1, helps a two-node cluster survive two hardware failures.

Nested resiliency for a two-node Storage Spaces Direct cluster keeps storage continuously accessible for apps and virtual machines even when one server node fails.

4. Two-Server Cluster Using USB Flash Drive as a Witness

You can use a low-cost USB flash drive plugged into your router as a witness between the two servers in a cluster.

If a server goes down and then comes back, the witness lets the cluster decide which server holds the most up-to-date data.
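As a sketch of the setup (the share path and the local account configured on the router are placeholders), the router exposes the USB drive as an SMB share and the cluster is then pointed at it as a file share witness:

```powershell
# Credentials for the local account the router exposes on its USB share.
$cred = Get-Credential

# Point the two-node cluster's quorum at the router's share as a witness.
Set-ClusterQuorum -FileShareWitness "\\192.168.1.1\witness" -Credential $cred
```

Windows Server 2019 is what makes this possible: the file share witness no longer requires the share to be domain-joined, so a simple router or NAS device works.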

5. Improved Windows Admin Center

A newly built dashboard in Windows Admin Center lets you manage and monitor Storage Spaces Direct, creating, deleting, opening, and expanding volumes with a few clicks.

You can follow IOPS and IO latency from the overall cluster down to the individual hard disks and SSDs.

6. Increased Performance Logs Visibility

You can use the built-in history feature to see your server's resource utilization and performance over time.

More than 50 counters automatically collect memory, compute, storage, and network data and store it on the cluster for up to a year.

This feature works without the need to install or configure anything.

7. Scale up to 4PB for Every Cluster

The Storage Spaces Direct feature in Windows Server 2019 supports up to 4 petabytes (PB), that is, 4,000 terabytes, per cluster.

This lets you reach multi-petabyte scale, which makes sense for media servers and for backup and archival purposes.

Other capacity guidelines are increased as well; for instance, you can create twice as many volumes (64 instead of 32).

Moreover, multiple clusters can be stitched together into a cluster set for even greater scale within a single storage namespace.

8. Accelerated Parity is now 2X Faster

You can now create Storage Spaces Direct Volumes that are part mirror and part parity.

For example, you can mix RAID-1 and RAID-5/6 to harness the advantages of both.

In Windows Server 2019, the performance of mirror-accelerated parity is twice that of Windows Server 2016, thanks to optimizations.
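A mirror-accelerated parity volume can be sketched in PowerShell as follows; the volume name, pool name, tier names, and tier sizes are illustrative assumptions, not prescriptions:

```powershell
# Create a volume that is part mirror (fast writes land here first) and
# part parity (capacity-efficient storage the data later destages to).
New-Volume -FriendlyName "Archive" -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName "S2D*" `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 100GB, 900GB
```

The mirror tier absorbs incoming writes at mirror speed, and ReFS rotates the data into the parity tier in the background, which is where the Server 2019 optimizations pay off.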

9. Drive Latency Outlier Detection

Using proactive monitoring and built-in outlier detection, inspired by Microsoft Azure, you can easily identify drives with abnormal latency.

Misbehaving drives are automatically labeled as such in PowerShell and Windows Admin Center.

10. Manual Delimiting of Volume Allocations to Increase Fault Tolerance

In Storage Spaces Direct, an administrator can now manually delimit the allocation of volumes.

Delimiting allocations can significantly increase fault tolerance in certain circumstances, at the cost of some added management complexity.

Storage Replica

The Storage Replica has the following improvements:

1. Introduction of Storage Replica in Windows Server, Standard Edition

It is now possible to use Storage Replica with Windows Server, Standard Edition, as well as the Datacenter editions.

Running Storage Replica on Windows Server, Standard Edition has the following limitations:

  • Storage Replica replicates a single volume rather than an unlimited number of volumes
  • Volumes can have a size of up to 2TB rather than an unlimited size

2. Storage Replica Log Performance Improvements

Storage Replica includes log improvements that increase replication throughput and reduce latency.

To get the increased performance, all members of the replication group must run Windows Server 2019.

3. Test Failover Improvements

You can mount a temporary snapshot of the replicated storage on the destination server for testing or backup purposes.
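A sketch of such a test failover using the Storage Replica cmdlets; the replication group name, server name, and temporary path below are placeholders:

```powershell
# Mount a writable snapshot of the replicated destination volume
# for testing or backup without interrupting replication.
Mount-SRDestination -Name "ReplicationGroup01" -ComputerName "SRV2" `
    -TemporaryPath "T:\"

# ... run backups or validation against the mounted snapshot here ...

# Discard the snapshot when finished.
Dismount-SRDestination -Name "ReplicationGroup01" -ComputerName "SRV2"
```

Because the mount is a snapshot, changes made during the test are thrown away at dismount and never reach the replication partner.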

4. Windows Admin Center Support

Support for graphical management of replication is available in Windows Admin Center.

This involves server-to-server replication, cluster-to-cluster, and stretch cluster replication.

5. Miscellaneous Improvements

Storage Replica also has the following improvements:

  • Changes to asynchronous stretch cluster behaviors so that automatic failover can take place
  • Multiple bug fixes

SMB

SMB1 and Guest Authentication Removal

Windows Server no longer installs the SMB1 client and server by default, and the ability to authenticate as a guest in SMB2 is off by default.

SMB2/SMB3 Security and Compatibility

More options for security and application compatibility were added, including the ability to disable oplocks in SMB2+ for legacy applications.

This also includes the ability to require signing or encryption on every connection from a client.
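To verify and enforce this posture on an upgraded server, a sketch along these lines can be used (the registry path is the documented location of the client-side guest logon setting; check values before changing production servers):

```powershell
# Confirm SMB1 is absent, and remove it if an in-place upgrade kept it.
Get-WindowsFeature -Name FS-SMB1
Uninstall-WindowsFeature -Name FS-SMB1

# Verify the server is not accepting SMB1 connections.
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Keep insecure guest logons disabled on the SMB client
# (0 = guest access to remote shares is blocked).
Set-ItemProperty `
    -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" `
    -Name AllowInsecureGuestAuth -Value 0 -Type DWord
```

Guest-authenticated shares offer no way to verify the identity of the server, which is why leaving this off closes a tampering avenue.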

Data Deduplication

Data Deduplication Supports ReFS

You no longer need to choose between the advantages of a modern file system (ReFS) and Data Deduplication.

Now, wherever you can enable Data Deduplication, you can also enable ReFS.

Data Port API for Optimized Ingress/egress to Deduplicated Volumes

As a developer, you can now take advantage of data deduplication to store data in an efficient manner via the new Data Port API.

File Server Resource Manager

Windows Server 2019 can prevent the File Server Resource Manager service from creating a change (USN) journal on storage volumes.

This conserves space on each volume, but it disables real-time file classification.

The same change applies to Windows Server, version 1803.
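A minimal sketch of opting out, assuming the documented SkipUSNCreationForSystem registry value under the FSRM service key; apply it before creating or attaching the volumes you want excluded:

```powershell
# Create the Settings key if it does not already exist.
New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\SrmSvc\Settings" `
    -Force | Out-Null

# Tell FSRM not to create USN change journals on new volumes.
# Note: this disables real-time classification on those volumes.
Set-ItemProperty `
    -Path "HKLM:\SYSTEM\CurrentControlSet\Services\SrmSvc\Settings" `
    -Name SkipUSNCreationForSystem -Value 1 -Type DWord
```

Volumes that already have a journal keep it; the value only affects volumes FSRM encounters after it is set.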

What’s New in Storage in Windows Server, Version 1709

Windows Server, version 1709 is the first Windows Server release in the Semi-Annual Channel, a channel that is fully supported in production for 18 months, with a new version arriving every six months.

Storage Replica

Storage Replica's disaster recovery and protection capabilities have been expanded to include:

  • Test Failover

You now have the option of mounting the destination storage through a test failover.

You can mount a temporary snapshot of the replicated storage for testing or backup purposes.

  • Windows Admin Center Support

There is support for graphical management of replication, accessible via Windows Admin Center.

Storage Replica also has the following improvements:

  • Changes to asynchronous cluster behaviors to enable automatic failover
  • Multiple bug fixes

What’s New in Storage in Windows Server 2016

1. Storage Spaces Direct

The Storage Spaces Direct feature facilitates the availability and scalability of storage using servers with local storage.

This implies that it’s now possible to deploy and manage software that control storage systems, unlocking the use of new classes of storage devices.

These devices include SATA, SSD, and NVMe disks. Achieving such storage capabilities may not be possible using clustered Storage Spaces with Shared Disks.

What Value Does this Change Add?

Storage Spaces Direct allows service providers and enterprises to use industry standard servers with local storage.

The idea is to build highly available and scalable software-defined storage.

The use of servers with local storage decreases complexity while increasing scalability, and it permits storage devices that were not previously possible, such as SATA solid state disks to lower the cost of flash storage, or NVMe solid state disks for better performance.

Storage Spaces Direct removes the need for a shared SAS fabric, simplifying deployment and configuration.

Instead, the servers use the network as the storage fabric, leveraging SMB3 and SMB Direct (RDMA) for high-speed, low-latency storage with efficient CPU usage.

Adding more servers to the configuration increases storage capacity and input and output performance.

Storage Spaces Direct is a new feature in Windows Server 2016.

2. Storage Replica

Storage Replica enables block-level replication between servers, as well as stretching of failover clusters between sites, with synchronous replication between servers.

Synchronous replication mirrors data across physical sites with crash-consistent volumes to ensure no data is lost at the file system level.

Asynchronous replication, by contrast, carries a possibility of data loss.

What Value Does this Change Add?

It provides a single-vendor disaster recovery solution for both planned and unplanned outages.

You can use SMB3 transport and gain from proven performance, scalability, and reliability.

It will help you to:

  • Stretch Windows failover clusters further
  • Use Microsoft end-to-end software for storage and clustering, such as Hyper-V, Scale-Out File Server, Storage Replica, Storage Spaces, ReFS/ NTFS, and deduplication

It helps in reducing complexity costs by:

  • Being hardware agnostic, with no specific requirements for storage configurations like DAS or SAN
  • Allowing the use of commodity storage and networking technologies
  • Featuring easy graphical management of nodes and clusters through Failover Cluster Manager
  • Including comprehensive, large-scale scripting options through Windows PowerShell
  • Helping to reduce downtime and enhance large-scale productivity
  • Providing supportability, performance metrics, and diagnostic capabilities

What Works Differently

This functionality is new in Windows Server 2016.

3. Storage Quality of Service

In Windows Server 2016, you can use the Storage Quality of Service (QoS) feature to centrally monitor end-to-end storage performance and create management policies using Hyper-V and CSV clusters.

What Value Does this Change Add?

You can create QoS policies on a CSV cluster and assign them to one or more virtual disks on Hyper-V virtual machines.

Storage performance automatically readjusts to meet the policies as workloads fluctuate.

Each policy can specify a minimum reserve or a maximum limit to apply to a collection of data flows.

For example, a policy can cover a single virtual hard disk, a tenant, a service, or a virtual machine.

You can use Windows PowerShell or WMI to perform the following:

  • Create policies on CSV cluster
  • Assign the policies to virtual hard disks
  • Enumerate policies on the CSV clusters
  • Monitor flow performance and status of the policies

If several virtual hard disks share the same policy, performance is distributed fairly to meet demand within the policy's minimum and maximum settings. In other words, a policy can manage a single virtual hard disk, or multiple virtual hard disks and the virtual machines that make up a service owned by a tenant.
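The workflow in the bullets above can be sketched with the Storage QoS cmdlets; the policy name, IOPS figures, and VM name below are placeholders:

```powershell
# 1. Create a policy with minimum and maximum normalized IOPS on the CSV cluster.
New-StorageQosPolicy -Name "Gold" -MinimumIops 500 -MaximumIops 5000

# 2. Assign the policy to every virtual hard disk of a VM.
$policy = Get-StorageQosPolicy -Name "Gold"
Get-VM -Name "Tenant01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# 3. Enumerate policies and monitor the data flows they govern.
Get-StorageQosPolicy
Get-StorageQosFlow |
    Format-Table InitiatorName, Status, MinimumIops, MaximumIops
```

Because several disks can share one PolicyId, the minimum and maximum are enforced over the combined flow, matching the fair-sharing behavior described above.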

What Works Differently

This is a new feature in Windows Server 2016.

Centralized policy-based management, minimum reserves, and monitoring the flow of every virtual disk across a cluster with a single command were not possible in previous Server releases.

4. Data Deduplication

  • Support for large volumes (updated): Before Windows Server 2016, volumes had to be specifically sized, and anything above 10TB did not qualify for deduplication. Windows Server 2016 supports deduplication on volumes of up to 64TB.

  • Large file support (updated): Before Windows Server 2016, files approaching 1TB could not be deduplicated. Windows Server 2016 supports deduplication of files of up to 1TB.

  • Nano Server support (new): Deduplication is available and fully supported in the Nano Server deployment option of Windows Server 2016.

  • Simple backup support (new): Windows Server 2012 R2 supported virtualized backups, using Microsoft's Data Protection Manager, only after manual configuration. In Windows Server 2016, backup support is simple and seamless.

  • Cluster OS Rolling Upgrade support (new): Deduplication supports the Cluster OS Rolling Upgrade feature available in Windows Server 2016.

5. SMB Hardening Improvements for SYSVOL and NETLOGON Connections

In Windows 10 and Windows Server 2016, client connections to the Active Directory Domain Services default SYSVOL and NETLOGON shares on domain controllers now require SMB signing and mutual authentication (such as Kerberos).

What Value Does this Change Add?

It reduces the possibility of man-in-the-middle attacks.

What Works Differently?

If SMB signing and mutual authentication are unavailable, a Windows 10 or Windows Server 2016 computer will not access domain-based Group Policy scripts.

Note also that the registry values for these settings are not present by default, yet the hardening rules still apply until they are overridden through Group Policy or the relevant registry values.

6. Work Folders Improvements

Improved change notification is available when the Work Folders server runs Windows Server 2016 and the Work Folders client runs Windows 10.

What Value Does this Change Add?

In Windows Server 2012 R2, when file changes are synced to the Work Folders server, clients are not notified of the change and can wait up to 10 minutes for the update to materialize.

When the server runs Windows Server 2016, Work Folders immediately notifies Windows 10 clients, and the synchronization changes take effect right away.

What Works Differently

This is a new feature in Windows Server 2016.

For it to work, the client accessing the Work Folders must run Windows 10.

If you are using older clients, or if the Work Folders server runs Windows Server 2012 R2, the client will simply continue to poll every 10 minutes for changes.

7. ReFS Improvements

ReFS (Resilient File System) supports large-scale storage deployments with diverse workloads, delivering reliability, resiliency, and scalability for your data.

What Value Does this Change Add?

ReFS brings in the following improvements:

  • Implementing new storage tiers that help deliver faster performance and increased capacity
  • Enabling multiple resiliency types on the same virtual disk through mirroring and parity tiers
  • Enhancing responsiveness to drifting working sets
  • Introducing block cloning and improvements to VM operations such as .vhdx checkpoint merge operations
  • Recovering leaked storage and helping protect data from corruption

What Works Differently?

These functionalities are new in Windows Server 2016.

Conclusion

Windows Server 2019 offers a great many features; this article covered those that are fully supported.

At the time of writing, some features were only partially supported in earlier versions but are getting full support in the latest Server releases.

From this overview, you can see that Windows Server 2019 makes for a worthwhile upgrade.

Windows Server Disk Quota – Troubleshooting

Disk Quota Challenges and Troubleshooting

Disk quotas come in handy and allow system administrators to equitably distribute disk space among multiple users in shared servers or PCs. This avoids a situation where a careless user ends up filling the entire hard drive and wreaking havoc in the system. However, quotas do not always work as intended.

As easy as setting up disk quotas may seem, things can sometimes go a bit askew. Occasionally, users appear to be allocated less disk space than was specified in the settings. This genuinely happens when the server runs out of space. In other situations, however, users merely get the impression that they have received less hard drive space than was configured. The reason is a common misconception about what counts toward a user's quota: quotas take into account all files owned by a user, and this includes files in the Recycle Bin. This is especially true when disk quotas are implemented on local PCs; since the Recycle Bin resides on the PC, this discrepancy is most likely to occur there.

Another unusual situation is the apparent unavailability of space even after a user relinquishes ownership of their files. A user may create a file and change its ownership, yet the file may still be counted against their quota.

Another confusing scenario involves compressed folders. Windows looks at compressed folders not in their compressed size, but in their original size: quotas consider compressed files at their original, uncompressed size, not the space they currently occupy on the hard drive in compressed form.

Sometimes, when the disk space limit is exceeded, the user may find that deleting files on the volume does not free up space as expected. This behavior has been noted in Windows Server 2008 R2, and it happens because the file context structure is not filled in correctly when the files are deleted.

As a solution to this issue, Microsoft released a hotfix which can be downloaded from their official site via this link https://support.microsoft.com/hotfix/kbhotfix.aspx?kbnum=2679054&kbln=en-US

Once you apply the hotfix, run the command below

dirquota quota scan /path:d:\users\scratch

For instance, the command above rescans the quota for the scratch folder located in the users directory on drive D:.

After running the command, reboot the system to effect the hotfix settings.

If a user's hard drive is formatted with the FAT or FAT32 filesystem, it must be reformatted to NTFS, since NTFS is the only filesystem that understands quotas and file ownership. This compels the system administrator to first back up the files contained on the FAT and FAT32 partitions and then format the volumes as NTFS, which can be quite tedious and cumbersome. It is therefore important to ensure all volumes are formatted as NTFS from the start if you plan to have several users working with or backing up data on the system, because disk quotas work only with NTFS volumes.

Windows Server Disk Quota – Setup and Configure

In the previous post, we looked at the disk quota functionality and how quotas come in handy for limiting disk space utilization on shared systems. This is crucial in ensuring that all users get an equitable space allocation and that system performance stays at an optimal level. In this post, we'll take a practical approach and see how to manage and control disk space utilization so that no user can fill up the hard disk and leave no space for anyone else.

To recap the important points about disk quotas: quotas can only be applied to volumes formatted with the NTFS filesystem. They are mostly used on corporate networks but can just as well be used on a home PC running a Windows OS, including basic Windows 10 Home. You can set quotas per individual user or apply them to everyone; however, you cannot impose limits on groups. As a best practice, quotas should be configured on a per-volume basis, not per computer, and once in force, newly added users pick them up as expected.

That said, let's dive deeper and see how you can implement this functionality to manage and control hard drive space utilization.

Setting Quota Limits

Although quotas can be implemented on any disk volume, it can prove tricky to set limits on drive C:, the Windows installation volume. As much as possible, enable quotas on secondary volumes or partitions, and plan accordingly. There are two ways of setting quotas: per account or per volume. Let's first see how to set quotas on a per-account basis.

Setting up Quotas on a Per-Account Basis on Windows

If you want to set disk space limit on end users, while at the same time having your account occupy unlimited space, follow the steps outlined below:

  1. Fire up File Explorer using the Windows key + E shortcut.
  2. On Windows 10, locate This PC and click on it.
  3. Under “Devices and drives,” right-click the drive you wish to manage and, in the menu that appears, select the Properties option.
  4. Select the Quota tab.
  5. Click the Show Quota Settings button.
  6. The Quota Settings window will open. Check the first option, Enable quota management.
  7. Just below the option in step 6, check the Deny disk space to users exceeding quota limit option. This enables disk space limitation.
  8. Next, click the Quota Entries button at the bottom right corner of the window.
  9. If the account you want to restrict is not listed, click Quota and select New Quota Entry.
  10. In the “Select Users” dialog, click the Advanced button. A pop-up window appears.
  11. Next, click the Find Now button.
  12. A list of user accounts appears at the bottom of the window. Select the account you want to limit.
  13. Press OK.
  14. Press OK again in the previous window.
  15. Select the Limit disk space to radio button.
  16. Set the desired amount of space and specify the unit (for instance, MB, GB, or TB).
  17. Set the amount of space at which a warning is triggered and specify the unit (for instance, MB, GB, or TB).
  18. Click Apply.
  19. Finally, click OK.

After completing the above procedure, the quotas take effect as soon as users log in. Users will be restricted to the amount of disk space set in step 16 and will get a warning when approaching the limit specified in step 17.
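The same per-account limit can also be scripted with the built-in fsutil tool instead of clicking through the dialogs; the byte values and the account name below are illustrative:

```powershell
# Turn on quota enforcement (deny writes at the limit) for the D: volume.
fsutil quota enforce D:

# Warn CONTOSO\jdoe at ~9 GB and deny writes at 10 GB.
# fsutil takes the threshold and limit in bytes: threshold first, then limit.
fsutil quota modify D: 9663676416 10737418240 CONTOSO\jdoe

# Review all quota entries on the volume.
fsutil quota query D:
```

This is handy when you need to apply the same limits to many accounts, since the modify command can simply be looped over a list of usernames.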

Setting up Quotas on a Per-Volume Basis on Windows

Should you decide to limit the available storage space for all users, follow the steps outlined below:

  1. Fire up File Explorer using the Windows key + E shortcut.
  2. On Windows 10, locate This PC and click on it.
  3. Under the “Devices and drives” section, right-click the drive you wish to manage and, in the menu that appears, select the Properties option.
  4. Click the Quota tab.
  5. Click the Show Quota Settings button.
  6. The Quota Settings window will open. Check the first option, Enable quota management.
  7. Next, check the Deny disk space to users exceeding quota limit option. This enables disk space limitation.
  8. Select the Limit disk space to option.
  9. Set the desired amount of space and specify the unit (e.g., MB or GB).
  10. Set the amount of space at which a warning is triggered and specify the unit (e.g., MB or GB).
  11. Click Apply.
  12. Click OK.
  13. Finally, reboot your computer.

Once you have completed the above procedure, every account on the system will be limited to its share of the total available disk space. A warning will alert users that they are approaching their maximum storage quota. Once the threshold is reached, users can no longer create or store more files on the volume; they will have to either delete existing files or move them to another location.

You can always adjust the storage quota up or down by changing the Limit disk space to and Set warning level to options in steps 8 and 10.

If you later decide that you no longer want to restrict how much disk space users can consume on a drive, follow the same instructions, but in step 8 select the Do not limit disk usage option and uncheck both the Deny disk space to users exceeding quota limit and the Enable quota management options.

In summary, we have seen how to plan and implement disk quotas on Windows systems, both per user account and per volume. In the next post, we'll cover some of the challenges that are likely to occur and how to work around them.

Windows Server Disk Quota – Overview

The Windows Server system comes with a very handy feature that allows the creation of many user accounts on a shared system. Users can log in and have their own disk space and other custom settings. The drawback, however, is that users get unlimited disk space, and over time the space fills up, leading to a slow or malfunctioning system, which is a real mess. Have you ever wondered how to avert this situation and set per-user limits on disk volume usage?

Worry no more: to overcome the scenario described above, Windows provides the disk quota functionality. This feature lets you set limits on hard disk utilization so that users are restricted in the amount of disk space they can use for their files. The functionality is available on both Windows and Unix-like systems, such as Linux, that are shared by many users. On Linux, it supports the ext2, ext3, ext4, and XFS filesystems. On Windows, it is supported in Windows 2000 and later versions, and it can only be configured on NTFS file systems. So, if you are starting out with a Windows server or client system, you may want to consider formatting the volumes as NTFS to avert complications later on. Quotas can be applied to both client and server systems, such as Windows Server 2008, 2012, and 2016. Note also that quotas cannot be configured on individual files or folders; they can only be set on volumes, and the restrictions apply to those volumes only. To administer a disk quota, you must either be an administrator or have administrative privileges, that is, be a member of the Administrators group.

The idea behind setting limits is to prevent the hard disk from filling up and thereby causing the system or server to freeze or behave abnormally. When a quota is exceeded, the user receives an “insufficient disk space” error and cannot create or save any more files. A quota is a limit, normally set by the administrator, that restricts disk space utilization. It prevents careless or unmindful users from filling up the disk space and causing a host of other problems, including slowing down or freezing the system. Quotas are ideal in enterprise environments where many users access the server to save or upload documents. An administrator may want to assign a maximum disk space limit so that end users are confined to uploading work files only, such as Word, PowerPoint, and Excel documents. The idea is to keep them from filling the disk with non-essential personal files such as images, videos, and music, which take up a significant amount of space. A disk quota can be configured on a per-user or per-group basis. A perfect example of disk quota usage is in web hosting platforms such as cPanel or Vesta CP, where users are allocated a fixed amount of disk space according to their subscription plan.

When a disk quota system is implemented, users cannot save or upload files beyond the limit threshold. For instance, if an administrator sets a 10GB disk space limit for all logon users, no user can save files beyond that 10GB. If the limit is exceeded, the only way out is to delete existing files, ask another user to take ownership of some files, or ask the administrator, who is the god of the system, for more space. Note that you cannot reclaim quota by compressing files: quotas are based on uncompressed sizes, and Windows treats compressed files according to their original uncompressed size. There are two types of limits: hard limits and soft limits. A hard limit is the maximum space the system will grant an end user. If, for instance, a hard limit of 10GB is set on a hard drive, the end user can no longer create or save files once the 10GB limit is reached. This forces them to find an alternative storage location or delete existing files.

A soft limit, on the other hand, can temporarily be exceeded by an end user, but usage should not go beyond the hard limit. As usage approaches the hard limit, the end user receives a string of email notifications warning them about it. In a nutshell, a soft limit gives you a grace period; a hard limit does not. A soft limit is set slightly below the hard limit: if a hard limit of, say, 20GB is set, a soft limit of 19GB would be appropriate. It is also worth mentioning that end users can raise their soft limits up to the hard limit, and can lower them to zero. As for hard limits, end users can lower them but cannot increase them. As a courtesy, soft limits are often configured for C-level executives so they get friendly reminders when they are about to reach the hard limit.
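On a file server with FSRM installed, hard and soft limits can be sketched with the FileServerResourceManager cmdlets; the folder paths and sizes below are placeholders (note that FSRM quotas apply to folders, unlike the per-user NTFS quotas discussed above):

```powershell
# A hard quota: writes into the folder are denied once 20GB is reached.
New-FsrmQuota -Path "D:\Shares\Finance" -Size 20GB `
    -Description "Hard limit for the Finance share"

# A soft quota: usage beyond 19GB is only monitored and reported, not blocked.
New-FsrmQuota -Path "D:\Shares\Execs" -Size 19GB -SoftLimit `
    -Description "Courtesy soft limit for executives"

# Review the quotas currently in force.
Get-FsrmQuota
```

The -SoftLimit switch is what distinguishes the two behaviors: omit it and the quota denies writes at the limit; include it and the quota only tracks and notifies.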

In summary, we have seen how handy disk quotas are, especially on a PC or server shared by many users. Their ability to limit disk space utilization ensures that the disk is not filled up by users, which would lead to malfunctioning or ‘freezing’ of the server. In our next topic, we'll explain in detail how to apply and implement quotas.

File System Attacks on Microsoft Windows Server

The most common attacks on Microsoft Windows Server systems are Active Directory targeted attacks, based on the fact that AD is the “heart” of any Windows-based system. A bit less common, but still very dangerous (and interesting), are file system attacks.

In this article, we investigate the most common types of file system attacks and the protection against them.

The goal of file system attacks is always the data: pieces of information stored on a server that are important, for whatever reason, to whoever planned the attack. To get to the data, the first thing an attacker needs is credentials; the more elevated the account, the better.

In this article, we will not cover credential theft, which could be a topic in itself. Instead, we will assume that the attacker has already breached the organization and obtained Domain Administrator credentials.

Finding File Shares

The first step is finding the data: the place where it “lives”.

This is where the tools come to the fore. Most of the tools attackers use are penetration testing tools, such as smbmap in our example, or PowerShell (we will show both ways).

SMBMap, as its GitHub page says, “allows users to enumerate samba share drives across an entire domain. List share drives, drive permissions, share contents, upload/download functionality, file name auto-download pattern matching, and even execute remote commands. This tool was designed with pen testing in mind, and is intended to simplify searching for potentially sensitive data across large networks”.

So, using smbmap's features, attackers can find all the file shares on the targeted hosts and determine what sort of access and permissions they have, along with more detailed information about every file share on the system.
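For illustration, a typical smbmap enumeration looks like this (the host, domain and credentials are hypothetical):

```shell
# List shares and the effective permissions across a host,
# using the stolen domain account
smbmap -H 192.168.1.20 -d CORP -u administrator -p 'P@ssw0rd'

# Recursively list the contents of one interesting share
smbmap -H 192.168.1.20 -d CORP -u administrator -p 'P@ssw0rd' -R 'Finance'
```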

Another common way of determining the data location is PowerShell-based.

By definition, PowerSploit is a collection of Microsoft PowerShell modules that can be used to aid penetration testers during all phases of an assessment.

Like smbmap, PowerSploit has a huge number of features. For finding data shares, attackers use the Invoke-ShareFinder cmdlet, which, in combination with other PowerSploit features, shows exactly the same things as smbmap: all the information necessary to access and use the data.
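A sketch of that step (the script path below is a hypothetical example of a local PowerSploit/PowerView checkout):

```powershell
# Load PowerView, the PowerSploit recon module that provides Invoke-ShareFinder
Import-Module .\PowerSploit\Recon\PowerView.ps1

# Enumerate shares across the domain and check which ones
# the current account can actually read
Invoke-ShareFinder -CheckShareAccess
```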

Protection

Of course, the examples above are just a brief description of attacks that can list your data shares to a potential attacker; nevertheless, it is clear that listing your data is the first step toward getting it.

So here are some recommended actions to protect your system:

Removing open shares: Reduce open shares as much as possible. It is fine to have some if a job explicitly requires them, but often open shares are simply the result of sloppily assigned permissions. Check your default permissions (default permissions are equivalent to open ones), change them appropriately, and avoid giving a potential attacker an easy listing.

Monitor first-time access activity: This is more an admin tip than a protection method, but it can be important. If a user has rights to a share but has never used it, and all of a sudden the activity on that account changes and steps outside of “normal”, it could be a sign that the account's credentials have been hijacked.

Check for potentially harmful software: not malware as such, but a hint. SmbMap is built in Python, so a sudden installation of Python software, or of a PowerSploit module, on your system could be an early alarm that something suspicious is going on on your servers.

Finding Interesting Data

So now the potential attacker knows where the data on our hypothetical server “lives”. The next step is narrowing the data down to what is “interesting”. There can be huge numbers of files in even the smallest organization. How can the attacker know which data he or she needs?

With PowerSploit, the functionality used is called Invoke-FileFinder. It has a lot of filtering options to narrow the data down to the “interesting” items and export them to CSV files, which allows the attacker to explore the results on his own system at whatever pace he wants. After identifying the targets, the attacker can mount a targeted attack, move the needed files to a staging area, and transport them out of the network (via FTP, or even a Dropbox trial account).
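A hedged sketch of that filtering step (the search terms and output path are hypothetical, and the parameter names follow PowerView's Invoke-FileFinder; treat them as illustrative rather than authoritative):

```powershell
# Search domain shares for files whose names suggest credentials or
# finance data, and export the hits to CSV for offline review
Invoke-FileFinder -Terms 'pass','secret','budget' -OutFile C:\Temp\hits.csv
```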

The same goes for smbmap. Just like PowerSploit, it can filter the data with the options the tool provides and show only the data the attacker is interested in, with the same outcome: obtaining the information.

Protection

At this point, the hypothetical attack is in its second phase. The attacker has successfully listed the files and found the most interesting ones. Only the easy part is left undone: taking the data. How do you protect against that? Together with the methods mentioned earlier, the following can help an administrator fortify the system and its files.

Password rotation: This can be a very important measure, especially for services and applications that store passwords in the file system. Regularly rotating passwords and checking file contents presents a very large obstacle for the attacker and will make your system more secure.

Tagging and encryption: In combination with Data Loss Prevention, these will highlight and encrypt important data, which stops the simpler types of attack, or at least stops them from getting the important data.

Persistence

This is the final part of the file system attack. In our hypothetical scenario, the attacker has listed and accessed data on the penetrated system. Here we will describe how attackers persist in the system, even after they get kicked out the first time.

Attackers hide some of their data in the NTFS file system itself, more precisely in Alternate Data Streams (ADS). NTFS stores a file's data in the $DATA attribute of that file. Malware authors and other “bad guys” use ADS as an entry point, but they still need credentials.
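ADS is easy to demonstrate on a test machine with built-in PowerShell cmdlets (the file path and stream name are arbitrary examples):

```powershell
# Write a normal file, then attach a hidden alternate data stream to it
Set-Content -Path C:\Temp\report.txt -Value "visible text"
Set-Content -Path C:\Temp\report.txt -Stream hidden -Value "attacker payload"

# Ordinary directory listings show only the primary $DATA stream;
# the extra stream is revealed only by asking for streams explicitly
Get-Item    -Path C:\Temp\report.txt -Stream *
Get-Content -Path C:\Temp\report.txt -Stream hidden
```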

So, as usual, they can be stopped by correct use of permissions: do not grant “write” permission to any account that is not specifically assigned to write operations.

File system attacks are tricky, but they leave traces, and in general most of these attacks can be prevented by a system administrator's behavior and foresight. In this field we can truly say it is better to prevent than to heal, and it is clear that only knowing your system fully, combined with full-time administration and monitoring, can make your system safe.

Do you want to avoid Unwanted File System Attacks on Microsoft Windows Server?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

Introduction to Data Deduplication on Windows Server 2016

Data Deduplication is a Microsoft Windows Server feature, initially introduced in Windows Server 2012 edition. 

As a simple definition, data deduplication is the elimination of redundant data in a data set, storing only one copy of the same data. It is done by identifying duplicate byte patterns through data analysis, removing the duplicate data, and replacing it with a reference pointing to the single stored piece of data.

In 2017, according to IBM, the world's data creation output was 2.5 quintillion (10^18) bytes a day. That fact shows that today's servers handle huge portions of data in every aspect of human life.

Certainly, some percentage of that is duplicated data in one form or another, and that data is nothing more than an unnecessary load on servers.

Microsoft saw the trend back in 2012, when Data Deduplication was introduced, and kept developing it, so in the Windows Server 2016 system Data Deduplication is more advanced, and more important.

But let's start with 2012, and understand the feature in its basics.

Data Deduplication Characteristics: 

Usage: Data Deduplication is very easy to use. It can be enabled on a data volume in “one click”, with no delays or impact on system functionality. In simple terms, if a user requests a file, he will get it as usual, whether or not that file has been affected by the deduplication process.

Deduplication does not target all files. For example, files smaller than 32 KB, encrypted files (encrypted using EFS), and files that have extended attributes are not affected by the deduplication process.

If a file has an alternate data stream, only the primary stream will be affected; the alternate stream will not.

Deduplication can be used on primary data volumes without affecting files that are being written to, until the files reach a certain age. This gives the feature great performance on active files and savings on the rest. It sorts files into categories by criteria, and those categorized as “in policy” files are deduplicated, while the others are not.

Deduplication does not change the write path of new files. New files are written directly to NTFS and evaluated later by a background monitoring process.

When files reach a certain age, the MinimumFileAgeDays setting (previously set up by the admin) decides whether they are eligible for deduplication. The default is 5 days, but it can be changed down to a minimum of 0 days, which processes files regardless of age.

Some file types can be excluded, such as PNG or CAB files, which are already compressed, if it is decided that the system would not benefit much from processing them.
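Both of the settings just mentioned can be adjusted with the Set-DedupVolume cmdlet (a sketch; E: is an example volume letter):

```powershell
# Process files regardless of age (the default is 5 days)
Set-DedupVolume -Volume E: -MinimumFileAgeDays 0

# Skip file types that will not benefit much, e.g. already-compressed formats
Set-DedupVolume -Volume E: -ExcludeFileType png,cab
```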

If there is a need to back up and restore to another server, deduplication will not cause problems. All settings are maintained on the volume, and if it is relocated, they will be relocated too, except the schedule settings, which are not written on the volume. If the volume is relocated to a server that does not use deduplication, users will not be able to access the files affected by the process.

Resource Control 

The feature is made to follow the server's workload and adapt to system resources. Servers usually have roles to fill, and storage, as seen by the admin, is only there to store background data, so deduplication adapts to that philosophy. If there are resources to deduplicate, the process runs; if not, it stands by and waits for resources to become available.

The feature is designed to use few resources and to reduce the input/output operations per second (IOPS), so it can scale to large data sets and improve performance, with an index footprint of only 6 bytes of RAM per chunk (average chunk size 64 KB) and temporary partitioning.

As mentioned, deduplication works on the “chunks” principle: an algorithm chunks a file into 64 KB pieces, compresses them, and stores them in a hidden folder. If a user requests that file, it is “regenerated” from the pieces and served to the user.
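The chunk-and-reference principle can be sketched roughly as follows. This is a deliberately simplified, fixed-size illustration (the real engine uses variable-size chunks averaging 64 KB, compresses them, and keeps them in its own hidden chunk store); the file path is hypothetical:

```powershell
# Split a file into fixed 64 KB chunks, hash each chunk, store every
# unique chunk once, and keep only an ordered list of references.
$chunkSize = 64KB
$store     = @{}   # chunk hash -> chunk bytes (the single stored copy)
$fileMap   = @()   # ordered hashes: the "reference pointers" for the file

$bytes  = [System.IO.File]::ReadAllBytes("C:\Data\example.bin")
$sha256 = [System.Security.Cryptography.SHA256]::Create()

for ($i = 0; $i -lt $bytes.Length; $i += $chunkSize) {
    $len   = [Math]::Min($chunkSize, $bytes.Length - $i)
    $chunk = [byte[]]$bytes[$i..($i + $len - 1)]
    $hash  = [BitConverter]::ToString($sha256.ComputeHash($chunk))
    if (-not $store.ContainsKey($hash)) { $store[$hash] = $chunk }  # store once
    $fileMap += $hash                                               # reference
}
"Chunks: $($fileMap.Count), unique chunks stored: $($store.Count)"
```

Reading the file back means walking `$fileMap` and concatenating the referenced chunks, which is the “regeneration” described above.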

BranchCache: the feature with which the sub-file chunking and indexing engine is shared. If needed, an already-indexed chunk can be sent over the WAN to a branch office, saving a lot of time and data.

Is there fragmentation, and what about data access?

A question that comes up when reading about deduplication is fragmentation.

Is there fragmentation on the hard drive, caused by chunks being spread around it?

The answer is no. Deduplication's filter driver keeps sequences of unique chunks together in one disk locality, so the distribution is not random. In addition, deduplication has its own cache, so when a file is requested multiple times across an organization, the access pattern speeds things up instead of starting multiple file “recovery” processes, and the user gets the same response time as with a file without deduplication. When copying one large file, we see end-to-end copy times that can be 1.5 times what they are on a non-deduplicated volume. But the real quality and savings come when copying multiple large files at the same time: thanks to the cache, copy times can speed up by an amazing 30%.

Deduplication Risks and Solutions

Of course, like all other features, this way of working carries some risks.

In cases of any type of data corruption, there are serious risks, but there are solutions too.

There is a possibility that errors caused by disk anomalies, controller errors, firmware bugs, or environmental factors such as radiation or disk vibration corrupt chunks, and a corrupted chunk can cause major problems, such as the loss of multiple files. But good admin organization, usage of backup tools, on-time corruption detection, redundant copies and regular checkups can minimize the risks of corrupted data and losses.

Deduplication in Windows Server 2016 

As with all other features, data deduplication went through some upgrades and new features in the latest edition of Microsoft Server. 

We will describe the most important ones, and show how to enable and configure the feature in a Microsoft Server 2016 environment.

Multithreading  

Multithreading is flagged as the most important change in 2016 compared with Windows Server 2012 R2. On Server 2012 R2, deduplication operates in single-threaded mode and uses one processor core per volume. Microsoft saw this as a performance limit, and in 2016 introduced a multi-threaded mode: each volume now uses multiple threads and multiple I/O queues. This changed the size limits per file and per volume. In Server 2012 R2 the maximum volume size was 10 TB; the 2016 edition supports 64 TB volumes and 1 TB files, which represents a huge breakthrough.

Virtualization Support 

In the first edition of the deduplication feature (Microsoft Windows Server 2012), there was a single type of deduplication, created only for standard file servers, with no support for continuously running VMs.

Windows Server 2012 R2 started using the Volume Shadow Copy Service (VSS): deduplication optimizes data through optimization jobs, while VSS captures and copies stable volume images for backup on running server systems. With the usage of VSS, Microsoft introduced virtual machine deduplication support, as a separate deduplication type, in the 2012 R2 system.

Windows Server 2016 went one step further and introduced another type of deduplication, designed specifically for virtualized backup servers such as DPM (Data Protection Manager).

Nano server support  

Nano Server is a minimal-component but fully operational Windows Server 2016 installation, similar to the Windows Server Core editions but smaller and without GUI support, ideal for purpose-built, cloud-based apps, infrastructure services, or virtual clusters.

Windows Server 2016 fully supports the deduplication feature on that type of server.

Cluster OS Rolling Upgrade support 

Cluster OS Rolling Upgrade is a Windows Server 2016 feature that allows upgrading the operating system of cluster nodes from Windows Server 2012 R2 to Windows Server 2016 without stopping Hyper-V. It is done using the so-called “mixed mode” operation of the cluster. From the deduplication angle, this means that the same data can be located on nodes with different versions of deduplication. Windows Server 2016 supports mixed mode and provides access to deduplicated data while the cluster upgrade is in progress.

Installation and Setup of Data Deduplication on Windows Server 2016 

In this section, we give an overview of the best-practice installation and setup of Data Deduplication on a Windows Server 2016 system.

As usual, everything starts with a role. 

In Server Manager, choose Data Deduplication (located in the drop-down menu of File and Storage Services), or use the following PowerShell cmdlet (as administrator):

Install-WindowsFeature -Name FS-Data-Deduplication 

Enabling And Configuring Data Deduplication on Windows Server 2016 

For Gui systems, deduplication can be enabled from Server manager – File and Storage services – Volumes, selection of volume, then right-click and Configure Data Deduplication. 

After selecting the desired type of deduplication, it is possible to specify file types or folders that will not be affected by the process.

After that, the schedule needs to be set up with a click on the Set Deduplication Schedule button, which allows selection of days, weeks, start time, and duration.

Through a PowerShell terminal, deduplication can be enabled with the following command (E: is an example volume letter):

Enable-DedupVolume -Name E:  -UsageType HyperV 

Jobs can be listed with the command:

Get-DedupSchedule 

And scheduled with the following command (example: a Garbage Collection job):

Set-DedupSchedule -Name “OffHoursGC” -Type GarbageCollection -Start 08:00 -DurationHours 5 -Days Sunday -Priority Normal 
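Besides scheduling, a job can also be started manually and the resulting savings inspected; a sketch using the example volume E: from above:

```powershell
# Run an optimization job immediately instead of waiting for the schedule
Start-DedupJob -Volume E: -Type Optimization

# Check progress and results: saved space, number of optimized files,
# and the overall savings rate for the volume
Get-DedupStatus -Volume E:
Get-DedupVolume -Volume E:
```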

These are only the basics of the deduplication PowerShell commands. There are many more deduplication-specific cmdlets, which can be found at the following link:

 https://docs.microsoft.com/en-us/powershell/module/deduplication/?view=win10-ps 


How to Configure NFS in Windows Server 2016

NFS (Network File System) is a client-server file system that allows users to access files across a network and handle them as if they were located in a local directory. It was developed by Sun Microsystems, Inc., and it is common on Linux/Unix systems.

Since Windows Server 2012 R2, it has been possible to configure it on Windows Server as a role and use it with Windows or Linux machines as clients. Read on to learn how to configure NFS in Windows Server 2016.

How to install NFS to Windows Server 2016 

Installation of the NFS (Network File System) role is no different from the installation of any other role. It starts from the “Add Roles and Features” wizard.

With a few clicks on the “Select server roles” page, under File and Storage Services, and the expansion of File and iSCSI Services, the system will show the “Server for NFS” checkbox. Installing that role will enable the NFS server.
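The same role can also be installed from PowerShell (run as administrator):

```powershell
# Install the Server for NFS role together with its management tools
Install-WindowsFeature -Name FS-NFS-Service -IncludeManagementTools
```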

Configuring NFS on Windows Server 2016

After installation, the role needs to be configured properly. The first stage is choosing or creating a folder for the NFS (Network File System) share.

Right-clicking the folder and choosing Properties brings up the NFS Sharing tab, with the Manage NFS Sharing button as part of the tab.

It opens the NFS Advanced Sharing dialog box, with authentication and mapping options, as well as a “Permissions” button.

Clicking the “Permissions” button opens the Type of Access drop-down list, with the possibility of root user access and the permission level.

By default, any client can access the NFS shared folder, but it is possible to restrict access to specific clients by clicking the Add button and typing the client's IP address or hostname.
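The GUI steps above can also be sketched in PowerShell (the share name, folder path and client address are hypothetical examples):

```powershell
# Create the NFS share with read/write permission
New-NfsShare -Name "Export" -Path "D:\NfsShare" -Permission readwrite

# Limit access to one specific client host and allow root access for it
Grant-NfsSharePermission -Name "Export" -ClientName "192.168.1.50" `
    -ClientType host -Permission readwrite -AllowRootAccess $true
```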

 Mount NFS Shared Folder on Windows Client 

The steps above make the NFS (Network File System) server ready for work.

To test it, the chosen NFS folder needs to be mounted on a Linux or Windows client with the following steps:

  1. Activate the feature on the client via Control Panel / Programs and Features / Services for NFS / Client for NFS.
  2. After installing the service, mount the folder with the following command:
mount \\<NFS-Server-IP>\<NFS-Shared-Folder> <Drive-Letter>:

The command maps the folder as a drive and assigns the chosen letter to it.

Mount NFS Shared Folder on Linux Client  

Even though NFS is native to Linux/Unix systems, the folder still needs to be mounted to the system via a command, similar to Windows systems.

mount -t nfs <NFS-Server-IP>:/<NFS-Shared-Folder> /<Mount-Point>

 
