Storage Replication in Windows Server 2016

Storage Replica is a new technology feature in Windows Server 2016. It facilitates the replication of volumes between servers or clusters for disaster recovery. It also allows users to create stretch failover clusters that span two sites, with all nodes kept in sync.

Note: This feature is only available in the Datacenter edition of Windows Server 2016.

Storage Replica supports both synchronous and asynchronous replication.

  • Asynchronous replication mirrors data across sites beyond metropolitan ranges, over network links with higher latencies, without any guarantee that both sites have identical copies of the data at the moment of failure.
  • Synchronous replication mirrors data within a low-latency network using crash-consistent volumes, ensuring zero data loss at the file-system level during a failure.

Why You Need Storage Replication

Storage Replica addresses the modern requirements for disaster recovery and preparedness in Windows Server 2016 Datacenter Edition. For the first time, Windows Server offers users the peace of mind of zero data loss and the ability to synchronously protect data across different racks, floors, buildings, campuses, cities, and countries.

After a disaster strikes, the data will be accessible elsewhere without any loss. The same principle applies before disaster strikes: Storage Replica lets users switch workloads to safer locations when given a few moments' warning of an impending catastrophe (again, without any data loss).

Storage Replica is also reliable in that it supports asynchronous replication for longer ranges and higher-latency networks. Because it is not checkpoint-based, the delta of changes tends to be far smaller than with snapshot-based products. Storage Replica also operates at the partition layer, so it can replicate all VSS snapshots created by Windows Server or backup software; this permits the use of application-consistent snapshots while unstructured user data is replicated synchronously.

Storage Replica also lets users decommission existing file replication systems, such as DFS Replication, that were pressed into duty as low-end disaster recovery remedies. While DFS Replication works well over very low-bandwidth networks, its latency is very high most of the time. This is largely caused by its requirement for files to close and by its artificial throttles, which are meant to prevent network congestion.

Supported Configurations

Stretch Cluster allows users to configure storage and compute in a single cluster, where some nodes share one set of asymmetric storage and some nodes share the other, and then replicate synchronously or asynchronously with site awareness. This scenario can leverage Storage Spaces with shared SAS storage, SAN, and iSCSI-attached LUNs. It is managed with PowerShell and the Failover Cluster Manager graphical tool, and it permits automated failover.

Cluster to Cluster permits replication between two separate clusters, where one cluster synchronously or asynchronously replicates with another. This scenario can utilize Storage Spaces Direct, SAN and iSCSI-attached LUNs, and Storage Spaces with shared SAS storage. It is managed with PowerShell and demands manual intervention for failover. Azure Site Recovery support is included for this scenario.

Server to Server permits both synchronous and asynchronous replication between two or more standalone servers, leveraging Storage Spaces with shared SAS storage, SAN and iSCSI-attached LUNs. It is managed with PowerShell and the Server Manager tool, and demands manual intervention for failover.

The Key Features of Storage Replication

Simple Management and Deployment
Storage Replica is designed for ease of use. Creating a replication partnership between two servers requires only a single PowerShell command. Deploying a stretch cluster uses an intuitive wizard in the Failover Cluster Manager tool.

Host and Guest
All Storage Replica capabilities are available in both virtualized guest and host-based deployments. This means guests can replicate their data volumes even when running on non-Windows virtualization platforms or in public clouds, as long as Windows Server 2016 Datacenter Edition is used in the guest.

Block-Level Replication, Zero Data Loss
Thanks to synchronous replication, there is zero possibility of data loss. And because replication happens at the block level, there is no possibility of files being locked.

User Delegation
Operators can delegate permissions to manage replication without being a member of the built-in Administrators group on the replicated nodes, thereby limiting their access to unrelated areas, as shown in the sketch below.
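
A minimal sketch, assuming the StorageReplica module is installed; the server and account names here are hypothetical:

  # Allow a non-administrator account to manage replication on this node
  Grant-SRDelegation -ComputerName CHA-SERVER1 -UserName 'CONTOSO\sr-operator'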

Network Constraint
Storage Replica can be limited to individual networks by server and by replicated volumes, in order to preserve bandwidth for backup, application, and management software; a sketch follows.
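
A hedged sketch using the StorageReplica module's Set-SRNetworkConstraint cmdlet (the server, replication group, and interface index values are hypothetical; interface indexes come from Get-NetAdapter):

  # Restrict replication traffic for a partnership to specific network interfaces
  Set-SRNetworkConstraint -SourceComputerName CHA-SERVER1 -SourceRGName SERVER1 `
      -SourceNWInterface 2 -DestinationComputerName CHA-SERVER2 `
      -DestinationRGName SERVER2 -DestinationNWInterface 3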

High Performance Initial Sync
Storage Replica supports seeded initial sync, where a subset of the data already exists on the target from older copies, backups, or shipped drives. The initial replication then copies only the differing blocks, potentially shortening the initial sync time and preventing the transfer from consuming limited bandwidth.

Storage Replica uses SMB 3 as its transport protocol, which runs over TCP/IP and also brings support for features such as multichannel, RDMA (SMB Direct), encryption, and signing.

Prerequisites

  1. Two servers with two volumes on each server or location: one volume for data and the other for logs.
  2. Data volumes must be the same size on both the main server and the remote server.
  3. Log volumes must also be identical in size on both servers.
  4. Data volumes must not exceed 10 TB and must be formatted with NTFS.
  5. Both servers must be running Windows Server 2016.
  6. Each server needs at least 2 GB of RAM and two cores.
  7. Each server needs at least one TCP/Ethernet connection for synchronous replication, preferably RDMA.
  8. The network between the servers must have enough bandwidth to accommodate the I/O write workload and, for effective synchronous replication, an average round-trip latency of 5 ms or less.

How it Works

The following describes how storage replication works in a synchronous configuration.

The application writes data to the file system volume labelled Data. The write is intercepted by an I/O (input/output) filter and recorded on the Log volume located on the same server. The data is then replicated across to the remote server's log volume. Once the data has been written to that log volume, an acknowledgement is sent back to the primary server and on to the application. On the remote server, the data is then flushed from the Logs volume to the Data volume.

Note: The purpose of the Log volume is to record all the block-level changes that occur on the data volume. Furthermore, in a synchronous configuration, the primary server must await acknowledgement from the remote server; if network latency is high, this will slow down the replication process and degrade write performance. Consider using RDMA, which has low network latency.

In the asynchronous replication model, data is written to the Log volume on the main server and an acknowledgement is immediately sent to the application. Data is then replicated from the Log volume on the primary server to the Log volume on the remote server. Should the link between the two servers deteriorate, the primary server keeps recording changes in its log until the link is restored, whereupon replication of the accumulated changes continues.

Setting Up Storage Replication

  1. Import-Module StorageReplica
    Launch Windows PowerShell and verify the presence of the Storage Replica module.
  2. Test-SRTopology -SourceComputerName CHA-SERVER1 -SourceVolumeName e: -SourceLogVolumeName f: -DestinationComputerName CHA-SERVER2 -DestinationVolumeName e: -DestinationLogVolumeName f: -DurationInMinutes 30 -ResultPath c:\temp
    Test the Storage Replica topology by running the command above (here e: is the data volume and f: the log volume, matching the partnership created below).
  3. PowerShell will then generate an HTML report that gives an overview of whether the requirements are met.
  4. New-SRPartnership -SourceComputerName CHA-SERVER1 -SourceRGName SERVER1 -SourceVolumeName e: -SourceLogVolumeName f: -DestinationComputerName CHA-SERVER2 -DestinationRGName SERVER2 -DestinationVolumeName e: -DestinationLogVolumeName f:
    Begin setting up the replication configuration using the command above.
  5. Set-SRPartnership -ReplicationMode Asynchronous
    Run Get-SRGroup to generate a list of configuration properties. Replication is synchronous by default, with the log file set to 8 GB; it can be switched to asynchronous using the command above.

When we go to the remote server and open File Explorer, Local Disk E will be inaccessible, while the logs will be stored on volume F.

When data is written on the source server, it will be replicated block by block to the destination or remote server.
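
To verify the partnership afterwards, here is a hedged check with the StorageReplica cmdlets (the server names follow the example above):

  # Show the replication group and the per-volume replication status
  Get-SRGroup -ComputerName CHA-SERVER2 | Select-Object Name, ReplicationMode
  (Get-SRGroup -ComputerName CHA-SERVER2).Replicas | Select-Object DataVolume, ReplicationStatus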


Overview: Resilient File System (ReFS)

Resilient File System (ReFS) is Microsoft's newest file system, offered as an alternative to the New Technology File System (NTFS). ReFS was introduced for implementation on systems with large data sets, giving more functionality in terms of efficiency, scalability, and availability.

An outstanding feature of ReFS is data integrity, which protects data from common errors that may lead to data loss. In case of an error in the file system, ReFS can recover without compromising volume availability. ReFS is also a robust file system with proven reliability, and it is time- and cost-efficient when used on servers.

The Key Elements of ReFS

The key elements of a Resilient File System are dependent on the amount of data the server system manages.

  • Allocate on Write
    The main reason behind this feature is to avoid data corruption: instead of updating data in place, ReFS writes changes to a new location, which eliminates all forms of torn writes. It also allows large data sets, such as databases, to be cloned simultaneously without straining the available storage space, and a file stored on a ReFS partition is never left in a partially written state.
  • B+ Trees
    Servers store vast amounts of information and files and folders of almost limitless size. The ReFS scalability element means that file servers can handle large data sets efficiently. A B+ tree file structure also enables data to be stored and retrieved in a tree layout, with every node holding keys and pointers to lower-level nodes in the same tree.

Why Use Resilient File System

  • Resilience
    As its name suggests, a ReFS partition will automatically detect and fix errors while the file is in use, without compromising file integrity and availability. Resiliency relies on the following four factors:

    • Integrity Streams
      Integrity streams allow the use of checksums on stored data, enabling the partition to verify the reliability and consistency of a file. Fault tolerance and redundancy are maintained through data striping. PowerShell cmdlets such as Get-FileIntegrity and Set-FileIntegrity can be used to manage integrity streams (see the sketch after this list).
    • Storage Space Integration
      ReFS allows repair of corrupted data files from an alternate copy kept in Storage Spaces. This is made possible when ReFS is used alongside disk mirroring. The repair and replacement take place online, without the need to unmount the volume.
    • Data Recovery
      When data is corrupted and no intact copy of it exists, ReFS will remove the corrupt data from the namespace while keeping the volume online.
    • Preventive Error Correction
      The Resilient File System performs data integrity checks and validation before any read or write action. An integrity scanner also periodically scans volumes to identify potential errors and trigger a repair action.
  • Compatibility
    ReFS can be used alongside volumes using the New Technology File System (NTFS) because it retains support for key NTFS features.
  • Time Saver
    When backing up data or transferring files from partitions using ReFS, the time taken during read/write actions is reduced compared to backing up data in an NTFS partition.
  • Performance
    ReFS performance rests on new features like virtualization, block cloning of volumes, real-time optimization, and more, all of which enhance dynamic and multiple workloads. Performance on any ReFS volume is made possible through:

    • Mirror Accelerated Parity
      The mirror-accelerated parity mode ensures that the system delivers both efficient data storage and high performance. The volume is divided into two logical storage tiers, each with its own drive properties and resiliency types.
    • Accelerated VM Operations
      In an effort to improve functionality when implementing virtualization, ReFS allows the creation of partitions that support block cloning to allow for multi-tasking. ReFS also reduces the time needed to create new fixed-size Virtual Hard Disk files from minutes to seconds.
    • Varied Cluster Sizes
      ReFS allows the creation of both 4K and 64K cluster sizes. In other file systems, 4K is the recommended cluster size, but ReFS also accommodates 64K clusters for large, sequential input/output file requests.
    • Scalability
      ReFS supports large data sets without a negative impact on system performance, making it by far the best file system in terms of scalability. Shared data storage pools across the network further enhance fault tolerance and load balancing.
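
As referenced under Integrity Streams above, here is a minimal PowerShell sketch (the path is hypothetical; these cmdlets ship with the Storage module):

  # Check whether integrity streams are enabled for a file on a ReFS volume
  Get-FileIntegrity -FileName 'H:\Data\report.docx'
  # Enable integrity streams (checksums) for that file
  Set-FileIntegrity -FileName 'H:\Data\report.docx' -Enable $true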

Points to Note

ReFS cannot be used on a boot file system (the drive containing bootable Windows files). The ReFS partition is best used exclusively on storage volumes.

Removable volumes such as USB flash drives cannot accommodate a ReFS partition. Note also that there is no mechanism to convert a ReFS partition to another file system.

ReFS, like NTFS, was built on a foundation of compatibility, making it easier to move data from NTFS to ReFS thanks to inherited features such as access control lists, BitLocker, mount points, junction points, volume snapshots, symbolic links, and file IDs.

Some of the NTFS features lost when moving to ReFS are object IDs, short names, extended attributes, compression, quotas, hard links, user data transactions, and file-level encryption.

Some files or installed programs may not function as intended when ReFS is used on a non-server operating system.

In the event that a ReFS partition fails, recovering the partition is not possible; all that can be done is data recovery. Presently, there is no recovery tool available for ReFS.

Conclusion

The Resilient File System has unique advantages over the existing file system. It may have its drawbacks, but that does not take away its self-healing power, its file repairs without downtime, its resilience in the event of power failure, or its ability to accept huge file sizes and path names longer than the usual 255 characters. File access on ReFS uses the same mechanisms NTFS uses.

Most implementations of ReFS target systems with huge storage and rapid input/output demands. ReFS cannot fully replace NTFS because its design was intended for specific work environments, and some of its features do not yet have full support; system administrators aspiring to use ReFS may still have to wait for its full implementation.

Enforcing NTFS Permissions on A File Share

One of the most important functionalities in Microsoft Windows Server is access control over files and folders. That important function is handled by the file and folder security permissions framework.

NTFS (New Technology File System) permissions apply to drives formatted with NTFS. They affect local users as well as network users, and they are based on the permissions granted to each user at system login, no matter where the user is connecting from.

NTFS Structure

The NTFS file system is a hierarchical structure, with the disk volume on top and folders as branches. Each folder can contain numerous files or folders, as leaves in that node. Folders are referred to as containers, or objects that contain other objects.

In that hierarchy there is, of course, a need to define access rights per user or group. For that, permissions are used.

Managing Permissions

Each permission that exists can be assigned in two ways: explicitly or by inheritance.

Permissions set by default when the object is created, or set by user action, are called explicit permissions. Permissions that are given to an object because it is a child of a parent object are called inherited permissions.

Permissions are best managed for containers of objects. Objects within a container inherit all the access permissions of that container. The first thing to specify when establishing permissions is whether to grant access to the resource (Allow) or to refuse it (Deny).

Once permissions are set, access to resources is controlled by the Local Security Authority (LSASS), which checks the security context of any user that tries to access them. If the SID (security identifier) is valid, LSASS allows usage of the object and all objects that inherit from it in the structure.

Permission Rules

Due to many different permission settings per user in a bigger structure, there is a possibility of conflicting permission settings. So here are some rules that were made to resolve possible issues:

  • Deny permissions are superior to allow
  • Permissions applied directly to an object (explicit permissions) are superior to permissions inherited from a parent (for example from a group).
  • Permissions inherited from near relatives are superior to permissions inherited from distant predecessors. So, permissions inherited from the object’s parent folder are superior to permissions inherited from the object’s “grandparent” folder, and so on.
  • Permissions from different user groups that are at the same level are cumulative. So, if a user is a member of two groups, one of which has an “allow” permission of “Read” and the other an “allow” of “Write”, the user will have both read and write permissions, subject to the other rules above.

Permission Hierarchy

File permissions are superior to folder permissions unless the Full Control permission has been granted to the folder.

Although deny permissions are generally superior to allow permissions, that is not always the case: an explicit “allow” permission can take precedence over an inherited “deny” permission. The hierarchy of precedence for permissions, from higher to lower, is as follows:

  1. Explicit Deny
  2. Explicit Allow
  3. Inherited Deny
  4. Inherited Allow

NTFS Permissions and Shared Folder Permissions

When NTFS permissions are used alongside share permissions, the two configurations may conflict. In those cases, the option that is applied is the most restrictive one.

It is possible to combine both permission sets to control access to resources on an NTFS volume. First, share the folder with the default shared folder permissions, then assign NTFS permissions to the shared folder and secure its files that way.

The effect is that NTFS permissions are used to control access to shared folders, which is more secure and flexible than using shared folder permissions alone. Plus, NTFS permissions are enforced regardless of whether the resource is accessed locally or via the network.

NTFS permissions can be applied to files and subfolders in a shared folder, and different permissions can be applied to each file and subfolder inside the shared folder. In effect, NTFS functionality is added to the shared folder.

Now consider the hypothetical situation of moving or copying files or folders with NTFS permissions into a shared folder. Is it possible to force files and folders to inherit permissions from the parent, regardless of how they got into the shared folder (copied or moved)?

The short answer is yes.

When files are copied, their permissions are inherited from the destination, as are those of files moved between volumes (a move within the same volume preserves explicit permissions unless inheritance is re-applied, as sketched below). This makes administration much easier and gives users less chance of accidentally creating file or folder structures with incorrect permissions without knowing it.
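
A hedged way to re-apply inheritance after the fact, using the built-in icacls utility (the share path is hypothetical):

  # Replace explicit ACEs with inherited ones for everything under the share
  # (/t recurses into subfolders and files, /c continues on errors)
  icacls "D:\Shares\Projects" /reset /t /c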

File Server Resource Manager (FSRM) Overview

File Server Resource Manager (FSRM) is a Microsoft Windows Server role created for managing and classifying data stored on file servers. It includes some interesting features which can be configured by using the File Server Resource Manager snap-in or by using Windows PowerShell.

Here’s an overview of the features included in the FSRM.

File Classification Infrastructure

This offers an automatic classification process based on custom properties, with the purpose of easier and more effective file management.

It classifies files and applies policies based on that classification. As an example, take a public or private classification: once files have been assigned a class, a file management task can be created to perform actions on them (RMS encryption, for example).

It can be instructed to encrypt files classified as private while excluding files classified as public.

File Management Task

Enables applying a conditional policy or action to files based on their classification. The conditions of a policy can include file location, classification properties, file creation date, file modification date, or the date of last access.

Tasks include the ability to expire files, encrypt files, or run a custom command.

Quota Management

This allows limiting the space allowed for a volume or folder. Quotas are automatically applied to new folders created on a volume, and it is possible to define quota templates that can be applied to new volumes or folders.

File Screening Management

This provides control over the types of files that can be stored on a server. For example, a user can create a file screen that disallows storing JPEG files in a personal shared folder on a file server, as sketched below.
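
A minimal hedged sketch with the FSRM cmdlets (the path is hypothetical; “Image Files” is one of FSRM's built-in file groups):

  # Block image files (including JPEGs) from being saved under this folder
  New-FsrmFileScreen -Path 'D:\Shares\Personal' -IncludeGroup 'Image Files' -Active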

Storage Reports

Storage reports help identify trends in disk usage and in the classification of user data. They can also monitor attempts by selected groups of users to save unauthorized files.

An important thing to note is that File Server Resource Manager supports only the NTFS file system and does not support the Resilient File System (ReFS).

Practical Applications

Some practical applications for File Server Resource Manager include:

  • If File Classification Infrastructure is used with Dynamic Access Control, a policy can be created that grants access to files and folders based on the way files are classified on the file server.
  • The user can create a file classification rule that tags any file containing at least 10 Social Security numbers as a file holding personal information.
  • Any file that has not been modified in the last 10 years can be set as expired.
  • Quotas (e.g. 200 MB) can be created per user. A notification to the admin user can also be set for when the quota reaches 80% (i.e. 160 MB of 200).
  • It is possible to schedule a report that runs at a specific time each week to generate a list of the most recently accessed files from a previously selected period. This can help the admin user determine weekend storage activity and plan server downtime accordingly.
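
For instance, here is a hedged sketch of such a weekly schedule with the FSRM cmdlets (the report name, namespace, and time are hypothetical):

  # Run a "most recently accessed files" report every Saturday evening
  $schedule = New-FsrmScheduledTask -Time '22:00' -Weekly Saturday
  New-FsrmStorageReport -Name 'Weekend activity' -Namespace 'D:\Shares' `
      -ReportType MostRecentlyAccessed -Schedule $schedule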

Storage on Windows Server 2016: An Overview

Windows Server 2016 Datacenter brought interesting new and improved features in the field of software-defined data centers (SDDC) for virtualized workloads.

SDDC stands for Software-Defined Data Center, a data center with a virtualized infrastructure delivered as a service. Microsoft sees SDDC as a more flexible, cost-effective data center platform based on Hyper-V, offering the possibility of moving entire operational models away from a physical data center.

Software-Defined Storage

Software-defined storage for virtualized workloads in Windows Server 2016 consists of four new and improved features:

  • Storage Spaces Direct – A new Windows Server 2016 feature that extends the existing Windows Server software-defined storage (SDS) stack. It enables building highly available (HA) storage systems with local storage. HA storage systems are highly scalable and much cheaper than traditional SAN or NAS arrays, and they simplify procurement and deployment while offering higher efficiency and performance.
  • Storage Replica – Provides block-level replication between servers or clusters and is intended primarily for disaster recovery, such as the ability to restore service in an alternate data center with minimal downtime or data loss, or even to shift services to an alternate site. It supports two types of replication: synchronous (primarily used for high-end transactional applications that need instant failover if the primary node fails) and asynchronous (commits data to be replicated to memory or a disk-based journal, which then copies the data in real time or at scheduled intervals to the replication targets).
  • Storage Quality of Service (QoS) – A feature providing central monitoring and management of storage performance for virtual machines using Hyper-V and the Scale-Out File Server roles. In Windows Server 2016, QoS can prevent a single VM from consuming all storage resources. It also monitors the performance details of all running virtual machines and the configuration of the Scale-Out File Server cluster from one place, and it defines performance minimums and maximums for virtual machines and ensures that they are met.
  • Data Deduplication – A feature that helps reduce the impact of redundant data on storage costs. Data Deduplication optimizes free space on a volume by examining the data on the volume for duplication: once identified, duplicated portions of the volume's dataset are stored once and are (optionally) compressed for additional savings. A sketch of enabling it follows this list.
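
As a hedged sketch (the volume letter is hypothetical; the cmdlet ships with the Data Deduplication feature):

  # Enable deduplication on a data volume for general-purpose file server use
  Enable-DedupVolume -Volume 'E:' -UsageType Default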

General Purpose File Servers

  • Work Folders, first presented in Windows Server 2012 R2, allows users to synchronize folders across multiple devices. It can be compared to existing solutions such as Dropbox, with the difference that your own file server is the repository and no service provider is involved. This kind of synchronization is convenient for companies, because their own infrastructure serves as the server, and for users, who can work on files without being limited to a corporate PC or to being online.
  • Offline Files and Folder Redirection are features that, when used together, redirect the path of local folders (such as the Documents folder) to a network location while caching the contents locally for increased speed and availability.
  • Separately, Folder Redirection enables users and admins to redirect a local folder to another (network) location, making files available from any computer on the network. Offline Files allows access to files even when offline or on a slow network; when working offline, files are retrieved from the Offline Files folder at local access speeds.
  • Roaming User Profiles redirects user profiles to a file share so that users receive the same operating system and application settings on multiple computers.
  • DFS Namespaces enables grouping shared folders located on different servers into one logically structured namespace, making it easier to handle shared folders in multiple locations from one place.
  • File Server Resource Manager (FSRM) is a feature set in the File and Storage Services server role that helps classify and manage stored data on file servers. Its features provide insight into your data by automating classification processes, apply conditional policies or actions to files based on their classification, limit the space allowed for a volume or folder, control the types of files that users can store on a file server, and provide reports on disk usage.
  • iSCSI Target Server is a role service that provides block storage to other servers and applications on the network. It is useful for network and diskless boot and for heterogeneous storage (including non-Windows iSCSI initiators), and it is also useful for testing applications before deployment on storage area networks.

File Systems and Protocols

  • NTFS and ReFS – ReFS is a newer, more resilient file system that maximizes data availability, scaling, and the integrity of large data sets across diverse workloads.
  • SMB (Server Message Block) – Provides access to files and other resources on a remote server, allowing applications to read, create, and update files there. SMB can also communicate with any server program that is set up to receive SMB client requests.
  • Storage Class Memory – Provides performance similar to computer memory, but with the data persistence of normal storage drives. 
  • BitLocker – Protects data and system against offline attacks and stores data on volumes in an encrypted format. Even if the computer is tampered with or when the operating system is not running, this still provides protection. 
  • NFS (Network File System) – Provides a file sharing solution for enterprises that have heterogeneous environments that consist of both Windows and non-Windows computers. 

SDDC represents a departure from traditional data centers, where infrastructure is defined by hardware and devices; its components are based on network, storage, and server virtualization.

ReFS vs NTFS

When discussing backup solutions for data on Windows file servers, most of the discussion revolves around the 3-2-1 backup rule or all-in-one backup appliance solutions.

Not as much attention is given to local storage formatting and the file systems used in the process, assuming, of course, that you prefer local storage over a file-based share.

An interesting piece was shared recently by Novosco's Technical Architect Craig Rodgers, under the title “ReFS vs. NTFS, Calm Seas vs. Stormy Waters”, regarding exactly this point.

As part of his role at Novosco (technical validation of projects and solutions), Mr. Rodgers tested ReFS and NTFS capabilities and differences over an 8-week period, with daily copies of 8 virtual servers to 8 different repositories.

The goal of the test was a direct comparison between the various block sizes, file systems, and compression and deduplication settings that are often used in backup copy jobs.

His team created 2 TB LUNs on a SAN (Storage Area Network) and presented them to the server as drives.

Data flowed from the hosts to the repositories via a BaaS node: VMs of varied roles and change rates were copied to another location via backup copy jobs, then copied from the BaaS node to the test repositories outside of the normal backup window.

Testing was done with the Veeam backup platform, configured to create backup copy jobs targeting the servers, with 7 incremental and 8 weekly backup copies configured via a GFS retention policy on 8 different types of servers:

  • Application
  • Web Application
  • Database
  • Domain Controller
  • Exchange Hybrid
  • Web Server
  • Light Application
  • Network Services

The results of the 8-week test were very interesting. According to Mr. Rodgers, 64K ReFS-formatted drives have additional file system overhead once formatted, compared to 4K.

Veeam achieved solid results on data reduction, especially on the DB server, whose structured data achieved the best reduction in space. Raw, uncompressed data achieved the best levels of deduplication, with the ReFS repository data included for comparison; there was no post-process operation on the ReFS repositories.

Initially, the capacity savings of processed data in the NTFS uncompressed repositories are impossible to ignore; however, the additional space required to ingest the data cannot be ignored either. If a long-term retention repository is the goal, then within the constraints of NTFS deduplication (1 TB officially, though 4 TB was seen restored without issue in testing), uncompressed data offers huge gains in data reduction, 20:1, with Windows.

The big flaw of ReFS is its lack of RAID support, which Microsoft is working on; keep a hardware or virtualized RAID alternative in mind if you want to use ReFS in a future deployment.

ReFS, for the most part, works well now and is probably the best bet for a primary, or indeed secondary, backup repository. With regard to a second copy, ReFS is great for fast transforms; however, you may be happy trading performance for retention, in which case backup copies can target an NTFS volume.

In conclusion, although ReFS has some major advantages over the NTFS file system, such as automatic integrity checking, data scrubbing techniques, better protection against data degradation, built-in drive recovery, and redundancy, it still has flaws by comparison with NTFS: it cannot be used with Cluster Shared Volumes, there is no conversion path between NTFS and ReFS, no file-based deduplication, and no disk quotas. Given those flaws, and Microsoft's announcement that it would move ReFS to the Windows workstation edition only, it doesn't look like ReFS, in its current state, can threaten NTFS's position as the main file system.


Quota Management in Windows Server 2016

Quota management is a valuable feature that enables you to restrict the storage capacity of shared resources in Windows Server 2016. By creating quotas, you limit the space allocated to a volume or a folder, allowing you to practice capacity management conveniently.


To set quotas in the Windows Server, you’ll need to use a tool called File Server Resource Manager (FSRM). This tool assists in managing and organising data kept on file servers.

The File Server Resource Manager tool consists of the following five features.

  • File classification infrastructure—this feature allows you to organise files and implement policies.
  • File management tasks—it enables you to implement conditional policies or tasks.
  • Quota management—it assists you to restrict the space available on shared folders.
  • File screening management—it allows you to limit the type of files that users can keep. For example, you can set a file screen to prevent users from creating MP3 files on the file server.
  • Storage reports—with this feature, you can generate reports to understand trends in disk utilisation and how data is organised, which enables you to spot unauthorised activities.

In this article, we are going to talk about the quota management feature in FSRM.

Setting up File Server Resource Manager

We need to install the File Server Resource Manager tool before using it for quota management.

A quick way to complete its setup is through the GUI server manager. Here are the steps for installing the tool.

1. Start by logging into the Windows Server 2016. Then, on the Server Manager’s dashboard, click on “Manage” and select “Add Roles and Features”.

2. On the “Before you begin” screen click “Next”.

3. Select “Role-based or feature-based installation” and click “Next”.

4. Select your destination server and click “Next”.

5. On the “Select Server roles” dashboard, expand “File and Storage Services” and “File and iSCSI Services”. Then, select “File Server Resource Manager”.

6. On the window that pops up, click the “Add Features” button to incorporate the required features. Click “Next”.

7. If you do not need to add any extra features, just leave the default settings and click “Next”.

8. Confirm the installation selections and click “Install” to start the process.

9. After the installation process is complete, click the “Close” button.

10. You can now access the File Server Resource Manager from the administrative interface and use it to create quotas.
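
Alternatively, a hedged one-liner installs the same role service with PowerShell:

  # Install FSRM plus its management tools (GUI snap-in and PowerShell cmdlets)
  Install-WindowsFeature -Name FS-Resource-Manager -IncludeManagementTools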

Creating Quotas Using FSRM

As mentioned earlier, quota management enables you to set restrictions and define the extent of space available to users on the server. For example, you can limit all users to a maximum of 5GB on a shared folder. As such, users cannot add data to the folder beyond 5GB.

You can also configure the File Server Resource Manager tool to send notifications whenever a specified usage limit is reached. For example, you can specify that an email is to be sent once 85% of the space has been consumed.

Creating quotas using the FSRM tool is a two-step process:

  • Create a template
  • Create a quota

a) Create a template

Before setting quotas, you need to either create a quota template or choose a default template already available on the File Server Resource Manager tool.

It is recommended that you create quotas solely from templates. This way, you can easily manage your quotas by making changes to the templates rather than to individual quotas. Having one central location for managing quotas eases the enforcement of storage policy rules.

Here are the steps for creating a quota template.

1. Under the “Quota Management” Section, right-click the “Quota Templates” button and go for “Create Quota Template”.

2. On the window that pops up, enter the Template name and the space limit. If you choose the “Hard quota” option, users will be unable to surpass the specified limit. A hard quota is good for controlling the amount of data allowed on a folder or a disk.

On the other hand, if you select the “Soft quota” option, users will be able to exceed the allocated limit. A soft quota is mostly used for monitoring space usage and producing notifications.

3. Lastly, to set notification thresholds, press the “Add” button. On the window that pops up, input your notification specifications.

You can specify that an email is to be sent, an entry is to be made in the event log, a command is to be run, or a report is to be generated. For example, you can state that whenever usage reaches 85%, an email message is sent to the administrator.

Thereafter, click “OK” to complete creating the quota template.

b) Create a quota

After setting up the quota template or using a default quota template, you need to create the quota.

Here are the steps for creating a quota.

1. On the File Server Resource Manager’s dashboard, right-click on “Quotas” and go for “Create Quota”.

2. On the “Create Quota” window, in the “Quota path” section, browse to the path of the volume or folder to which the storage capacity restriction will be applied.

Then, choose either the “Create quota on path” or the “Auto apply template and create quota…” option.

If you select the first option, the quota will only be applied to the primary folder. For example, if you limit the parent folder to 5GB, all its subfolders will share the space specified for the main folder.

On the other hand, if you choose the second option, the quota will also be applied to the subfolders. For example, if you restrict the main folder to 5GB, each subfolder will also have an individual quota of 5GB.

Subsequently, on the “Derive properties from this quota template” option, choose the template you created previously.

If satisfied with the quota properties, click “Create”.

After you’ve created the quota, you can see it on the File Server Resource Manager’s dashboard. Thereafter, you’ll be able to limit the amount of space allowed on your shared resources.
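
For reference, here is a hedged PowerShell equivalent of the steps above (the path, template name, and threshold are hypothetical; the cmdlets ship with the FileServerResourceManager module, and FSRM substitutes the bracketed variables in the email):

  # Email the administrator when usage reaches 85% of the limit
  $action = New-FsrmAction -Type Email -MailTo '[Admin Email]' `
      -Subject 'Quota threshold reached' `
      -Body 'The quota on [Quota Path] has reached the 85% threshold.'
  $threshold = New-FsrmQuotaThreshold -Percentage 85 -Action $action
  # Template with a 5GB hard limit (add -SoftLimit for a monitoring-only quota)
  New-FsrmQuotaTemplate -Name '5GB Hard Limit' -Size 5GB -Threshold $threshold
  # "Create quota on path" equivalent:
  New-FsrmQuota -Path 'D:\Shares\Projects' -Template '5GB Hard Limit'
  # "Auto apply template and create quota" equivalent (covers subfolders too):
  New-FsrmAutoQuota -Path 'D:\Shares\Projects' -Template '5GB Hard Limit'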


Optimizing File Server Performance in Windows Server 2016

If you have a file server system in your company, you may want to tune some parameters and settings to enhance its performance. For example, you may want the highest possible throughput on your server to meet the growing workload needs.

This article gives a set of guidelines that you can implement to optimize the file server settings in Windows Server 2016 and benefit from optimized performance.

How to Optimize File Server Performance?

1. Choose Proper Hardware

First, you should choose good hardware that will sufficiently support your performance goals. If the hardware cannot meet the expected file server load, software adjustments will not yield significant results.

Here are some important hardware parameters you should optimise.

  • Response times
  • Growth expectations
  • Loading factors—such as average load and peak load
  • Capacity level

2. Optimise SMB Parameters

The Server Message Block (SMB) protocol is included in Windows Server to enable the sharing of files and other resources across the network. The latest version available in Windows Server 2016 is 3.1.1, and it comes with several helpful features you can tune to get the most out of it.

Here are some tips on how to optimise the various SMB parameters.

a) Practice “least privilege” principle

You can practice the principle of least privilege by limiting access to some services or features. If a file server or a file client does not need a feature, just disable it. Period.

Some of the features you can disable include:

  • SMB signing
  • SMB encryption
  • NTFS encryption
  • File system filters
  • Client-side caching
  • Scheduled tasks
  • IPSEC


b) Configure power management mode

Incorrect power management settings can reduce the speed and performance of your server. Therefore, for consistent performance, you should make sure that both the BIOS and the operating system power management are configured correctly.

For example, this may mean selecting the High Performance power plan or even modifying C-States. To avoid other bottlenecks, remember to install the most up-to-date, robust, and fastest storage and networking device drivers.

c) Follow file copying best practices

Users usually copy files from one location to the other on file servers. There are some best practices you can follow to enhance the speed of transferring files.

Windows has numerous utilities you can run from the command prompt to transfer files conveniently. The recommended ones are Robocopy and Xcopy.

If using Robocopy, it's advisable to include the /mt option to copy several small files quickly. It is also advisable to use the /log option to reduce console output by redirecting it to a file (or to the NUL device).

If using Xcopy, you can significantly increase performance by adding the /q option (which lowers CPU overhead) and the /k option (which lowers network traffic) to your existing parameters.
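
Hedged examples of both utilities with the options discussed above (the paths are hypothetical):

  # Robocopy: multithreaded copy, console output redirected to a log file
  robocopy D:\Source \\FS01\Share /mt:16 /log:C:\Temp\copy.log
  # Xcopy: /q reduces console output and CPU overhead, /k copies attributes with the data
  xcopy D:\Source \\FS01\Share /s /q /k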

d) Practice SMB performance tuning

It is important to note that the performance of a file server will largely depend on the parameters set on the SMB protocol. If the parameters are well tuned, the file server performance can greatly improve.

Here is a table giving some of the registry settings that can influence the operation of SMB file servers, together with some recommended practices.

Parameter: Smb2CreditsMin and Smb2CreditsMax
Registry settings:
  HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters\Smb2CreditsMin
  HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters\Smb2CreditsMax
Recommendation: The defaults are 512 and 8192, respectively. Monitor the “SMB Client Shares\Credit Stalls/sec” performance counter to observe any problems with credits.

Parameter: AdditionalCriticalWorkerThreads
Registry setting: HKLM\System\CurrentControlSet\Control\Session Manager\Executive\AdditionalCriticalWorkerThreads
Recommendation: The default is 0. You could raise the value if the quantity of cache manager dirty data is consuming a large percentage of memory.

Parameter: MaxThreadsPerQueue
Registry setting: HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters\MaxThreadsPerQueue
Recommendation: The default is 20. If the SMB2 work queues grow significantly, raise the value.

Parameter: AsynchronousCredits
Registry setting: HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters\AsynchronousCredits
Recommendation: The default is 512. If a large number of concurrent asynchronous SMB commands is needed, raise the value.

Here is an example of how the settings can be applied to achieve optimal file server performance on Windows Server 2016. Note that these settings are not suited to all computing situations, and you should assess the effect of each individual setting before applying it.

Parameter                        Value  Default
AdditionalCriticalWorkerThreads  64     0
MaxThreadsPerQueue               64     20
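
A hedged way to apply these example values from PowerShell (measure before and after; the values mirror the table above):

  # Raise critical worker threads from the default of 0 to 64
  Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Session Manager\Executive' `
      -Name 'AdditionalCriticalWorkerThreads' -Type DWord -Value 64
  # Raise SMB worker threads per queue from the default of 20 to 64
  Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\LanmanServer\Parameters' `
      -Name 'MaxThreadsPerQueue' -Type DWord -Value 64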

3. Optimise NFS Parameters

The Network File System (NFS) support available in Windows Server 2016 is important for enabling client-server communication in mixed Windows and UNIX environments.

Here is a table giving some of the registry settings that can influence the operation of NFS file servers, together with some recommended practices.

Parameter: OptimalReads
Registry setting: HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\OptimalReads
Recommendation: The default is 0. Before making any changes to this setting, evaluate its effect on system file cache growth.

Parameter: RdWrNfsHandleLifeTime
Registry setting: HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\RdWrNfsHandleLifeTime
Recommendation: The default is 5. Set it appropriately to ensure optimal control of the lifetime of NFS cache entries.

Parameter: CacheAddFromCreateAndMkDir
Registry setting: HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\CacheAddFromCreateAndMkDir
Recommendation: The default is 1. Set the value to 0 to disable adding entries to the cache in the CREATE and MKDIR code paths.

Parameter: MaxConcurrentConnectionsPerIp
Registry setting: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Rpcxdr\Parameters\MaxConcurrentConnectionsPerIp
Recommendation: The default is 16. It can be raised to a maximum of 8192 to increase the number of connections per IP address.

4. Uninstall Unused and Redundant Features

Windows Server 2016 has dozens of logging, monitoring, and debugging tools, many of which you may not find useful.

The amount of space available on the server is critical, and allowing unused and redundant tools to just sit there does your server no justice.

On a regular basis, you should visit the “Service Control Manager” section and remove services and features that do not add value to your file server.

You should uninstall any utility or application that you find not useful, and your file server performance will greatly improve.

For example, you should always deactivate DOS 8.3 short file names. For backward compatibility, your Windows Server 2016 installation may still generate 8.3 file names, especially if you upgraded your server from an older version of Windows.

These days, 8.3 short file names are unnecessary and do not add any value to the operation of file servers. Disabling this feature will therefore give your Windows Server 2016 some additional speed, as sketched below.
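
A hedged sketch with the built-in fsutil utility (the volume letter is hypothetical; take a backup before stripping existing names):

  # 1 = disable 8.3 short-name creation on all volumes
  fsutil 8dot3name set 1
  # Remove existing short names from a data volume (/s recurses, /v is verbose)
  fsutil 8dot3name strip /s /v D:\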

References

Microsoft. (2017). Performance tuning for SMB file servers. Retrieved from https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/file-server/smb-file-server

Apachelounge. (2017). Performance tuning guidelines for Windows Server 2016. Retrieved from https://www.apachelounge.com/download/contr/Perf-tun-srv-2016.pdf
