Windows Server Deduplication

Data deduplication has been one of the most useful features of Windows Server since it was introduced as a native role in Windows Server 2012. It is added through Server Manager and gives system administrators a practical way to plan server storage and network volume management.

Most server administrators rarely talk about this feature until it is time to address the organization’s storage crunch. Data deduplication identifies identical data blocks and stores a single copy as the central source, reducing the spread of duplicate data across the storage areas. Deduplication works at the file or block level, freeing up space on the server.

Block-level deduplication requires special, relatively expensive hardware components because of its complex processing requirements. File-level deduplication is not as complicated and does not require additional hardware, which is why most administrators implementing deduplication prefer the file-level approach.

When to Apply Windows Server Deduplication

Windows Server file deduplication works at the file level; it operates at a higher level than block deduplication, matching chunks of data within files. Because file deduplication runs at the operating system level, you can enable the feature inside a virtual guest in a hypervisor environment.

Industry growth keeps driving demand for deduplication even though storage hardware keeps getting bigger and more affordable. Deduplication is all about keeping up with that growing demand.

Why is the Deduplication Feature Found on Servers?

Servers are central to any organization’s data, as users store their information in its repositories. Not all users embrace new ways of handling their work, and some feel safer making multiple copies of the same work. Much of a server administrator’s job is managing and backing up user data, and the Windows dedupe feature makes that job easier.

Data deduplication is a straightforward feature and takes only a few minutes to activate. Deduplication is one of the server roles found on Windows Server, and you do not need a restart for it to work. However, it is safest to restart anyway to make sure the entire process is configured correctly.

Preparing for Windows Server Deduplication

  • Click on Start
  • Open the Run command window
  • Enter the command below and press Enter (this command runs against a selected volume to analyze the potential storage savings)

DDPEval.exe <volume or share path>

  • Right-click on the volume in Server Manager to activate Data Deduplication
  • A wizard will guide you through the deduplication setup depending on the type of server in place (choose a VDI or Hyper-V configuration, or File Server). A PowerShell equivalent follows these steps.
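The same preparation can be scripted. Below is a minimal PowerShell sketch, assuming the deduplication role service is available on the server and that D: is the target volume (adjust the drive letter and usage type to your environment):

# Install the Data Deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on the D: volume for a general-purpose file server workload
Enable-DedupVolume -Volume "D:" -UsageType Default

# Confirm the volume is enabled and check savings once the first jobs have run
Get-DedupVolume -Volume "D:"
Get-DedupStatus -Volume "D:"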

Set up The Timing for Deduplication

Deduplication should run on a schedule to reduce the strain on existing resources. You should not aim to save storage space at the expense of the server’s primary workload. The jobs should be timed for periods when there is little strain on the server, allowing quick and effective deduplication.

Deduplication requires significant CPU time because of the numerous activities and processes involved in each job. Deduplication work includes optimization, garbage collection, and integrity scrubbing. All of these activities should run during off-peak hours unless the server has enough spare resources to absorb the slowdown.
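As a rough sketch of off-peak scheduling (the schedule names, days, and times below are only examples; the cmdlets come with the deduplication feature):

# Nightly optimization window on weekdays, starting at 11 PM for up to 6 hours
New-DedupSchedule -Name "NightlyOptimization" -Type Optimization -Start 23:00 -DurationHours 6 -Days Monday,Tuesday,Wednesday,Thursday,Friday

# Weekly garbage collection and integrity scrubbing on Saturdays
New-DedupSchedule -Name "WeekendGC" -Type GarbageCollection -Start 07:00 -DurationHours 5 -Days Saturday
New-DedupSchedule -Name "WeekendScrub" -Type Scrubbing -Start 13:00 -DurationHours 5 -Days Saturday

# Review all configured schedules
Get-DedupSchedule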

The capacity that deduplication reclaims varies depending on how the server is used and what is stored on it. General file shares, ISOs, Office documents, and virtual disks typically consume most of the storage.

Benefits of Windows Server Deduplication

Deduplication brings these direct benefits to the organization:

Reduced Storage Allocation

Deduplication can reduce the storage space needed for files and backups. An enterprise therefore gets more usable storage and lowers the annual cost of storage hardware. With enough storage, operations are more efficient and faster, and the need for backup tapes can be eliminated.

Efficient Volume Replication

Deduplication ensures that only unique data is written to the disk, reducing the amount of data that must be replicated and the associated network traffic.

Increasing Network Bandwidth

If deduplication is configured to run at the source, duplicate data never needs to be transferred over the network, which frees up bandwidth.

Cost-Effective Solution

Power consumption drops, and less space is needed for extra storage at both local and remote locations. The organization buys less hardware and spends less on storage maintenance, reducing overall storage costs.

File Recovery

Deduplication ensures faster file recoveries and restoration without straining the day’s business activities.

Features of Deduplication

Transparency and Ease of Use

Installation is straightforward on the target volume(s). Running applications and users will not notice when deduplication takes place, and the feature works within normal NTFS requirements. Files encrypted with the Encrypting File System (EFS), files smaller than 32 KB, and files with Extended Attributes (EAs) cannot be processed by deduplication; in those cases, file interaction takes place through NTFS alone. A file with an alternate data stream will only have its primary data stream deduplicated; the alternate stream is left on the disk unchanged.

Works on Primary Data

Once enabled on primary data volumes, the feature operates without interfering with the server’s primary workload. It ignores hot data (files active at the time of deduplication) until the files reach a given age in days. Skipping such files maintains the consistency of active files and shortens deduplication time.

The feature uses the following approach when processing files:

  • Post-processing: when new files are created, they go directly to the NTFS volume and are evaluated on a regular schedule. The background process confirms file eligibility for deduplication every hour by default, and the confirmation schedule is configurable.
  • File age: a deduplication setting called MinimumFileAgeDays controls how long a file must sit in the queue before it is processed. The default is 5 days; the administrator can set it to 0 to process all files regardless of age.
  • File type and location exclusions: you can instruct the deduplication feature not to process specific file types. You can choose to ignore CAB files, which do not benefit the process much, and files that are already heavily compressed, such as PNG files. There is also an option to exclude a particular folder (see the sketch after this list).
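A minimal PowerShell sketch of these settings, assuming D: is the deduplicated volume and that the CAB/PNG exclusions and the D:\Scratch folder are only example values:

# Process files regardless of age (the default MinimumFileAgeDays is discussed above)
Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 0 -ExcludeFileType "cab","png" -ExcludeFolder "D:\Scratch"

# Verify the settings took effect
Get-DedupVolume -Volume "D:" | Format-List MinimumFileAgeDays,ExcludeFileType,ExcludeFolder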

Portability

Any volume under deduplication is handled as an atomic unit. The volume can be backed up and moved to a different location, and moving it to another server means everything on that volume remains accessible at its new site. The only thing you need to recreate is the schedule, because schedules are controlled by the native Task Scheduler on the host rather than stored on the volume. If the new server does not have the deduplication feature running, you can only access the files that have not yet been deduplicated.

Minimal Use of Resources

By default, the deduplication feature uses minimal resources on the primary server. If a job is active and a resource shortage looms, deduplication surrenders its resources to the active workload and resumes when enough resources are available again.

How Storage Resources Are Utilized

  • The hash index storage method uses low resources and reduces read/write operations, so it can scale to large datasets and deliver high insert/lookup performance. The index footprint is extremely low and uses a temporary partition.
  • Deduplication verifies the amount of free space before it executes. If no storage space is available, it keeps retrying at regular intervals. You can schedule and run any deduplication task during off-peak hours or idle time.

Sub-file Segmentation

The process segments files into variable-sized chunks, for example between 32 KB and 128 KB, using an algorithm developed with Microsoft Research. The segmentation splits each file into a sequence of chunks based on its content; a Rabin fingerprint, a sliding-window hash, helps identify the chunk boundaries.

The average chunk size is 64 KB; chunks are compressed and placed into a chunk store hidden inside the System Volume Information (SVI) folder. Each original file is replaced by a reparse point, a pointer to the map of its data streams, which is used to reassemble the file when it is requested.
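To see the result of chunking on a volume, you can query the chunk store. A sketch, assuming deduplication is already enabled on D: and at least one optimization job has run:

# Overall savings reported for the volume
Get-DedupStatus -Volume "D:" | Format-List SavedSpace,SavingsRate,OptimizedFilesCount,InPolicyFilesCount

# Chunk store details: chunk and container counts, average chunk size, compression
Get-DedupMetadata -Volume "D:"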

BranchCache

Another benefit of deduplication is that its sub-file segmentation and indexing engine is shared with the BranchCache feature. This sharing matters because, on a running Windows Server whose data segments are already indexed, those segments can be sent quickly over the network as needed, saving a lot of traffic to the office or branch.

How Does Deduplication Affect Data Access?

Deduplication creates fragmentation because file segments end up spread across the disk, which increases seek time. As each file is processed, the filter driver works to maintain sequence by keeping its segments together rather than scattering them randomly. Deduplication also keeps a file cache so that repeated file segments are not read over and over, which helps quick file access. When multiple users access the same resource simultaneously, that access pattern speeds things up for every user.

  • Not much difference is noticeable when opening an Office document; users cannot tell whether the feature is running or not.
  • When copying a single bulky file, the end-to-end copy can take about 1.5 times as long as it would for a non-deduplicated file.
  • When transferring multiple bulky files simultaneously, the cache can make the transfer roughly 30% faster.
  • When the file-server load simulator (File Server Capacity Tool) is used to test multiple file access scenarios, a reduction of about 10% in the number of users supported is observed.
  • Data optimization runs at roughly 20-35 MB/sec per job, which translates to about 100 GB/hour for a single 2 TB volume running on one CPU core with 1 GB of RAM. Multiple volumes can be processed in parallel if additional CPU, disk, and memory resources are available.

Reliability and Risk Preparedness

Even when you configure the server environment using RAID, there is a risk of data corruption and loss caused by disk malfunctions, controller errors, and firmware bugs. Other environmental risks to stored data include radiation and disk vibration. Deduplication raises the stakes of disk corruption: if a single file segment referenced by thousands of files lands in a bad sector, thousands of users’ files could be affected at once.

Backups

The Windows Server Backup tool is supported, and a selective file restore API enables backup applications to pull individual files out of an optimized backup.

Detect and Report

When a deduplication filter comes across a corrupted file or section of the disk, a quick checksum validation will be done on data and metadata. This validation helps the process to recognize data corruption during file access, hence reducing accumulated failures.

Redundancy

An extra copy of critical data is created: any chunk referenced more than 100 times is treated as a popular chunk and stored redundantly.

Repair

The deduplication process and its host volumes are inspected weekly to scrub for logged errors and, where possible, repair them from alternate copies. An optional deep scrub job walks the entire data set, identifying errors and fixing them when it can.

When disks are configured to mirror each other, deduplication looks for a good copy on the other side and uses it as the replacement. If no alternative exists, the data is recovered from an existing backup. Scanning for and fixing errors is a continuous process once deduplication is active.
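Scrubbing can also be started on demand. A sketch, assuming the deduplicated volume is D: (the -Full switch walks the whole data set, matching the optional deep scrub described above):

# Standard scrub of logged errors
Start-DedupJob -Volume "D:" -Type Scrubbing

# Deep scrub of the entire data set
Start-DedupJob -Volume "D:" -Type Scrubbing -Full

# Watch the progress of running jobs
Get-DedupJob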

Verdict on Deduplication

Some of the features described above do not work in all Windows Server 2012 editions and may be subject to limitations. Deduplication was built for NTFS data volumes, so it cannot be used on boot or system volumes, and it cannot be used with Cluster Shared Volumes (CSV). Live virtual machines (VMs) and active SQL databases are also not supported by deduplication.

Deduplication Data Evaluation Tool

To help you understand your own environment before enabling the feature, Microsoft provides a portable evaluation tool, DDPEval.exe, which is installed into the \Windows\System32\ directory. The tool runs on Windows 7 and later Windows operating systems and supports local drives as well as mapped and unmapped remote shares. If you are using a Windows NAS or an EMC/NetApp NAS, you can run it against a remote share.
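Typical invocations look like the following (the drive letter and the FILESRV01 server and share names are placeholders for your own paths):

DDPEval.exe E:\
DDPEval.exe \\FILESRV01\Share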

Conclusion

The native Windows Server deduplication feature is becoming increasingly popular, and it matches the needs of a typical server administrator running production deployments. However, planning before implementation is necessary, because there are situations in which deduplication is not applicable.

Windows Server 2016 – What’s New in Data Deduplication

Deduplication eliminates repeating data so that only a single instance is stored. Keeping a single instance improves storage utilization and also helps on networks with heavy transfer volumes.

Some may confuse deduplication with data compression, which identifies repeated data within single files and encodes the redundancy. In simple terms, deduplication is a continuous process that eliminates excess copies of data across a data set, thereby decreasing storage demands.

Data deduplication applies to Windows Server (Semi-Annual Channel) and Windows Server 2016. Data deduplication in Windows Server 2016 is highly optimized, manageable, and flexible.

The new elements of data deduplication in Windows Server 2016 are:

The Updated Features

Support for Large Volumes

The feature was present in earlier versions, but volumes had to be partitioned so that no single volume held more than about 10 TB of data. In Windows Server 2016, data deduplication supports volume sizes of up to 64 TB.

What is the Added Value?

The volumes in Windows Server 2012 R2 had to be appropriately partitioned into the correct sizes to ensure optimization could keep up with the rate of data change. The practical implication was that data deduplication only worked well on volumes of 10 TB or less, with the exact figure depending on the workload’s write patterns.

What is Different?

Windows Server 2012 R2 uses a single thread and a single I/O queue for every volume. To make sure optimization jobs do not fall behind and hurt the volume’s overall savings rate, large data sets have to be broken into smaller volumes. The appropriate volume size depends on the expected churn; the maximum is roughly 6-7 TB for high-churn volumes and 9-10 TB for low-churn volumes.

Windows Server 2016 works differently: data deduplication runs on multiple threads and uses multiple I/O queues for each volume, achieving a level of parallelism that previously was only possible by dividing data across several smaller volumes.

Support for Large Files

In earlier versions, files approaching 1 TB in size were not eligible for deduplication. Windows Server 2016 supports deduplication of files up to 1 TB.

What is the Added Value?

In Windows Server 2012 R2, large files could not reasonably be used with deduplication because performance in the deduplication processing queue degraded. In Windows Server 2016, deduplication of files up to 1 TB is possible, enabling savings on large workloads, for example, deduplicating large backup files.

What is Different?

The Windows Server 2016 deduplication process uses new streaming and mapping structures to improve deduplication throughput and access times. The process can also resume optimization after a failure instead of restarting from scratch, and files up to 1 TB in size are processed.

The New Features

Support for Nano Server

Nano server support is a new feature and is available in any Nano Server Deployment option that features Windows Server 2016.

What is the Added Value?

Nano Server is a headless deployment option in Windows Server 2016 that has a much smaller resource footprint, starts up significantly faster, and requires fewer updates and restarts than the Windows Server Core deployment option.

Simple Backup Support

Windows Server 2012 R2 supported virtualized backup applications, such as Microsoft Data Protection Manager, only after careful manual configuration. Windows Server 2016 adds a new default usage type that allows seamless data deduplication for virtualized backups.

What is the Added Value?

In earlier versions of Windows Server, you had to manually tune the deduplication settings to make this work, whereas Windows Server 2016 has a simplified process for virtualized backup applications: it enables deduplication for such a volume just as simply as for a general-purpose file server.
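A sketch of how this looks in PowerShell, assuming E: holds backup data and F: holds a VDI/Hyper-V library (the drive letters are examples):

# Backup usage type: tuned for virtualized backup applications such as DPM
Enable-DedupVolume -Volume "E:" -UsageType Backup

# HyperV usage type: tuned for VDI / Hyper-V virtual disk storage
Enable-DedupVolume -Volume "F:" -UsageType HyperV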

Support Clusters Operating System Rolling Upgrade

Data deduplication supports the new Cluster OS Rolling Upgrade feature of Windows Server 2016.

What is the Added Value?

Failover clusters can now run a mix of nodes with the Windows Server 2012 R2 version of deduplication alongside nodes running the Windows Server 2016 version of deduplication.

The improvement adds full access to deduplicated data during a rolling upgrade; this allows the gradual rollout of the new version of data deduplication on an existing Windows Server 2012 R2 cluster without incurring downtime during the upgrade.

What is Different?

In earlier versions of Windows Server, a failover cluster required that all nodes run the same Windows Server version. In Windows Server 2016, rolling upgrades allow a cluster to run in mixed mode.

What’s New in Storage in Windows Server 2019 and 2016

Windows Server 2016 and 2019 both add new storage features, most notably the Storage Migration Service. The migration service keeps an inventory of data when moving from one platform to another and carries essential details, such as security settings and other configuration, from the old system to the new server installation.

This article explains what is new and what has changed in the storage features of Windows Server 2016, Windows Server 2019, and the Semi-Annual Channel releases.

We will start by highlighting some of the features added by the two server systems.

Managing Storage with Windows Admin Center

Windows Admin Center is a browser-based app that acts as a central location for managing servers, clusters, Windows 10 PCs, and hyper-converged infrastructure, including their storage. It does this as part of the new server configuration experience.

Windows Admin Center ships separately and runs on Windows Server 2019 and several other versions of Windows; we cover it first because it is new and easy to miss.

Storage Migration Service

The Storage Migration Service is a new technology that makes it easy to move from an old server to a new server version. Everything happens through a graphical interface that inventories data on the source servers, transfers data and configuration to the new servers, and then optionally moves the old server identities to the new ones so that apps and users do not have to change their settings.

Storage Spaces Direct (Available in Server 2019 only)

Several improvements have been made to Storage Spaces Direct in Windows Server 2019 (they are not included in Windows Server, Semi-Annual Channel). Here are some of them:

Deduplication and Compression of ReFS Volume

You can store up to 10X more data on the same storage space using deduplication and compression for the ReFS file system. You only need to turn it on with a single click in Windows Admin Center.

Variable-size chunking with optional compression maximizes savings rates, while multi-threaded post-processing keeps the performance impact low. Volumes of up to 64 TB are supported, with individual files of up to 1 TB.
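A sketch of how this can be enabled from PowerShell on a Storage Spaces Direct cluster; the pool wildcard, the volume name Archive01, and the 10 TB size are placeholders, and in practice a single click in Windows Admin Center achieves the same result:

# Create a clustered ReFS volume in the S2D pool
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Archive01" -FileSystem CSVFS_ReFS -Size 10TB

# Enable deduplication and compression on the new volume
Enable-DedupVolume -Volume "C:\ClusterStorage\Archive01" -UsageType Default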

Native Support for Persistent Memory

Unlock more performance with native Storage Spaces Direct support for persistent memory modules, including Intel Optane DC PM and NVDIMM-N. Use persistent memory as a cache to accelerate the active working set, or as capacity where low latency matters. Manage persistent memory the same way you would any other storage device, in Windows Admin Center or PowerShell.

Nested Resiliency for Two-Node Hyper-Converged Infrastructure on the Edges

An all-new software resiliency option, inspired by RAID 5+1, helps survive two hardware failures at once. With nested resiliency, a two-node Storage Spaces Direct cluster can offer continuously accessible storage for programs and virtual machines even when one server node fails.

Two-Server Cluster Using USB Flash Drive as a Witness

Use a low-cost USB flash drive plugged into your router as the witness between the two servers in a cluster. If a server goes down and then comes back, the USB drive witness lets the cluster know which server holds the most up-to-date data.

Windows Admin Center

Managing and monitoring Storage Spaces Direct with the newly built dashboard lets you create, delete, open, and expand volumes with a few clicks. You can follow IOPS and IO latency from the level of the whole cluster down to individual hard disks and SSDs.

Performance Log

You can see what your server has been up to in terms of resource utilization and performance using the built-in history feature. More than 50 counters covering memory, compute, storage, and network are collected automatically and kept on the cluster for a full year.

There is nothing to install, configure, or start; the feature just works.

Scale up to 4 PB for Every Cluster

Windows Server 2019 takes Storage Spaces Direct to multi-petabyte scale, which makes sense for media servers and for backup and archiving: it supports up to 4 petabytes (PB), the same as 4,000 terabytes, per cluster.

Other capacity guidelines are increased as well; for instance, you can create up to 64 volumes instead of 32. Clusters can also be stitched together into a cluster set so that scaling fits within one storage namespace.

Accelerated Parity is now 2X Faster

You can create Storage Spaces Direct volumes that are part mirror and part parity, for example mixing RAID-1 and RAID-5/6 to harness the advantages of both. In Windows Server 2019, the performance of mirror-accelerated parity is twice that of Windows Server 2016, thanks to optimizations.

Drive Latency Outline Detection

Find out which drives have abnormal latency using proactive monitoring and built-in outlier detection, an approach inspired by Microsoft Azure. Failing drives are labeled automatically in PowerShell and Windows Admin Center.

Manual Delimiting of the Allocation of Volumes to Increase Fault Tolerance

The admin can manually delimit the allocation of volumes in Storage Spaces Direct. Delimiting can increase fault tolerance in specific circumstances, at the cost of added management considerations and complexity.

Storage Replica

The storage replica has the following improvements:

Storage Replica in Windows Server, Standard Edition

It is now possible to use Storage Replica with Windows Server Standard Edition as well as the Datacenter edition. Running Storage Replica on Windows Server Standard Edition has the following limitations:

  • Storage Replica replicates a single volume rather than an unlimited number of volumes
  • Volumes can be up to 2 TB in size rather than an unlimited size

Storage Replica Log Performance Improvements

Improvements in how Storage Replica logs track replication increase replication throughput and lower latency, including on Storage Spaces Direct clusters that replicate with each other.

To get the increased performance, all members of the replication group must run Windows Server 2019.

Test Failover

Mount a temporary snapshot of the replicated storage on the destination server for testing or backup purposes.
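A sketch of a test failover, assuming a hypothetical replication group RG02 on a destination server SR-SRV02 and a spare volume T: to hold the writable snapshot:

# Mount a writable snapshot of the replicated data on the destination server
Mount-SRDestination -Name "RG02" -ComputerName "SR-SRV02" -TemporaryPath "T:\"

# ... run backup or test workloads against the mounted copy ...

# Discard the snapshot when finished
Dismount-SRDestination -Name "RG02" -ComputerName "SR-SRV02"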

Windows Admin Center Support

Graphical management of replication is now supported through the Server Manager tool in Windows Admin Center. This covers server-to-server replication, cluster-to-cluster replication, and stretch cluster replication.

Miscellaneous Improvements

Storage Replica also has the following improvements:

  • Changes asynchronous stretch cluster behavior so that automatic failover takes place.
  • Multiple bug fixes

SMB

SMB1 and Guest Authentication Removal

Windows Server no longer installs the SMB1 client and server by default, and the ability to authenticate as a guest in SMB2 is off by default.
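On upgraded or older installations you can verify and enforce the same state yourself. A sketch, run from an elevated PowerShell session:

# Check whether the SMB1 feature and protocol are still present
Get-WindowsFeature FS-SMB1
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Remove SMB1 and keep insecure guest logons disabled
Uninstall-WindowsFeature -Name FS-SMB1
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force
Set-SmbClientConfiguration -EnableInsecureGuestLogons $false -Force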

SMB2/SMB3 Security and Compatibility

More options for security and application compatibility were added, including the ability to disable oplocks in SMB2+ for old applications and to require signing or encryption on every connection from a client.

Data Deduplication

Data Deduplication Supports ReFS

You no longer have to choose between the advantages of a modern file system with ReFS and Data Deduplication: Data Deduplication can now be enabled on ReFS volumes.

Data Port API for Optimized Ingress/egress to Deduplicated Volumes

As a developer, you can now take advantage of data deduplication’s knowledge of how data is stored to move data in and out of deduplicated volumes efficiently, using the new Data Port API.

File Server Resource Manager

Windows Server 2019 can prevent the File Server Resource Manager service from creating a change (USN) journal on storage volumes. This conserves space on every volume, but it disables real-time file classification.

The same change applies to Windows Server, version 1803.

What’s New in Storage in Windows Server, Version 1709

Windows Server, version 1709 is the first Windows Server release in the Semi-Annual Channel, a channel in which each release is fully supported in production for 18 months and a new version arrives every six months.

Storage Replica

Storage Replica, the disaster recovery and protection feature, has been expanded to include:

Test Failover

You now have the option of mounting the destination storage through a test failover. The snapshots can be mounted temporarily for both testing and backup purposes.

Windows Admin Center Support

Graphical management of replication is now supported; you access it through the Server Manager tool in Windows Admin Center.

Storage Replica also has the following improvements:

  • Changes asynchronous stretch cluster behavior to enable automatic failover
  • Multiple bug fixes

What’s New in Storage in Windows Server 2016

Storage Spaces Direct

Storage Spaces Direct provides highly available and scalable storage using servers with local storage. It simplifies the deployment and management of software-defined storage systems and unlocks the use of new classes of storage devices, including SATA SSD and NVMe disks, which were previously not possible with clustered Storage Spaces and shared disks.

What Value Does the Change add?

The Storage Spaces Direct allows service providers and enterprises to use industry standard servers with local storage. The idea is to build highly available and scalable software-defined storage.

Using servers with local storage decreases complexity, increases scalability, and allows the use of storage devices such as SATA solid-state disks, lowering the cost of flash storage, or NVMe solid-state disks for better performance.

Storage Spaces Direct removes the need for a shared SAS fabric, which simplifies deployment and configuration. Instead, the servers use the network as the storage fabric, leveraging SMB3 and SMB Direct (RDMA) for high speed, low latency, and efficient use of the CPU.

Adding more servers to the configuration increases both storage capacity and I/O performance. Storage Spaces Direct itself is new functionality in Windows Server 2016.

Storage Replica

Storage Replica enables storage-agnostic, block-level replication between servers, as well as the stretching of failover clusters between sites. Synchronous replication mirrors data across physical sites with crash-consistent volumes to ensure no data is lost at the file system level; asynchronous replication increases the possibility of data loss.

What Value Does the Change Add?

Provide a single-vendor disaster recovery solution for planned and unplanned outages

Use SMB3 transport with proven performance, scalability, and reliability

  • Stretch Windows failover clusters further
  • Use Microsoft end-to-end software for storage and clustering, such as Hyper-V, Scale-Out File Server, Storage Replica, Storage Spaces, ReFS/NTFS, and Deduplication
  • Help reduce complexity and cost by:
  • Being hardware agnostic, with no specific requirement for a storage configuration such as DAS or SAN
  • Allowing the use of commodity storage and networking technologies
  • Featuring an easy graphical management interface for nodes and clusters through Failover Cluster Manager
  • Including comprehensive, large-scale scripting options through Windows PowerShell
  • Help reduce downtime and increase productivity at scale
  • Provide supportability, performance metrics, and diagnostic capabilities

What Works Differently

This functionality is new in Windows Server 2016.
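A minimal server-to-server sketch using the Storage Replica cmdlets; the computer names, replication group names, and volume letters below are placeholders for your own environment:

# Validate the proposed topology before replicating
Test-SRTopology -SourceComputerName "SR-SRV01" -SourceVolumeName "D:" -SourceLogVolumeName "L:" -DestinationComputerName "SR-SRV02" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" -DurationInMinutes 30 -ResultPath "C:\Temp"

# Create the partnership; D: is replicated, L: holds the replication log on each side
New-SRPartnership -SourceComputerName "SR-SRV01" -SourceRGName "RG01" -SourceVolumeName "D:" -SourceLogVolumeName "L:" -DestinationComputerName "SR-SRV02" -DestinationRGName "RG02" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"

# Check replication progress
(Get-SRGroup).Replicas | Select-Object DataVolume,ReplicationMode,ReplicationStatus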

Storage Quality of Service

You can use Storage Quality of Service (QoS) to centrally monitor end-to-end storage performance and to create management policies using Hyper-V and CSV clusters in Windows Server 2016.

What Value Does the Change Add?

You can create QoS policies on a CSV cluster and assign them to one or more virtual disks of Hyper-V virtual machines. Storage performance automatically readjusts to meet the policies as workloads and storage loads fluctuate.

  • Each policy can specify a minimum reserve or a maximum limit that applies to a collection of data flows, such as a single virtual hard disk, a virtual machine, a service, or a tenant.
  • Use Windows PowerShell or WMI to perform the following (see the sketch after this list):
  • Create policies on a CSV cluster
  • Assign a policy to a virtual hard disk and check its status within the policies
  • Enumerate policies on the CSV cluster
  • Monitor flow performance and the status of each policy
  • If several virtual hard disks share the same policy, performance is distributed fairly to meet demand within the policy’s minimum and maximum settings, so a policy can manage a single virtual hard disk, several disks, a whole virtual machine, or multiple virtual machines that make up a service owned by a tenant.
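A small sketch of that workflow; the policy name, IOPS limits, and VM name are examples, the policy is created on the CSV/Scale-Out File Server cluster, and the assignment runs on the Hyper-V host:

# Create a policy with a minimum reserve of 100 IOPS and a cap of 500 IOPS
$policy = New-StorageQosPolicy -Name "Gold" -PolicyType Dedicated -MinimumIops 100 -MaximumIops 500

# Assign the policy to every virtual hard disk of a VM
Get-VM -Name "VM01" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Enumerate policies and monitor flow status against them
Get-StorageQosPolicy
Get-StorageQosFlow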

What Works Differently

This is a new feature in Windows Server 2016. Managing minimum reserves, monitoring the flows of all virtual disks across a cluster with a single command, and central policy-based management were not possible in previous Windows Server releases.

Data Deduplication

The changes at a glance (function, whether it is new or updated, and a short description):

  • Support for large volumes (updated): Before Windows Server 2016, volumes had to be sized carefully, and anything above 10 TB did not qualify for deduplication. Windows Server 2016 supports deduplicated volumes of up to 64 TB.
  • Large file support (updated): Before Windows Server 2016, files approaching 1 TB could not be deduplicated. Windows Server 2016 supports deduplication of files up to 1 TB.
  • Nano Server support (new): Deduplication is available and fully supported in the Nano Server deployment option of Windows Server 2016.
  • Simple backup support (new): Windows Server 2012 R2 supported virtualized backups, such as Microsoft Data Protection Manager, only after manual configuration. In Windows Server 2016, simple, seamless backup support is built in.
  • Cluster OS Rolling Upgrade support (new): Deduplication supports Cluster OS Rolling Upgrade and is available in Windows Server 2016.

SMB Hardening Improvements for SYSVOL and NETLOGON Connections

Client connections from Windows 10 and Windows Server 2016 to the SYSVOL and NETLOGON shares on Active Directory domain controllers now require SMB signing and mutual authentication via Kerberos.

What does this Change Add?

It reduces the possibility of man-in-the-middle attacks

What Works Differently?

If SMB signing and mutual authentication are unavailable, a Windows 10 or Windows Server 2016 computer will not process domain-based Group Policy scripts. Note that the registry values for these settings are not present by default, yet the hardening rules still apply until they are overridden through Group Policy or the relevant registry values.

Work Folders

Change notifications are now used when the Work Folders server is running Windows Server 2016 and the Work Folders client is running Windows 10.

What Value Does this Change Add?

In Windows Server 2012 R2, when file changes are synchronized to the Work Folders server, clients are not notified of the changes and may wait up to 10 minutes to receive the update.

When the server runs Windows Server 2016, the Work Folders server immediately notifies Windows 10 clients, and the changes are synchronized immediately.

What Works Differently

This is a new feature in Windows Server 2016, and the client accessing the Work Folders share must run Windows 10. If you are using older clients, or if the Work Folders server runs Windows Server 2012 R2, the client will simply poll every 10 minutes for new changes.

ReFS

Next is ReFS, which offers support for large-scale storage deployments with varying workloads, providing reliability, resilience, and scalability for your data.

What Values Does the Change Add?

ReFS brings in the following improvements:

  • New storage tiers that help deliver faster performance and increased capacity. This functionality further enables:
  • Multiple resiliency types on the same virtual disk (through mirror and parity tiers)
  • Enhanced responsiveness to drifting working sets
  • Block cloning, which substantially improves VM operations such as .vhdx checkpoint merge operations.
  • A ReFS scanning tool that enables the recovery of leaked storage and helps salvage data from corruption.

What Works Differently?

These functionalities are new in Windows Server 2016.

Conclusion

Windows Server 2019 offers many features; this article covered those that are fully supported. At the time of writing, some features were only partially supported in earlier versions but are getting full support in the latest Server releases. From this overview, you can see that Windows Server 2019 is a worthwhile upgrade.

Windows Server Disk Quota – Troubleshooting

DISK QUOTA CHALLENGES AND TROUBLESHOOTING

Disk quotas come in handy and allow system administrators to equitably distribute disk space among multiple users in shared servers or PCs. This avoids a situation where a careless user ends up filling the entire hard drive and wreaking havoc in the system. However, quotas do not always work as intended.

As easy as setting up disk quotas may seem, things sometimes go a bit askew. Occasionally, users appear to get less disk space than was specified in the settings. This genuinely happens when the server runs out of space, but there are also situations where users merely get the impression that they have received less space than was configured. The reason is a common misconception about what counts against a quota: quotas take into account all files owned by a user, and that includes files in the Recycle Bin. This is especially relevant when disk quotas are implemented on local PCs, because the Recycle Bin resides on the PC, so this discrepancy is very likely to occur.

Another unusual situation is space remaining unavailable even after a user relinquishes ownership of files. A user may create a file and change its ownership, yet the file can still be counted against the quota.

Another confusing scenario is the use of compressed folders. Windows looks at compressed folders not in their compressed size, but rather, in their original size. This means that quotas look at compressed files in their original uncompressed format, not according to the current size they occupy on the hard drive in their compressed format.

Sometimes, when the disk space limit is exceeded, the user may find that deleting files on the volume does not free up space as expected. This has been observed in Windows Server 2008 R2 and happens because the file context structure is not filled in correctly when the files are deleted.

As a solution to this issue, Microsoft released a hotfix which can be downloaded from their official site via this link https://support.microsoft.com/hotfix/kbhotfix.aspx?kbnum=2679054&kbln=en-US

Once you apply the hotfix, run the command below

dirquota quota scan /path:d:\users\scratch

For instance, the above command rescans the quota for the scratch folder located in the users directory on drive D:.

After running the command, reboot the system to effect the hotfix settings.


If a user’s hard drive is formatted with the FAT or FAT-32 filesystem, it will need to be reformatted to NTFS, since NTFS is the only filesystem that understands quotas and file ownership. This compels the system administrator to first back up the files contained in the FAT and FAT-32 partitions and then format the volumes as NTFS, which can be tedious and cumbersome. It is therefore important to ensure all volumes are formatted as NTFS from the start if you plan to have several users storing or backing up data on the system, because disk quotas work on NTFS volumes only.

Windows Server Disk Quota – Setup and Configure

In the previous post, we looked at the disk quota functionality and how quotas are handy in limiting disk space utilization on shared systems. This is crucial in ensuring that all users get a fair space allocation and that system performance stays at an optimal level. In this post, we’ll take a practical approach and see how to manage and control disk space utilization to prevent users from filling up the hard disk and leaving no space for anyone else.

To recap the important points about disk quotas: quotas can only be applied to volumes formatted with the NTFS filesystem; they are mostly used in corporate networks but can also be used on a home PC running Windows, including basic Windows 10 Home; and you can set quotas per individual user or apply them to everyone, but you cannot set limits on groups. As a best practice, quotas should be configured per volume rather than per computer, and once in place, newly added users pick them up as expected. That said, let’s dive deeper and see how you can implement this functionality to manage and control hard drive space utilization.

Setting Quota Limits

Although implementing quotas can be done on any disk volume, it can prove quite tricky setting limits on Drive C, which is the Windows installation volume. Try as much as possible to enable quotas on secondary volumes or partitions and plan accordingly. There are two ways of setting quotas. You can set them per account or on a volume basis. Let’s see how you can set quotas on Account basis:

Setting up Quota Per Account basis on Windows

If you want to set disk space limit on end users, while at the same time having your account occupy unlimited space, follow the steps outlined below:

  1. Fire up the File Explorer. This is done by using the (Windows key + E) shortcut.
  2. On Windows 10, Locate This PC tab and click on it.
  3. Under “Devices and drives,” right-click on the preferred drive that you wish to manage. In the menu that appears, select Properties option.
  4. Select the Quota tab.
  5. Click the Show Quota settings tab.
  6. The quota settings window will open. Check the first option, which is the Enable quota management option.
  7. Just below the option in 6 above, Locate and Check the Deny disk space to users exceeding quota limit option. This option enables disk space limitation.
  8. Next, click on the Quota Entries button at the bottom right corner of the window.
  9. In case the account you want to restrict is not listed, click Quota, and select New Quota Entry.
  10. In the “Select Users” tab, click on the Advanced button. This displays a pop up window
  11. Next, click on the Find Now button.
  12. At the bottom of the window, a list of user accounts will appear. Select the account you want to set limits on.
  13. Next, Press OK.
  14. Press OK again in the previous Window.
  15. Select the Limit disk space to radio button option.
  16. Set the desired volume of space you’d want and specify the restriction unit size (for instance, MB, GB or TB).
  17. Set the preferred space size before a warning is triggered and specify the size unit (for instance, MB, GB or TB).
  18. Click on Apply option.
  19. Finally, Click on OK.

After completing the above procedure, the quotas take effect as soon as users log in. Users will be restricted to the amount of disk space you set and will get a warning when approaching the limit, as specified in steps 16 and 17 above.
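If you prefer the command line, the same per-user limit can be set with fsutil. A sketch, assuming the volume is D:, a 10 GB hard limit with a 9 GB warning threshold (values are given in bytes), and a hypothetical CONTOSO\jsmith account:

# Turn on quota tracking and enforcement for the volume
fsutil quota enforce D:

# Set a 9 GB warning threshold and a 10 GB hard limit for one user
fsutil quota modify D: 9000000000 10000000000 CONTOSO\jsmith

# Review the current quota entries on the volume
fsutil quota query D: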

Setting up Quota Per Volume basis on Windows

Should you decide to limit the available storage space for all users, follow the steps outlined below:

  1. Fire up File Explorer. This is done by using the (Windows key + E) shortcut.
  2. On Windows 10, Locate This PC tab and click on it.
  3. Under the “Devices and drives,” section, right-click on the preferred drive that you wish to manage. In the menu that appears, select Properties option.
  4. Hit on the Quota tab.
  5. Click the Show Quota Settings tab.
  6. The quota settings window will open. Check the first option, which is the Enable quota management option.
  7. Next, Locate and Check the Deny disk space to users exceeding quota limit option. This option enables disk space limitation.
  8. Select the Limit disk space to option.
  9. Set the desired amount of space and specify the size unit (e.g., MB, GB).
  10. Set the amount of space before a warning is triggered to the user and specify the size unit (for example, MB or GB).
  11. Click Apply.
  12. Click OK.
  13. Finally, Reboot your computer.

Once you complete the above procedure, every account on the system will only be able to use its portion of the total available disk space. A warning will be triggered to alert users that they are approaching their maximum storage quota. Once the threshold is reached, users can no longer create or store any more files on the volume; they will either have to delete existing files or move them to another location.

You can always adjust the storage quota, up or down, by changing the Limit disk space to and Set warning level to options in step 8.

If you decide that you no longer want to restrict how much space users can consume on a drive, you can follow the same instructions, but in step 8 select the Do not limit disk usage option and uncheck both the Deny disk space to users exceeding quota limit and Enable quota management options.

In summary, we have seen how you can plan and implement disk quotas on Windows Systems, both on per user account and volume basis. In the next post, we’ll see some of the challenges that are likely to occur and how you can go around them.

Windows Server Disk Quota – Overview

Windows Server system comes with a very handy feature that allows the creation of many user accounts on a shared system. This enables users to log in and have their own disk space and other custom settings. However, the drawback with this feature is that users have unlimited disk space usage, and with time, space eventually gets filled up leading to a slow or malfunctioning system, which is a real mess. Have you ever wondered how you can avert this situation and set user limits to disk volume usage?

Worry no more. To overcome the scenario described above, Windows came up with the disk quota functionality. This feature allows you to set limits on hard disk utilization so that users are restricted in how much disk space they can use for their files. The functionality is available for both Windows and Unix-like systems such as Linux, where it supports the ext2, ext3, ext4, and XFS filesystems. In Windows operating systems, it is supported in Windows 2000 and later versions. It is important to point out that in Windows this functionality can only be configured on NTFS file systems. So, if you are starting out with a Windows server or client system, you may want to consider formatting the volumes as NTFS to avert complications later on. Quotas can be applied to both client and server systems, such as Windows Server 2008, 2012, and 2016. In addition, quotas cannot be configured on individual files or folders; they can only be set on volumes, and the restrictions apply to those volumes only. To administer a disk quota, one must either be an administrator or have administrative privileges, that is, be a member of the Administrators group.

The idea behind setting limits is to prevent the hard disk from filling up and thereby causing the system or server to freeze or behave abnormally. When a quota is surpassed, the user receives an “insufficient disk space” error and cannot, therefore, create or save any more files. A quota is a limit normally set by the administrator to restrict disk space utilization. This prevents careless or unmindful users from filling up the disk and causing a host of other problems, including slowing down or freezing of the system. Quotas are ideally suited to enterprise environments where many users access the server to save or upload documents. An administrator will want to assign a maximum disk space limit so that end users are confined to uploading work files only, such as Word, PowerPoint, and Excel documents. The idea is to prevent them from filling the disk with non-essential personal files such as images, videos, and music, which take up a significant amount of space. A disk quota can be configured on a per-user or per-group basis. A perfect example of disk quota usage is in web hosting platforms such as cPanel or Vesta CP, where users are allocated a fixed amount of disk space according to their subscription plan.

When a disk quota system is implemented, users cannot save or upload files to the system beyond the limit threshold. For instance, if an administrator sets a limit of 10 GB of disk space for all logon users, no user can save files beyond that 10 GB limit. If a limit is exceeded, the only way out is to delete existing files, ask another user to take ownership of some files, or ask the administrator, who is the god of the system, to allocate more space. It is important to note that you cannot gain space by compressing files: quotas are based on uncompressed sizes, and Windows treats compressed files according to their original uncompressed size. There are two types of limits: hard limits and soft limits. A hard limit is the maximum space the system will grant an end user. If a hard limit of 10 GB is set on a hard drive, the end user can no longer create or save files once the 10 GB limit is reached. This restriction forces them to find an alternative storage location or delete existing files.

A soft limit, on the other hand, can temporarily be exceeded by an end user but should not go beyond the hard limit. As it approaches the hard limit, the end user will receive a string of email notifications warning them that they are approaching the hard limit. In a nutshell, a soft limit gives you a grace period but a hard limit will not give you one. A soft limit is set slightly below the hard limit. If a hard limit of, say 20G is set, a soft limit of 19G would be appropriate. It’s also worth mentioning that end users can scale up their soft limits up to the hard limit. They can also scale down their soft limits to zero. As for hard limits, end users can scale them down but cannot increase them. For purposes of courtesy, soft limits are usually configured for C level executives so that they can get friendly reminders when they are about to approach the Hard limit.

In summary, we have seen how handy disk quota is especially when it comes to a PC or a server that is shared by many users. Its ability to limit disk space utilization ensures that the disk is not filled up by users leading to malfunctioning or ‘freezing’ of the server. In our next topic, we’ll elaborate in detail how we apply or implement the quotas.

File System Attacks on Microsoft Windows Server

The most common attacks on Microsoft Windows Server systems are Active Directory targeted attacks, which follows from the fact that AD is the “heart” of any Windows-based system. A bit less common, but still very dangerous (and interesting), are attacks on the file system itself.

In this article, we investigate the most common forms of file system attack and the protections against them.

The goal of a file system attack is always the data: pieces of information stored on a server that are important, for whatever reason, to whoever planned the attack. To get to the data, the first thing an attacker needs is credentials, and the more elevated the account, the better.

We will not cover credential theft here, which is a topic in itself; instead, we will assume that the attacker has already breached the organization and obtained Domain Administrator credentials.

Finding File Shares

The first step is finding the data, the place where it “lives”.

This is where tools come to the front. Most of the tools attackers use are penetration testing tools, such as smbmap in our example, or PowerShell (we will show both ways).

SMBMap, as its GitHub page says, “allows users to enumerate samba share drives across an entire domain. List share drives, drive permissions, share contents, upload/download functionality, file name auto-download pattern matching, and even execute remote commands. This tool was designed with pen testing in mind, and is intended to simplify searching for potentially sensitive data across large networks”.

Using smbmap’s features, attackers can find all the file shares on the targeted hosts and determine what sort of access and permissions they grant, along with more detailed information about every file share on the system.

Another common way of determining the data location is PowerShell based.

By definition – PowerSploit is a collection of Microsoft PowerShell modules that can be used to aid penetration testers during all phases of an assessment.

Like smbmap, PowerSploit has a huge number of features. For finding data shares, attackers use the Invoke-ShareFinder function, which, in combination with other PowerSploit features, reveals the same things as smbmap: all of the information necessary to access and use the data.

Protection

The examples above are only a brief description of attacks that can list your data shares for a potential attacker, but it should be clear that listing your data is the first step toward taking it.

So here are some recommended actions to protect your system:

Removing open shares: reduce open shares as much as possible. It is fine to have some if a job explicitly requires them, but often open shares are simply the result of sloppy permissions. Review your default permissions (default permissions are effectively open), tighten them properly, and deny the potential attacker an easy listing.
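A quick way to audit this from the defender’s side is the built-in SMB cmdlets. A sketch, with FILESRV01 and the Public share as placeholder names:

# List non-administrative shares on a file server
Get-SmbShare -CimSession "FILESRV01" | Where-Object { -not $_.Special }

# Review share-level permissions on a specific share
Get-SmbShareAccess -Name "Public" -CimSession "FILESRV01"

# Remove an overly broad grant such as Everyone
Revoke-SmbShareAccess -Name "Public" -AccountName "Everyone" -CimSession "FILESRV01" -Force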

Monitor first-time access activity: this is more an admin tip than a protection method, but it can be important. If a user has rights to a share but has never used it, and all of a sudden the activity on that account changes and steps outside of “normal”, it could be a sign that the account’s credentials have been hijacked.

Check for potentially harmful software, not as malware, but as a hint. SmbMap is built in Python, so a sudden installation of Python software, or of a PowerSploit module, on your system could be an early alarm that something suspicious is going on on your servers.

Finding Interesting Data

Now the potential attacker knows where the data on our hypothetical server “lives”. The next step is narrowing the data down to what is “interesting”. There can be a huge number of files in even the smallest organization, so how can the attacker know which data is the data he or she needs?

With PowerSploit, the functionality used is called Invoke-FileFinder. It has a lot of filtering options for narrowing the data down to the “interesting” subset and can export results to CSV files, which lets the attacker explore them on his own system at whatever pace he wants. After identifying targets, the attacker can make a targeted attack, move the needed files to a staging area, and transport them out of the network (via FTP, or even a Dropbox trial account).

The same applies to SmbMap. Just like PowerSploit, it filters the data with the options the tool provides and shows the attacker the data he is interested in, with the same outcome: stolen information.

Protection

At this point, the hypothetical attack is in its second phase. The attacker has successfully listed the files and found the most interesting ones; the easy part, simply taking the data, is all that is left. How do you protect against that? Together with the methods mentioned earlier, the following can help an administrator fortify the system and its files.

Password rotation: this can be a very important measure, especially for services and applications that store passwords in the file system. Regularly rotating passwords and checking file contents presents a very large obstacle to the attacker and makes your system more secure.

Tagging and encryption: in combination with Data Loss Prevention, these will highlight and encrypt important data, which stops the simpler types of attack from getting at it.

Persistence

The final part of the file system attack. In our hypothetical scenario we have already listed and accessed data on the penetrated system; here we describe how attackers persist in the system even after they are kicked out the first time.

Attackers hide some of their data inside the NTFS file system, more precisely in Alternate Data Streams (ADS). The data of a file is stored in the $DATA attribute of that file as NTFS tracks it. Malware authors and other “bad guys” use ADS as an entry point, but they still need credentials.

As usual, they can be stopped by correct use of permissions, and by not granting “write” permission to any account that is not specifically assigned to write operations.

File system attacks are tricky, but they leave traces, and in general most of these attacks can be prevented by a system administrator’s behavior and foresight. In this field we can truly say it is better to prevent than to heal, and it is clear that only knowing your system fully, combined with continuous administration and monitoring, can make your system safe.


Introduction to Data Deduplication on Windows Server 2016

Data Deduplication is a Microsoft Windows Server feature, initially introduced in Windows Server 2012 edition. 

As a simple definition, data deduplication is the elimination of redundant data within a data set, storing only one copy of the same data. It is done by identifying duplicate byte patterns through data analysis, removing the duplicate data, and replacing it with a reference that points to the single stored copy.

In 2017, according to IBM, the world created about 2.5 quintillion (10^18) bytes of data every day. That fact shows that today’s servers handle huge amounts of data in every aspect of human life.

Some percentage of that is duplicated data in one form or another, and that data is nothing more than an unnecessary load on servers.

Microsoft saw the trend back in 2012, when Data Deduplication was introduced, and kept developing the feature, so in Windows Server 2016 Data Deduplication is both more advanced and more important.

But let’s start with 2012 and understand the feature in its basics.

Data Deduplication Characteristics: 

Usage: Data deduplication is very easy to use. It can be enabled on a data volume in “one click”, with no delays or impact on system functionality. In simple words, if a user requests a file, he will get it as usual, no matter whether that file has been affected by the deduplication process.

Deduplication is not aimed at all files. For example, files smaller than 32 KB, files encrypted with EFS, and files that have extended attributes are not affected by the deduplication process.

If a file has an alternate data stream, only the primary stream is optimized; the alternate stream is not.

Deduplication can be used on primary data volumes without affecting files that are actively being written to, until those files reach a certain age. This keeps performance high for active files while still producing savings on the rest. Files are sorted into categories by criteria, and those categorized as “in policy” are processed by deduplication, while the others are not.

Deduplication does not change the write path of new files. New files are written directly to NTFS and evaluated later by a background monitoring process.

The MinimumFileAgeDays setting (configured by the administrator) decides when files become eligible for deduplication. The default is 5 days, but it can be lowered to a minimum of 0 days, in which case files are processed regardless of age.

Some file types can be excluded, such as PNG or already-compressed CAB files, if it is decided that the system will not benefit much from processing them.
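
Both of these settings are exposed through the Set-DedupVolume cmdlet. A minimal sketch, assuming E: is the deduplicated volume:

# Process files regardless of age and skip file types that will not benefit much
Set-DedupVolume -Volume E: -MinimumFileAgeDays 0 -ExcludeFileType png,cab

# Review the effective settings for the volume
Get-DedupVolume -Volume E: | Format-List MinimumFileAgeDays, ExcludeFileType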

Backing up and restoring to another server does not cause problems for deduplication. All settings are maintained on the volume itself, so if the volume is relocated, they move with it, except for the schedule settings, which are not written to the volume. If the volume is moved to a server that does not use deduplication, users will not be able to access the files affected by the process.

Resource Control 

The feature is designed to follow the server workload and adapt to available system resources. Servers usually have roles to fulfil, and from the administrator's point of view storage exists only to hold data in the background, so deduplication adapts to that philosophy. If resources are available, the process runs; if not, it stands by and waits for resources to free up.

The feature is designed to use few resources and to reduce input/output operations per second (IOPS), so it can scale to large data sets and improve performance, with an index footprint of only about 6 bytes of RAM per chunk (average chunk size 64 KB) and temporary partitioning.

– Chunking: as mentioned, deduplication works on the principle of chunks. An algorithm splits a file into pieces of roughly 64 KB, compresses them, and stores them in a hidden folder (the chunk store). When a user requests that file, it is reassembled from the pieces and served to the user.

– BranchCache: a feature that shares the same sub-file chunking and indexing engine. When needed, already-indexed chunks can be sent over the WAN to a branch office, saving a lot of time and bandwidth.

Is there fragmentation, and what about data access?

The question that comes up when reading about deduplication is fragmentation.

Does spreading chunks around the hard drive cause fragmentation?

The answer is no. Deduplication's filter driver keeps sequences of unique chunks together on disk, so distribution is not random. Deduplication also has its own cache, so when a file is requested multiple times across an organization, the access pattern speeds things up instead of starting multiple file “recovery” processes, and the user sees the same response time as with a non-deduplicated file. When copying one large file, end-to-end copy times can be around 1.5 times what they are on a non-deduplicated volume, but the real savings appear when copying multiple large files at the same time: thanks to the cache, copy times can improve by up to 30%.

Deduplication Risks and Solutions

Of course, like any other feature, this way of working carries some risks.

Any kind of data corruption poses a serious risk, but there are solutions as well.

Errors caused by disk anomalies, controller errors, firmware bugs, or environmental factors such as radiation or disk vibration can corrupt chunks, and because a single chunk is shared by many files, this can lead to the loss of multiple files. However, good administrative practice, backup tools, timely corruption detection, redundant copies, and regular checks can minimize the risk of corrupted data and data loss.

Deduplication in Windows Server 2016 

As with other features, data deduplication received several upgrades and new capabilities in the latest edition of Windows Server.

We will describe the most important ones and show how to enable and configure the feature in a Windows Server 2016 environment.

Multithreading  

Multithreading is flagged as the most important change in 2016 compared with Windows Server 2012 R2. In Server 2012 R2, deduplication operates in single-threaded mode and uses one processor core per volume. Microsoft saw this as a performance limit, and in 2016 introduced multi-threaded mode: each volume now uses multiple threads and multiple I/O queues. This also changed the size limits per file and volume. In Server 2012 R2 the maximum volume size was 10 TB; in the 2016 edition it grew to 64 TB volumes and 1 TB files, which represents a huge step forward.

Virtualization Support 

In the first edition of the deduplication feature (Windows Server 2012), there was a single type of deduplication, created only for standard file servers, with no support for constantly running VMs.

Windows Server 2012 R2 started using the Volume Shadow Copy Service (VSS): deduplication optimizes data through its optimization jobs, while VSS captures and copies stable volume images for backup on running systems. With VSS, Microsoft introduced virtual machine deduplication support in 2012 R2 as a separate deduplication type.

Windows Server 2016 went one step further and introduced another type of deduplication, designed specifically for virtualized backup servers such as DPM.

Nano server support  

Nano Server is a minimal-footprint, fully operational installation option of Windows Server 2016, similar to Server Core but smaller and without GUI support, ideal for purpose-built cloud-based apps, infrastructure services, or virtual clusters.

Windows Server 2016 fully supports the deduplication feature on this type of server.

Cluster OS Rolling Upgrade support 

Cluster OS Rolling Upgrade is a Windows Server 2016 feature that allows cluster nodes to be upgraded from Windows Server 2012 R2 to Windows Server 2016 without stopping Hyper-V, using so-called “mixed mode” operation of the cluster. From the deduplication angle, this means the same data can sit on nodes running different versions of deduplication. Windows Server 2016 supports mixed mode and provides access to deduplicated data while the cluster upgrade is in progress.

Installation and Setup of Data Deduplication on Windows Server 2016 

In this section, we give an overview of best-practice installation and setup of data deduplication on a Windows Server 2016 system.

As usual, everything starts with a role. 

In Server Manager, choose Data Deduplication (located under File and Storage Services), or use the following PowerShell cmdlet (as administrator):

Install-WindowsFeature -Name FS-Data-Deduplication 
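
To confirm the role service is in place before continuing, its install state can be queried:

# Check whether the deduplication role service is installed
Get-WindowsFeature -Name FS-Data-Deduplication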

Enabling And Configuring Data Deduplication on Windows Server 2016 

On GUI systems, deduplication can be enabled from Server Manager – File and Storage Services – Volumes: select the volume, right-click it, and choose Configure Data Deduplication.

After selecting the desired type of deduplication, you can specify file types or folders that should not be affected by the process.

Next, set up the schedule by clicking the Set Deduplication Schedule button, which allows you to select the days of the week, start time, and duration.

From a PowerShell terminal, deduplication can be enabled with the following command (E: is an example volume letter):

Enable-DedupVolume -Volume E: -UsageType HyperV

Existing schedules can be listed with the command:

Get-DedupSchedule 

And a job can be scheduled with the following command (example – a garbage collection job):

Set-DedupSchedule -Name "OffHoursGC" -Type GarbageCollection -Start 08:00 -DurationHours 5 -Days Sunday -Priority Normal
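
Savings can be checked, and jobs can also be started manually outside the schedule. A short sketch, assuming E: is the deduplicated volume:

# Show how much space deduplication has reclaimed on the volume
Get-DedupStatus -Volume E: | Format-List Volume, SavedSpace, OptimizedFilesCount

# Start an optimization job by hand and watch its progress
Start-DedupJob -Volume E: -Type Optimization
Get-DedupJob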

These are only the basic deduplication PowerShell commands; there are many more deduplication-specific cmdlets, and they can be found at the following link:

 https://docs.microsoft.com/en-us/powershell/module/deduplication/?view=win10-ps 

Do you want to avoid Data Loss and Unwanted Data Access?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

How to Configure NFS in Windows Server 2016

NFS (Network File System) is a client-server file system that allows users to access files across a network and handle them as if they were in a local directory. It was developed by Sun Microsystems, Inc., and is common on Linux/Unix systems.

Since Windows Server 2012 R2, it has been possible to configure NFS on Windows Server as a role and use it with Windows or Linux machines as clients. Read on to learn how to configure NFS in Windows Server 2016.

How to Install NFS on Windows Server 2016

Installing the NFS (Network File System) role is no different from installing any other role; it is done from the “Add Roles and Features” wizard.

On the “Select server roles” page, under File and Storage Services, expand File and iSCSI Services and tick the “Server for NFS” checkbox. Installing that role service enables the NFS server.
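
The same role service can also be added from PowerShell; a minimal sketch, assuming FS-NFS-Service is the feature name reported by Get-WindowsFeature:

# Install the Server for NFS role service together with its management tools
Install-WindowsFeature -Name FS-NFS-Service -IncludeManagementTools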

Configuring NFS on Windows Server 2016

After installation, the role needs to be configured properly. The first step is choosing or creating a folder for the NFS (Network File System) share.

Right-click the folder and choose Properties; the NFS Sharing tab appears, with the Manage NFS Sharing button on it.

That opens the NFS Advanced Sharing dialog box, with authentication and mapping options as well as a “Permissions” button.

Clicking the “Permissions” button opens the Type of access drop-down list, where you can set the permission level and allow root access.

By default, any client can access the NFS shared folder, but access can be limited to specific clients by clicking the Add button and typing the client's IP address or hostname.
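
The same share can be created and restricted from PowerShell with the NFS cmdlets that ship with the role. A sketch, assuming the folder E:\Shares\NfsData already exists and 192.168.1.50 is the client to allow:

# Create an NFS share on an existing folder
New-NfsShare -Name "NfsData" -Path "E:\Shares\NfsData" -Permission readwrite

# Limit read-write access to a single client host
Grant-NfsSharePermission -Name "NfsData" -ClientName "192.168.1.50" -ClientType host -Permission readwrite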

 Mount NFS Shared Folder on Windows Client 

The steps above make the NFS (Network File System) server ready for work.

To test it, mount the chosen NFS folder on a Windows or Linux client with the following steps:

  1. Activate the Client for NFS feature on the client via Control Panel / Programs and Features / Services for NFS / Client for NFS
  2. After installing the service, mount the folder with the following command:
mount \\<NFS-Server-IP>\<NFS-Shared-Folder> <Drive Letter>:

The command maps the folder as a drive and assigns the chosen letter to it.

Mount NFS Shared Folder on Linux Client  

Even though NFS is native to Linux/Unix systems, the folder still has to be mounted with a command, similar to Windows systems.

mount -t nfs <NFS-Server-IP>:/<NFS-Shared-Folder> /<Mount-Point>

 

Do you have unclear NTFS Permissions assignments?
Do you have too many special permissions set on your fileservers?
Or blocked NTFS Permission Inheritance?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

Windows Server Storage Reports Management

Storage Reports Management

Storage Reports is a node in the File Server Resource Manager console that enables system administrators to schedule periodic storage reports, identify trends in disk usage, look out for attempts to save unauthorized files, and generate reports on demand.

The following are the four ways in which you can use Storage Reports:

  1. Schedule a report on a particular day and time to generate a list of recently accessed files. This information helps monitor weekly storage activity and plan a suitable day to take the server down for maintenance
  2. Run a report at any time to identify duplicate files on a server's storage volumes. Removing duplicate copies frees up space
  3. Use a customized files-by-file-group report to see how volume usage is distributed across different file groups
  4. Run per-user file reports to understand how users use shared resources on the network

The article will explore:

  • Setting a report schedule
  • Generating on-demand reports

Setting a Report Schedule

A regular report schedule is defined through a report task that specifies which reports to generate and which parameters to use: the volumes and folders to report on, how often to generate the reports, and which file formats to save them in. By default, scheduled reports are saved in the default report location, which can be changed in the File Server Resource Manager options. Reports can also be e-mailed to one or more administrators.

When setting up a reporting schedule, try to gather as many reports as possible on a single schedule to reduce the impact on server performance. This can be done with the Add or Remove Reports for a Report Task action, which also allows editing or adding report parameters. To change the schedule or delivery address, however, the report tasks must be edited individually.

Scheduling a Report Task

  1. Click on Storage Reports Management Console
  2. Right-click Storage Reports Management and click Schedule a New Report Task (alternatively, select Schedule a New Report Task from the Actions panel). The Storage Reports Task Properties dialog box opens
  3. To select the volumes and folders to report on:
    • Click Add under Scope
    • Browse to the volume or folder you want to use and click OK to add it as one of the paths
    • You can add as many volumes or folders as necessary (to remove one, click the path and then click Remove)
  4. To specify the storage report type:
    • Under Report Data, choose the reports to include. All selected reports are generated each time the scheduled report task runs

Editing the report parameters:

  • Click on the report label and click Edit Parameters
  • In the Report Parameters dialog box, enter the parameter values and then click OK
  • Use Review Selected Reports to see a list of all parameters for the selected reports
  • Click Close
  1. Storage Reports Saving Format:
    • Under Report formats, select one or more formats for the scheduled reports. By default, reports use Dynamic HTML; other formats include XML, HTML, CSV, and Text.
  2. Setting up the E-mail for delivery:
    • On the Delivery tab, select the Send Reports to the Following Administrators check box and enter the account that should receive the reports.
    • The email format should be account@domain. Use semicolons to separate multiple email addresses
  3. Report Scheduling:

On the Schedule tab, click on Create Schedule and then click New. The default time is set at 9.00 am, which can be modified.

  • To specify the reporting frequency, select an interval by picking from the Schedule Task drop-down list. Reports can be generated at once or using periodic timelines. A report can also be generated at system startup or when the server has been idle for some time.
  • Additional scheduling information can be modified in Schedule Task options. The options can be changed depending on the intervals chosen.
  • To specify time, you can type or select the value in the Start time box
  • Advanced options give access to more scheduling options
  1. Save the schedule by clicking OK

Storage Report tasks are added to the Storage Reports Management node and are identified by report type and schedule.
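
The same kind of scheduled report task can also be created with the File Server Resource Manager PowerShell cmdlets. A minimal sketch, assuming E:\Shares is the namespace to report on and admin@domain.com is the recipient:

# Build a weekly schedule (Sundays at 09:00) and attach a report task to it
$schedule = New-FsrmScheduledTask -Time "09:00" -Weekly Sunday
New-FsrmStorageReport -Name "Weekly duplicate files" -Namespace "E:\Shares" `
    -ReportType DuplicateFiles -Schedule $schedule -MailTo "admin@domain.com"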

Generating On-Demand Reports

On-demand storage reports are generated with the Generate Reports Now option. They are used to analyze disk usage on the server at a given moment, and they are also saved in the default report location.

Generate Reports Immediately

  1. Click on Storage Reports Management node
  2. Right click on Storage Reports Management and then click on Generate Reports Now (Alternatively, choose Generate Reports Now from the Actions panel) to open the Storage Reports Task Properties dialog box
  3. Selecting the volumes and folders to use:
    • Under Scope click on Add
    • Browse the folders and select by clicking on the desired folder and click OK.
  4. To specify the nature of the report:
    • Under Report Data, select the report(s) you want to be included

Editing report parameters:

  • Click on the report label and click on Edit Parameters
  • In Report Parameters, you can edit the parameters as needed, then click OK
  • You can view a list of the selected parameters by clicking Review Selected Reports, then click Close
  1. Specify saving format:
    • Under Report Formats, you can choose to use the default Dynamic HTML or use the CSV, XML, HTML, and TEXT formats.
  2. Using an E-mail address to send Storage Reports:
    • On the Delivery tab, select the option Send reports to the following administrators, then enter the administrative account in the format account@domain. Remember to use semicolons when adding more than one account.
  3. To get all the data and generate reports, click OK to open the Generate Storage Reports dialog box.
  4. Choose how you want to generate the on-demand reports (a PowerShell equivalent is sketched after this list):
    • To view the reports as soon as they are ready, choose to wait for the reports to be generated and then display them.
    • To view the reports later, click Generate reports in the background.
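
For reference, roughly the same on-demand report can be triggered from PowerShell; a sketch, assuming E:\Shares is the namespace of interest:

# Generate a duplicate-files report immediately instead of on a schedule
New-FsrmStorageReport -Name "AdHoc duplicate files" -Namespace "E:\Shares" `
    -ReportType DuplicateFiles -Interactive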

Conclusion

All storage report tasks are added to the Storage Reports Management node, where you can see their status, the last run time, the output of each run, and the next scheduled run time.

Prevent Unauthorized Access to Windows Server Storage Reports!

Get your free edition of the easiest and fastest NTFS Permission Reporter now!