What's New in Storage in Windows Server 2019 and 2016

Windows Server 2016 and Windows Server 2019 introduce new storage features, including the Storage Migration Service. The migration service inventories data on existing servers, then transfers that data, along with security settings and other configuration, from old systems to a new server installation.

This article explains what is new or changed in storage for Windows Server 2016, Windows Server 2019, and the Semi-Annual Channel releases.

We will start by highlighting some of the features added by the two server systems.

Managing Storage with Windows Admin Center

Windows Admin Center is a browser-based app that provides a central location for managing servers, clusters, hyper-converged infrastructure with Storage Spaces Direct, and Windows 10 PCs, including their storage. It ships separately from the operating system as part of the new management experience.

Windows Admin Center is a separate download that runs on Windows Server 2019 and other versions of Windows; we cover it first because it is new and easy to miss.

Storage Migration Service

The Storage Migration Service is a new technology that makes it easier to migrate from older servers to a newer version of Windows Server. A graphical workflow inventories data on existing servers, transfers data and configuration to the new servers, and can optionally move the old server identities to the new ones so that apps and user settings continue to work unchanged.

Storage Spaces Direct (Available in Server 2019 only)

Several improvements have been made to Storage Spaces Direct in Windows Server 2019 (Storage Spaces Direct is not included in the Windows Server Semi-Annual Channel). Here are some of them:

Deduplication and Compression of ReFS Volume

You can store up to 10x more data on the same volume using deduplication and compression for the ReFS file system. Turning it on is a single click in Windows Admin Center.

Variable-size chunking with optional compression maximizes savings rates, while multi-threaded post-processing keeps the performance impact low. It supports volumes up to 64 TB and deduplicates files up to 1 TB each.
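As a minimal PowerShell sketch (the drive letter and usage type are example values), turning on deduplication for a ReFS volume looks like this:

# Install the Data Deduplication feature if it is not already present
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on an example ReFS volume used for backup data
Enable-DedupVolume -Volume "E:" -UsageType Backup

# Check the savings once optimization jobs have run
Get-DedupStatus -Volume "E:" | Select-Object Volume, SavedSpace, OptimizedFilesCount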

Native Support for Persistent Memory

Unlock more performance with native Storage Spaces Direct support for persistent memory modules, including Intel Optane DC persistent memory and NVDIMM-N. Use persistent memory as a cache to accelerate the active working set, or as capacity where low latency is needed. Manage persistent memory the same way you would any other storage device, in Windows Admin Center or PowerShell.
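As a quick, hedged illustration, persistent memory devices show up alongside other drives in PowerShell, so inventorying them uses the same cmdlets (persistent memory is typically reported with a storage-class memory media type, though the exact label depends on the hardware):

Get-PhysicalDisk | Select-Object FriendlyName, MediaType, BusType, Size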

Nested Resiliency for Two-Node Hyper-Converged Infrastructure on the Edges

An all-new software resiliency option, inspired by RAID 5+1, helps survive two simultaneous hardware failures. With nested resiliency, a two-node Storage Spaces Direct cluster can keep storage continuously accessible for apps and virtual machines even if one server node fails and a drive in the surviving node fails at the same time.
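A rough sketch of creating a nested two-way mirror volume with PowerShell, following the pattern Microsoft documents for two-node clusters (pool name, tier name, and size are example values):

# Define a nested mirror tier that keeps four data copies across the two nodes
New-StorageTier -StoragePoolFriendlyName S2D* -FriendlyName NestedMirror -ResiliencySettingName Mirror -MediaType SSD -NumberOfDataCopies 4

# Create a volume on that tier
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Volume01 -StorageTierFriendlyNames NestedMirror -StorageTierSizes 500GB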

Two-Server Cluster Using USB Flash Drive as a Witness

Use a low-cost USB flash drive plugged into your router as a file share witness for a two-server cluster. If one server goes down, the witness keeps the cluster quorum so the surviving server knows its copy of the data is the one to use.
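A minimal sketch of pointing the cluster at such a witness with PowerShell (the share path and credential are hypothetical; the USB drive is exposed as a file share from the router):

Set-ClusterQuorum -FileShareWitness \\192.168.1.1\witness -Credential (Get-Credential)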

Windows Admin Center

A purpose-built dashboard for managing and monitoring Storage Spaces Direct lets you create, open, expand, and delete volumes with just a few clicks. You can also monitor IOPS and IO latency, from the overall cluster down to the individual HDDs and SSDs.

Performance History

The built-in history feature shows what your servers were doing in terms of resource utilization and performance. More than 50 counters covering compute, memory, storage, and network are collected automatically and stored on the cluster for up to a year.

There is nothing to install, configure, or start; it just works.

Scale up to 4 PB for Every Cluster

Multi-petabyte scale matters for media, backup, and archival servers. In Windows Server 2019, Storage Spaces Direct supports up to 4 petabytes (PB), the equivalent of 4,000 terabytes, per cluster.

Other capacity guidelines are increased as well; for instance, you can create 64 volumes instead of 32. Clusters can also be stitched together into a cluster set for even greater scale within one storage namespace.

Accelerated Parity is now 2X Faster

You can create Storage Spaces Direct volumes that are part mirror and part parity, analogous to mixing RAID-1 and RAID-5/6 to get the advantages of both. In Windows Server 2019, the performance of mirror-accelerated parity is more than double that of Windows Server 2016 thanks to optimizations.
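As a brief example (the volume name and tier sizes are made up; the Performance and Capacity tier names follow the common Storage Spaces Direct defaults), a mirror-accelerated parity volume can be created like this:

New-Volume -FriendlyName "Volume02" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 200GB, 800GB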

Drive Latency Outlier Detection

Identify drives with abnormal latency using proactive monitoring and built-in outlier detection, an approach inspired by Microsoft Azure. Misbehaving drives are labeled automatically in PowerShell and Windows Admin Center.

Manually Delimit the Allocation of Volumes to Increase Fault Tolerance

Administrators can now manually delimit the allocation of volumes in Storage Spaces Direct. Doing so can significantly increase fault tolerance in specific circumstances, but it adds management considerations and complexity.

Storage Replica

The storage replica has the following improvements:

Storage Replica in Windows Server, Standard Edition

You can now use Storage Replica with Windows Server Standard Edition, not just Datacenter Edition. Storage Replica on Windows Server Standard Edition has the following limitations:

  • Storage Replica replicates a single volume instead of an unlimited number of volumes
  • Volumes can have a size of up to 2 TB instead of an unlimited size

Storage Replica Log Performance Improvements

Improvements to how the Storage Replica log tracks replication improve replication throughput and latency, including on Storage Spaces Direct clusters that replicate with each other.

To get the increased performance, all members of the replication group must run Windows Server 2019.

Test Failover

You can now mount a temporary snapshot of the replicated storage on the destination server for testing or backup purposes.
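A rough sketch of a test failover with the Storage Replica PowerShell cmdlets (the computer name, replication group name, and temporary path are examples):

# Mount a writable snapshot of the replicated volume on the destination server
Mount-SRDestination -ComputerName SRV02 -Name ReplicationGroup02 -TemporaryPath T:\

# ... run tests or backups against the mounted snapshot, then remove it
Dismount-SRDestination -ComputerName SRV02 -Name ReplicationGroup02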

Windows Admin Center Support

Graphical management of replication is now supported in Windows Admin Center via the Server Manager tool. This includes server-to-server replication, cluster-to-cluster replication, and stretch cluster replication.

Miscellaneous Improvements

Storage Replica also has the following improvements:

  • Alters asynchronous stretch cluster behavior so that automatic failover now takes place
  • Multiple bug fixes

SMB

SMB1 and Guest Authentication Removal

Windows Server no longer installs the SMB1 client and server by default, and the ability to authenticate as a guest in SMB2 is off by default.
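A short sketch of checking and, if necessary, removing SMB1 with PowerShell (run as administrator):

# Is the SMB1 feature still installed?
Get-WindowsFeature FS-SMB1

# Disable the protocol and remove the feature
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force
Uninstall-WindowsFeature -Name FS-SMB1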

SMB2/SMB3 Security and Compatibility

More options for security and application compatibility were added, including the ability to disable oplocks in SMB2+ for legacy applications and to require signing or encryption on a per-connection basis from the client.

Data Deduplication

Data Deduplication Supports ReFS

You no longer have to choose between the advantages of a modern file system with ReFS and Data Deduplication: you can now enable Data Deduplication on ReFS volumes.

Data Port API for Optimized Ingress/egress to Deduplicated Volumes

As a developer, you can now take advantage of Data Deduplication's knowledge of how data is stored in order to move it between volumes, servers, and clusters efficiently.

File Server Resource Manager

Windows Server 2019 can prevent the File Server Resource Manager service from creating a change (USN) journal on volumes. This conserves space on each volume, but it disables real-time file classification.

The same change also applies to Windows Server, version 1803.

What’s New in Storage in Windows Server, Version 1709

Windows Server, version 1709 is the first Windows Server release in the Semi-Annual Channel, a release channel that is fully supported in production for 18 months, with a new version arriving every six months.

Storage Replica

The disaster recovery and preparedness capabilities added by Storage Replica are now expanded to include:

Test Failover

You now have the option of mounting the destination storage through a test failover. A snapshot of the replicated storage can be mounted temporarily for testing or backup purposes.

Windows Admin Center Support

Graphical management of replication is now supported in Windows Admin Center via the Server Manager tool.

Storage Replica also has the following improvements:

  • Alters asynchronous stretch cluster behavior so that automatic failover takes place
  • Multiple bug fixes

What’s New in Storage in Windows Server 2016

Storage Spaces Direct

Storage Spaces Direct enables highly available and scalable storage built from servers with local storage. It simplifies the deployment and management of software-defined storage systems and unlocks the use of new classes of disk devices, such as SATA SSDs and NVMe drives, that were previously not possible with clustered Storage Spaces using shared disks.

What Value Does the Change add?

Storage Spaces Direct allows service providers and enterprises to use industry-standard servers with local storage to build highly available and scalable software-defined storage.

Using servers with local storage decreases complexity, increases scalability, and enables the use of storage devices such as SATA solid-state disks, which lower the cost of flash storage, or NVMe solid-state disks for better performance.

Storage Spaces Direct removes the need for a shared SAS fabric, simplifying deployment and configuration. Instead, it uses the network as the storage fabric, leveraging SMB3 and SMB Direct (RDMA) for high-speed, low-latency storage with efficient CPU usage.

To scale out, add more servers to the configuration to increase storage capacity and I/O performance.
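As a minimal sketch (pool and volume names are examples), once a failover cluster has been formed from the servers, Storage Spaces Direct is enabled and a volume created with two PowerShell commands:

# Claim the local drives of all cluster nodes into a single pool and configure caching
Enable-ClusterStorageSpacesDirect

# Carve out a cluster shared volume from the pool
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Volume01 -FileSystem CSVFS_ReFS -Size 1TB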

Storage Replica

Storage Replica enables storage-agnostic, block-level replication between servers and stretching of failover clusters between sites. Synchronous replication mirrors data across physical sites with crash-consistent volumes to ensure zero data loss at the file-system level. Asynchronous replication extends beyond metropolitan ranges, with the possibility of data loss.

What Value Does the Change Add?

  • Provides a single-vendor disaster recovery solution for planned and unplanned outages
  • Uses SMB3 transport, with proven performance, scalability, and reliability
  • Stretches Windows failover clusters further
  • Uses Microsoft end-to-end software for storage and clustering, such as Hyper-V, Scale-Out File Server, Storage Replica, Storage Spaces, ReFS/NTFS, and Deduplication
  • Helps reduce cost and complexity by:
    • Being hardware agnostic, with no specific requirements for storage configuration such as DAS or SAN
    • Allowing commodity storage and networking technologies
    • Featuring an easy graphical management interface for nodes and clusters through Failover Cluster Manager
    • Including comprehensive, large-scale scripting options through Windows PowerShell
  • Helps reduce downtime and increase productivity
  • Provides supportability, performance metrics, and diagnostic capabilities

What Works Differently

This functionality is new in Windows Server 2016.

Storage Quality of Service

You can now use Storage Quality of Service (QoS) to centrally monitor end-to-end storage performance and create management policies using Hyper-V and Cluster Shared Volume (CSV) clusters in Windows Server 2016.

What Value Does the Change Add?

You can create QoS policies on a CSV cluster and assign them to one or more virtual disks on Hyper-V virtual machines. Storage performance automatically readjusts to meet the policies as workloads and storage loads fluctuate.

  • Each policy can specify a minimum reserve or a maximum limit to apply to a flow of data, such as a single virtual hard disk, a virtual machine, a service, or a tenant.
  • Use Windows PowerShell or WMI to perform the following tasks (a minimal PowerShell sketch follows this list):
    • Create policies on a CSV cluster
    • Enumerate policies on a CSV cluster
    • Assign a policy to a virtual hard disk and check its status within the policy
    • Monitor flow performance and policy status
  • If several virtual hard disks share the same policy, performance is distributed fairly to meet demand within the policy's minimum and maximum settings, so a single policy can manage one virtual hard disk, a single virtual machine, multiple virtual machines that make up a service, or all the virtual machines owned by a tenant.
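A minimal PowerShell sketch of that workflow (the policy name, VM name, and IOPS values are hypothetical):

# On the cluster: create a policy and list existing ones
$policy = New-StorageQosPolicy -Name Gold -MinimumIops 100 -MaximumIops 5000
Get-StorageQosPolicy

# On the Hyper-V host: assign the policy to a VM's virtual hard disk
Get-VM -Name VM01 | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Monitor flow performance and policy status
Get-StorageQosFlow | Sort-Object InitiatorName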

What Works Differently

This is a new feature in Windows Server 2016. Managing minimum reserves, monitoring the flows of all virtual disks across a cluster with a single command, and central policy-based management were not possible in previous releases of Windows Server.

Data Deduplication

The following functions are new or updated in Windows Server 2016:

  • Support for large volumes (updated): Before Windows Server 2016, volumes had to be sized carefully, and anything above 10 TB was not a good candidate for deduplication. Windows Server 2016 supports deduplication of volumes up to 64 TB.
  • Large file support (updated): Before Windows Server 2016, files approaching 1 TB in size were not good candidates for deduplication. Windows Server 2016 supports deduplication of files up to 1 TB.
  • Nano Server support (new): Deduplication is available and fully supported in the Nano Server deployment option of Windows Server 2016.
  • Simple backup support (new): Windows Server 2012 R2 supported virtualized backup using Microsoft's Data Protection Manager. In Windows Server 2016, simple backup is possible and seamless.
  • Cluster OS Rolling Upgrade support (new): Deduplication supports the new Cluster OS Rolling Upgrade feature of Windows Server 2016.

SMB Hardening Improvements for SYSVOL and NETLOGON Connections

Client connections from Windows 10 and Windows Server 2016 to the Active Directory Domain Services default SYSVOL and NETLOGON shares on domain controllers now require SMB signing and mutual authentication (such as Kerberos).

What does this Change Add?

It reduces the possibility of man-in-the-middle attacks

What Works Differently?

If SMB signing and mutual authentication are unavailable, a Windows 10 or Windows Server 2016 computer will not process domain-based Group Policy and scripts. Note that the registry values for these settings are not present by default, yet the hardening rules still apply until they are overridden by Group Policy or the relevant registry values.
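For reference, this hardening corresponds to the "Hardened UNC Paths" Group Policy setting. A rough sketch of the equivalent registry values (in practice these are managed through Group Policy rather than set by hand; the registry path shown is an assumption based on that policy):

$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths"
New-Item -Path $key -Force | Out-Null

# Require mutual authentication and SMB signing for SYSVOL and NETLOGON
New-ItemProperty -Path $key -Name "\\*\SYSVOL" -PropertyType String -Value "RequireMutualAuthentication=1, RequireIntegrity=1" -Force
New-ItemProperty -Path $key -Name "\\*\NETLOGON" -PropertyType String -Value "RequireMutualAuthentication=1, RequireIntegrity=1" -Force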

Work Folders

Change notification improvements were added; they apply when the Work Folders server is running Windows Server 2016 and the Work Folders client is running Windows 10.

What Value Does this Change Add?

In Windows Server 2012 R2, when file changes were synchronized to the Work Folders server, clients were not notified of the change and waited up to 10 minutes to get the update.

When the Work Folders server is running Windows Server 2016, it immediately notifies Windows 10 clients, and the file changes are synchronized immediately.

What Works Differently

This is a new feature in Windows Server 2016; the client accessing the Work Folders must run Windows 10. If you are using older clients, or if the Work Folders server runs Windows Server 2012 R2, the client will continue to poll every 10 minutes for changes.

ReFS

The next iteration of ReFS provides support for large-scale storage deployments with diverse workloads, delivering reliability, resiliency, and scalability for your data.

What Value Does the Change Add?

ReFS brings in the following improvements:

  • Implements new storage tiers functionality, helping deliver faster performance and increased capacity. This functionality further enables:
    • Multiple resiliency types on the same virtual disk (mirroring in the performance tier and parity in the capacity tier)
    • Improved responsiveness to drifting working sets
  • Introduces block cloning, which substantially improves the performance of VM operations such as .vhdx checkpoint merge operations.
  • The new ReFS scan tool enables the recovery of leaked storage and helps salvage data from corruption.

What Works Differently?

These functionalities are new in Windows Server 2016.

Conclusion

Of the many features available in Windows Server 2019, this article covered the fully supported ones. At the time of writing, some features were only partially supported in earlier versions but are getting full support in the latest Server releases. As you can see, Windows Server 2019 is a worthwhile upgrade.

Windows Server Disk Quota – Troubleshooting

DISK QUOTA CHALLENGES AND TROUBLESHOOTING

Disk quotas come in handy and allow system administrators to equitably distribute disk space among multiple users in shared servers or PCs. This avoids a situation where a careless user ends up filling the entire hard drive and wreaking havoc in the system. However, quotas do not always work as intended.

As easy as setting up disk quotas may seem, sometimes things go a bit askew. Occasionally, users appear to be allocated less disk space than what was specified in the settings. This usually happens when the server runs out of space. There are also situations where users simply get the impression that they have received less hard drive space than what was configured. The reason is a common misconception about what counts toward a user's quota: quotas take into account all files owned by the user, and this includes files in the recycle bin. This is especially true when disk quotas are implemented on local PCs; since the recycle bin resides on the PC, this discrepancy is most likely to occur there.

Another unusual issue that may arise is space remaining unavailable even after a user relinquishes ownership of their files. A user may create a file and transfer its ownership, yet the file may still be counted against their quota.

Another confusing scenario is the use of compressed folders. Windows looks at compressed folders not in their compressed size, but rather, in their original size. This means that quotas look at compressed files in their original uncompressed format, not according to the current size they occupy on the hard drive in their compressed format.

Sometimes, when the disk space limit is exceeded, the user may find that deleting files from the volume does not free up space as expected. This behavior has been noted in Windows Server 2008 R2, and it happens because the file context structure is not filled in correctly when files are deleted.

As a solution to this issue, Microsoft released a hotfix which can be downloaded from their official site via this link https://support.microsoft.com/hotfix/kbhotfix.aspx?kbnum=2679054&kbln=en-US

Once you apply the hotfix, run the command below

dirquota quota scan /path:d:\users\scratch

For instance, the above command rescans the quota for the scratch folder located in the users directory on drive D.

After running the command, reboot the system to effect the hotfix settings.


If a user's hard drive is formatted with the FAT or FAT32 filesystem, it will have to be converted to NTFS, since NTFS is the only filesystem that understands quotas and file ownership. This compels the system administrator to first back up the files contained on the FAT and FAT32 partitions and then format the volumes as NTFS, which can be quite tedious and cumbersome. It is therefore important to ensure all volumes are formatted as NTFS from the start if you plan to have several users sharing or backing up data on the system, because disk quotas work with NTFS volumes only.
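As a side note, the built-in convert utility can switch a FAT or FAT32 volume to NTFS in place, which avoids the full reformat (backing up beforehand is still prudent); the drive letter below is an example:

convert D: /FS:NTFS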

Windows Server Disk Quota – Setup and Configure

In the previous post, we looked at the disk quota functionality and how quotas are handy in limiting disk space utilization for shared systems. This is crucial in ensuring that all users get equal space allocation and systems’ performance is kept at an optimal level. In this post, we’ll take a practical approach and see how we can manage and control disk space utilization to prevent users from filling up the hard disk and leaving no more space for anyone. To recap some of the important features about disk quotas, the quota can only be applied to volumes which have been formatted in NTFS filesystem. They are mostly used in corporate networks but can as well be used on a home PC running Windows OS including the basic Windows 10 home. You can choose to set quotas per individual user or apply them on everyone. However, you cannot implement limits on groups. For best practices, quotas should be configured or set per volume basis and not per computer, and upon execution, newly added users will begin using them as expected. That said, let’s dive deeper and see how you can implement this functionality to manage and control hard drive space utilization.

Setting Quota Limits

Although implementing quotas can be done on any disk volume, it can prove quite tricky setting limits on Drive C, which is the Windows installation volume. Try as much as possible to enable quotas on secondary volumes or partitions and plan accordingly. There are two ways of setting quotas. You can set them per account or on a volume basis. Let’s see how you can set quotas on Account basis:

Setting up Quota Per Account basis on Windows

If you want to set disk space limit on end users, while at the same time having your account occupy unlimited space, follow the steps outlined below:

  1. Fire up the File Explorer. This is done by using the (Windows key + E) shortcut.
  2. On Windows 10, Locate This PC tab and click on it.
  3. Under “Devices and drives,” right-click on the preferred drive that you wish to manage. In the menu that appears, select Properties option.
  4. Select the Quota tab.
  5. Click the Show Quota settings tab.
  6. The quota settings window will open. Check the first option, Enable quota management.
  7. Just below the option in 6 above, Locate and Check the Deny disk space to users exceeding quota limit option. This option enables disk space limitation.
  8. Next, click on the Quota Entries button at the bottom right corner of the window.
  9. In case the account you want to restrict is not listed, click Quota, and select New Quota Entry.
  10. In the “Select Users” tab, click on the Advanced button. This displays a pop up window
  11. Next, click on the Find Now button.
  12. At the bottom of the window, a list of user accounts will appear. Select the account you want to limit.
  13. Next, Press OK.
  14. Press OK again in the previous Window.
  15. Select the Limit disk space to radio button option.
  16. Set the desired volume of space you’d want and specify the restriction unit size (for instance, MB, GB or TB).
  17. Set the preferred space size before a warning is triggered and specify the size unit (for instance, MB, GB or TB).
  18. Click on Apply option.
  19. Finally, Click on OK.

After completing the above procedure, the quotas take effect as soon as users log in. Users will be restricted to the amount of disk space set in step 16 and will get a warning when approaching the limit specified in step 17.
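For admins who prefer the command line, the built-in fsutil tool can set an equivalent per-user limit on an NTFS volume (the drive letter, byte values, and account name are examples; the first number is the warning threshold and the second is the hard limit, both in bytes):

fsutil quota modify D: 9000000000 10000000000 CONTOSO\jdoe
fsutil quota query D: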

Setting up Quota Per Volume basis on Windows

Should you decide to limit the available storage space for all users, follow the steps outlined below:

  1. Fire up File Explorer. This is done by using the (Windows key + E) shortcut.
  2. On Windows 10, Locate This PC tab and click on it.
  3. Under the “Devices and drives,” section, right-click on the preferred drive that you wish to manage. In the menu that appears, select Properties option.
  4. Click the Quota tab.
  5. Click the Show Quota Settings button.
  6. The quota settings window will open. Check the first option, Enable quota management.
  7. Next, Locate and Check the Deny disk space to users exceeding quota limit option. This option enables disk space limitation.
  8. Select the Limit disk space to option.
  9. Set the desired amount of space and specify the size unit (e.g., MB, GB).
  10. Set the amount of space before a warning is triggered to the user and specify the size unit (for example, MB or GB).
  11. Click Apply.
  12. Click OK.
  13. Finally, Reboot your computer.

Once you complete the above procedure, every account on the system will be limited to its share of the total available disk space. A warning will alert users when they approach their maximum storage quota. If the limit is reached, users will no longer be able to save files on the volume; they will have to delete existing files or move them to another location.

You can always adjust the storage quota up or down by changing the Limit disk space to and Set warning level to options described in step 8.

If you decide you no longer want to restrict how much of a drive users can consume, follow the same instructions, but in step 8 select the Do not limit disk usage option and uncheck both the Deny disk space to users exceeding quota limit and the Enable quota management options.

In summary, we have seen how you can plan and implement disk quotas on Windows systems, both on a per-user-account and a per-volume basis. In the next post, we'll look at some of the challenges that are likely to occur and how you can work around them.

Windows Server Disk Quota – Overview

Windows Server system comes with a very handy feature that allows the creation of many user accounts on a shared system. This enables users to log in and have their own disk space and other custom settings. However, the drawback with this feature is that users have unlimited disk space usage, and with time, space eventually gets filled up leading to a slow or malfunctioning system, which is a real mess. Have you ever wondered how you can avert this situation and set user limits to disk volume usage?

Worry no more. To overcome the scenario described above, Windows provides the disk quota functionality. This feature lets you set limits on hard disk utilization so that users are restricted in how much disk space they can use for their files. The functionality is available both for Windows and for Unix-like systems such as Linux, where it supports the ext2, ext3, ext4, and XFS filesystems. On Windows operating systems, it is supported in Windows 2000 and later versions, and it can only be configured on NTFS file systems. So, if you are starting out with a Windows server or client system, you may want to consider formatting the volumes as NTFS to avert complications later on. Quotas can be applied to both client and server systems, such as Windows Server 2008, 2012, and 2016. In addition, quotas cannot be configured on individual files or folders; they can only be set on volumes, and the restrictions apply to those volumes only. To administer a disk quota, you must be an administrator or have administrative privileges, that is, be a member of the Administrators group.

The idea behind setting limits is to prevent the hard disk from getting filled up and thereby causing the system or server to freeze or behave abnormally. When a quota is surpassed, the user receives an "insufficient disk space" error and cannot, therefore, create or save any more files. A quota is a limit, normally set by the administrator, that restricts disk space utilization. It prevents careless or unmindful users from filling up the disk space and causing a host of other problems, including slowing down or freezing of the system. Quotas are ideally suited to enterprise environments where many users access the server to save or upload documents. An administrator will want to assign a maximum disk space limit so that end users are confined to uploading work files only, such as Word, PowerPoint, and Excel documents. The idea is to prevent them from filling the disk with non-essential and personal files such as images, videos, and music, which take up a significant amount of space. A disk quota can be configured on a per-user basis or applied to all users. A perfect example of disk quota usage is in web hosting platforms such as cPanel or Vesta CP, where users are allocated a fixed amount of disk space according to their subscription plan.

When a disk quota system is implemented, users cannot save or upload files beyond the limit threshold. For instance, if an administrator sets a limit of 10 GB of disk space for all logon users, no user can save files beyond that 10 GB limit. If the limit is reached, the only way out is to delete existing files, ask another user to take ownership of some files, or ask the administrator to allocate more space. It is important to note that you cannot gain space by compressing files; quotas are based on uncompressed sizes, and Windows treats compressed files according to their original uncompressed size. There are two types of limits: hard limits and soft limits. A hard limit is the maximum space the system will grant an end user. If, for instance, a hard limit of 10 GB is set on a hard drive, the user can no longer create and save files once that 10 GB is reached. This restriction forces them to look for an alternative storage location or delete existing files.

A soft limit, on the other hand, can temporarily be exceeded by an end user but should not go beyond the hard limit. As it approaches the hard limit, the end user will receive a string of email notifications warning them that they are approaching the hard limit. In a nutshell, a soft limit gives you a grace period but a hard limit will not give you one. A soft limit is set slightly below the hard limit. If a hard limit of, say 20G is set, a soft limit of 19G would be appropriate. It’s also worth mentioning that end users can scale up their soft limits up to the hard limit. They can also scale down their soft limits to zero. As for hard limits, end users can scale them down but cannot increase them. For purposes of courtesy, soft limits are usually configured for C level executives so that they can get friendly reminders when they are about to approach the Hard limit.

In summary, we have seen how handy disk quota is especially when it comes to a PC or a server that is shared by many users. Its ability to limit disk space utilization ensures that the disk is not filled up by users leading to malfunctioning or ‘freezing’ of the server. In our next topic, we’ll elaborate in detail how we apply or implement the quotas.

File System Attacks on Microsoft Windows Server

The most common attacks on Microsoft Windows Server systems target Active Directory, which makes sense given that AD is the "heart" of any Windows-based environment. A bit less common, but still very dangerous (and interesting), are file system attacks.

In this article, we investigate the most common forms of file system attack and how to protect against them.

The goal of a file system attack is always the data: information stored on a server that is valuable, for whatever reason, to whoever planned the attack. To get to the data, the first thing an attacker needs is credentials, and the more elevated the account, the better.

We will not cover credential theft here, as that is a topic in itself; instead, we will assume the attacker has already breached the organization and obtained Domain Administrator credentials.

Finding File Shares

The first step is finding the data, the place where it "lives".

This is where the tools come in. Most of the tools attackers use are penetration testing tools, such as smbmap or PowerShell (we will show both approaches).

SMBMap, as its GitHub page says, "allows users to enumerate samba share drives across an entire domain. List share drives, drive permissions, share contents, upload/download functionality, file name auto-download pattern matching, and even execute remote commands. This tool was designed with pen testing in mind, and is intended to simplify searching for potentially sensitive data across large networks."

Using SMBMap's features, attackers can find all the file shares on the targeted hosts and determine what sort of access and permissions they have, along with more detailed information about each share.

Another common way of determining the data location is PowerShell based.

By definition – PowerSploit is a collection of Microsoft PowerShell modules that can be used to aid penetration testers during all phases of an assessment.

Like smbmap, PowerSploit has a huge number of features. For finding data shares, attackers use the Invoke-ShareFinder function, which, in combination with other PowerSploit features, reveals exactly the same things as smbmap: all the information necessary to access and use the data.
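For illustration only, a typical enumeration run might look like this (assuming PowerSploit's PowerView script is present on the attacker's machine; the smbmap host and credentials are hypothetical):

# Load PowerView and list shares the current account can actually access
. .\PowerView.ps1
Invoke-ShareFinder -CheckShareAccess | Out-File shares.txt

# The rough smbmap equivalent, run from a Linux host:
# smbmap -H 10.0.0.25 -d CORP -u breacheduser -p 'P@ssw0rd'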

Protection

Of course, the examples above are only a brief description of the techniques that can expose your data shares to a potential attacker, but it is clear that listing your data is the first step toward taking it.

So here are some recommended actions to protect your system:

Remove open shares: Reduce open shares as much as possible. It is fine to have some where a job explicitly requires them, but open shares are often just the result of sloppy permissions. Review your default permissions (default permissions are effectively open), set them properly, and deny the potential attacker an easy listing.

Monitor first-time access activity: This is more of an admin tip than a protection method, but it can be important. If a user has rights to a share they have never used, and all of a sudden the activity on that account changes and steps outside of "normal", it could be a sign that the account's credentials have been hijacked.

Check for potentially harmful software, not as malware but as a hint: SMBMap is built in Python, so a sudden installation of Python software, or of a PowerSploit module, on your system could be an early alarm that something suspicious is going on on your servers.

Finding Interesting Data

So now the potential attacker knows where the data on our hypothetical server "lives". The next step is narrowing it down to the "interesting" data. Even the smallest organizations can hold huge numbers of files, so how does the attacker know which data they need?

With PowerSploit, the functionality used is called Invoke-FileFinder. It has plenty of filtering options for narrowing the data down to the "interesting" files, and it can export results to CSV so the attacker can explore them on their own system at their own pace. After identifying targets, the attacker can make a targeted grab, move the needed files to a staging area, and transport them out of the network (via FTP, or even a trial Dropbox account).

The same goes for SMBMap. Just like PowerSploit, it can filter the data with the options the tool provides and surface the data the attacker is interested in, with the same outcome: the information is taken.

Protection

At this point, the hypothetical attack is in its second phase: the attacker has successfully listed the files and found the most interesting ones. Only the easy part is left, taking the data. How do you protect against that? Together with the methods mentioned earlier, the following can help administrators fortify the system and its files.

Password rotation: A very important measure, especially for services and applications that store passwords in the file system. Regularly rotating passwords and checking file contents presents a large obstacle to the attacker and makes your system more secure.

Tagging and encryption: In combination with Data Loss Prevention, these will highlight and encrypt important data, which stops the simpler types of attacks from getting at it.

Persistence

The final part of the file system attack. In our hypothetical scenario, we have already listed and accessed data on the penetrated system. Here we describe how attackers persist in the system, even after they get kicked out the first time.

Attackers hide some of their data in the NTFS file system itself, more precisely in Alternate Data Streams (ADS). NTFS stores a file's data in the $DATA attribute of that file, and additional named streams can be attached to the same file. Malware authors and other "bad guys" use ADS as a hiding place and re-entry point, but they still need credentials.

As usual, they can be stopped by correct use of permissions: do not grant "write" permission to any account that is not specifically meant to perform write operations.
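As a small defensive check (the path is an example), PowerShell can list alternate data streams so hidden payloads stand out:

# List every stream other than the default :$DATA stream under a share
Get-ChildItem -Path D:\Shares -Recurse -File |
    Get-Item -Stream * |
    Where-Object { $_.Stream -ne ':$DATA' } |
    Select-Object FileName, Stream, Length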

File system attacks are tricky, but they leave traces, and in general most of them can be prevented by a system administrator's habits and foresight. In this field it really is better to prevent than to heal, and only knowing your system fully, combined with full-time administration and monitoring, can keep it safe.

Do you want to avoid Unwanted File System Attacks on Microsoft Windows Server?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

Introduction to Data Deduplication on Windows Server 2016

Data Deduplication is a Microsoft Windows Server feature, initially introduced in Windows Server 2012 edition. 

As a simple definition, data deduplication is the elimination of redundant data in a data set, storing only one copy of each unique piece of data. It works by identifying duplicate byte patterns through data analysis, removing the duplicates, and replacing them with references that point to the single stored copy.

In 2017, according to IBM, the world created about 2.5 quintillion (10^18) bytes of data a day. That fact alone shows that today's servers handle huge volumes of data in every aspect of human life.

Inevitably, some percentage of that is duplicated data in one form or another, and that data is nothing more than unnecessary load on servers.

Microsoft saw the trend back in 2012 when Data Deduplication was introduced, and has kept developing it, so in Windows Server 2016 Data Deduplication is both more advanced and more important.

But let's start with 2012 and understand the feature in its basics.

Data Deduplication Characteristics: 

Usage: Data deduplication is very easy to use. It can be enabled on a data volume in one click, with no delay or impact on system functionality. In simple terms, if a user requests a file, they get it as usual, whether or not that file has been processed by deduplication.

Deduplication is not meant to target every file. For example, files smaller than 32 KB, files encrypted with EFS, and files that have extended attributes are not processed by deduplication.

If a file has an alternate data stream, only the primary stream is deduplicated; the alternate stream is not.

Deduplication can be used on primary data volumes without affecting files that are actively being written to, because files are not processed until they reach a certain age. This preserves performance for active files while still achieving savings on the rest. Files are categorized by criteria, and those that qualify as "in-policy" files are deduplicated, while the others are not.

Deduplication does not change the write path of new files. New files are written directly to NTFS and evaluated later by a background monitoring process.

The MinimumFileAgeDays setting (configured by the admin) decides when files become eligible for deduplication. The default is 5 days, but it can be lowered to 0 days, which processes files regardless of age.

Specific file types can be excluded, such as PNG or already-compressed CAB files, if it is decided that the system will not benefit much from processing them.

Backing up and restoring to another server does not cause problems for deduplication. All settings are maintained on the volume itself and move with it when it is relocated, except for the schedule settings, which are not stored on the volume. If the volume is relocated to a server that does not run deduplication, users will not be able to access the files that have been deduplicated.

Resource Control 

The feature follows the server's workload and adapts to available system resources. Servers usually have roles to fill, and from the admin's point of view storage is just there to hold data in the background, so deduplication adapts to that philosophy: if there are resources available for deduplication, the process runs; if not, it stands by and waits for resources to free up.

The feature is designed to use few resources and to reduce input/output operations per second (IOPS), so it can scale to large data sets and maintain performance, with an index footprint of only about 6 bytes of RAM per chunk (average chunk size 64 KB) and temporary partitioning.

As mentioned, deduplication works on a "chunking" principle: an algorithm splits a file into chunks of roughly 64 KB, compresses them, and stores them in a hidden folder (the chunk store). If a user requests the file, it is reassembled from its chunks and served to the user.

BranchCache: a feature that shares the same sub-file chunking and indexing engine. When needed, already-indexed chunks can be sent over the WAN to a branch office, saving a lot of time and data.

Is there fragmentation, and what about data access?

An obvious question when reading about deduplication is fragmentation.

Does spreading chunks around the hard drive fragment your data?

The answer is no. Deduplication's filter driver keeps sequences of unique chunks together to preserve disk locality, so distribution is not random. Deduplication also has its own cache, so when a file is requested many times across an organization the access pattern speeds things up rather than triggering multiple file "reassembly" processes, and the user sees the same response time as with an undeduplicated file. When copying one large file, end-to-end copy times can be around 1.5 times what they would be on a non-deduplicated volume, but the real benefit appears when copying multiple large files at the same time, when the cache can speed up copying by up to an amazing 30%.

Deduplication Risks and solutions 

Of course, like any other feature, this way of working carries some risks.

Any type of data corruption poses serious risks, but there are solutions too.

Because a single chunk can back many files, errors caused by disk anomalies, controller errors, firmware bugs, or environmental factors such as radiation or disk vibration can lead to major problems, up to the loss of multiple files. But good administrative organization, use of backup tools, timely corruption detection, redundant copies, and regular check-ups can minimize the risk of corrupted data and losses.

Deduplication in Windows Server 2016 

As with other features, data deduplication went through upgrades and gained new capabilities in the latest edition of Windows Server.

We will describe the most important ones and show how to enable and configure the feature in a Windows Server 2016 environment.

Multithreading  

Multithreading is flagged as the most important change in 2016 compared with Windows Server 2012 R2. On Server 2012 R2, deduplication operates in single-threaded mode and uses one processor core per volume. Microsoft saw this as a performance limit, and in 2016 introduced a multi-threaded mode in which each volume uses multiple threads and I/O queues. This also changed the size limits per file and volume: in Server 2012 R2 the maximum volume size was 10 TB, while the 2016 edition supports 64 TB volumes and 1 TB files, a huge breakthrough.

Virtualization Support 

In the first edition of the deduplication feature (Windows Server 2012), there was a single deduplication type, designed only for standard file servers, with no support for continuously running VMs.

Windows Server 2012 R2 started using the Volume Shadow Copy Service (VSS): deduplication optimizes data through optimization jobs, while VSS captures and copies stable volume images for backup on running server systems. With VSS, Microsoft introduced in 2012 R2 support for deduplicating virtual machines, as a separate deduplication type.

Windows Server 2016 went one step further and introduced another deduplication type, designed specifically for virtualized backup servers such as DPM.

Nano server support  

Nano Server is a minimal-footprint but fully operational deployment option of Windows Server 2016, similar to Server Core but smaller and without local GUI support, ideal for purpose-built cloud-based apps, infrastructure services, or virtual clusters.

Windows Server 2016 fully supports the deduplication feature on this type of server.

Cluster OS Rolling Upgrade support 

Cluster OS Rolling Upgrade is a Windows Server 2016 feature that lets you upgrade the operating system of cluster nodes from Windows Server 2012 R2 to Windows Server 2016 without stopping Hyper-V, using a so-called "mixed mode" operation of the cluster. From the deduplication angle, this means the same data can live on nodes running different versions of deduplication. Windows Server 2016 supports mixed mode and provides access to deduplicated data while the cluster upgrade is in progress.

Installation and Setup of Data Deduplication on Windows Server 2016 

In this section, we give an overview of best-practice installation and setup of Data Deduplication on a Windows Server 2016 system.

As usual, everything starts with a role. 

In Server Manager, choose Data Deduplication (located under the File and Storage Services role), or use the following PowerShell cmdlet (as administrator):

Install-WindowsFeature -Name FS-Data-Deduplication 

Enabling And Configuring Data Deduplication on Windows Server 2016 

On GUI systems, deduplication can be enabled from Server Manager > File and Storage Services > Volumes: select the volume, right-click, and choose Configure Data Deduplication.

After selecting the desired deduplication type, you can specify file types or folders that should be excluded from the process.

Then set up the schedule by clicking the Set Deduplication Schedule button, which lets you select days, start time, and duration.

From a PowerShell terminal, deduplication can be enabled with the following command (E: is an example volume letter):

Enable-DedupVolume -Name E:  -UsageType HyperV 

Existing deduplication schedules can be listed with the command:

Get-DedupSchedule 

And jobs can be scheduled with the following command (example: a garbage collection job):

Set-DedupSchedule -Name “OffHoursGC” -Type GarbageCollection -Start 08:00 -DurationHours 5 -Days Sunday -Priority Normal 

These are only the basic deduplication PowerShell commands; there are many more deduplication-specific cmdlets, which can be found at the following link:

 https://docs.microsoft.com/en-us/powershell/module/deduplication/?view=win10-ps 

Do you want to avoid Data Lost and Unwanted Data Access?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

How to Configure NFS in Windows Server 2016

NFS (Network File System) is a client-server file system that allows users to access files across a network and work with them as if they were in a local directory. It was developed by Sun Microsystems, Inc. and is common on Linux/Unix systems.

Since Windows Server 2012 R2, it has been possible to configure NFS on Windows Server as a role and use it with Windows or Linux machines as clients. Read on to learn how to configure NFS in Windows Server 2016.

How to install NFS to Windows Server 2016 

Installing the NFS (Network File System) role is no different from installing any other role; it is done from the "Add Roles and Features" wizard.

A few clicks on the "Select server roles" page, under File and Storage Services, expanding File and iSCSI Services, will reveal the "Server for NFS" checkbox. Installing that role enables the NFS server.

The configuration of NFS on Windows Server 2016 

After installation, the role needs to be configured properly. The first stage is choosing or creating a folder for the NFS share.

Right-click the folder and choose Properties; the system will show the NFS Sharing tab, with a Manage NFS Sharing button on it.

That button opens the NFS Advanced Sharing dialog box, with authentication and mapping options as well as a "Permissions" button.

Clicking the "Permissions" button opens the Type of access drop-down list, where you can set the permission level and optionally allow root access.

By default, any client can access the NFS shared folder, but you can limit access to specific clients by clicking the Add button and typing the client's IP address or hostname.
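For admins who prefer the command line, a rough PowerShell equivalent using the NFS module could look like this (share name, path, and client address are example values):

# Create the NFS share and restrict it to a single client with read/write access
New-NfsShare -Name "NfsData" -Path "D:\Shares\NfsData"
Grant-NfsSharePermission -Name "NfsData" -ClientName "192.168.1.50" -ClientType host -Permission readwrite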

 Mount NFS Shared Folder on Windows Client 

The steps above make NFS (Network File System) server ready for work.  

To test it, mount the chosen NFS folder on a Linux or Windows client with the following steps:

  1. Activate the feature on the client by clicking Control Panel / Programs and Features / Services for NFS / Client for NFS
  2. After installing the service, mount the folder with the following command:

mount \\<NFS-Server-IP>\<NFS-Shared-Folder> <Drive-Letter>:

The command maps the folder as a drive and assigns the chosen letter to it.

Mount NFS Shared Folder on Linux Client  

Even though NFS is native to Linux/Unix systems, the shared folder still has to be mounted with a command, much as on Windows systems:

mount -t nfs <NFS-Server-IP>:/<NFS-Shared-Folder> /<Mount-Point>

 

Do you have unclear NTFS Permissions assignments?
Do you have too many special permissions set on your fileservers?
Or blocked NTFS Permission Inheritance?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

Windows Server Storage Reports Managements

Storage Reports Management

Storage Reports Management is a node in the File Server Resource Manager console that enables system administrators to schedule periodic storage reports for identifying trends in disk usage, watch for attempts to save unauthorized files, and generate reports on demand.

The following are the four ways in which you can use Storage Reports:

  1. Schedule a report for a particular day and time to generate a list of the most recently accessed files. This information can help in monitoring weekly storage activity and in planning a suitable day to take the server down for maintenance.
  2. A report can be run at any time to identify duplicate files in the storage volumes of a particular server. Removing the duplicate copies frees up space.
  3. A customized files-by-file-group report can be used to identify how storage is distributed across different file groups.
  4. Run files-by-owner reports to understand how individual users use shared resources on the network.

The article will explore:

  • Setting a report schedule
  • Generating on-demand reports

Setting a Report Schedule

To generate reports regularly, you schedule a report task that specifies which reports to generate and which parameters to use: the volumes and folders to report on, how often to generate the reports, and which file formats to save them in. By default, scheduled reports are saved in a default location, which can be changed in the File Server Resource Manager options. There is also an option to e-mail the reports to a group of administrators.

To minimize the impact on server performance, it is best to gather as much information as possible in a single schedule. You can do this with the Add or Remove Reports for a Report Task action, which lets you add or edit several reports and their parameters at once. To change the schedule or delivery address, however, you must edit each report task individually.

Scheduling a Report Task

  1. Click on Storage Reports Management Console
  2. Right click on Storage Reports Management and click Schedule a New Report Task (alternatively, you can select Schedule a New Report Task from the Actions panel). You should now be seeing the Storage Reports Task Properties dialog box
  3. The following steps are taken when selecting the folder and volume to be used:
    • Click Add found under Scope
    • Browse to the volume or folder that you want to use and click OK to add it as one of the paths.
    • You can add as many volumes or folders as necessary (to remove one, click the path and then click Remove).
  4. Specifying Storage Report type:
    • Under Report Data, select each report that should be included. By default, all reports are generated for a scheduled report task.

Editing the report parameters:

  • Click on the report label and click Edit Parameters
  • In the Report Parameters dialog box, enter the parameter values and then click OK
  • Click Review Selected Reports to see a list of the parameters for all the selected reports
  • Click Close
  5. Storage report saving format:
    • Under Report formats, select one or more formats for the scheduled reports. By default, reports use Dynamic HTML; other formats include HTML, XML, CSV, and text.
  6. Setting up e-mail delivery:
    • On the Delivery tab, select the Send reports to the following administrators check box and enter the account that should receive the reports.
    • The e-mail format should be account@domain. Use semicolons to separate multiple e-mail addresses.
  7. Report scheduling:

On the Schedule tab, click Create Schedule and then click New. The default start time is 9:00 AM, which can be modified.

  • To specify the reporting frequency, select an interval from the Schedule Task drop-down list. Reports can be generated once or on a periodic schedule. A report can also be generated at system startup or when the server has been idle for some time.
  • Additional scheduling information can be modified under the Schedule Task options; the options change depending on the interval chosen.
  • To specify the time, type or select the value in the Start time box.
  • Advanced options give access to more scheduling options.
  8. Save the schedule by clicking OK.

Storage Report tasks are added to the Storage Reports Management node and are identified by report type and schedule.
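For reference, a minimal PowerShell sketch of the same idea using the FileServerResourceManager module (report name, namespace, schedule, and mail address are example values):

# Define a weekly schedule and attach two report types to it
$schedule = New-FsrmScheduledTask -Time (Get-Date "09:00") -Weekly Sunday
New-FsrmStorageReport -Name "Weekly usage" -Namespace "D:\Shares" -ReportType LargeFiles,LeastRecentlyAccessed -Schedule $schedule -MailTo "admin@contoso.com"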

Generating On-Demand Reports

On-demand storage reports are generated using the Generate Reports Now option. They are used to analyze disk usage on the server at any time and, like scheduled reports, are saved in the default report location.

Generate Reports Immediately

  1. Click on Storage Reports Management node
  2. Right click on Storage Reports Management and then click on Generate Reports Now (Alternatively, choose Generate Reports Now from the Actions panel) to open the Storage Reports Task Properties dialog box
  3. Selecting the volumes and folders to use:
    • Under Scope click on Add
    • Browse the folders and select by clicking on the desired folder and click OK.
  4. To specify the nature of the report:
    • Under Report Data, select the report(s) you want to be included

Editing report parameters:

  • Click the report label and click Edit Parameters.
  • In Report Parameters, edit the parameters as needed, then click OK.
  • You can view a list of selected parameters by clicking Review Selected Reports, then click Close.
  5. Specify the saving format:
    • Under Report Formats, you can keep the default Dynamic HTML or choose CSV, XML, HTML, or Text instead.
  6. Send the Storage Reports by e-mail:
    • On the Delivery tab, select Send reports to the following administrators, then enter the accounts using the format account@domain. Separate multiple accounts with semicolons.
  7. Click OK to open the Generate Storage Reports dialog box.
  8. Choose how to generate the on-demand reports:
    • You can wait for the entire report set to be generated and then have it displayed immediately.
    • To view the reports later, click Generate reports in the background.
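
An existing report task can also be run on demand from PowerShell. This is a minimal sketch using the FSRM cmdlets and the hypothetical task name from the earlier example:

  # Minimal sketch: run an existing FSRM storage report task on demand.
  # "Weekly usage report" is the hypothetical task name created earlier.
  Start-FsrmStorageReport -Name "Weekly usage report"

  # Inspect the task's current state and last run details.
  Get-FsrmStorageReport -Name "Weekly usage report" | Format-List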

Conclusion

All Storage Report tasks are added to the Storage Reports Management node, where they can be viewed by status, last run time, the output of each run, and the next scheduled run time.

Prevent Unauthorized Access to Windows Server Storage Reports!

Get your free edition of the easiest and fastest NTFS Permission Reporter now!

Storage Replication in Windows Server 2016

Storage Replica is a new technology introduced in Windows Server 2016. It enables replication of volumes between servers or clusters for disaster recovery. It also allows users to create stretch failover clusters that span two sites, with all nodes kept in sync.

Note: This feature is only available in the Datacenter edition of Windows Server 2016.

Storage Replica supports both synchronous and asynchronous replication:

  • Asynchronous replication mirrors data across sites beyond metropolitan ranges, over network links with higher latencies, but without a guarantee that both sites have identical copies of the data at the moment of a failure.
  • Synchronous replication mirrors data within a low-latency network, keeping volumes crash-consistent to ensure zero data loss at the file-system level during a failure.

Why You Need Storage Replication

Storage Replica addresses the modern requirements for disaster recovery and preparedness in Windows Server 2016 Datacenter Edition. For the first time, Windows Server offers the peace of mind of zero data loss, with the ability to synchronously protect data across racks, floors, buildings, campuses, counties, and cities.

After a disaster strikes, the data is accessible elsewhere without any loss. The same principle applies before a disaster strikes: Storage Replica lets you switch workloads to safer locations when a catastrophe gives only a few moments' warning, again without any data loss.

Storage Replica is also reliable because it supports asynchronous replication over longer distances and higher-latency networks. Because it is not checkpoint-based, the delta of changes tends to be much lower than with snapshot-based products. Storage Replica operates at the partition layer and can therefore replicate all VSS snapshots created by Windows Server and backup software, which allows unstructured user data to be replicated synchronously.

Storage Replica also lets users decommission existing file replication systems such as DFS Replication that were pressed into service as low-end disaster recovery solutions. DFS Replication works well over very low-bandwidth networks, but its latency is usually high, largely because it requires files to be closed and uses artificial throttles to avoid network congestion.

Supported Configurations

Stretch Cluster allows configuration of compute and storage in a single cluster, where some nodes share one set of asymmetric storage and other nodes share the other set, then replicate synchronously or asynchronously with site awareness. This scenario can use Storage Spaces with shared SAS storage, SAN, and iSCSI-attached LUNs. It is managed with PowerShell and the Failover Cluster Manager graphical tool, and allows for automated failover.

Cluster to Cluster allows replication between two separate clusters, where one cluster replicates synchronously or asynchronously with another. This scenario can use Storage Spaces Direct, Storage Spaces with shared SAS storage, and SAN and iSCSI-attached LUNs. It is managed with PowerShell and requires manual intervention for failover. Azure Site Recovery support is included for this scenario.

Server to Server allows synchronous and asynchronous replication between two or more standalone servers, using Storage Spaces with shared SAS storage, SAN, and iSCSI-attached LUNs. It is managed with PowerShell and the Server Manager tool, and requires manual intervention for failover.

The Key Features of Storage Replication

Simple Management and Deployment
Storage Replica is designed for ease of use. Creating a replication partnership between two servers requires only a single PowerShell command, and deploying stretch clusters uses an intuitive wizard in the Failover Cluster Manager tool.

Host and Guest
All Storage Replica capabilities are available in both host-based and virtualized guest deployments. This means guests can replicate their data volumes even when running on non-Windows virtualization platforms or in public clouds, as long as the guest runs Windows Server 2016 Datacenter Edition.

Block-Level Replication, Zero Data Loss
With synchronous replication, there is no possibility of data loss. Because replication happens at the block level, there is no possibility of files being locked during replication.

User Delegation
Operators can be delegated permission to manage replication without being members of the built-in Administrators group on the replicated nodes, limiting their access to unrelated areas.
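
As a brief illustration, delegation can be granted from PowerShell. This is a minimal sketch assuming the Grant-SRDelegation and Get-SRDelegation cmdlets of the StorageReplica module and a hypothetical user account:

  # Minimal sketch: delegate Storage Replica management to a non-administrator.
  # CONTOSO\sr-operator is a hypothetical account.
  Grant-SRDelegation -UserName "CONTOSO\sr-operator"

  # Review the delegations currently in place.
  Get-SRDelegation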

Network Constraint
Storage Replica traffic can be constrained to individual networks per server and per replicated volume, in order to preserve bandwidth for backup, application, and management software.
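
For example, a network constraint can be applied from PowerShell. This is a minimal sketch assuming the Set-SRNetworkConstraint and Get-SRNetworkConstraint cmdlets; the interface index values are hypothetical, while the server and replication group names reuse those from the setup example later in this article:

  # Minimal sketch: restrict replication traffic to specific network interfaces.
  Set-SRNetworkConstraint -SourceComputerName "CHA-SERVER1" -SourceRGName "SERVER1" -SourceNWInterface 2 `
      -DestinationComputerName "CHA-SERVER2" -DestinationRGName "SERVER2" -DestinationNWInterface 3

  # Verify the constraint.
  Get-SRNetworkConstraint -SourceComputerName "CHA-SERVER1" -SourceRGName "SERVER1" `
      -DestinationComputerName "CHA-SERVER2" -DestinationRGName "SERVER2"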

High Performance Initial Sync
Storage Replica supports seeded initial sync, where a subset of the data already exists on the target from older backups, copies, or shipped drives. The initial replication then copies only the differing blocks, which can shorten the initial sync time and prevent the transfer from consuming the limited bandwidth available.

Finally, Storage Replica uses SMB 3 as its transport protocol, carried over TCP/IP and, where available, RDMA.

Prerequisites

  1. Two servers, each with two volumes: one volume for data and the other for logs.
  2. Data volumes need to be the same size on the main server and the remote server.
  3. Log volumes should also be of identical sizes on both servers.
  4. Data volumes should not exceed 10TB and should be formatted with NTFS.
  5. Both servers need to be running Windows Server 2016.
  6. There must be at least 2GB of RAM and two processor cores on each server.
  7. There must be at least one TCP/Ethernet connection on each server for synchronous replication, preferably RDMA.
  8. The network between the servers must have enough bandwidth to accommodate the write workload, with an average round-trip latency of 5ms or less for effective synchronous replication.
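
In addition, the Storage Replica feature itself must be installed on both servers before replication can be configured. A minimal sketch, assuming the standard Windows Server feature names:

  # Minimal sketch: install the Storage Replica feature and the File Server role
  # on each server; a restart is required afterwards.
  Install-WindowsFeature -Name Storage-Replica,FS-FileServer -IncludeManagementTools -Restart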

How it Works

The following describes how storage replication works in a synchronous configuration.

The application will write data onto the File System volume labelled Data. This will be intercepted by I/O (input/output) filtering and be written onto the Log Volume located on the same server. This data will then be replicated across to the remote server’s log volume. When this data is written on the log volume, an acknowledgement is sent back to the primary server and to the application. On the remote server, data will be flushed from the Logs volume to the Data volume.

Note: The purpose of the Log Volume is to record and verify all the changes that occur to the replicated blocks. Furthermore, in a synchronous configuration the primary server must wait for acknowledgement from the remote server; if network latency is high, this slows down the replication process. Consider using RDMA, which offers low network latency.

In the asynchronous replication model, data is written to the Log Volume on the main server and an acknowledgement is then sent to the application. Data is then replicated from the Log Volume on the primary server to the Log Volume on the remote server. Should the link between the two servers deteriorate, the primary server keeps recording changes in its log until the link is restored, whereupon replication of the changes continues.

Setting Up Storage Replication

  1. Import-Module StorageReplica
    Launch Windows PowerShell and verify that the Storage Replica module is present.
  2. Test-SRTopology -SourceComputerName CHA-SERVER1 -SourceVolumeName e: -SourceLogVolumeName f: -DestinationComputerName CHA-SERVER2 -DestinationVolumeName e: -DestinationLogVolumeName f: -DurationInMinutes 30 -ResultPath c:\temp
    Test the Storage Replica topology by running the command above.
  3. PowerShell will then generate an HTML report giving an overview of whether the requirements are met.
  4. New-SRPartnership -SourceComputerName CHA-SERVER1 -SourceRGName SERVER1 -SourceVolumeName e: -SourceLogVolumeName f: -DestinationComputerName CHA-SERVER2 -DestinationRGName SERVER2 -DestinationVolumeName e: -DestinationLogVolumeName f:
    Begin setting up the replication configuration using the command above.
  5. Set-SRPartnership -ReplicationMode Asynchronous
    Run Get-SRGroup to list the configuration properties. Replication runs in synchronous mode by default, with the log size set to 8GB; it can be switched to asynchronous mode using the command above.

When we open File Explorer on the remote server, Local Disk E will be inaccessible, while the logs are stored on Volume F.

When data is written on the source server, it will be replicated block by block to the destination or remote server.
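
To confirm that replication is configured and healthy, the partnership and replication groups can be inspected from PowerShell. This is a minimal monitoring sketch using the StorageReplica cmdlets; the event provider name is given as an assumption and may need adjusting:

  # Show the partnership and its replication mode.
  Get-SRPartnership

  # Check per-volume replication status and bytes remaining during the initial sync.
  (Get-SRGroup).Replicas | Select-Object DataVolume, ReplicationStatus, NumOfBytesRemaining

  # Review recent Storage Replica events (provider name assumed).
  Get-WinEvent -ProviderName Microsoft-Windows-StorageReplica -MaxEvents 20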


Prevent Unauthorized Access to Sensitive Windows Folders!

  • No more unauthorized access to sensitive data
  • No more unclear permission assignments
  • No more unsafe data
  • No more security leaks

Get your free trial of the easiest and fastest NTFS Permission Reporter now!

Overview: Resilient File System (ReFS)

Resilient File System (ReFS) is Microsoft's latest file system and an alternative to the New Technology File System (NTFS). ReFS was introduced for deployment on systems with large data sets, offering more functionality in terms of efficiency, scalability, and availability.

An outstanding feature of ReFS is data integrity, which protects data from common errors that may lead to data loss. In case of an error in the file system, ReFS has the ability to recover without compromising volume availability. In addition, ReFS is a robust file system with proven reliability, and it is time- and cost-efficient when used on servers.

The Key Elements of ReFS

The key elements of a Resilient File System are dependent on the amount of data the server system manages.

  • Allocate on Write
    The main purpose of this feature is to avoid data corruption: updates are written to a new location rather than in place, which allows, for example, a database to be cloned simultaneously without straining the available storage space. All forms of torn writes are eliminated with the allocate-on-write method, and a file stored on a ReFS partition can be read and written in a single operation.
  • B+ Trees
    Servers store large amounts of information and files and folders of virtually unlimited size. ReFS's scalability means that file servers can handle large data sets efficiently. The B+ tree structure stores and retrieves data in a tree, with every internal node holding keys and pointers to lower-level nodes in the same tree.

Why Use Resilient File System

  • Resilience
    As its name suggests, a ReFS partition automatically detects and repairs errors while files are in use, without compromising file integrity or availability. Resiliency relies on the following four factors:

    • Integrity Streams
      Integrity streams use checksums on stored data, enabling the partition to verify the reliability and consistency of a file. Fault tolerance and redundancy are maintained through data striping. PowerShell cmdlets such as Get-FileIntegrity and Set-FileIntegrity can be used to manage integrity streams (see the sketch after this list).
    • Storage Space Integration
      ReFS can repair data files from an alternate copy held in the storage space, which is possible when it is used alongside disk mirroring. Repair and replacement take place online without the need to unmount the volume.
    • Data Recovery
      When data is corrupted and no valid copy of it exists elsewhere, ReFS removes the corrupt data from the namespace while keeping the volume online.
    • Preventive Error Correction
      ReFS validates data before any read or write action. In addition, it periodically scans volumes to identify potential errors and proactively trigger a repair.
  • Compatibility
    ReFS can be used alongside volumes formatted with the New Technology File System (NTFS) because it retains support for key NTFS features.
  • Time Saver
    When backing up data or transferring files from partitions using ReFS, the time taken during read/write actions is reduced compared to backing up data in an NTFS partition.
  • Performance
    ReFS performance builds on new features such as virtualization support, block cloning of volumes, and real-time optimization, all of which enhance dynamic and mixed workloads. Performance on ReFS is made possible through:

    • Mirror Accelerated Parity
      The parity mode ensures that the system delivers both efficient data storage and high performance. The volume is divided into two logical storage tiers, each with its own drive characteristics and resiliency type.
    • Accelerated VM Operations
      To improve virtualization workloads, ReFS supports partitions with block cloning, which speeds up copy operations. ReFS also reduces the time needed to create new fixed-size virtual hard disk files from minutes to seconds.
    • Varied Cluster Sizes
      ReFS allows the creation of both 4K and 64K cluster sizes. 4K is the recommended cluster size for most deployments, but ReFS also accommodates 64K clusters for large, sequential input/output requests (see the formatting example after this list).
    • Scalability
      The ability to support large data sets without a negative impact on system performance makes ReFS, in terms of scalability, among the best file systems to deploy. Shared data storage pools across machines also enhance fault tolerance and load balancing.
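
As referenced in the Integrity Streams and Varied Cluster Sizes bullets above, both behaviors can be managed from PowerShell. This is a minimal sketch; the drive letter, label, and paths are placeholders:

  # Minimal sketch: format a volume with ReFS using a 64K cluster size
  # (drive letter and label are placeholders).
  Format-Volume -DriveLetter R -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "Data"

  # Check whether integrity streams are enabled for a file,
  # and enable them explicitly for a folder where required.
  Get-FileIntegrity -FileName "R:\Shares\Reports\summary.docx"
  Set-FileIntegrity -FileName "R:\Shares\Reports" -Enable $true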

Points to Note

ReFS cannot be used on a boot file system (the drive containing bootable Windows files). The ReFS partition is best used exclusively on storage volumes.

Removable volumes such as USB flash drives cannot accommodate a ReFS partition. There is also no mechanism for converting a ReFS partition to another file system.

ReFS, like NTFS, was built on a foundation of compatibility, making it easier to move data from NTFS to ReFS because of inherited features such as access control lists, BitLocker, mount points, junction points, volume snapshots, symbolic links, and file IDs.

Some of the features lost when moving to ReFS are object IDs, short names, extended attributes, compression, quotas, hard links, user data transactions, and file-level encryption.

Some files or installed programs may not function as intended when ReFS is used on a non-server operating system.

In the event that a ReFS partition fails, recovering the partition is not possible; all that can be done is data recovery. Presently, there is no recovery tool available for ReFS.

Conclusion

The Resilient File System has unique advantages over the existing file system. It may have its own drawbacks, but that does not take away its self-healing power, file repairs without downtime, resilience in the event of power failure, and its ability to accept huge file sizes and names longer than the usual 255 characters. File access on ReFS uses the same mechanisms NTFS uses.

Most implementations of ReFS target systems with very large storage and rapid input/output demands. ReFS cannot fully replace NTFS because it was designed for specific work environments, and some of its features do not yet have full support, so system administrators aspiring to use ReFS may still have to wait for its full implementation.