File System Attacks on Microsoft Windows Server

The most common attacks on Microsoft Windows Server systems are Active Directory targeted attacks, which makes sense given that AD is the “heart” of any Windows-based environment. A bit less common, but still very dangerous (and interesting), are file system attacks.

In this article, we investigate the most common file system attack techniques and the ways to protect against them.

The goal of a file system attack is always the data: information stored on a server that is, for whatever reason, important to whoever planned the attack. To get to the data, the attacker first needs credentials, and the more elevated the account, the better.

We will not cover credential theft here, as that could be a topic of its own; instead, we will assume that the attacker has already breached the organization and obtained Domain Administrator credentials.

Finding File Shares

The first step is finding the data, the place where it “lives”.

This is where the tools come to the front. Most of the tools attackers use are penetration testing tools, such as smbmap in our example, or PowerShell (we will show both ways).

SMBMap, as its GitHub page says, “allows users to enumerate samba share drives across an entire domain. List share drives, drive permissions, share contents, upload/download functionality, file name auto-download pattern matching, and even execute remote commands. This tool was designed with pen testing in mind, and is intended to simplify searching for potentially sensitive data across large networks”.

Using smbmap’s features, attackers can find all the file shares on the targeted hosts and determine what sort of access and permissions each share allows, along with more detailed information about every file share on the system.
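As a hedged illustration of what such enumeration looks like (the host address, domain, and account below are invented; the flags follow smbmap’s documented options):

smbmap -u helpdesk -p 'Password1' -d CONTOSO -H 192.168.1.10
smbmap -u helpdesk -p 'Password1' -d CONTOSO -H 192.168.1.10 -R

The first command lists the shares on the host and the permissions the account has on each; the second recursively lists the contents of every readable share.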

Another common way of locating the data is PowerShell-based.

By definition – PowerSploit is a collection of Microsoft PowerShell modules that can be used to aid penetration testers during all phases of an assessment.

Like smbmap, PowerSploit has a huge number of features. For finding data shares, attackers use the Invoke-ShareFinder function, which, in combination with other PowerSploit features, reveals exactly the same things as smbmap: all the information necessary to access and use the data.
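A minimal sketch of that workflow, assuming PowerSploit’s Recon module has already been copied onto a machine inside the network (the module path is an assumption):

Import-Module .\Recon\Recon.psd1
# Show only shares the current user can actually open:
Invoke-ShareFinder -CheckShareAccess
# Reduce noise by skipping default admin, printer, and IPC$ shares:
Invoke-ShareFinder -ExcludeStandard -ExcludePrint -ExcludeIPC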

Protection

Of course, the examples above are just a brief description of attacks that can expose your data shares to a potential attacker, but either way, it is clear that listing your data is the first step toward taking it.

So here are some recommended actions to protect your system:

Removing open shares: Reduce open shares as much as possible. It is fine to keep some if a job explicitly requires them, but open shares are often just the result of sloppily made permissions. Review your default permissions (default permissions are equivalent to open), change them properly, and deny the potential attacker an easy listing.
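Built-in cmdlets are enough for a quick audit here; a hedged sketch (the filter for hidden administrative shares is illustrative):

Get-SmbShare | Where-Object { $_.Name -notmatch '\$$' } |
    ForEach-Object { Get-SmbShareAccess -Name $_.Name }

This lists every non-hidden share along with the accounts allowed to access it, which makes overly permissive entries such as Everyone easy to spot.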

Monitor first-time access activity – this is more of an admin tip than a protection method, but it can be important. If a user has rights to a share but has never used it, and all of a sudden the activity on that account changes and steps outside of “normal”, it could be a sign that the account’s credentials have been hijacked.

Check for potentially harmful software – not outright malware, but hints. Smbmap is built in Python, so a sudden installation of Python software, or of a PowerSploit module, on your system could be an early alarm that something suspicious is happening on your servers.

Finding Interesting Data

So now the potential attacker knows where the data on our hypothetical server “lives”. The next step is narrowing the data down to the “interesting” part. There can be huge numbers of files in even the smallest organizations, so how can the attacker know which data he or she needs?

In PowerSploit, the functionality used is called Invoke-FileFinder. It has a lot of filtering options for narrowing the data down to the “interesting” part, and it can export results to CSV files, which allows the attacker to explore them on his own system at whatever pace he wants. After identifying the targets, the attacker can make a targeted pass, collect the needed files in a staging area, and transport them out of the network (via FTP, or even a Dropbox trial account).
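A hedged sketch of such a run (the search terms and output path are invented for illustration):

Invoke-FileFinder -Terms 'password','salary','confidential' -OutFile C:\Temp\interesting-files.csv

The resulting CSV can then be reviewed offline to pick the files worth exfiltrating.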

The same thing can be done with smbmap. Just like PowerSploit, it filters the data with the options the tool provides and surfaces the files the attacker is interested in, with the same outcome: getting at the information.
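For example, smbmap’s file name pattern matching (quoted in its description above) can auto-download matching files during a recursive listing; the pattern here is invented:

smbmap -u helpdesk -p 'Password1' -d CONTOSO -H 192.168.1.10 -R -A '(password|\.kdbx)'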

Protection

At this point, the hypothetical attack is in its second phase: the attacker has successfully listed the files and found the most interesting ones. Only the easy part is left undone – simply taking the data. How do you protect against that? Together with the methods mentioned earlier, the following can help an administrator fortify the system and its files.

Password rotation – can be a very important measure, especially for services and applications that store passwords in the file system. Regularly rotating passwords and checking file contents presents a very large obstacle to the attacker and will make your system more secure.

Tagging and encryption – in combination with Data Loss Prevention, these will highlight and encrypt important data, which stops the simpler types of attack, or at least keeps them from getting at the important data.

Persistence

The final part of the file system attack. In our hypothetical scenario, the attacker has listed and accessed data on the penetrated system. Here we describe how attackers persist in the system, even when they get kicked out the first time.

Attackers hide some of their data inside the NTFS file system itself, more precisely in Alternate Data Streams (ADS). NTFS stores a file’s contents in the $DATA attribute it tracks for that file, and a file can carry additional named $DATA streams that ordinary directory listings do not show. Malware authors and other “bad guys” use ADS as a hiding place and a way back in, but they still need credentials.
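A hedged illustration with built-in PowerShell (the file name, stream name, and content are invented):

# Hide data in a named stream of an innocent-looking file:
Set-Content -Path C:\Temp\report.txt -Stream payload -Value 'hidden data'
# Defender's view: reveal all streams of a file, then read one back:
Get-Item -Path C:\Temp\report.txt -Stream *
Get-Content -Path C:\Temp\report.txt -Stream payload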

As usual, they can be stopped by correct use of permissions: do not allow “write” permission to any account that is not specifically assigned to perform write operations.

File system attacks are tricky, but they leave traces, and in general most of them can be prevented by a system administrator who anticipates them. In this field we can truly say it is better to prevent than to heal, and it is clear that only knowing your system fully, together with full-time administration and monitoring, can make your system safe.

Do you want to avoid Unwanted File System Attacks on Microsoft Windows Server?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

Introduction to Data Deduplication on Windows Server 2016

Data Deduplication is a Microsoft Windows Server feature, initially introduced in Windows Server 2012 edition. 

As a simple definition, data deduplication is the elimination of redundant data in a data set, so that only one copy of any given data is stored. It is done by identifying duplicate byte patterns through data analysis, removing the duplicates, and replacing them with references that point to the single stored copy.

In 2017, according to IBM, the world was creating 2.5 quintillion (2.5 × 10^18) bytes of data a day. That fact shows that today’s servers handle huge portions of data in every aspect of human life.

Some percentage of that inevitably falls on duplicated data in one form or another, and that data is nothing more than unnecessary load on servers.

Microsoft knew the trends way back in 2012 when Data Deduplication was introduced, and kept developing it, so in Windows Server 2016 Data Deduplication is both more advanced and more important.

But let’s start with 2012 and understand the feature in its basics.

Data Deduplication Characteristics: 

Usage – Data deduplication is very easy to use. It can be enabled on a data volume in “one click”, with no delays or impact on system functionality. In simple words, if a user requests a file, he will get it as usual, no matter whether that file has been processed by deduplication.

Deduplication deliberately does not target all files. For example, files smaller than 32 KB, files encrypted with EFS, and files that have extended attributes will not be affected by the deduplication process.

If a file has an alternate data stream, only the primary stream will be processed; the alternate stream will not.

Deduplication can be used on primary data volumes without affecting files that are actively being written to, because files are only processed once they reach a certain age. This keeps performance high for active files while producing savings on the rest. Files are sorted into categories by criteria, and those categorized as “in policy” are deduplicated, while the others are not.

Deduplication does not change the write path of new files. New files are written directly to NTFS and evaluated later by a background monitoring process.

When files reach a certain age, the MinimumFileAgeDays setting (previously set up by the admin) decides whether they are eligible for deduplication. The default setting is 5 days, but it can be changed down to a minimum of 0 days, which processes files regardless of age.

Some file types can also be excluded, such as already-compressed PNG or CAB files, if it is decided that the system would not benefit much from processing them.
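Both settings can be adjusted per volume with PowerShell; a hedged example (the volume letter and exclusions are illustrative):

Set-DedupVolume -Volume E: -MinimumFileAgeDays 0
Set-DedupVolume -Volume E: -ExcludeFileType png,cab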

Backing up and restoring to another server will not cause problems. All settings are maintained on the volume itself and are relocated along with it, except for schedule settings, which are not written to the volume. Note that if the volume is relocated to a server that does not use deduplication, users will not be able to access the files affected by the process.

Resource Control 

The feature is made to follow the server workload and adapt to system resources. Servers usually have roles to fill, and storage, as seen by the admin, is only there to store background data, so deduplication adapts to that philosophy: if there are resources to deduplicate, the process runs; if not, it stands by and waits for resources to become available.

The feature is designed to use few resources and to reduce input/output operations per second (IOPS), so it can scale to large data sets and improve performance, with an index footprint of only 6 bytes of RAM per chunk (average chunk size 64 KB) and temporary partitioning.

– As mentioned, deduplication works on a “chunking” principle: an algorithm splits a file into pieces of roughly 64 KB, compresses them, and stores them in a hidden folder. If a user requests that file, it is regenerated from the pieces and served to the user.

– BranchCache: the feature that shares the same sub-file chunking and indexing engine. When needed, an already-indexed chunk can be sent over the WAN to a branch office, which saves a lot of time and data.

Is There Fragmentation, and What About Data Access?

The question that imposes itself when reading about deduplication is: fragmentation!?

Does spreading chunks around your hard drive cause fragmentation?

The answer is no. Deduplication’s filter driver keeps sequences of unique chunks together, preserving disk locality, so distribution is not random. Deduplication also has its own cache, so when a file is requested multiple times across an organization, the access pattern speeds things up rather than starting multiple file “recovery” processes, and the user gets the same response time as with a non-deduplicated file. When copying one large file, end-to-end copy times can be about 1.5 times those on a non-deduplicated volume. But the real quality and savings come when copying multiple large files at the same time: thanks to the cache, copy times can speed up by an amazing 30%.

Deduplication Risks and Solutions

Of course, like all other features, this way of working has some risks.

Any type of data corruption carries serious risks, but there are solutions, too.

Because many files can reference the same chunk, errors caused by disk anomalies, controller errors, firmware bugs, or environmental factors such as radiation or disk vibration can damage a chunk and cause major problems, up to the loss of multiple files. However, good admin organization, the use of backup tools, on-time corruption detection, redundant copies, and regular checkups can minimize the risk of corrupted data and losses.

Deduplication in Windows Server 2016 

As with all other features, data deduplication went through some upgrades and gained new features in the latest edition of Windows Server.

We will describe the most important ones and show how to enable and configure the feature in a Windows Server 2016 environment.

Multithreading  

Multithreading is flagged as the most important change in 2016 compared with Windows Server 2012 R2. On Server 2012 R2, deduplication operates in single-threaded mode and uses one processor core per volume. Microsoft saw this as a performance limit, and in 2016 introduced a multi-threaded mode: each volume now uses multiple threads and multiple I/O queues. That also changed the size limits per file and volume: in Server 2012 R2 the maximum volume size was 10 TB, while the 2016 edition supports 64 TB volumes and 1 TB files, which represents a huge breakthrough.

Virtualization Support 

In the first edition of the deduplication feature (Microsoft Windows Server 2012), there was a single type of deduplication, created only for standard file servers, with no support for constantly running VMs.

Windows Server 2012 R2 started using the Volume Shadow Copy Service (VSS): deduplication optimizes data through optimization jobs, while VSS captures and copies stable volume images for backup on running server systems. With VSS in use, Microsoft introduced virtual machine deduplication support in the 2012 R2 system as a separate type of deduplication.

Windows Server 2016 went one step further and introduced another type of deduplication, designed specifically for virtualized backup servers (such as DPM).
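These types surface in PowerShell as the -UsageType parameter of Enable-DedupVolume; a hedged illustration (the volume letters are examples):

Enable-DedupVolume -Volume E: -UsageType Default
Enable-DedupVolume -Volume F: -UsageType HyperV
Enable-DedupVolume -Volume G: -UsageType Backup

Default targets general-purpose file servers, HyperV targets storage for running VMs, and Backup targets virtualized backup servers.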

Nano server support  

Nano Server is a minimal-component yet fully operational Windows Server 2016 installation, similar to the Windows Server Core edition but smaller and without GUI support, ideal for purpose-built, cloud-based apps, infrastructure services, or virtual clusters.

Windows Server 2016 fully supports the deduplication feature on that type of server.

Cluster OS Rolling Upgrade support 

Cluster OS Rolling Upgrade is a Windows Server 2016 feature that allows cluster nodes to be upgraded from Windows Server 2012 R2 to Windows Server 2016 without stopping Hyper-V, using the so-called “mixed mode” operation of the cluster. From the deduplication angle, that means the same data can be located on nodes with different versions of deduplication. Windows Server 2016 supports mixed mode and provides access to deduplicated data while the cluster upgrade is ongoing.

Installation and Setup of Data Deduplication on Windows Server 2016 

In this section, we give an overview of best-practice installation and setup of data deduplication on a Windows Server 2016 system.

As usual, everything starts with a role. 

In Server Manager, choose Data Deduplication (located in the drop-down menu of File and Storage Services), or use the following PowerShell cmdlet (as administrator):

Install-WindowsFeature -Name FS-Data-Deduplication 

Enabling And Configuring Data Deduplication on Windows Server 2016 

On GUI systems, deduplication can be enabled from Server Manager – File and Storage Services – Volumes: select a volume, right-click it, and choose Configure Data Deduplication.

After selecting the desired type of deduplication, it is possible to specify types of files or folders that will not be affected by the process.

After that, a schedule needs to be set up with a click on the Set Deduplication Schedule button, which allows selection of days of the week, start time, and duration.

From a PowerShell terminal, deduplication can be enabled with the following command (E: is an example volume letter):

Enable-DedupVolume -Volume E: -UsageType HyperV

Scheduled jobs can be listed with the command:

Get-DedupSchedule 

And adjusted with the following command (example – a garbage collection job):

Set-DedupSchedule -Name "OffHoursGC" -Type GarbageCollection -Start 08:00 -DurationHours 5 -Days Sunday -Priority Normal
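Progress and savings can then be checked with the built-in status cmdlets, for example:

Get-DedupStatus
Get-DedupVolume | Format-List Volume,SavedSpace,SavingsRate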

These are only the basics of the deduplication PowerShell commands; there are many more deduplication-specific cmdlets, and they can be found at the following link:

 https://docs.microsoft.com/en-us/powershell/module/deduplication/?view=win10-ps 

Do you want to avoid Data Loss and Unwanted Data Access?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

How to Configure NFS in Windows Server 2016

NFS (Network File System) is a client-server file system that allows users to access files across a network and handle them as if they were located in a local directory. It was developed by Sun Microsystems, Inc. and is common on Linux/Unix systems.

Since Windows Server 2012 R2, it has been possible to configure it on Windows Server as a role and use it with Windows or Linux machines as clients. This article shows how to configure NFS in Windows Server 2016.

How to Install NFS on Windows Server 2016

Installing the NFS (Network File System) role is no different from installing any other role: it starts from the “Add Roles and Features” wizard.

With a few clicks on the “Select server roles” page, under File and Storage Services, expanding File and iSCSI Services reveals the “Server for NFS” checkbox. Installing that role enables the NFS server.
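The same role can also be installed from PowerShell; a hedged one-liner using the role’s feature name:

Install-WindowsFeature FS-NFS-Service -IncludeManagementTools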

Configuring NFS on Windows Server 2016

After installation, the role needs to be configured properly. The first stage is choosing or creating a folder for the NFS (Network File System) share.

Right-clicking the folder and choosing Properties brings up the NFS Sharing tab, which contains the Manage NFS Sharing button.

It opens the NFS Advanced Sharing dialog box, with authentication and mapping options as well as a “Permissions” button.

Clicking the “Permissions” button opens the Type of Access drop-down list, with the possibility of allowing root user access and setting the permission level.

By default, any client can access the NFS shared folder, but it is possible to control or limit specific clients by clicking the Add button and typing the client’s IP address or hostname.
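The equivalent share can also be created from PowerShell; a hedged sketch (the share name, path, and permissions are illustrative, and anonymous access is enabled here to match the default behavior described above):

New-NfsShare -Name "nfsshare" -Path "D:\Shares\nfs" -Permission readwrite -AllowRootAccess $true -EnableAnonymousAccess $true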

 Mount NFS Shared Folder on Windows Client 

The steps above make the NFS (Network File System) server ready for work.

To test it successfully, the chosen NFS folder needs to be mounted on a Linux or Windows client with the following steps:

  1. Activate the feature on the client, via Control Panel / Programs and Features / Services for NFS / Client for NFS
  2. After installing the service, mount the folder with the following command (here with anonymous access):
mount -o anon \\<NFS-Server-IP>\<NFS-Shared-Folder> <DriveLetter>:

The command maps the folder as a drive and assigns the chosen letter to it.

Mount NFS Shared Folder on Linux Client  

Even though NFS is native to Linux/Unix systems, the folder still needs to be mounted via a command, similar to Windows systems:

mount -t nfs <NFS-Server-IP>:/<NFS-Shared-Folder> /<Mount-Point>

 

Do you have unclear NTFS Permissions assignments?
Do you have too many special permissions set on your fileservers?
Or blocked NTFS Permission Inheritance?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

How to Optimize Your Active Directory for Windows Server 2016

Microsoft Windows Server 2016 is still new on the market, and organizations are already asking their IT experts to evaluate its added value and the challenges they may encounter when moving from current systems to the new server platform. In addition to the features found in Windows Server 2012 and 2012 R2, Windows Server 2016 presents new possibilities and capabilities that are missing from previous Windows Server platforms. Any new Windows Server operating system that breaks into the market gets attention, and Windows Server 2016 has made tremendous improvements relevant to Active Directory optimization.

The best approach to take before implementing Windows Server 2016 is to test its readiness by looking for ways of minimizing the likely impact of migration. Another way to look at it is to identify organizational needs and how they can be integrated into future implementations. The reason administrators would want to try Active Directory optimization on Windows Server 2016 is to provide an opportunity for growth, offer flexibility, and enhance the security setup of the organization. Now let us talk about Active Directory optimization.

Why Does Windows Server 2016 Matter

Windows Server 2016 combines different principles that define computation, identity, management and automation, security and assurance, and storage. All of these break down into the core elements of the server operating system: virtualization, system administration, network management and Software Defined Networking (SDN) technologies, cloud integration and management, and disk management and availability. All of this is meant to bring organizations into the future of technology without forcing them to discard the infrastructure used in their current environment.

Windows Server 2016 is a full-featured server operating system boasting solid performance and modern advancements. It shares many similarities with the Datacenter edition, incorporating support for Hyper-V containers, new storage features, and enhanced security designed to protect virtual machines and network communications that have no trust configured between them.

This article should help you, the reader, learn more about Windows Server 2016 features, the factors to consider before moving from an old to a new setup, and how to optimize Active Directory, with details on how to prepare for the move and migrate efficiently by managing the new environment effectively.

Windows Server 2016 New Features

Several features and enhancements form part of this server operating system. Here are some of the highlights:

Temporary Group Membership

This form of membership gives administrators a way of adding users to a security group for a limited time. For this feature to work, the Active Directory forest must be operating at the Windows Server 2016 functional level. System administrators need to know all the system installation requirements beforehand, during and after the transition.
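A hedged sketch with the ActiveDirectory PowerShell module (the forest, group, and user names are invented, and this assumes the Privileged Access Management optional feature has been enabled, which is an irreversible forest-level change):

# One-time prerequisite for the forest:
Enable-ADOptionalFeature 'Privileged Access Management Feature' -Scope ForestOrConfigurationSet -Target 'contoso.com'
# Add a user to a group with a time-to-live of one hour:
Add-ADGroupMember -Identity 'Domain Admins' -Members 'jsmith' -MemberTimeToLive (New-TimeSpan -Hours 1)

When the TTL expires, the membership is removed automatically.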

Active Directory Federation Service

There are essential changes that come with the Windows Server 2016 Federation Service:

Conditional Access Control

Active Directory in previous installations had straightforward access controls, because the assumption had always been that all users would log in from a computer joined to the domain, with proper Group Policy security settings. Conditional access gives users access to the resources that have been assigned to them.

In the current technological setup, users access resources from many types of devices that are not joined to the domain and often operate outside the organization’s norms. This is a direct call for improved security in the form of a conditional access control feature that gives administrators better control over users, whose requests can be handled on a per-application basis. For example, administrators may enforce multi-factor authentication when devices try to access business applications.

Support for Lightweight Directory Access Protocol (LDAP) v3

Another change introduced with regard to Active Directory Federation Services is support for the Lightweight Directory Access Protocol (LDAP) v3. This capability makes it easier to centralize identities across different directories. For example, an organization that uses a non-Microsoft directory for identification and access control can centralize identities toward the Azure cloud or Office 365. LDAP v3 support also makes it easier to configure single sign-on for SaaS applications.

Domain Naming Service (DNS)

Active Directory and DNS go hand in hand because of the dependency of Windows Server systems on DNS. There have been no significant changes in the Windows Server DNS service until the arrival of Windows Server 2016. The following are new features under the DNS:

1.     DNS Policies

The inherent ability to create new DNS policies is said to be the most significant. These policies enable administrators to control the way DNS responds to different queries. Examples of such policies are load balancing and the blocking of DNS requests coming from IP addresses that have been listed as malicious.
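A hedged sketch of such a blocking policy (the subnet and names are invented):

# Define a client subnet, then deny queries coming from it:
Add-DnsServerClientSubnet -Name 'MaliciousSubnet' -IPv4Subnet '172.16.0.0/16'
Add-DnsServerQueryResolutionPolicy -Name 'BlockMalicious' -Action DENY -ClientSubnet 'EQ,MaliciousSubnet'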

2.     Response Rate Limit

The rate of server responses to DNS queries can now be controlled. This control is designed to help defend against external attacks such as denial of service by limiting the number of times per second the DNS server will respond to a client.
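A hedged example of enabling it (the thresholds are illustrative):

Set-DnsServerResponseRateLimiting -Mode Enable -ResponsesPerSec 10 -ErrorsPerSec 10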

3.     Microsoft IP Address Management (Microsoft IPAM)

The most significant improvement to the DNS is in the IP Address Management (IPAM) system that helps track IP address usage. The integration of the Microsoft IPAM feature with DHCP has been robust, while the DNS side has been minimal. Windows Server 2016 brings new changes such as DNS management capabilities through inventory recording. IPAM support for multiple Active Directory forests is a welcome feature; supporting multiple forests is only possible if a trust already exists between them and IPAM is installed in each forest.

Migration Considerations

Planning is critical when moving from an earlier Windows Server version to Server 2016. The goal of any migration should be minimizing its impact on business operations. Going ahead with the migration should be an opportunity for administrators to set up a scalable, flexible, compliant, and secure platform.

1.     Understanding the Existing Server Environment.

It is a rookie mistake to jump into implementation without a proper analysis of the current server environment. Assessment at this stage should look at users, groups, distribution lists, applications, folders, and Active Directory. On the business side, there is a workflow, emails, programs, and any infrastructure used that should be assessed before making the big move.

It is also vital that you:

  • Understand what needs to be moved and what is to be left as it is. For example, there is no need to move inactive accounts and old data that is no longer relevant. All active data stores, mailboxes, and users are part of what you should not leave behind.
  • You will also want to analyze applications, users, and processes that need access and should be migrated to ensure that the relevant resources are available during and after the transfer.

2.     Improving Active Directory Security and Compliance Settings

Another critical factor to consider during migration is security and delegation by controlling who makes changes to Window Active Directory objects and policies. Most organizations choose to give access to Active Directory objects to solve an immediate problem and never clear the permissions. Proper controls should be in place to manage what can be added to the AD and who should be responsible for making such changes.

Continuous monitoring of activities in the Active Directory, to ascertain whether they comply with both internal and external regulations, should be ongoing. Microsoft Windows Server and AD can audit events with visible output, and this can be implemented quickly in a busy setup. Having a coherent AD audit cluster with analytical capabilities is critical for flagging unauthorized changes, spotting inappropriate use of AD and related resources, tracking users across the entire infrastructure, and giving compliance reports to the auditors.

3.     Ensuring Application Compatibility

Before making an effort to initiate migration, make sure that all software and third-party applications used in your organization are compatible and can work with Windows Server 2016. All in-house applications should also be tested to make sure they work correctly in the new environment.

4.     Minimizing Impact on Business

Ensuring in-house software compatibility is one aspect of reducing the cost of migration to the business. As an administrator, you need to know how the issue of downtime will be handled when moving from the legacy to the new system. One thing you need to avoid is underestimating the impact of migration on users and operations by failing to analyze all access points. Many such challenges can be avoided by scheduling resource-intensive migration tasks during off-peak hours.

Failure to have a smooth transition between the legacy and the new system can lead to service disruptions, lost productivity, and an increased cost of doing business. The co-existence of both the old and the new system is essential in any Active Directory migration, because users still need access to resources to ensure continuity. Directory synchronization is important at this stage to make sure that users can access their data.

5.     Restructure the Active Directory

Moving from your legacy system to Windows Server 2016 should be taken seriously and not treated like any other routine IT task. This is an opportunity to restructure your Active Directory to meet current and future needs. Every time there is a significant system upgrade, changes in organizational models and requirements may have prompted it. Changes in IT technology are also a major force that influences restructuring of the Active Directory.

Determine the number of domains and forests needed. Examine the need to merge some forests or create new ones. You can also take an opportunity to join new infrastructure to remote offices that may not have been in existence in the legacy system.

Active Directory Management and Recovery

Every IT management team faces challenges when managing the Active Directory on a daily basis. The configuration of user properties is time-consuming and error-prone when dealing with a large and complex Windows network. Some of these duties have to be performed manually, leading to repetitive and mundane tasks that end up taking most of the administrator’s time. Accomplishing these tasks with Windows native tools or PowerShell means you must have a deeper understanding of how Active Directory and its features work.

The use of software to manage repetitive Active Directory tasks simplifies the process. You can also get detailed reports on tasks and their status. Such software offers solutions that help in the planning and execution of an efficient AD restructuring, which will eventually help you implement a secure system. Managing AD with software gives management a common console to view and manage Active Directory users, computers, and groups. Some packages enable the administration to delegate repetitive tasks securely and perform controlled automation of the Active Directory structure.

Software Implementation

Two popular software packages used in the management of Active Directory optimization tasks are:

  1. ADManager Plus
  2. Quest Software

They both can help in the restructuring and consolidation of Windows Server 2016 in a new environment.

1.     ADManager Plus

The ADManager Plus has additional features such as sending and receiving customized notifications via SMS or emails. The search options make it easier for IT managers to search the directory with ease through its software interface panel. Using the ADManager Plus, the IT department can execute windows optimization tasks with ease in addition to the integration of utilities such as ServiceNow, ServiceDesk, and AdselfService Plus.

Active Directory User management

ADManager Plus manages thousands of Active Directory objects through its interface. It helps you create and modify users by configuring general attributes, Exchange Server attributes, Exchange policies, terminal service attributes, and remote login permissions. You can set up new users in Office 365 and G Suite when creating the new accounts in the Active Directory. You can design templates that help the help desk team modify and configure user accounts and properties in a single action.

Active Directory Computer Management

This solution allows for the management of all computers in the existing environment from any location. You can create objects in bulk using CSV templates, modify the group and general attributes of computers, move them between organizational units, and enable or disable them.

Active Directory Group Management

Group management is made more flexible by software modules that create and modify groups from templates and apply all configured attributes in an instant.

Active Directory Contact Management

You can use this software management tool to import and update Active Directory contacts as a single process, which means you do not have to select individual contacts for an update.

Active Directory Help Desk Delegation

The ADManager Plus delegation feature helps administrators create help desk administrators and delegate desired tasks related to user attributes. The various repetitive management tasks for users, groups, computers, and contacts can be delegated using customized account creation templates. Help desk users can share the administrators’ workload, freeing them up and giving them more time for core duties.

Active Directory Optimization Reports and Management

ADManager Plus provides information on different objects within the AD, allowing information to be viewed and analyzed in its web interface. For example, you can see a list of all inactive users and modify the accounts accordingly.

2.     Quest

Quest software takes a different approach because it deals with preparation, recovery, security and compliance, migration, consolidation, and restructuring.

Preparation

During preparation, Quest helps assess the existing environment: its Enterprise Reporter gives a detailed evaluation of the current setup, including Active Directory, Windows Server, and SQL Server. During this assessment, Quest can report the number of accounts in the Active Directory and isolate the active from the disabled ones. Knowing the exact status of your environment is paramount before the migration begins.

Quest helps discover identities and inventories on application servers that depend on the Active Directory domains being moved, enabling you to fix or redirect them on the new server.

Migration, Consolidation, and Restructuring

The Migration Manager for Active Directory offers “zero impact” AD restructuring and consolidation. It provides peaceful coexistence for both migrated and yet-to-be-migrated users by maintaining secure access to workstations and resources.

Secure Copy offers an automated solution for quickly migrating and restructuring files on the data server while maintaining security and access points. Its robustness makes the tool well suited to planning and verifying successful file transfers.

Migrator for Novell Directory Service (NDS) helps administrators move from Novell eDirectory to Active Directory. The tool also moves all data within Novell and re-assigns permissions to the new identities on the new server.

Security and Compliance

The Change Auditor for Active Directory gives a complete evaluation of all changes that have taken place in the Active Directory. The evaluation report contains information such as who made the changes, what kind of changes were made, the initial and final values before and after the adjustment, and the name of the workstation where the change occurred. The Change Auditor tool can also prevent changes; for example, you can disable the deletion or transfer of organizational units and changes to Group Policy settings.

Access Control

The Active Roles module helps keep AD security compliant by letting you control access and delegate tasks with least privilege. This provides an opportunity to generate access rules based on defined administrative policies and access rights. You can use Active Roles to bring together user groups and mailboxes, as well as to change or remove access rights based on role changes.

Centralized Permission Management

Security Explorer facilitates the management of Microsoft Dynamic Access Control (DAC) by enabling administrators to add, remove, restore, back up, and copy permissions, all from a single console. The tool can make targeted or bulk changes to server permissions, made possible by enhanced Dynamic Access Control management features such as the ability to grant, revoke, clone, and modify permissions.

Monitoring Users

InTrust enables the secure collection, storage, and reporting of alerts on data logs in line with both internal and external regulations surrounding policies and security best practice. Using InTrust, you get insight into user activities by auditing access to critical systems, and you can see suspicious logins in real time.

Management and Recovery

The easiest way for an IT administrator to manage user accounts, computers, and objects is via Group Policy. Poor management of Group Policy Objects (GPOs) can cause a lot of damage, for example a GPO that assigns proxy settings with the wrong proxy values.

GPOADmin automates Group Policy management and has a workflow that enables changes to be checked before the GPOs are approved. When GPOs are managed this way in production, the management team will be impressed by the reduced workload and the improved security.

Recovery is a critical process in any organization that runs its systems on Windows Server 2016. Wrong entries and accounts that were removed can be recovered. The Recovery Manager for Active Directory also gives access to features that report on differences and help restore objects that were changed.

It is important to be prepared for disaster and data recovery. In case your domain finds itself in the wrong hands, or the entire network setup is corrupted, use the Recovery Manager for Active Directory utility.

Conclusion

Windows Server 2016 has a wealth of new features and capabilities that streamline and improve management and facilitate a better user experience. A successful implementation means the Active Directory optimization has a sound consolidation process. Administrators who have already tested this server operating system should take advantage of the new capabilities.

The benefits of Active Directory optimization tools and utilities are numerous: they help set up a flexible and secure Windows Server 2016 and Active Directory that will work for your current and future environment. These utilities also help managers who are not well versed in Active Directory management tools and who need to switch to the new server to comply with regional and international standards.

 

 

Prevent Unauthorized Access to Sensitive Windows Folders!

  • No more unauthorized access to sensitive data
  • No more unclear permission assignments
  • No more unsafe data
  • No more security leaks

Get your free trial of the easiest and fastest NTFS Permission Reporter now!

Work Folders on Windows Server 2016

Work Folders is a feature in selected Windows platforms (Windows Server 2016, Windows Server 2012 R2, Windows 7, Windows 10, and Windows 8.1) that enables users to access work files from any network device. Work Folders keeps file copies on these devices and can synchronize them automatically to the data center.

The best illustration of how a user can separate work data from personal data is by saving work files in folders that automatically synchronize with the file server. This synchronization means that if a user works from home and is connected to the work network, the folder at work will be automatically updated. The same happens if the user works offline: the moment he or she joins the network, all the latest changes are reflected. In this article, our focus is on Work Folders as a role service running on Windows Server.

Describing Work Folders

Work Folders enables network users to access work files on any device connected to their network. Work Folders are controlled on a centrally managed file server, and the IT department sets specific user and device policies, such as encryption and lock-screen passwords. The feature can be deployed alongside file redirection, offline files, or home folders. Its content is stored in a sync share folder on the server, and existing folders with user data can be enabled as Work Folders without phasing out the default setting.

Work Folders Applications 

System Administrators use Work Folders to give users access to their work files while keeping everything in a central location. Some of the practical application areas include:

  • Acting as a single access point to work files, irrespective of the user’s location
  • Enabling access to work files from an offline location; synchronization takes place the moment the PC or device is connected to the Internet
  • Deployment through existing technologies such as folder redirection, home folders, or offline files
  • Enabling a high-availability framework when Failover Clustering is used
  • Utilizing existing server management technologies, such as folder quotas and file classification
  • Applying security policies that require PCs and other devices to encrypt Work Folders and use lock-screen passwords

 

The Four Functions of Work Folders 

In a typical Windows environment, the Work Folders provide four main functionalities: 

  • Work Folders Role in Server Manager 

This role is available in Windows Server 2012 R2 and Windows Server 2016. In Server Manager it enables setting up sync shares, monitoring Work Folders, managing user access, and synchronizing shares.

  • Work Folders cmdlets 

Also available in Windows Server 2012 R2 and Windows Server 2016, this PowerShell module contains detailed cmdlets for managing Work Folders servers; a short sketch follows this list.

  • Work Folders Integration with Windows 

Works with Windows 10, Windows 8.1, Windows 7 (has to be downloaded), and Windows RT 8.1. In Windows, a Control Panel applet is available for setting up and monitoring Work Folders. The same can also be done through File Explorer integration, enabling easy access to files and folders. The other functionality within Windows is the sync engine, which moves files to and from the central server while maximizing system performance.

  • Work Folders App for Devices 

Used on Android systems, Apple iPhones, and iPads. The app allows devices running these operating systems to access files stored in Work Folders.
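As mentioned above, the server side can be driven from PowerShell; a hedged sketch (the share name, path, and group are invented):

# Install the Work Folders role service, then create and list sync shares:
Install-WindowsFeature FS-SyncShareService
New-SyncShare -Name 'HRWorkFolders' -Path 'D:\SyncShares\HR' -User 'CONTOSO\HR-Users'
Get-SyncShare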

 

What are the New Functionalities? 

  • Azure Active Directory Application Proxy Support 

This feature was added to the Windows 10, Android, and iOS clients. It allows remote access to user files in Work Folders via the Azure Active Directory Application Proxy.

  • Faster Change Replication 

This is an updated feature in Windows 10 and Windows Server 2016. When the server runs Windows Server 2016, any synchronized changes are immediately passed to the user; for this notification to work, the client must be a Windows 10 computer.

  • Integrated with Windows Information Protection (WIP) 

An addition for Windows 10. Using WIP means that Work Folders can enforce data protection through encryption. The encryption key is linked to the enterprise ID, which can be wiped on a supported device running the Microsoft Intune package.

 

Work Folder Software Requirements 

For effective utilization of Work Folders, the following software requirements are needed in a network environment. 

  • A server system with Windows Server 2012 R2 installed; Windows Server 2016 can also be used for hosting sync shares and user files.
  • Disk volumes using the NTFS file system 
  • Password policies enforced on Windows 7 PCs using Group Policy. When doing this, Windows 7 PCs have to be excluded from the Work Folders password policies.
  • A server certificate for every file server on the network. The certificates are obtained from a public certification authority. 
  • An Active Directory Domain Services forest with the schema extensions that enable correct referencing of PCs and devices when they access multiple servers.

Accessing Sync shares across the internet needs additional requirements: 

  • Making file servers accessible from the internet through rules created in the reverse proxy or gateway configuration. 
  • Using a public domain name with an option of creating additional DNS records 
  • Use an Active Directory Federation Service when using AD FS authentication 

Requirements for the Client Computers and Devices 

PCs and devices should be running one of the following operating systems:

  • Windows 10
  • Windows 8.1
  • Windows 7
  • The latest Android operating system
  • iOS 10.2 and above

For Windows 7 PCs, the following editions are recommended:

  • Windows 7 Professional
  • Windows 7 Ultimate
  • Windows 7 Enterprise

Additional requirements:

  • All Windows 7 computers must be part of the organization’s domain network, not a workgroup
  • About 6 GB of additional free space if the Work Folders location is on the system drive
  • Work Folders does not support rolling back virtual machines, so regular backups are necessary

Conclusion 

Work Folders, like other remote file access and synchronization technologies used over the network, ensures file availability from PCs and devices connected to the central file server. It supports a number of operating systems; the main difference compared to other sync applications is that it does not offer cloud services.

 

Prevent Unauthorized Access to Sensitive Windows Folders!

  • No more unauthorized access to sensitive data
  • No more unclear permission assignments
  • No more unsafe data
  • No more security leaks

Get your free trial of the easiest and fastest NTFS Permission Reporter now!

What’s New in Windows Server 2016 Federation Services?

The corporate environment requires many collaboration application services to promote a seamless workflow. Windows Server 2016 represents a major step toward an environment that supports cloud features, an improved level of security, and innovation. Some of the improvements found in Windows Server 2016 include:

  • Active Directory Federation Services (ADFS)
  • Microsoft IP Address Management (IPAM)
  • Conditional Access
  • Temporary group membership

Our main concern here is to highlight the new things Active Directory Federation Services (AD FS) brings into a Windows Server 2016 network environment.

Active Directory Federation Services provides single sign-on across the entire network to different applications such as Office 365, SaaS applications, and other cloud-based applications.

In general, the IT department can enforce logons and access controls for both modern and legacy software. Users benefit from a seamless login with the same account credentials, and developers have an easier time managing running applications because the authentication process is handled by the federation services.

Here are some of the new features that came with Windows Server 2016 Federation Service:

Eliminate the Use of Passwords on a Private Network

Active Directory Federation Services offers three possibilities for logons without passwords, which eliminates the risk of the network being compromised by leaked or stolen passwords.

Using Azure Authentication Features

The 2016 Federation Services build on Multi-Factor Authentication (MFA), which allows signing in using an Azure MFA code without the need to key in a password: the user is prompted for a username and a one-time password (OTP) code for authentication.

When the MFA code is used as an additional authentication method, the user is first prompted for the usual authentication credentials and later prompted for a text, OTP, or voice confirmation before logging in.

Setting up a Federation Service to work with Azure MFA is now simple, because organizations can implement Azure MFA without needing a physical on-premises MFA server. Azure MFA can be configured to work on both local and private networks or be incorporated into an organization’s access control policy.
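A hedged sketch of hooking an AD FS 2016 farm up to an Azure MFA tenant (the tenant name is invented; the client ID shown is the Azure MFA application ID used in Microsoft’s documentation):

# Generate the certificate AD FS will use against the tenant, then register it:
New-AdfsAzureMfaTenantCertificate -TenantId 'contoso.onmicrosoft.com'
Set-AdfsAzureMfaTenant -TenantId 'contoso.onmicrosoft.com' -ClientId '981f26a1-7f43-403b-a875-f8b09b8cd720'

The generated certificate also has to be authorized in the Azure AD tenant before MFA requests will succeed.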

Allowing Password-less Access

Active Directory Federation Services 2016 uses device configuration capabilities to allow access from registered devices. Users log in with those devices, and each device’s validity is tested for attribute changes to maintain device integrity and network security. Using accepted devices ensures that access is granted only to specific devices, that private network access is only accepted via managed devices, and that authentication requires extra steps for any non-compliant computer or device.

Using Windows Hello for Business Credentials

Workstations running the Windows 10 operating system have Windows Hello and Windows Hello for Business built in. The credentials are protected by gestures such as fingerprints, facial recognition, and voice recognition. Using these Windows 10 capabilities means users can sign in to a 2016 federation server without the need for a password.

Secure Access to Applications

Windows Server 2016 Federation Services works with the latest modern protocols to offer a better experience to Windows 10, Android, and iOS users.

Previously, access control policies could not be changed without knowledge of the claim rules language, which made policies almost impossible to configure and maintain. With the 2016 Federation Services, one can simply use built-in templates for common policies, such as:

  • Limit access to Local Area Network only
  • Allow everyone to access the server and ask for an MFA from private networks
  • Allow everyone to access the server and ask for an MFA from a specific group

Using templates is recommended because they are easy to customize and add exceptions or additional policies that can be applied to one or many applications.

Allow Logons with Non-AD Lightweight Directory Access Protocol (LDAP) Directories

Most firms use Active Directory alongside third-party directories for logons. Federation Services can now authenticate users whose credentials are stored in LDAP. This helps third-party users whose data is stored in LDAP v3 compliant directories, works with users in an Active Directory forest with which no two-way trust is configured, and lets users found in Active Directory Lightweight Directory Services sign in.

Flawless Sign-in Experience

Applications using Active Directory Federation Services can give users a customized login experience, which is appropriate for organizations dealing with various companies and brands. In previous editions, there was one common sign-on experience, with customization available only for a single application. Windows Server 2016 gives you the ability to customize messages, images, web themes, and logos, and additional customized web pages can be created for every business platform.

Improved Management and System Operations

Streamlined Auditing

Auditing is streamlined in Active Directory Federation Services 2016, unlike previous versions, where every single event necessitated an event log entry.

Improved Interoperability with Security Assertion Markup Language (SAML 2.0)

Additional SAML protocol support, including the importation of trusts with multiple entries, is found in Active Directory Federation Services 2016. This allows AD FS to be configured as part of confederations and implementations that conform to the eGov 2.0 standard.

Simple Password Management for Office 365 Users

Active Directory Federation Services enables password configuration that allows password expiry claims to be sent to protected applications. For instance, Office 365 users rely on updates implemented via Exchange and Outlook to get notifications on the expiry status of their passwords.

Migration from AD FS Windows Server 2012 to AD FS Windows Server 2016 Made Easier

Previous editions demanded that the configuration be exported from the old farm and imported into the new one. When moving from Windows Server 2012 R2 to Windows Server 2016, you instead add a Windows Server 2016 server to the existing Windows Server 2012 R2 farm, verify functionality, and remove the old servers from the load balancer. The new features are ready to use once Windows Server 2016 is running and the farm has been upgraded to farm behavior level 2016.
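The last step of that process maps to two AD FS cmdlets; a hedged sketch:

Get-AdfsFarmInformation           # check the current farm behavior level and nodes
Invoke-AdfsFarmBehaviorLevelRaise # raise the behavior level once all nodes run 2016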

Conclusion

Federation Services help in managing identities across different networks, and as such they form the foundation of cybersecurity in the cloud world. With this information, it is time to optimize your Active Directory environment by giving it a new design and restructuring it before migrating to the latest Windows Server 2016 Federation Services.

 

 

 

Unauthorized Access to Sensitive Data?

Analyze and Report Data Access on Windows Folders in Under 60 Seconds!

 

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

How to Migrate Filesystems Data to Windows Server 2016

One of the most difficult and time-consuming tasks for IT Administrators is migrating file shares and their permissions. Before embarking on the migration, some procedures need to be followed to avoid mishaps like broken file systems or lost files.

The most common form of data migration is done by carrying over all files and permissions. Microsoft has built-in tools and PowerShell commands that serve as migration tools. The migration utility eases the process by moving roles, features, and even operating system settings to a new server.

Depending on the prevailing circumstances prompting the migration we need to answer questions like:

1. Are we preserving the existing domain?
2. What are the settings of the old server?
3. Was the server running on a virtual machine?
4. Was the virtual machine on a different platform from the one we are moving files into?

Regardless of the reason behind the migration, different methods can be used to initiate the migration. If the existing server system has some pending issues, you are advised to sort them out before starting the migration process.

Using the Windows Server Migration Tool

We need to install the migration tool to ease the migration process. The Microsoft Server Migration Tools will transfer server roles, features, and some operating system settings to the destination server.

1. To get started, you need to install the migration tools through the PowerShell console using the following command:
Install-WindowsFeature Migration -ComputerName <DestinationServer>

2. Create a deployment folder on the destination server using the SmigDeploy.exe utility (it is installed as an additional utility by the command above). To target the source server's architecture and operating system (here a 64-bit Windows Server 2008 R2 source), use the following command:
C:\Windows\System32\ServerMigrationTools\SmigDeploy.exe /package /architecture amd64 /os WS08R2 /path <deployment folder path>

3. Transfer the contents of the deployment folder that was just created to the old (source) server.

4. Use the Remote Desktop Protocol (RDP) to connect to the old server and run SmigDeploy.exe, which is usually found at the following path:
C:\<DeploymentFolder>\SMT_<OS>_<Architecture>

5. After the installation, register the migration snap-in so the PowerShell cmdlets become available. This is done in the PowerShell console with the following command:
Add-PSSnapin Microsoft.Windows.ServerManager.Migration

The snap-in makes all the migration PowerShell cmdlets available in the session.

6. On the destination server, run Receive-SmigServerData to open a connection. The connection stays open and waits for the source server for up to five minutes.

Sending Data to the Destination Server

1. Use Send-SmigServerData in the PowerShell console on the old (source) server. The following command defines the source path and the destination path for the data being sent:
Send-SmigServerData -ComputerName <DestinationServer> -SourcePath <SourceDeploymentFolder> -DestinationPath <DestinationDeploymentFolder> -Include All -Recurse

2. When prompted for a password, use the same password that was entered when running Receive-SmigServerData on the destination server.

3. When the command completes, all files and their properties should have been transferred to the destination server.

TIP: Confirm that all shares were transferred successfully by using Get-SmbShare in PowerShell.
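
As a quick sanity check, you can compare the share lists on both machines; a minimal sketch, assuming hypothetical server names OldServer and DestinationServer (substitute your own):

# List the shares on both servers and diff the share names
$old = Invoke-Command -ComputerName OldServer -ScriptBlock { Get-SmbShare }
$new = Invoke-Command -ComputerName DestinationServer -ScriptBlock { Get-SmbShare }
Compare-Object ($old.Name | Sort-Object) ($new.Name | Sort-Object)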

Alternatives to Windows Server Migration Tools

This involves taking the most recent backups and restoring them on the new server. The backup method restores the data but not the file system itself. All file permissions on the new server remain the same as they were on the old server. This is generally a fast approach, though the speed depends on file sizes.

1. Using the Free Disk2VHD Tool
If the current server is not virtualized, the free Disk2VHD utility from Microsoft is a reliable and fast option: it converts a physical disk into a virtual hard disk regardless of the amount of data stored on it.

All NTFS permissions are retained and transferred to the new drive. The advantage of using this tool is the automatic creation of a fully Hyper-V-compatible virtual drive.

2. Copy Utilities
Microsoft has several built-in copy utilities that transfer files with all their permissions. The most common ones for server migration are XCOPY and ROBOCOPY.

Using XCOPY
The typical command should look like this:
XCOPY “\\sourceServer\ShareName\*.*” “\\destServer\ShareName\” /E /C /H /X /K /I /V /Y >Copy.log 2>CopyErr.Log

The parameters taken by the command are:

/E – Copies directories and subdirectories, including empty ones.
/C – Copies without acknowledging errors.
/H – Copies all hidden and system files.
/X – Copies file audit settings (implies /O).
/K – Copies attributes; without this switch, XCOPY resets the read-only attribute.
/I – Assumes the destination is a directory and creates it if it does not exist.
/V – Verifies the size of each new file.
/Y – Suppresses the prompt asking to overwrite existing destination file.

The command will execute and write its output to Copy.log, with errors going to the corresponding CopyErr.Log file.

Using ROBOCOPY
The Robocopy command looks similar to this:
ROBOCOPY “\\sourceserver\ShareName” “\\destServer\ShareName” /E /COPYALL /R:0 /LOG:Copy.log /V /NP

The parameters taken by the command are:

/E – Copies all subdirectories, including empty ones.
/COPYALL – Copies all file information (data, attributes, timestamps, NTFS ACLs, owner, and auditing information).
/R:0 – Number of retries on failed copies (the default is 1 million; setting it to 0 disables retries so the copy can go on uninterrupted).
/LOG – Outputs the status to the LOG file.
/V – Produces verbose output.
/NP – No progress – copies without displaying the percentage of files copied.

3. File Synchronization or Replication
Microsoft has several built-in tools that help system administrators replicate data between two servers. This is a disaster-preparedness measure that ensures data is available at all times.

Distributed File System Replication (DFSR) is one way of synchronizing the contents of two shares. It can be combined with DFS Namespaces (DFSN), which lets users access shares via a domain-based path such as \\Domain\share instead of \\server\share.

Together, DFSR and DFSN can bring more than two servers together behind one share that points to multiple servers. DFSR also makes it easy to add another server to an existing replication configuration.
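
If you go the DFSR route, the setup can be scripted with the DFSR PowerShell module; the sketch below uses hypothetical names (MigrationRG, ShareName, OldServer, NewServer, D:\ShareName) that you would replace with your own:

# Create a replication group and a replicated folder (names are illustrative)
New-DfsReplicationGroup -GroupName "MigrationRG"
New-DfsReplicatedFolder -GroupName "MigrationRG" -FolderName "ShareName"

# Add both servers as members and create a connection between them
Add-DfsrMember -GroupName "MigrationRG" -ComputerName "OldServer","NewServer"
Add-DfsrConnection -GroupName "MigrationRG" -SourceComputerName "OldServer" -DestinationComputerName "NewServer"

# Point each member at its local content path; the old server holds the authoritative copy
Set-DfsrMembership -GroupName "MigrationRG" -FolderName "ShareName" -ComputerName "OldServer" -ContentPath "D:\ShareName" -PrimaryMember $true
Set-DfsrMembership -GroupName "MigrationRG" -FolderName "ShareName" -ComputerName "NewServer" -ContentPath "D:\ShareName"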

Shares and Permissions

Since Windows 2000, file shares are stored in the registry at:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LanmanServer\Shares

Instead of recreating shares, you can export this key to capture all the drive paths and permissions used by the defined shares. Using the registry to export shares means that all drive letters on the new server must match the paths on the old server. To avoid any confusion, you are advised to assign the same drive letters on both servers.
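
A minimal sketch of exporting and re-importing the key with the built-in reg.exe utility (the C:\Temp path is a placeholder; run each command on the respective server):

# On the old server: export the share definitions
reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" C:\Temp\shares.reg

# On the new server: import them, then restart the Server service so the shares appear
reg import C:\Temp\shares.reg
Restart-Service -Name LanmanServer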

Conclusion

Whichever way you choose to migrate filesystems, it should be the one most convenient and comfortable for you. The choice depends on your level of skill and the time available to minimize the downtime likely to affect server operations.

Storage Replication in Windows Server 2016

Storage Replica is a new technology feature in Windows Server 2016. It facilitates the replication of volumes between servers or clusters for disaster recovery. It also allows users to create stretch failover clusters that span two sites, with all nodes kept in sync.

Note: This feature is only available in the Datacenter edition of Windows Server 2016.

Storage Replica supports both asynchronous and synchronous replication:

  • Asynchronous replication mirrors data across sites beyond metropolitan ranges, over network links with higher latencies, without a guarantee that both sites hold identical copies of the data at the instant of a failure.
  • Synchronous replication mirrors data within a low-latency network site, with crash-consistent volumes, to ensure zero data loss at the file-system level during a failure.

Why You Need Storage Replication

Storage Replica addresses the modern requirements for disaster recovery and preparedness in Windows Server 2016 Datacenter Edition. For the first time, Windows Server offers the peace of mind of zero data loss, with the ability to synchronously protect data across different racks, floors, buildings, campuses, counties, and cities.

After a disaster strikes, all the data will be accessible elsewhere without any loss. The same applies before the disaster strikes: Storage Replica lets you switch workloads to safer locations with only a few moments' warning, again without any data loss.

Storage Replica is also reliable, as it supports asynchronous replication for longer ranges and higher-latency networks. Because it is not checkpoint-based, the delta of changes tends to be far lower than with snapshot-based products. Furthermore, Storage Replica operates at the partition layer and therefore replicates all VSS snapshots created by Windows Server and backup software. This allows unstructured user data to be replicated synchronously.

Storage Replica can also let users decommission existing file replication systems, such as DFS Replication, that were pressed into duty as low-end disaster recovery remedies. DFS Replication works well over very low-bandwidth networks, but its latency is often high, largely because of its requirement that files be closed and its artificial throttles intended to avoid network congestion.

Supported Configurations

The Stretch Cluster configuration allows users to configure computers and storage in a single cluster, where some nodes share one set of asymmetric storage and some nodes share the other, and then replicate synchronously or asynchronously with site awareness. This scenario can leverage Storage Spaces with shared SAS storage, SAN, and iSCSI-attached LUNs. It is managed with PowerShell and the Failover Cluster Manager graphical tool, and allows for automated failover.

Cluster to Cluster permits replication between two separate clusters, where one cluster replicates synchronously or asynchronously with another. This scenario can use Storage Spaces Direct, Storage Spaces with shared SAS storage, SAN, and iSCSI-attached LUNs. It is managed with PowerShell and demands manual intervention for failover. Support for Azure Site Recovery is included for this scenario.

Server to Server permits both asynchronous and synchronous replication between two or more standalone servers, leveraging Storage Spaces with shared SAS storage, SAN, and iSCSI-attached LUNs. This is also managed with PowerShell, alongside the Server Manager tool, and demands manual intervention for failover.

The Key Features of Storage Replication

Simple Management and Deployment
Storage Replica is designed for ease of use. Creating a replication partnership between two servers requires only a single PowerShell command. Deploying stretch clusters uses an intuitive wizard in the Failover Cluster Manager tool.

Host and Guest
All Storage Replica capabilities are available in both host-based and virtualized guest deployments. This means guests can replicate their data volumes even when running on non-Windows virtualization platforms or in public clouds, as long as the guest runs Windows Server 2016 Datacenter Edition.

Block-Level Replication, Zero Data Loss
With synchronous replication, there is no possibility of data loss. And because replication happens at the block level, it is unaffected by open or locked files.

User Delegation
Operators can delegate permissions to manage replication without being members of the built-in Administrators group on the replicated nodes, thereby limiting their access to unrelated areas.

Network Constraint
Storage Replica can be constrained to individual networks, per server and per replicated volume, in order to leave bandwidth for backup, application, and management software.

High-Performance Initial Sync
Storage Replica supports seeded initial sync, where a subset of the data already exists on the target from earlier backups, copies, or shipped drives. The initial replication then copies only the differing blocks, potentially shortening the initial sync time and preventing the data from consuming the limited bandwidth.

SMB 3 Transport
Storage Replica uses SMB 3 as its transport protocol, which runs over TCP/IP and also supports RDMA.

Prerequisites

  1. Two servers, each with two volumes: one volume for storage of data and the other for storage of logs.
  2. Data volumes must be of the same size on the main server and the remote server.
  3. Log volumes must also be of identical sizes on both servers.
  4. Data volumes should not exceed 10TB and must be formatted with NTFS.
  5. Both servers need to be running Windows Server 2016.
  6. There must be at least 2GB of RAM and two cores on each server.
  7. There must be at least one TCP/Ethernet connection on each server for synchronous replication, preferably RDMA.
  8. The network between the servers needs enough bandwidth to accommodate the users' I/O write workload, with an average round-trip latency of 5ms or less for effective synchronous replication.

How it Works

Here is how storage replication works in a synchronous configuration.

The application writes data onto the file system volume labelled Data. The write is intercepted by the I/O (input/output) filter and also written to the Log volume located on the same server. The data is then replicated across to the remote server's log volume. Once the data has been written to the remote log volume, an acknowledgement is sent back to the primary server and on to the application. On the remote server, the data is then flushed from the Log volume to the Data volume.

Note: The purpose of the Log volume is to record and verify all block-level changes that occur on both servers. Furthermore, in the synchronous configuration, the primary server must wait for acknowledgement from the remote server; high network latency therefore degrades performance and slows down the replication process. Consider using RDMA, which has low network latency.

In the asynchronous replication model, data is written to the Log volume on the main server and an acknowledgement is sent to the application straight away. The data is then replicated from the Log volume on the primary server to the Log volume on the remote server. Should the link between the two servers deteriorate, the primary server keeps accepting writes and recording the changes in its log; once the link is restored, replication of the outstanding changes continues.

Setting Up Storage Replication

  1. Import-Module StorageReplica
    Launch Windows PowerShell and verify the presence of the Storage Replica module.
  2. Test-SRTopology -SourceComputerName CHA-SERVER1 -SourceVolumeName e: -SourceLogVolumeName f: -DestinationComputerName CHA-SERVER2 -DestinationVolumeName e: -DestinationLogVolumeName f: -DurationInMinutes 30 -ResultPath c:\temp
    Test the storage replica topology by running the command above.
  3. PowerShell will then generate an HTML report giving an overview of which requirements are met.
  4. New-SRPartnership -SourceComputerName CHA-SERVER1 -SourceRGName SERVER1 -SourceVolumeName e: -SourceLogVolumeName f: -DestinationComputerName CHA-SERVER2 -DestinationRGName SERVER2 -DestinationVolumeName e: -DestinationLogVolumeName f:
    Begin setting up the replication partnership using the command above.
  5. Set-SRPartnership -ReplicationMode Asynchronous
    Run Get-SRGroup to list the configuration properties. The partnership runs in synchronous replication mode by default, with the log size set to 8GB; it can be switched to asynchronous using the command above.

When we head over to the remote server and open File Explorer, Local Disk E will be inaccessible, while the logs will be stored on Volume F.

When data is written on the source server, it will be replicated block by block to the destination or remote server.
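
Once the partnership exists, replication progress can be checked from PowerShell; a small sketch using the Storage Replica cmdlets (the server and group names follow the example above, and the exact output properties may vary by build):

# Check replication status and how many bytes still need to reach the destination
(Get-SRGroup -ComputerName CHA-SERVER2 -Name SERVER2).Replicas |
    Select-Object DataVolume, ReplicationStatus, NumOfBytesRemaining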


Performance Tuning for Windows Server Active Directory 2016

Active Directory is a standardized, central database for Windows Server systems that stores user accounts used for authentication, along with file shares, printers, computers, and other settings such as security groups. Its main purpose is to allow only authorized users to log on to the network and to act as a central management point for network resources.

Once you have set up a Windows Server in your environment, you might have business requirements that are not supported by your server's default settings. For instance, you may want to scale down your power consumption, maximize your server's output, and achieve the lowest possible server latency. It's for this reason that we must always ensure our AD is running optimally, and one way to do that is through performance tuning.

We are going to give you a few tips on how you can tweak your server settings and scale up your AD's performance and energy efficiency, especially when you have a varied workload.

For performance tuning to have maximum impact, it should be centered on the server's hardware, workload, energy budget, and performance objectives. We are going to describe crucial tuning considerations that can yield improved system performance coupled with optimal energy consumption.

We'll break down each setting and outline its benefits to help you make an informed decision and achieve your goals as far as workload, system performance, and energy utilization are concerned.

Hardware Considerations

This encompasses the RAM, processor, storage, and network card.

RAM

To increase the scalability of the server, the minimum amount of required RAM is calculated as follows:

Current size of database + Total size of SYSVOL + Recommended RAM by OS + Vendor Recommendations

Any additional RAM can be added in anticipation of the database's growth and workload over the server's lifetime. For remote sites with few users, these requirements can be relaxed, as such sites will not need to cache as much information to service requests.
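
As an illustrative calculation (the figures below are assumptions, not recommendations): with a 6GB database, 1GB of SYSVOL, 2GB recommended by the OS, and 1GB for vendor agents such as antivirus and monitoring, the minimum comes to 6 + 1 + 2 + 1 = 10GB, so provisioning 16GB leaves comfortable headroom for growth.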

In virtualization scenarios, avoid overcommitting memory on the host machine. Memory overcommit happens when more memory is allocated to the guest machines than the underlying host physically has. That alone is not a big deal, but it becomes a serious problem once the memory collectively allocated to guest machines exceeds that of the host and the host begins paging. Remember, the objective of RAM optimization is to minimize the time spent going back to disk.

16GB of RAM is a reasonable amount for a physical server. For virtual machines, an estimated 12GB would be decent enough, with anticipation of future upgrades and growth of the database and resources.

Cache Memory

This is a type of memory that the microprocessor can access more quickly than ordinary RAM. The cache performance of Active Directory depends on the memory space allocated for caching, because data access at the memory level is faster than reading from physical volumes.

To make processing highly efficient, add memory to minimize disk input/output requests. The viable option is to have enough RAM installed to handle all operations of the operating system and the installed applications. In addition, system logs and databases should be placed on separate volumes to offer more flexibility in the storage layout.

To improve the I/O request on a hard disk, the Active Directory should implement the following hardware configurations:

  1.     Use RAID controllers
  2.     Increase the number of disks handling log files
  3.     Enable write cache on disk controllers

The subsystem performance of each volume should be reviewed; the idea is to have enough headroom for sudden changes in load so that client requests do not go unanswered. Data consistency is only guaranteed once all changes have been written to the logs.

Non-critical tasks such as system scans and backups should be scheduled for times when the system is not under load. Backup procedures and scanning programs with low I/O demands should be used, because they reduce competition with critical Active Directory services.

Network

To determine the degree of traffic that must be supported, it's prudent to mention two broad categories of network capacity planning for Active Directory Domain Services.

Firstly, there is replication traffic, which passes back and forth between domain controllers. Then there is client-to-server network traffic, also known as intra-site traffic. Client-to-server traffic is much simpler to plan for, since it consists of small client requests to Active Directory, in contrast to the larger volumes of data sent back by Active Directory Domain Services.

A bandwidth of 100Mbps is adequate in environments serving close to 5,000 users per server. A 1Gbps network card is recommended for environments where users exceed 5,000 per server.

In virtualized environments, the network adapter must be able to support the domain controller load plus the rest of the guests (virtual machines) sharing the virtual switch attached to the physical network card.

Storage

Planning storage on the server entails two things: storage size and performance.

For Active Directory, sizing is only a consideration in large environments, because even on a 180GB hard drive, SYSVOL and NTDS.DIT fit quite easily. It's therefore not prudent to over-allocate disk space in this area.

However, you should ensure that free space equal to 110% of the NTDS.DIT size is available to allow for offline defragmentation. From there, plan for growth over the 3-to-5-year lifespan of the hardware. An allocation of about 300% of the NTDS.DIT size will be enough to accommodate growth over time and still allow for offline defragmentation.
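
As an illustrative calculation: for a 20GB NTDS.DIT, keep at least 22GB (110%) of free space available for offline defragmentation, and plan on roughly 60GB (300%) for the database volume across the hardware's lifespan.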

Processors

Processors with limited free cycles increase the wait times before execution. Server optimization should ensure that enough headroom is available to handle workload surges and, in the long run, minimize response times to client requests. Reducing the workload on the processors involves selecting the right processors, directing client requests to available processors, and using processor metrics to gauge system performance.

Performance Tuning

Performance tuning on the Active Directory has two objectives:

  • The optimal configuration and performance of Active Directory, so that it balances the load efficiently
  • All work sent to Active Directory has to be efficient

For the objectives above to be met, three areas need to be looked at:

Capacity Planning

This means having a sufficient number of domain controllers to provide redundancy and handle client requests within a short time. All the server hardware must be able to handle the existing load. Capacity planning involves scaling operations across multiple servers; adding more resources such as RAM to a server is essential in preventing possible failures by ensuring that every aspect of the server works as intended.

A typical capacity planning takes place in three stages:

  1.     Evaluating the existing environment by determining the current challenges.
  2.     Determining the hardware needed according to the findings in the step above.
  3.     Validating the employed system to ensure that it works within the defined specifications.

Server-side Tuning

The domain controllers in Active Directory are configured to handle loads efficiently. The system administrator is supposed to balance the demands of individual users against the available resources. Add-on products that manage bandwidth and port usage may be implemented to restrict network resource use.

Active Directory Client/Application Tuning

Active Directory has to be set up so that client and application requests use it with maximum efficiency.

Domain Controllers and Site Considerations

The placement of domain controllers and site considerations revolve around optimization for referrals and optimization with trusts in mind.

A well-defined site topology is central to server performance. Clients not getting requested services may report poor performance when querying Active Directory. Since client requests can arrive over IPv4 or IPv6, Active Directory should be configured to handle data from IPv6 addresses as well. By default, the operating system picks IPv6 over IPv4 when both are configured.

Most domain controllers use name resolution for reverse lookups when determining a client's site. When this is slow, delays in the thread pool are inevitable, leading to unresponsiveness from the domain controller. Optimizing the name-resolution infrastructure assures quick responses from the domain controllers.

An alternative problem is clients locating writable domain controllers where read-only domain controllers would suffice. Optimizing this scenario means:

  • Changing application code so that it contacts writable domain controllers only when a read-only domain controller would not be sufficient.
  • Placing the read/write domain controllers at the center of operations to reduce latency.

Optimization for Referrals

Referrals define how Lightweight Directory Access Protocol (LDAP) requests are processed when a domain controller does not host a copy of the requested partition. A returned referral contains the name of the partition, a port number, and a DNS name.

The client uses this information to send its request to a server hosting the partition. The recommendation is to make sure the site definitions and domain controller placement in Active Directory reflect the clients' needs. Implementing domain controllers from multiple domains in a single site and relocating applications may also help fine-tune the domain controllers.

Optimization with Trusts in Mind

In an environment with multiple forests, trusts have to be defined according to the domain hierarchy. Secure channels at the root of the forest may become overloaded by increasing authentication requests between domain controllers, causing delays in far-flung Active Directory sites, especially in inter-forest and low-level trust scenarios. Some recommendations to help reduce forest trust overload:

  • Use MaxConcurrentApi tuning to help distribute load across a secure channel.
  • Create shortcut trusts as needed, depending on the load.
  • All domain controllers within a domain should be able to resolve the names of, and communicate with, the trusted domain controllers.
  • All trusts should be based on locality considerations.
  • Reduce the chances of running into MaxConcurrentApi bottlenecks by enabling Kerberos where possible and reducing the use of secure channels.

Name resolution across firewalls takes a toll on the system and will, in turn, impact clients negatively. To overcome this, access to trusted domains needs to be optimized through the following steps:

  1.     WINS and DNS should resolve the names of the trusting domain controllers dynamically rather than through static records, which tend to cause connectivity problems over time. Forwarders and secondary copies of the resource environment's zones needed by the clients must be maintained manually.
  2.     Converge the site names shared between trusted domains so they reflect domain controllers in the same location, ensuring IP and subnet addresses are linked to sites within the forest.
  3.     Ensure all required ports are open and firewalls are configured to accommodate all trusts. Closed or restricted ports lead to repeated failed communication attempts, with clients experiencing timeouts and hung threads or applications.
  4.     Domain controllers forming a trusting pair should be installed in the same physical location.

Finally, when applications do not specify a target domain, disabling unnecessary checks of trust availability is recommended.


File Server Resource Manager (FSRM) Overview

File Server Resource Manager (FSRM) is a Microsoft Windows Server role created for managing and classifying data stored on file servers. It includes some interesting features which can be configured by using the File Server Resource Manager snap-in or by using Windows PowerShell.

Here’s an overview of the features included in the FSRM.

File Classification Infrastructure

This offers an automatic classification process based on custom properties, with the purpose of providing an easier and more effective way of managing files.

It classifies files and applies policies based on that classification. As an example, take a classification that marks each file as either public or private. Once files have been assigned a class, a file management task can be created to perform actions on them based on it (RMS encryption, for example).

Such a task can be instructed to encrypt files classified as private while excluding files classified as public.

File Management Task

Enables applying a conditional policy or action to files based on their classification. Policy conditions can include the file's location, its classification properties, its creation date, its modification date, or the date it was last accessed.

The actions that can be taken include expiring files, encrypting files, or running a custom command.

Quota Management

This allows limiting the space allowed for a volume or folder. Quotas are automatically applied to new folders created on a volume, and it is possible to define quota templates that can be applied to new volumes or folders.
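
As a rough sketch of how a quota with a warning threshold can be created with the FSRM PowerShell cmdlets (the path and e-mail address are placeholders):

# E-mail the admin when usage reaches 80% of a 200MB quota (placeholder address and path)
$action = New-FsrmAction -Type Email -MailTo "admin@example.com" -Subject "Quota warning"
$threshold = New-FsrmQuotaThreshold -Percentage 80 -Action $action
New-FsrmQuota -Path "D:\Shares\UserData" -Size 200MB -Threshold $threshold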

File Screening Management

This provides control over the types of files that can be stored on a server. For example, you can create a file screen that disallows storing JPEG files in a personal shared folder on the file server.
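
A minimal sketch of such a screen in PowerShell, assuming the built-in "Image Files" file group and a placeholder path:

# Actively block image files (which include JPEGs) from being saved under the personal share
New-FsrmFileScreen -Path "D:\Shares\Personal" -IncludeGroup "Image Files" -Active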

Storage Reports

Storage reports are used to help identify trends in disk usage and in the classification of user data. They can also monitor selected groups of users and attempts to save unauthorized files.

An important thing to note is that File Server Resource Manager supports only volumes formatted with NTFS; it does not support the Resilient File System (ReFS).

Practical Applications

Some practical applications for File Server Resource Manager include:

  • If File Classification Infrastructure is used with Dynamic Access Control, you can create a policy that grants access to files and folders based on the way files are classified on the file server.
  • You can create a file classification rule that tags any file containing at least 10 Social Security numbers as a personal-information file.
  • Any file that has not been modified in the last 10 years can be set as expired.
  • Quotas (e.g. 200 MB) can be created per user, with a notification to the admin when a quota reaches 80% (i.e. 160 MB of 200 MB).
  • It is possible to schedule a report that runs weekly at a specific time to generate a list of the most recently accessed files from a previously selected period; this can help the admin gauge weekend storage activity and plan server downtime accordingly (see the sketch below).
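
As a sketch of that last scenario using the FSRM cmdlets (the report name, path, day, and time are placeholders):

# Schedule a weekly report of the most recently accessed files, every Sunday at 09:00
$task = New-FsrmScheduledTask -Time (Get-Date "09:00") -Weekly Sunday
New-FsrmStorageReport -Name "Weekend activity" -Namespace @("D:\Shares") -Schedule $task -ReportType MostRecentlyAccessed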