Windows: How to Create Files that Cannot be Found Using the "..." Dots

Every Windows folder contains two special entries: the directory "." (denoting the current directory) and ".." (denoting the parent directory).

Because of these entries, Windows normally refuses to create files or folders whose names consist only of dots: the parser could confuse such a name with the special "." and ".." entries.

Accordingly, attempting to create a file or folder named "..." with the usual commands fails.

However, this restriction can be bypassed using the ::$INDEX_ALLOCATION trick.

Passing the name twice also creates the folders. For example, the command mkdir "....\....\" creates a directory named "...." and another "...." inside it. You can then enter these folders, store files in them, and execute programs from that location.
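The underlying reason these names behave so oddly is that only "." and ".." are special path components; a name like "..." or "...." is, to generic path logic, just an ordinary directory name. A minimal, portable Python sketch of that distinction (illustrative only, not the Windows parser itself):

```python
import os.path

# "." and ".." are collapsed during normalization; "..." is left alone,
# because it is an ordinary name to the path logic.
print(os.path.normpath("Test/./file"))    # "." is removed
print(os.path.normpath("Test/../file"))   # ".." collapses one level
print(os.path.normpath("Test/.../file"))  # "..." is kept as a normal name
```

Tools that only special-case "." and ".." therefore treat the dot-named folders inconsistently, which is exactly what the tricks below exploit.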

It is not possible to enter the folder using its name alone. After creating files in the folder, you are forced to use the "cd ....\....\" syntax.

Please note that running "cd ." inside such a folder takes you one directory up, because the name confuses path resolution.

The same directory also cannot be opened reliably from the graphical user interface (GUI).

In some cases, double-clicking the folder has no effect at all: you stay in the same directory with the same path.

In other cases you do land in the folder, but the path shown in Explorer changes; after opening the folder several times, you may notice many dot directories piling up in the path of the graphical interface.

No matter how many of these folders you enter, the GUI will not show all the files inside, and you also cannot open the folder by typing "C:\Sample\Test\....\....\" into the address field.

NOTE: Deleting the folder can crash Explorer, which keeps counting the files being deleted without ever finishing. The best advice is to avoid trying this on your working system.

Using the GUI to search for files may not work either; for example, a search for Sample123.txt keeps running forever without showing anything.

Searching for the same file via the command prompt, by contrast, gives a positive result.

However, many administrators prefer PowerShell, where the same search turns into an endless loop.

If you run the command Get-ChildItem -Path C:\Test -Filter Sample123.txt -Recurse -ErrorAction SilentlyContinue -Force in PowerShell, it will iterate forever.

Some programs may appear to work correctly. For example, if you place malware in such a directory and test it with an antivirus solution, nothing may happen, because some scanners are unable to interpret these names and paths.

When C:\Test\ is scanned for viruses, the malware inside C:\Test\...\ is skipped. Python programs that use the function os.walk(), on the other hand, traverse such folders correctly.
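The following sketch shows why: os.walk() treats every directory entry as an opaque name and never special-cases dots. The "..." directory is created here on a POSIX system, where that name is simply legal; on Windows you would need the ::$INDEX_ALLOCATION trick described above.

```python
import os
import tempfile

# Build a tree containing a "..." subdirectory and walk it; os.walk() does
# not special-case the name, so the file inside is found normally.
root = tempfile.mkdtemp()
odd = os.path.join(root, "...")
os.mkdir(odd)
open(os.path.join(odd, "Sample123.txt"), "w").close()

found = [name for _, _, files in os.walk(root) for name in files]
print(found)  # ['Sample123.txt']
```

A scanner built on this kind of plain recursive enumeration does not get confused the way path-string-based tools do.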

Please note that creating a directory junction pointing to its own parent folder does not lead to an endless loop in either cmd or PowerShell.


Protect yourself! Discover all security holes in the folder hierarchy on your Windows fileservers!

Get your free trial of the easiest and fastest NTFS Permission Reporter now!

How To Hide All NTFS Alternate Data Streams

It's possible to dump Alternate Data Streams (ADS) using the /r switch of the dir command.

Moreover, you can also use the streams.exe tool from Windows Sysinternals to dump the streams.

On earlier Windows versions, an ADS could be hidden by using one of the reserved device names as the base name.

Examples of such names include CON, NUL, COM1, COM2, LPT1, and others.

In Windows 10, this seems to be fixed, and hiding streams behind reserved names may no longer be possible; the dot-name trick, by contrast, still works.

The ADS on "..." was successfully created and listed by the tools.

Creating an ADS on COM1 results in an error, but has no effect on the system.
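The reserved device names can be checked for portably. A minimal sketch, assuming the classic DOS device list (the helper function is ours for illustration, not a Windows API):

```python
# Windows reserves legacy DOS device names as base names regardless of
# extension: "CON.txt" still refers to the CON device.
RESERVED = ({"CON", "PRN", "AUX", "NUL"}
            | {f"COM{i}" for i in range(1, 10)}
            | {f"LPT{i}" for i in range(1, 10)})

def is_reserved(filename: str) -> bool:
    """True if the base name (before the first dot) is a reserved device."""
    base = filename.split(".")[0].rstrip(" ").upper()
    return base in RESERVED

print(is_reserved("COM1.txt"))    # True
print(is_reserved("report.txt"))  # False
```

Cross-platform tools often run a check like this before touching such names, which is why the reserved-name hiding trick worked so well on older systems.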

An ADS can also be created on the drive itself using echo Sample123 > C:\:Sampleabc.txt; this hides it from the dir /r command inside C:\.

However, dir /r run inside subfolders of C:\ will show the ADS on their ".." directory entry.

The NULL:Sample.txt:$DATA entry was created by the C:\:Sampleabc.txt ADS. This stream is also visible using the Sysinternals streams.exe tool if it is called on the directory C:\. You can use "..." to hide it from both tools.

There is also another way of hiding a stream: put a "<space>" at the end of the file name, and Windows will automatically remove the trailing space.

With the right tools, however, such a file with an ADS can still be created. Other programs cannot open it, because the name they see is truncated to one without the space, and a file with that name does not actually exist.


The ADS foobar.txt is not visible using the normal searching tools.

NOTE: Such files can be created using echo test> . ..:$DATA

Also note that Sampleabc.txt uses the same ADS technique that was used to create the stream on C:\:Sampleabc.txt.

By the same reasoning, we can create a directory with the name "..".

If you try to enter or open the folder, you get an error.

Other techniques such as cd ..\..\ also do not work. However, cd "..::$INDEX_ALLOCATION" works (the double quotes are part of the command).

Directories using the name “..” can be entered using the earlier mentioned technique.

NOTE 1: The folder named Test22 can be opened through the GUI by double-clicking it, and all its contents are displayed correctly. The only downside is that you cannot open its files, because Windows interprets their paths incorrectly. Searching such folders with PowerShell again leads to endless loops.

NOTE 2: An ADS can be created on a folder with an ordinary name such as Sampleabc, and the folder can then be renamed to one of the special names; under the new name, the ADS no longer works. To access it again, you must rename the folder back to its original Sampleabc name.

File System Tricks vs. Antivirus Products and Forensic Software

We conducted a quick verification of these file system tricks against antivirus software to see whether malware could slip past detection. The most notable discovery was that files and folders whose names end with ".." bypassed scanning with ease.

Upon re-enabling the antivirus software and scanning, the program identified the virus in the plainly named files and in the folder containing the copied files, but bypassed it in "Sample123.." and in all of the "foo.." folders.

When the folder and the file were opened, the antivirus program did find them, because their contents were loaded into memory. The "remove" action of Windows Defender could not remove the files, but the "remove" action of the antivirus software deleted them.

You can change this behavior in the file guard settings by setting the scan mode to "Thorough" so that all files are scanned. Windows Defender blocks the reading of some antivirus test files.

Furthermore, we conducted another test using forensic software (in this case Autopsy 4.6.0) by loading "logical files" into the tool on the running system rather than from an image. As a result, we could open the ".." folder but not the "foo. ." folder.

When we additionally created a file called "Valid", the ".." folder that contained a space at the end of its name was read by the tool as ".." and could be opened by double-clicking.

This is possible only in "logical files" mode, in disk image mode, and when running Autopsy in live mode (with everything configured correctly to access the data through the API).




How to Prevent Privilege Creep With FolderSecurityViewer

Ensuring that access privileges are aligned with appropriate user roles is usually a headache for the IT department.

If there is a mismatch between a user’s responsibilities and their access privileges, it poses serious security risks, including data breach, exfiltration of sensitive information, and implantation of viruses and worms on the company’s systems.

In this article, we are going to talk about how to prevent privilege creep using a versatile tool known as FolderSecurityViewer.

What Is Privilege Creep?

Typically, privilege creep refers to the steady gathering of un-audited access rights beyond what a person requires to complete their tasks.

If a user requires rights to access part of the IT infrastructure, and sufficient justification has been given, those rights should be granted.

However, when that same individual no longer needs those rights, and nothing is done to remove them, they remain unchanged. Over time, with the addition of more roles, a person can gather unnecessary and insecure rights.

How Privilege Creep Occurs

Put simply, privilege creep takes place when users' privileges are not cleaned up, especially after they change roles. Promotions, demotions, and transfers within departments are the major causes of access creep.

For example, a manager is hired and granted access rights to a company's sensitive IT systems. After some months in the position, he is demoted and a new manager is hired to replace him. However, the old manager's access rights are never revoked, and he retains them.

The same scenario can happen when an employee is transferred to another department or an employee is promoted to a higher position. Also, if an employee is granted temporary access permissions to cover for vacations or prolonged absences, and the rights are not rescinded, privilege creep can ensue.

Dangers of Privilege Creep

Privilege creep usually poses a two-fold security risk to organizations. The first risk arises when an employee who still holds stale privileges is tempted to gain unauthorized access to a sensitive system.

In most organizations, security incidents happen because dissatisfied employees attempt to cause damage or just 'make a point'. If such employees hold unnecessary privileges, they can maliciously enter systems far removed from their immediate duties, which makes them hard to trace.

Second, if the user account of an employee with excess privileges is hacked, the criminal can collect far more information than from a least-privileged account. A compromised account becomes the property of the attacker, and it is all the more lucrative if it carries excess rights.

How to Avoid Privilege Creep

Carry out access reviews

The best technique of avoiding privilege creep is carrying out frequent, thorough access reviews. The IT department should regularly confirm every employee’s access rights to ensure the unnecessary accumulated privileges are revoked.

If a company has invested in a robust identity and access management (IAM) system, undertaking access reviews becomes less taxing, and making decisions about employees' continued access becomes easier. Implementing an IAM system ensures that granted access privileges are appropriately authenticated and audited.

Importantly, when conducting access reviews, the principle of least privilege should be applied. The permissions granted to users should be limited to the minimal level that lets them carry out their tasks without difficulty. For instance, someone in the HR department should not be given access to the organization's customer database.
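The principle boils down to a set comparison: everything granted beyond the role's baseline is a candidate for revocation. A toy Python sketch of such a check (role names, baselines, and resource names are illustrative assumptions, not any product's schema):

```python
# Baseline: the minimal resources each role needs for its tasks.
ROLE_BASELINE = {
    "hr": {"hr_records"},
    "sales": {"customer_db"},
}

def excess_privileges(role, granted):
    """Return the grants that exceed the role's least-privilege baseline."""
    return set(granted) - ROLE_BASELINE.get(role, set())

# An HR employee who accumulated customer-database access over time:
print(excess_privileges("hr", {"hr_records", "customer_db"}))  # {'customer_db'}
```

A real IAM system adds authentication, workflow, and audit trails around this comparison, but the core review question is exactly this set difference.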

Access reviews should be maintained throughout the year, with a frequent rotation in every department within the company. Every employee, from the CEO to the lowest-ranked, should have their access permissions periodically reviewed, especially when there is a change in roles.

Communication of changes in roles

Whenever an employee changes roles, this should be promptly communicated to the IT department. If no formal notification is made, the IT department may not revoke the employee's access rights, which can have harmful consequences.

So, the HR department should work together with the IT department to avoid such lapses, and enhance the security of the company’s infrastructure.

Ensure privileges are aligned

By ensuring the privileges of each employee are aligned to their specific roles and responsibilities, it becomes easier to prevent this creeping monster.

In the company’s employee lifecycle management policy, a comprehensive documented process should be included that clearly outlines the IT-related actions.

In case of any changes to roles, prompt notification should be made to the IT department for updating of the privileges and closure of redundant accounts.

How FolderSecurityViewer Can Help

The task of preventing privilege creep is delicate and demanding. Manually sifting through a large number of users' privileges consumes a great deal of time and resources, besides inviting mistakes and oversights.

Therefore, investing in an IAM system can greatly reduce the extensive costs of tackling the security vulnerabilities ensuing from privilege creep as well as misaligned or abused privileges.

For example, FolderSecurityViewer is a powerful free tool you can use to see all the permissions granted to users. After analyzing the permissions, you can clean them up and reduce the chance of privilege creep occurring.

First, you’ll need to download the tool from here.

After launching the tool, select the folder whose permissions you want to review and click the Permissions Report entry in the context menu for the magic to start.

  

You'll then be provided with a comprehensive permissions report listing, among other things, user names, their departments, and their respective permissions.


If you want more detail, you can click the "Access Control List" button to see the individual access rights granted to users.

You can also export the permissions report in Excel, CSV, or HTML format for further analysis.
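Once exported, the CSV report can be post-processed with a few lines of script. A sketch, assuming hypothetical column names ("User", "Department", "Permissions") rather than the tool's actual export schema:

```python
import csv
import io

# Stand-in for the exported report file; in a real review you would use
# open("report.csv") instead of this in-memory sample.
report = io.StringIO(
    "User,Department,Permissions\n"
    "alice,HR,Read\n"
    "bob,Sales,Full Control\n"
)

# Flag users holding the broadest permission for closer review.
full_control = [row["User"] for row in csv.DictReader(report)
                if row["Permissions"] == "Full Control"]
print(full_control)  # ['bob']
```

The same pattern extends to grouping by department or diffing two exports taken at different review dates.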

After carrying out access reviews with FolderSecurityViewer, you can audit identities and permissions to ensure role-based privileges are applied and excessive privileges are revoked.

Conclusion

FolderSecurityViewer is a wonderful tool that gives you visibility into the permissions and access rights across your IT infrastructure. This way, you can easily prevent privilege creep and avert costly security breaches.

How To Upgrade Windows Server 2019

In-place upgrading of a Windows Server operating system allows the administrator to upgrade an existing installation of Windows Server to a newer version without losing existing settings and features.

The Windows Server 2019 in-place upgrade feature allows you to upgrade existing Long-Term Servicing Channel (LTSC) releases such as Windows Server 2012 R2 and Windows Server 2016. In-place upgrades let organizations move to newer versions within the shortest possible time. A direct upgrade is possible even when the existing server installation requires some dependencies before an upgrade.

Customers who do not document their server installations, or who lack the infrastructure or deployment code, will find it hard to move to new Windows Server versions by redeploying. Without the Windows Server 2019 in-place upgrade feature, they would miss out on many improvements in WS2019.

How to Upgrade to Windows Server 2019

To use the in-place upgrade to move to Windows Server 2019, provide the Windows Server 2019 media on a DVD, USB drive, or any other appropriate installation medium, and start setup.exe.

The existing installation will be discovered, and you can perform the in-place upgrade. The installation should not take more than five minutes, but this depends on the speed of the server and on the roles and features it runs.

The following example shows an in-place upgrade from Windows Server 2016 to Windows Server 2019 from an ISO file.

  1. Mount the ISO file and click setup
  2. Accept the defaults and click Next (Download and install updates is the default option)
  3. On the next screen, specify the product key and click Next – the key can activate unlimited upgrades
  4. Select the edition with the Desktop Experience option and click Next
  5. Read the user license terms and click Accept
  6. Select the option to keep personal files and programs, since we intend to upgrade the server, and click Next
  7. Windows will take some time collecting updates, depending on the speed of your internet connection; click Next when it is done
  8. A warning will pop up about upgrading to a new Windows version. Read the message and, if you are okay with it, click Confirm
  9. The next step requires that you click FlightSigning to enable it (FlightSigning lets the system trust Windows Insider Preview builds whose signing certificates are not trusted by default)
  10. Click Install to initiate the installation process

Once the upgrade is finished, you will notice some new features:

  • PowerShell replaces CMD as the default shell
  • Apps and Features opens the Settings panel instead of the Programs and Features section of Control Panel that Windows Server 2016 uses to uninstall or change programs
  • Windows Defender Security Center holds all the security settings

Installing the Active Directory Domain System on Windows Server 2019

There is not much difference from installing Active Directory Domain Services on Windows Server 2016.

Run Server Manager:

  1. Click on Manage
  2. Roles and Features
  3. Follow the wizard and install AD DS
  4. Click on the link to promote the Server to a Domain Controller

Selecting Server Roles

  1. Click on the Add Roles and Features Wizard
  2. In the resulting wizard, select the roles you want to add and click Next

Creating a New Forest

  1. Click on the Active Directory Domain Services Configuration Wizard
  2. In the Deployment Configuration wizard, choose the option to add a new forest
  3. Specify the domain information for the forest
  4. Click Next

In preview builds, the Forest Functional Level (FFL) and the Domain Functional Level (DFL) are simply named "Windows Server"; use the Active Directory Domain Services Configuration Wizard to promote the server.

The Domain Controller options wizard will take you through the Server promotion wizard.

If you need more configuration options, such as Hyper-V installations, you can use the preview version of Windows Server 2019, which is 8.3.

At the moment, most developers are still testing servers on the kind of hardware you would find in a professional environment. Testing in virtual machines can also give good results; however, a server operating system should be verified on hardware deployments.

Detect Permission Changes in Active Directory

This article describes how to track permission changes in Active Directory.

Overview

Let's start with a small example: if an organization works in three shifts with different server administrators, and permissions on some Active Directory objects change overnight, it is good practice to know which admin changed them, and when.

To get that information, auditing of changes to permissions in Active Directory should be enabled, and in this article we explain how to do it successfully.

Enable auditing of Active Directory service changes

The first step is enabling auditing of Active Directory service changes. This has to be done on the domain controller, by changing the Default Domain Controllers Policy Group Policy Object.

The operation should be done from a server or a workstation with Remote Server Administration Tools (RSAT) installed.

Open Group Policy Management and expand the Active Directory forest, Domains, and then the Domain Controllers organizational unit (OU) to reach the Default Domain Controllers Policy GPO; right-click it, choose Edit from the menu, and the Group Policy Management Editor will open.

In the Group Policy Management Editor, navigate to Computer Configuration, then Policies, then Windows Settings, then Security Settings, then Advanced Audit Policy Configuration, and click DS Access.

Among the other subcategories, there will be Audit Directory Service Changes.

In the properties of the Audit Directory Service Changes policy, under the Configure the following audit events option, both checkboxes (Success and Failure) should be ticked.

Adding a system access control list (SACL)

The next step is adding a system access control list (SACL) to the domain to audit for modified permissions.

System access control lists (SACLs) are used to establish security policies across the system for actions like logging or auditing resource access.

A SACL specifies:

  • Which security principals (users, groups, computers) should be audited when accessing the object.
  • Which access events should be audited for these principals.

The SACL is added from Active Directory Users and Computers (ADUC): open the View menu and check Advanced Features (it has to be activated).
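Conceptually, each auditing entry pairs a principal with the access events and outcomes to record. A toy Python model of that decision (fields and names are illustrative only, not the real security descriptor format):

```python
# One auditing entry, mirroring the configuration described below:
# audit "Everyone" for successful "Modify Permissions" accesses.
sacl = [
    {"principal": "Everyone", "event": "Modify Permissions", "outcome": "Success"},
]

def should_audit(principal, event, outcome):
    """True if any SACL entry covers this access ("Everyone" matches anyone)."""
    return any(entry["principal"] in ("Everyone", principal)
               and entry["event"] == event
               and entry["outcome"] == outcome
               for entry in sacl)

print(should_audit("DOMAIN\\alice", "Modify Permissions", "Success"))  # True
print(should_audit("DOMAIN\\alice", "Read", "Success"))                # False
```

Windows evaluates the real SACL in the same spirit: an event is written to the Security log only when an entry matches the principal, the access type, and the outcome.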

Click the Active Directory domain (on the left), select Properties > Security > Advanced, switch to the Auditing tab, and click Add. The Auditing Entry dialog will open.

In the Auditing Entry dialog, click Select a Principal.

Enter "everyone" as the object name in the Select User, Computer, Service Account, or Group dialog, and click OK.

The Auditing Entry has to be set to "Success", and the Applies to option has to be set to "This object and all descendant objects".

Under the Permissions option, the only option selected has to be "Modify Permissions".

Check

And that is it. The only thing left to do is check for permission changes.

This can be done in PowerShell using the command

Get-EventLog Security -Newest 10 | Where-Object {$_.EventID -eq 5136} | Format-List

The output should be a formatted list of information about the changes (who made changes to which object, plus the new security descriptor).

Windows Server – How To Close Open Files

Here I will describe how to close open files and processes on a server.

Every system admin on Microsoft Windows Server systems will, at least once, run into a situation where a file is open on a server and needs to check what kind of process or user opened it.

These open files can cause trouble, like upgrade errors or reboot hold-ups.

That can become a big problem which, if not thought through, delays updates or causes errors in server maintenance.

More common, but less extreme, issues come from users. When users leave shared files open in their sessions, other users opening the same file can get error messages and cannot access it.

This article shows how to deal with such issues and how to find and close open files and processes. The operations apply to Microsoft Windows Server 2008, 2012, and 2016, and to Windows 10 workstations.

There are several working methods for this kind of problem; the first we will describe uses Computer Management.

View open files on a shared folder

For files locked on the server by users, this method comes in handy for troubleshooting.

Right-click the Start menu and select Computer Management (or type compmgmt.msc into the Start menu search).

The procedure is very simple, and in most cases, it works with no problems.

Click on "Shared Folders", and after that on "Open Files".

That opens a screen with a list of the files detected as open, the user who opened each one, possible locks, and the mode it is opened in.

Right-click the wanted file and choose the "Close open file" option, and that will close it.

With processes and file details, the procedure is a bit different.

Usage of Windows Task Manager

Task Manager cannot close open shared files, but it can close processes on the system.

It can be opened with the Ctrl+Alt+Del key combination (then choose Task Manager) or by right-clicking the taskbar and choosing the Task Manager option.

Under the Processes tab, you can see all active processes and sort them by parameters such as CPU, memory, etc.

If there is a process you want to terminate, simply right-click it and choose the End Process option.

Usage of Resource Monitor

For every system administrator, Resource Monitor is "the tool" that provides control of, and an overview of, all system processes and much more.

Resource Monitor can be opened by typing “resource monitor” in a start menu search box.

Another option is to open up the task manager, click the performance tab and then click Open Resource Monitor.

When Resource Monitor opens, it shows several tabs; the one needed for this operation is Disk.

It shows disk activity and processes, open files, PIDs, read and write bytes per second, etc.

If the system is running a lot of "live" processes, this can be confusing, so Resource Monitor offers a "stop live monitoring" option, which stops the on-screen list from scrolling and gives you an overview of all processes up to the "stop moment".

Resource Monitor gives an overview of the paths of open files and the processes holding them, and with that information it is not a problem to identify and close files or processes.

PowerShell cmdlet approach

Of course, PowerShell can do everything GUI apps can, sometimes even better, and in this case there are several cmdlets that can close open files and processes on your system.

There is more than one solution using PowerShell scripts, and this approach is not recommended for administrators without scripting experience.

For this example, we will show some of the possible solutions with PowerShell usage.

The following examples apply to systems that support Server Message Block (SMB); for systems that do not, a later section shows how to close files with the NET file command.

In situations where one, or a small number of, exactly known open files should be closed, this cmdlet can be used. It is, as usual, run from an elevated PowerShell prompt and applies to a single file (in all examples, unsaved data in open files will not be saved).

Close-SmbOpenFile -FileId ( id of file )

Confirm
Are you sure you want to perform this action?
Performing operation 'Close-File' on Target '( id of file )'.
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"): N

There is a variation of the cmdlet that allows closing the open files of a specific session.

Close-SmbOpenFile -SessionId ( session id )

This command does not close a single file; it applies to all files opened under the given session id.

The other variation of the same cmdlet applies to a file name extension (in this example DOCX).

The command checks for all open files with the DOCX extension on all system clients and force-closes them. As mentioned before, unsaved data in open files will not be saved.

Get-SmbOpenFile | Where-Object -Property ShareRelativePath -Match ".DOCX" | Close-SmbOpenFile -Force

This cmdlet has many more flags and variations, which allow applying many different filters and approaches to closing open files.

PowerShell script approach

With PowerShell scripts, the process of closing open files and processes can be automated.

$blok = {
    $adsi = [adsi]"WinNT://./LanmanServer"
    $resources = $adsi.psbase.Invoke("resources") | Foreach-Object {
        New-Object PSObject -Property @{
            ID        = $_.gettype().invokeMember("Name", "GetProperty", $null, $_, $null)
            Path      = $_.gettype().invokeMember("Path", "GetProperty", $null, $_, $null)
            OpenedBy  = $_.gettype().invokeMember("User", "GetProperty", $null, $_, $null)
            LockCount = $_.gettype().invokeMember("LockCount", "GetProperty", $null, $_, $null)
        }
    }
    # List the matching open files, then close each one via "net files"
    $resources | Where-Object { $_.Path -like '*smbfile*' } | ft -AutoSize
    $resources | Where-Object { $_.Path -like '*smbfile*' } | Foreach-Object { net files $_.ID /close }
}

Invoke-Command -ComputerName pc1 -ScriptBlock $blok

Our example script closes files matched by path; the path pattern (here '*smbfile*') should be inserted into the script.

This way of closing open files is not recommended for administrators without PowerShell scripting experience; if you are not 100% sure you are up to the task, do not use it.

Close A File On Remote Computer Using Command Line

There are two other ways to close the open files: the NET file command or PsFile (a Microsoft utility). The first can be run remotely by invoking the NET file command through PsExec.exe, since the NET command does not support any remote APIs.

The NET file command can list all open shared files and the number of locks per file. It can be used to close files and remove locks (similar to the SMB example before), typically when a user leaves a file open or locked.

It can be done with the following syntax

C:>net file [id [/close]]

In this syntax, the id parameter is the identification number of the file we want to close, and the /close parameter is the action we want to apply to that file.

Best practice with the NET file command is first to run it with no arguments to list the open files; they are labeled with numbers 0, 1, etc.

Once the files are listed, the command that closes, for example, file number 1 is

C:>net file 1 /close

The command closes the file labeled with number 1.
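Listing and closing can also be scripted around the command's output. A Python sketch that extracts the IDs from `net file`-style output; the sample text mimics the classic tabular format and is an assumption, not captured output:

```python
# Sample output in the shape "ID  Path  User name  # Locks"; on a real
# server you would capture the stdout of `net file` instead.
sample = """\
ID         Path                     User name   # Locks
-------------------------------------------------------
0          C:\\share\\report.xlsx     ALICE       1
1          C:\\share\\notes.docx      BOB         0
"""

# Data rows start with the numeric file ID; grab the first column.
ids = [line.split()[0] for line in sample.splitlines() if line[:1].isdigit()]
print(ids)  # ['0', '1']
# Each ID could then be closed with: net file <id> /close
```

Wrapping this in a loop that shells out to `net file <id> /close` automates the bulk-close scenario described above.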

PsFile usage

PsFile is a third-party application, but I will not group it with the third-party tools, as any good system administrator should treat it as standard.

Its commands are similar to the NET file commands, with the difference that it does not truncate long file names and can locally show files opened on remote systems.

It uses the NET API, documented in the platform tools, and becomes available by downloading the PsTools package.

 psfile [\\RemoteComputer [-u Username [-p Password]]] [[Id | path] [-c]]

PsFile "calls" the remote computer with a valid username and password, and with a path supplied it will close the open files on the remote system.

For processes opened on the remote system, there is a similar tool called PsKill, which "kills" processes on the same principle.

Release a File Lock

In some situations, a problem with closing files can be handled by releasing a file lock. There are many cases of users locking their files and leaving them open (for some reason, the most commonly locked files are Excel files).

All other users then get an error message of the type "Excel is locked for editing by another user", with no option to close or unlock the file.

As an administrator, you have elevated rights, and with the right procedure this can be fixed easily.

Press the Windows key and R to open the Run dialog.

In the Run dialog, type mmc (Microsoft Management Console).

Go to File > Add/Remove Snap-in and add the "Shared Folders" snap-in.

If you are already on the operating system that has the issue, choose the Local Computer option; if not, choose the Another Computer option and find the wanted computer name.

Expand Shared Folders, then select the Open Files option.

Choose the locked/open file, and close it by right-clicking and selecting Close open file.

The described procedure will unlock and close the open file (as in the first example of this article), and users will be able to access it again.

Usage of Third-party apps

There are a lot of third-party apps on the market for handling open server files.

We will describe a few of the most used ones.

Process Explorer – a freeware utility from Windows Sysinternals, initially created by Winternals and later acquired by Microsoft. It can be seen as Windows Task Manager with advanced features. One of its many features is closing open files, and it is highly recommended for server administrators and IT professionals.

Sysinternals can be accessed at the following link:

https://docs.microsoft.com/en-us/sysinternals/

OpenedFilesView – Practically a single executable file application, displays the list of all opened files on your system. For each opened file, additional information is displayed: handle value, read/write/delete access, file position, the process that opened the file, and more.

To close a file or kill a process, right-click any file and select the desired option from the context menu.

It can be downloaded from the following link:

https://www.nirsoft.net/utils/opened_files_view.html

LockHunter – primarily a tool for deleting blocked files (to the Recycle Bin). It can serve as a workaround for open files, since it can list and unlock locked files on your system. It is very powerful, and helpful in situations where the system tools fail.

It can be downloaded from the following link: http://lockhunter.com/

Long Path Tool – a shareware program provided by KrojamSoft that, as its name suggests, helps you fix the dozen-odd issues you’ll face when a file’s path is too long. Those issues include not being able to copy, cut, or delete the files in question because their path is too long. With its bundle of features it may be overkill for this purpose, but it is definitely a quality app for sysadmins.

It can be downloaded from the following link: https://longpathtool.com/

How To Generate All Domain Controllers in Active Directory

In this article, we’ll describe how to generate all Domain Controllers in the Active Directory Sites and Services tool.

Active Directory Sites and Services can be seen as an administrative tool used to manage sites and the related components on Microsoft Server systems.

It contains a list of all Domain Controllers (DCs) connected to the system, regardless of their number.

In some situations, admins can notice more than one DC listed under Windows NT Directory Services (NTDS) settings.

What are these other DCs, and how can they be generated automatically?

KCC

These connections are generated by the KCC (Knowledge Consistency Checker), which nominates bridgehead servers per site to handle replication tasks between specific sites.

A bridgehead server is responsible for replicating any changes to all remaining DCs in its site.

In simple words, the KCC takes care of replication by generating the connections between DCs; consequently, the auto-generated Domain Controller entries take care of the replication.

How to create automatically generated Domain Controllers

There are instances, such as during server moves or when adding new organizational Domain Controllers, when Active Directory is unable to create ‘Automatically Generated’ connections with the root Domain Controller.

In such a situation, the Domain Controller can be seen, but not on the “real” Domain Controller list.

There is more than one solution to this problem.

Let’s talk about two of the most used and tested solutions.

1. Manually forcing auto generation

This first method, although it falls into the quick “workaround” category, involves manually forcing auto-generation.

It can be done by right-clicking the NTDS Settings option and then choosing All Tasks > Check Replication Topology.

That should trigger auto-generation of all Domain Controllers, and your Domain Controllers should now be visible on the list.

2. Repadmin

Repadmin is a command line tool used for diagnosing and repairing replication problems.

It is run from an elevated command prompt.

Start by entering this command to inspect replication status:

repadmin /showrepl *

To force replication with all partners across the system, enter this command:

repadmin /syncall

As a result, forced replication will be started. This command forces replication and generates all Domain Controllers on the Sites and Services list.

Conclusion

It is usually not necessary to create manual connections when the KCC is being used to generate automatic connections; if any conditions change, the KCC automatically reconfigures the connections.

Adding manual connections when the KCC is employed can potentially increase replication traffic and conflict with the optimal settings computed by the KCC.

If a connection is not working due to a failed domain controller, the KCC automatically builds temporary connections to other replication sites (if the damage is not too big) to ensure that replication occurs.

If all the domain controllers in a site are unavailable, KCC automatically creates replication connections between domain controllers from another site.

It is not recommended to manually modify this, unless you have a very specific use case.

As long as these records are auto-generated, they can survive a Domain Controller failure, as the KCC/ISTG will automatically create a new connection.

However, if you manually create a connection or specify a bridgehead server, and that server goes offline, KCC will not create a new connection and replication between the affected sites will stall.

How to Set Accurate Time for Windows Server 2016

Accurate Time For Windows Server 2016

Windows Server 2016 can maintain the system clock within 1 ms of UTC. This is possible because of new algorithms and because periodic time checks are obtained from a valid UTC server.

The Windows Time service is a component that uses a plugin model for client and server time synchronization.

Windows has two built-in client time providers that link with third-party plugins.

One provider uses the Network Time Protocol (NTP) or the Microsoft secure variant (MS-SNTP) to manage synchronization with the nearest server.

Windows picks the better provider when both are available.

This article will discuss the three main elements that relate to an accurate time system in Windows Server 2016:

  • Measurements
  • Improvements
  • Best practices

Domain Hierarchy

Computers that are members of a domain use a secured NTP protocol that authenticates the time reference, adding security and authenticity.

The domain computers synchronize with the master clock that is controlled by domain hierarchy and the scoring system.

A typical domain has hierarchical stratum layers where each Domain Controller (DC) refers to the parent DC with accurate time.

The hierarchy revolves around the Primary Domain Controller (PDC) of the forest root domain, or a DC with the Good Time Server for the Domain (GTIMESERV) flag.

Standalone computers use the time.windows.com service by default. Name resolution takes place when the Domain Name Service (DNS) resolves this name to a Microsoft-owned time resource.

As with any remote time reference, a network outage prevents synchronization from taking place, and network paths that are not symmetrical reduce time accuracy.

Hyper-V guests have at least two Windows time providers, so you may observe different behaviors in either domain-joined or standalone configurations.

NOTE: Stratum is a concept used by both the NTP provider and the Hyper-V provider. Each carries a value indicating the clock’s location in the hierarchy: stratum 0 is reserved for the reference hardware itself, and stratum 1 is for a clock attached directly to that hardware. Stratum 2 servers get time from stratum 1 servers, stratum 3 from stratum 2, and so on. Lower strata indicate clocks that are closer to the reference and thus more accurate, although errors are still possible. W32time (managed with the w32tm command-line tool) only accepts time from sources at stratum 15 or below.
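The stratum arithmetic described in the note can be sketched in a few lines (the chain of servers here is illustrative, not a real deployment):

```python
UNSYNCHRONIZED = 16  # NTP treats stratum 16 as "unsynchronized"

def stratum_after_sync(upstream_stratum):
    """A machine that synchronizes to a source at stratum N becomes stratum N+1.
    W32time only accepts sources at stratum 15 or below."""
    if upstream_stratum >= 15:
        return UNSYNCHRONIZED   # source is too far from a reference clock: rejected
    return upstream_stratum + 1

# A chain: GPS hardware (stratum 0) -> stratum 1 server -> stratum 2 server -> client
s = 0
for hop in range(3):
    s = stratum_after_sync(s)
print(s)  # → 3, the stratum of the final client in the chain
```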

Factors Critical For Accurate Time

1. Solid Source Clock

The original source clock needs to be stable and accurate at all times. This implies that when installing Global Positioning System (GPS) hardware as a stratum 1 source, you should also take factor #3 below into consideration.

Therefore, if the source clock shows stability, then the entire configuration will have a constant time.

Securing the original source time means that a malicious person will not be able to expose the domain to time-based threats.

2. Stable Client Clock

A stable client clock keeps the natural drift of its oscillator within containable bounds. NTP uses multiple samples to condition the local clock of standalone machines so it stays on course.

If the time oscillator on a client computer is not stable, there will be fluctuations between adjustments, leading to a malfunctioning clock.

Some machines may require hardware updates for proper functioning.

3. Symmetrical NTP Communication

The NTP connection should be symmetrical at all times, because NTP’s offset calculation assumes the request and the response spend equal time on the network.

If the NTP request takes longer than expected on its return path, time accuracy is affected. Note that the path can change due to changes in topology or the routing of packets through different interfaces.

Battery-powered devices may use different strategies, which in some cases require the device to update every second.

Such a setting consumes more power and can interfere with power-saving modes. Some battery-powered devices also have power settings that can interfere with the running of other applications, and hence with W32time functions.

Mobile devices are never 100% accurate, especially if you look at the various environmental factors that interfere with the clock accuracy. Therefore, battery-operated devices should not have high time accuracy settings.

Why is Time Important

A typical case in a Windows environment is Kerberos, which tolerates at most 5 minutes of clock difference between clients and servers.
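The Kerberos skew check can be sketched as follows; the timestamps are made up, and the real protocol compares the authenticator timestamp against the server clock:

```python
from datetime import datetime, timedelta

MAX_SKEW = timedelta(minutes=5)  # Kerberos default clock-skew tolerance

def within_kerberos_skew(client_time, server_time, max_skew=MAX_SKEW):
    """Return True if the two clocks are close enough for Kerberos to accept
    the request; beyond the tolerance, authentication fails with a skew error."""
    return abs(client_time - server_time) <= max_skew

server = datetime(2023, 5, 1, 12, 0, 0)
print(within_kerberos_skew(server + timedelta(minutes=3), server))  # → True
print(within_kerberos_skew(server + timedelta(minutes=6), server))  # → False
```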

Other instances that require time include:

  • Government regulations: for example, the United States uses 50 ms for FINRA, and the EU uses 1 ms for ESMA (MiFID II).
  • Cryptography
  • Distributed systems like the databases
  • Block chain framework for bitcoin
  • Distributed logs and threat analysis
  • AD replication
  • The Payment Card Industry (PCI)
Time Improvements for Windows Server 2016

Windows Time Service and NTP

The algorithms used in Windows Server 2016 have greatly improved how the local clock synchronizes with UTC. NTP uses four timestamp values to calculate the time offset: when the client sends the request, when the server receives it, when the server sends the response, and when the client receives it.
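The four-timestamp calculation mentioned above is the standard NTP offset/delay formula. A sketch with illustrative numbers (a client 0.5 s behind the server, over a symmetrical 40 ms round trip):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP calculation from four timestamps (all in seconds):
    t1 = client sends request, t2 = server receives it,
    t3 = server sends response, t4 = client receives it."""
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)          # network round-trip time
    return offset, delay

offset, delay = ntp_offset_delay(t1=100.000, t2=100.520, t3=100.530, t4=100.050)
print(round(offset, 3), round(delay, 3))  # → 0.5 0.04
```

Note that the offset estimate is only exact when the outbound and return paths take equal time, which is why factor #3 (symmetrical NTP communication) matters so much.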

The modern network environment has too much congestion and related factors that affect the free flow of communication.

Windows Server 2016 uses different algorithms to cancel out these disturbances. In addition, the time references used by Windows rely on an improved Application Programming Interface (API) with better time resolution, giving an accuracy of 1 ms.

Hyper-V

Windows Server 2016 made improvements to Hyper-V that include accurate VM start and VM restore. The change gives an accuracy of 10 µs relative to the host, with a root mean square (RMS) error of 50 µs for a machine carrying a 75% load.

Moreover, the host now communicates its stratum level to guests more transparently. Earlier, the host would be presented at a fixed stratum of 2 regardless of its accuracy; with the changes in Windows Server 2016, the host reports at stratum 1, which gives better timing for the virtual machines.

Domains created on Windows Server 2016 will find time to be more accurate because time no longer defaults to the host; for guests joined to a Windows 2012 R2 (or below) domain, you still need to manually disable the Hyper-V time provider settings.

Monitoring

Performance counters for the Windows Time service are now part of Windows Server 2016; they allow for monitoring, troubleshooting, and baselining time accuracy.

The counters include:

a. Computed Time Offset

This counter indicates the absolute time offset between the system clock and the chosen time source, in microseconds. The value updates whenever a new valid sample is available. Clock accuracy is traced using this performance counter at an interval of 256 seconds or less.

b. Clock Frequency Adjustment

This counter indicates the adjustment made to the local system clock by W32time, measured in parts per billion. It is useful for visualizing the actions taken by W32time.

c. NTP Roundtrip Delay

NTP Roundtrip Delay is the time elapsed between transmitting a request to the NTP server and receiving a valid response.

This counter helps characterize the delays experienced by the NTP client. A large or varying roundtrip introduces noise into the NTP time computation, thereby affecting time accuracy.

d. NTP Client Source Count

This counter holds the number of unique IP addresses of time servers that are responding to this client’s requests. The number may be larger or smaller than the number of active peers.

e. NTP Server Incoming Requests

A representation of the number of requests received by the NTP server, indicated in requests per second.

f. NTP Server Outgoing Responses

A representation of the number of requests answered by the NTP server, indicated in responses per second.

The first three counters target scenarios for troubleshooting accuracy issues. The last three cover NTP server scenarios and help determine the load and set a baseline for current performance.
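When baselining with the Computed Time Offset counter, a simple summary of sampled offsets is often enough; an RMS figure like the one quoted for Hyper-V above can be computed as follows (the sample values are made up):

```python
import math

def offset_stats(samples_us):
    """Summarize Computed Time Offset samples (in microseconds):
    the mean shows systematic bias, the RMS shows the overall error magnitude,
    and the worst case shows the largest single excursion."""
    n = len(samples_us)
    mean = sum(samples_us) / n
    rms = math.sqrt(sum(x * x for x in samples_us) / n)
    worst = max(abs(x) for x in samples_us)
    return mean, rms, worst

# Hypothetical samples collected at the counter's 256-second interval:
mean, rms, worst = offset_stats([12.0, -8.0, 20.0, -16.0, 4.0])
print(round(mean, 1), round(rms, 1), worst)  # → 2.4 13.3 20.0
```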

Configuration Updates per Environment

The following describes the changes to the default configurations between Windows Server 2016 and earlier versions.

The settings for Windows Server 2016 and Windows 10 build 14393 are now distinct.

| Role | Setting | Windows Server 2016 | Windows 10 | Windows Server 2012/2008 and earlier Windows 10 |
|---|---|---|---|---|
| Standalone or Nano Server | Time server | time.windows.com | N/A | time.windows.com |
| | Polling frequency | 64–1024 seconds | N/A | Once a week |
| | Clock update frequency | Once a second | N/A | Once an hour |
| Standalone Client | Time server | N/A | time.windows.com | time.windows.com |
| | Polling frequency | N/A | Once a day | Once a week |
| | Clock update frequency | N/A | Once a day | Once a week |
| Domain Controller | Time server | PDC/GTIMESERV | N/A | PDC/GTIMESERV |
| | Polling frequency | 64–1024 seconds | N/A | 1024–32768 seconds |
| | Clock update frequency | Once a day | N/A | Once a week |
| Domain Member Server | Time server | DC | N/A | DC |
| | Polling frequency | 64–1024 seconds | N/A | 1024–32768 seconds |
| | Clock update frequency | Once a second | N/A | Once every 5 minutes |
| Domain Member Client | Time server | N/A | DC | DC |
| | Polling frequency | N/A | 1024–32768 seconds | 1024–32768 seconds |
| | Clock update frequency | N/A | Once every 5 minutes | Once every 5 minutes |
| Hyper-V Guest | Time server | Chooses the best option based on host stratum and time server | Chooses the best option based on host stratum and time server | Defaults to host |
| | Polling frequency | Based on the role above | Based on the role above | Based on the role above |
| | Clock update frequency | Based on the role above | Based on the role above | Based on the role above |

Impact of Increased Polling and Clock Update Frequency

To get the most accurate time, the defaults for polling frequency and clock updates have been increased, allowing adjustments to be made more frequently.

These adjustments lead to more UDP/NTP traffic, but the amount is negligible and will in no way affect broadband links.

Battery-powered devices do not keep the time when turned off, and when turned on this may lead to frequent time adjustments. Increasing the polling frequency on such devices leads to instability and higher power consumption.

Domain controllers should be minimally affected even with the combined effect of increased updates from NTP clients in the AD domain; NTP requires few resources compared to other protocols.

You are likely to reach the limits of other domain functionality before being affected by the increased settings in Windows Server 2016.

Note that AD uses secure NTP, which tends to synchronize time less accurately than plain NTP and places clients two strata further away from the PDC.

You should reserve at least 100 NTP requests per second for every core. For example, a domain with 4 DCs, each with 4 CPU cores, should be able to serve 1,600 NTP requests per second.
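The sizing rule above is simple multiplication; as a sketch (the 4 x 4 example mirrors the one in the text):

```python
def max_ntp_requests_per_second(dc_count, cores_per_dc, per_core_rate=100):
    """Conservative sizing rule: reserve 100 NTP requests per second
    per core, summed across all domain controllers."""
    return dc_count * cores_per_dc * per_core_rate

# The example from the text: 4 DCs with 4 cores each
print(max_ntp_requests_per_second(4, 4))  # → 1600
```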

These recommendations have a large dependency on processor speeds and loads, so administrators should conduct all baseline tests onsite.

If your DCs are running at a sizeable CPU load of more than 40%, the system is likely to generate noise when NTP responds to requests, which may impair domain time accuracy.

Time Accuracy Measurements

Methodology

Different tools can be used to gauge the time and accuracy of Windows Server 2016.

These techniques are applicable when taking measurements and tuning the environment to determine whether the test outcome meets the set requirements.

The domain source clock used here consists of two precision NTP servers with GPS hardware.

Some of these tests need a highly accurate and reliable clock source as a reference point, in addition to your domain clock source.

Here are four different methods for measuring accuracy in physical and virtual machines:

  • Take the reading of the local clock conditioned by w32tm and reference it against a test machine with separate GPS hardware.
  • Measure pings coming from the NTP server to its clients using the “stripchart” option of the w32tm utility.
  • Measure pings from the client to the NTP server using the “stripchart” option of the w32tm utility.
  • Measure the Hyper-V output from the host to the guests using the Time Stamp Counter (TSC). After getting the difference between host and client time in the VM, use the TSC to estimate the host time from the guest. The TSC clock is also used to factor out delays and API latency.

Topology

For comparison purposes, it is sensible to test both Windows Server 2012 R2 and Windows Server 2016 on the same topology.

The topologies have two physical Hyper-V hosts that point to a Windows Server 2016 machine with GPS hardware installed. Each host runs at least three domain-joined Windows guests, arranged as shown in the diagrams below.

TOPOLOGY 1.

The lines on the diagram indicate time hierarchy and the transport or protocol used.

TOPOLOGY 2.

Graphical Results Overview

The following graphs represent the time accuracy between two members of a domain. Each graph shows both the Windows Server 2012 R2 and the Windows Server 2016 outcome.

The accuracy was measured from the guest machine in comparison to the host. The graphical data shows both the best-case and worst-case scenarios.

TOPOLOGY 3.

Performance of the Root Domain PDC

The root PDC synchronizes with the Hyper-V host using the VMIC provider present in Windows Server 2016; because the host is backed by GPS hardware, it shows stability and accuracy. This is critical because a 1 ms accuracy is needed.

Performance of the Child Domain Client

The child domain client is attached to a child domain PDC, which communicates with the root PDC. Its timing should also be within the 1 ms accuracy.

Long Distance Test

A long-distance test could involve comparing a single virtual network hop to 6 physical network hops on Windows Server 2016.

Increasing the number of network hops means increasing latency and widening time differences. The 1 ms accuracy may degrade unless the network is symmetrical.

Do not forget that every network is different and measurements taken depend on varying environmental factors.

Best Practices for Accurate Timekeeping

1. Solid Source Clock

A machine’s timing is only as good as its source clock. To achieve 1 ms accuracy, GPS hardware or a time appliance should be installed as the master source clock.

The default time.windows.com may not provide an accurate or stable local time source. Also, as you move away from the source clock, you are bound to lose accuracy.

2. Hardware GPS Options

Most hardware solutions that offer accurate time rely on GPS antennas; radio and dial-up modem solutions are also available. The hardware options connect through PCIe or USB ports.

Different options give varying time accuracy and the final time depends on the environment.

Environmental factors that interfere with accuracy include GPS availability, network stability, the PC hardware, and network load.

3. Domain and Time Synchronization

Computers in a domain use the domain hierarchy to determine the machine to be used as a source for time synchronization.

Every domain member will look for a machine to sync with and save it as its source. Every domain member will follow a different route that leads to its source time. The PDC in the Forest Root should be the default source clock for all machines in the domain.

Here is a list of how roles in the domain find their original time source.

  • Domain Controller with PDC role

This is the machine with authority over the time source for the domain. Most of the time, the time it issues is accurate, and it must synchronize with a DC in the parent domain, except when the GTIMESERV role is active.

  • Other Domain Controller

This will take the role of a time source for clients and member servers in the domain. A DC synchronizes with the PDC of its domain or any DC in the parent domain.

  • Clients or Member Servers

This type of machine will synchronize with any DC or PDC within its domain or picks any DC or PDC in the parent domain.

When sourcing the original clock, a scoring system is used to identify the best time source. Scoring takes into account the reliability of the time source and its relative location, and it happens only once, when the time service starts.

To fine-tune time synchronization, add good timeservers in a specific location and avoid redundancy.
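The role-based rules above can be summarized in a small sketch. This is a simplification: the real W32time scoring also weighs reliability flags and site location, and the labels below are purely descriptive:

```python
def candidate_time_sources(role, is_forest_root=False, has_gtimeserv=False):
    """Return where a machine looks for time, per the role rules above."""
    if role == "pdc":
        if is_forest_root or has_gtimeserv:
            # Top of the hierarchy: needs a manual, external reference
            return ["manual source (GPS hardware / GTIMESERV)"]
        return ["DC in parent domain"]
    if role == "dc":
        return ["PDC of own domain", "DC in parent domain"]
    # clients and member servers
    return ["any DC/PDC in own domain", "any DC/PDC in parent domain"]

print(candidate_time_sources("dc"))
```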

Mixed Operating System Environments (Windows 2012 R2 and Windows 2008 R2)

For the best time accuracy, you need a pure Windows Server 2016 domain environment.

Even so, deploying Windows Server 2016 Hyper-V hosts in a Windows 2012 domain will benefit the guests because of the improvements made in Server 2016.

A Windows Server 2016 PDC delivers more accurate time due to the improvements to its algorithms, and it also acts as a credible source.

You may not have the option of replacing the PDC, but you can add a Windows Server 2016 DC with the GTIMESERV flag as one way of upgrading time accuracy for the domain.

A Windows Server 2016 DC delivers better time to down-level clients, so it is always good to use it as the NTP time source.

As already stated above, clock polling and refresh frequencies are modified in Windows Server 2016.

You can also change the settings manually to match the down-level DCs or make the changes using the group policy.

Versions prior to Windows Server 2016 have a problem keeping accurate time, since their system clocks drift as soon as a change is made.

Obtaining samples from accurate NTP sources and conditioning the clock leads to smaller changes in the system clock, ensuring better timekeeping on down-level OS versions.

In some cases involving guest domain controllers, samples from the Hyper-V TimeSync provider are capable of disrupting time synchronization. However, this should no longer be an issue when the guest machines run on Server 2016 Hyper-V hosts.

You can use the following registry keys to disable the Hyper-V TimeSync service from giving samples to w32time:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider

“Enabled”=dword:00000000

Allow Linux to Use Hyper-V Host Time

For Linux guest machines running on Hyper-V, clients are normally configured to use the NTP daemon for time synchronization against NTP servers.

If the Linux distribution supports version 4 TimeSync protocol with an enabled TimeSync integration on the guest, then synchronization will take place against the host time. Enabling both methods will lead to inconsistency.

Administrators who want to synchronize against the host time are advised to disable NTP time synchronization using any of the following methods:

  • Disabling NTP servers in the ntp.conf file
  • Disabling the NTP Daemon

In this particular configuration, the Time Server Parameter is usually the host, and it should poll at a frequency of 5 seconds, which is the same as the Clock Update Frequency.

Exclusive synchronization over NTP demands that you disable the TimeSync integration service in the guest machine.

NOTE: Accurate timing support for Linux requires a feature only present in the latest upstream Linux kernels; as of now, it is not available across most Linux distros.

Specify Local Reliable Time Service Using the GTIMESERV

The GTIMESERV allows you to specify one or more domain controllers as the accurate source clocks.

For example, you can use a specific domain controller with a GPS hardware and flag it as GTIMESERV to make sure that your domain references to a clock based on a GPS hardware.

TIMESERV is a related Domain Services flag that indicates whether the machine is currently authoritative for time; it can change if the DC loses its connection.

When the connection is lost, the DC returns the “Unknown Stratum” error when you query via the NTP. After several attempts, the DC will log System Event Time Service Event 36.

When configuring a DC as your GTIMESERV, use the following command:

w32tm /config /manualpeerlist:"master_clock1,0x8 master_clock2,0x8" /syncfromflags:manual /reliable:yes /update

If the DC has a GPS hardware, use the following steps to disable the NTP client and enable the NTP server:

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders\NtpClient /v Enabled /t REG_DWORD /d 0 /f

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders\NtpServer /v Enabled /t REG_DWORD /d 1 /f

Then, restart the Windows Time service:

net stop w32time && net start w32time

Finally, tell network hosts that this machine has a reliable time source using this command:

w32tm /config /reliable:yes /update

To confirm the changes, run the following commands and check that the results match the expected settings shown:

w32tm /query /configuration

| Value | Expected Setting |
|---|---|
| AnnounceFlags | 5 (Local) |
| NtpServer | (Local) |
| DllName | C:\WINDOWS\SYSTEM32\w32time.DLL (Local) |
| Enabled | 1 (Local) |
| NtpClient | (Local) |

w32tm /query /status /verbose

| Value | Expected Setting |
|---|---|
| Stratum | 1 (primary reference – syncd by radio clock) |
| ReferenceId | 0x4C4F434C (source name: “LOCL”) |
| Source | Local CMOS Clock |
| Phase Offset | 0.0000000s |
| Server Role | 576 (Reliable Time Service) |

Windows Server 2016 on 3rd party Virtual Platforms

Virtualizing Windows means that responsibility for time defaults to the hypervisor.

However, domain members need to synchronize with a Domain Controller for AD to work effectively. The best that you can do is to disable time virtualization between the guests and the 3rd-party virtual platform.

Discover the Hierarchy

The chain of time hierarchy to the master clock is dynamic and non-negotiated. You must query the status of a specific machine to get its time source. This analysis helps in troubleshooting issues relating to synchronizations.

If you are ready to troubleshoot, find the time source by using the w32tm command:

w32tm /query /status

The output shows the source. Finding the source is the initial step in tracing the time hierarchy.

The next step is to use the source entry with the /stripchart parameter to find the next time source in the chain.

w32tm /stripchart /computer:MySourceEntry /packetinfo /samples:1

The command below gives a list of domain controllers found in a specific domain and relays the results that you can use to determine each partner. The command also includes machines with manual configurations.

w32tm /monitor /domain:my_domain

You can use the list to trace the results through the domain and know their hierarchy and time offset at each step.

If you mark the point where the time offset increases, you can identify the cause of the incorrect time.

Using Group Policy

Group policy can be used to enforce strict accuracy by making sure clients are assigned specific NTP servers, and to control how down-level OSes behave when virtualized.

Look at the following list of all possible scenarios and relevant Group Policy settings:

  • Virtualized Domains

To gain control over Virtualized Domain Controllers in Windows 2012 R2, disable the registry entry corresponding to the virtual domain controllers.

You may not want to disable the entry on the PDC, because in most cases the Hyper-V host delivers a stable time source. Changing the registry entry requires restarting the w32time service afterwards.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider]

“Enabled”=dword:00000000

  • Accuracy Sensitive Loads

For any workload that is sensitive to time accuracy, ensure the machines in the group are set to use specific NTP servers, together with related time settings such as update frequency and polling interval.

This is a task normally handled by the domain, but if you want more control, target specific machines to point to the master clock.

| Group Policy Setting | New Value |
|---|---|
| NtpServer | ClockMasterName,0x8 |
| MinPollInterval | 6 (64 seconds) |
| MaxPollInterval | 6 (64 seconds) |
| UpdateInterval | 100 (once per second) |
| EventLogFlags | 3 (all special time logging) |

NOTE: NtpServer and EventLogFlags are located under System\Windows Time Service\Time Providers (Configure Windows NTP Client settings). The other three settings are under System\Windows Time Service (Global Configuration Settings).
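Note that w32time stores MinPollInterval and MaxPollInterval as powers of two (log2 seconds), which is why a setting of 6 corresponds to 64 seconds, and why the defaults in the earlier table run from 1024 (2^10) to 32768 (2^15) seconds:

```python
def poll_interval_seconds(setting):
    """W32time poll intervals are expressed as log2 seconds: 2**setting."""
    return 2 ** setting

print(poll_interval_seconds(6))   # → 64
print(poll_interval_seconds(10))  # → 1024
print(poll_interval_seconds(15))  # → 32768
```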

Remote Accuracy Sensitive Loads

For systems running in branch domains, such as retail and Payment Card Industry (PCI) systems, Windows uses the current site data and the DC Locator to find the local DC, unless a manual NTP time source is configured.

In such an environment, you may need 1 second accuracy, with the option of allowing the w32time service to move the clock backwards.

If you can meet the requirements, use the table below to create a policy.

| Group Policy Setting | New Value |
|---|---|
| MaxAllowedPhaseOffset | 1 (if the offset is more than 1 second, set the clock directly to the correct time) |

MaxAllowedPhaseOffset is a setting you will find under System\Windows Time Service, in the Global Configuration Settings.
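The decision MaxAllowedPhaseOffset controls can be sketched like this. It is a simplification: the real service also factors in the update interval and clock rate when choosing how to correct:

```python
def correction_action(offset_seconds, max_allowed_phase_offset=1):
    """If the measured offset exceeds MaxAllowedPhaseOffset (in seconds),
    the clock is set directly; otherwise the clock rate is slewed so the
    time converges gradually without jumping."""
    if abs(offset_seconds) > max_allowed_phase_offset:
        return "step"   # set the clock to the correct time immediately
    return "slew"       # adjust the clock frequency gradually

print(correction_action(2.5))   # → step
print(correction_action(0.3))   # → slew
```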

Azure and Windows IaaS Consideration

  • Azure Virtual Machine: Active Directory Domain Services

If you have Azure VM running Active Directory Domain Services as part of the existing configuration in a Domain Forest, then the TimeSync (VMIC) should not be running.

Disabling VMIC allows all DCs in both physical and virtual forests to use a single time sync hierarchy.

  • Azure Virtual Machine: Domain-Joined Machine

If you have a VM joined to an existing Active Directory forest, whether virtual or physical, the best approach is to disable TimeSync for the guest and make sure W32Time is set to synchronize with its Domain Controller.

  • Azure Virtual Machine: Standalone Workgroup Machine

If your Azure VM is not part of a domain and is not a Domain Controller, you can keep the default time configuration and let the VM synchronize with the host.

Windows Applications that Require Accurate Time

Time Stamp API

Programs or applications that need time accuracy in line with UTC should use the GetSystemTimePreciseAsFileTime API, which returns the system time as conditioned by the Windows Time service.

UDP Performance

An application that uses UDP to communicate during network transactions should minimize latency. There are registry options you can use when configuring different ports. Note that any changes to the registry should be restricted to system administrators.

Windows Server 2012 and Windows Server 2008 need a Hotfix to avoid datagram losses.

Update Network Drivers

Some network cards have updates that help improve performance and buffering of UDP packets.

Logging for System Auditors

Time-tracing regulations may force you to comply by archiving w32tm logs, performance monitor data, and event logs. Later, these records can be used to confirm your compliance at a specific time in the past.

You can use the following to indicate time accuracy:

  • Clock accuracy, using the Computed Time Offset counter
  • Clock source, looking for “Response from peer” in the w32tm event logs
  • Clock condition status, using the w32tm logs to validate occurrences of “ClockDispln Discipline: *SKEW*TIME*”

Event Logging

An event log can give you a complete story through the information it stores. If you filter for the Time-Service logs, you will discover the influences that have changed the time. Group Policy can affect which events are logged.

W32time Debug Logging

Use the w32tm command-line utility to enable debug logs. The logs will show clock updates as well as the source clock.

Restarting the service enables new logging.
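For example, debug logging can be enabled and the service restarted from an elevated command prompt like this (the file path and size are illustrative values; the /debug flags match the command shown later in this section):

```
w32tm /debug /enable /file:C:\Windows\Temp\w32time-test.log /size:10000000 /entries:0-300
net stop w32time
net start w32time

:: later, to turn debug logging off again:
w32tm /debug /disable
```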

Performance Monitor

The Windows Server 2016 Time service counters can collect the logging information that auditors need. You can log the data locally or remotely by recording the machine's Computed Time Offset and Round Trip Delay counters.

Like any other counter, you can create remote monitors and alerts using the System Center Operations Manager. You can set an alert for any change of accuracy when it happens.

Windows Traceability Example

Using sample log files from the w32tm utility, you can validate two pieces of information. The first is that the Windows Time Service was conditioning the system clock at a given time.

151802 20:18:32.9821765s – ClockDispln Discipline: *SKEW*TIME* – PhCRR:223 CR:156250 UI:100 phcT:65 KPhO:14307

151802 20:18:33.9898460s – ClockDispln Discipline: *SKEW*TIME* – PhCRR:1 CR:156250 UI:100 phcT:64 KPhO:41

151802 20:18:44.1090410s – ClockDispln Discipline: *SKEW*TIME* – PhCRR:1 CR:156250 UI:100 phcT:65 KPhO:38

All messages that start with “ClockDispln Discipline” are proof that your system is conditioning the system clock via w32time.
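As an illustration (this script is not part of the w32tm tooling), the fields in the sample lines above can be pulled out programmatically, for example when auditing an archived log:

```python
import re

# The three sample w32tm debug log lines quoted above
lines = [
    "151802 20:18:32.9821765s - ClockDispln Discipline: *SKEW*TIME* - PhCRR:223 CR:156250 UI:100 phcT:65 KPhO:14307",
    "151802 20:18:33.9898460s - ClockDispln Discipline: *SKEW*TIME* - PhCRR:1 CR:156250 UI:100 phcT:64 KPhO:41",
    "151802 20:18:44.1090410s - ClockDispln Discipline: *SKEW*TIME* - PhCRR:1 CR:156250 UI:100 phcT:65 KPhO:38",
]

def is_clock_discipline(line):
    """True when the line shows w32time conditioning the system clock."""
    return "ClockDispln Discipline" in line

def kpho(line):
    """Extract the numeric KPhO field from a discipline line."""
    return int(re.search(r"KPhO:(\d+)", line).group(1))

offsets = [kpho(l) for l in lines]
print(offsets)  # [14307, 41, 38]
```

Note how the KPhO value collapses from 14307 to double digits between updates, which is the conditioning at work.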

The next step is to find the last report before the time change, to identify the source computer acting as the current reference clock.

In the example below, the IPv4 address 10.197.216.105 is the reference clock. Another reference could point to a computer name or to the VMIC provider.

151802 20:18:54.6531515s – Response from peer 10.197.216.105,0x8 (ntp.m|0x8|0.0.0.0:123->10.197.216.105:123), ofs: +00.0012218s
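The reference clock address and the measured offset can likewise be extracted from a peer-response line; a small sketch for the sample line above:

```python
import re

# The peer-response line quoted above
line = ("151802 20:18:54.6531515s - Response from peer 10.197.216.105,0x8 "
        "(ntp.m|0x8|0.0.0.0:123->10.197.216.105:123), ofs: +00.0012218s")

match = re.search(r"Response from peer (\d{1,3}(?:\.\d{1,3}){3}).*ofs: ([+-][0-9.]+)s", line)
reference_clock = match.group(1)        # upstream reference clock to investigate next
offset_seconds = float(match.group(2))  # offset measured against that reference
print(reference_clock, offset_seconds)
```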

Now that the first section is valid, investigate the log file on the reference time source using the same steps.

This will lead you to a physical clock such as a GPS receiver, or a known time source such as the National Institute of Standards and Technology (NIST). If the reference is GPS hardware, manufacturer logs may be required.

Network Considerations

The NTP protocol's algorithm depends on network symmetry, making it difficult to predict the accuracy achievable in a given environment.

You can use the Performance Monitor and the new Windows Time counters for Windows Server 2016 to create baselines.

The Precision Time Protocol (PTP) and the Network Time Protocol (NTP) are the two protocols you can use to obtain accurate time.

If clients are not part of a domain, Windows uses Simple NTP by default. Clients within a Windows domain use the secure NTP protocol, also referred to as MS-SNTP, which leverages domain communication and consequently gives an advantage over authenticated NTP.

Reliable Hardware Clock (RTC)

Windows will not step the time unless conditions are beyond the norm. Instead, w32tm adjusts the clock frequency at regular intervals, based on the Clock Update Frequency setting, which defaults to 1 second on Windows Server 2016.

It raises the frequency when the clock is behind, and lowers it when the clock is ahead.

This is why you need acceptable results during the baseline test. If the value you get for the Computed Time Offset counter is not stable, you may have to verify the status of the firmware.

Troubleshooting Time Accuracy and NTP

The Discovering Hierarchy section gave us an understanding of where time comes from and where inaccurate time originates.

Look at the time offsets to identify the point where divergence from the NTP sources takes place. Once you can trace the time hierarchy, focus on the divergent system and gather more information to determine the issues causing the inconsistencies.

Here are some tools that you can use:

  • System event logs

  • Enable logging:

w32tm logs – w32tm /debug /enable /file:C:\Windows\Temp\w32time-test.log /size:10000000 /entries:0-300

W32Time registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time

  • Local network interfaces
  • Performance counters
  • W32tm /stripchart /computer:UpstreamClockSource
  • PING UpstreamClockSource (gauging latency and understanding the number of hops to source)

  • tracert UpstreamClockSource

Problem: Local TSC unstable
Symptoms: Use Perfmon (Physical Computer – Sync clock) to check whether the clock is stable
Resolution: Update the firmware, or try alternative hardware to confirm whether it displays the same issue

Problem: Network latency
Symptoms: w32tm /stripchart displays a RoundTripDelay exceeding 10 ms; use tracert to find where the latency originates
Resolution: Locate a nearby clock source for time. Install a source clock on the same network segment, or point to one that is geographically closer. A domain environment needs a machine with the GTIMESERV role

Problem: Unable to reliably reach the NTP source
Symptoms: w32tm /stripchart gives “request timed out”
Resolution: The NTP source is unresponsive; investigate it

Problem: NTP source is not responsive
Symptoms: Check the Perfmon counters for NTP Client Source Count, NTP Server Outgoing Responses, and NTP Server Incoming Requests, and compare them with your baseline test results
Resolution: Use server performance counters to determine changes in load, or whether there is any network congestion

Problem: Domain controller not using the most accurate clock
Symptoms: Changes in topology, or a recently added master clock
Resolution: w32tm /resync /rediscover

Problem: Client clocks are drifting
Symptoms: Time-Service event 36 in the System event log, or a text log entry showing the “NTP Client Time Source Count” counter going from 1 to 0
Resolution: Identify errors in the upstream source and check whether it is experiencing performance issues

Baselining Time

Baseline tests are important because they give you an understanding of the expected performance accuracy of the network.

You can use the output to detect problems on your Windows Server 2016 in the future. The first machine to baseline is the root PDC, or any machine with the GTIMESERV role.

Every PDC in the forest should have baseline test results. Eventually, you should pick other critical DCs and record their baseline results too.

It is important to baseline Windows Server 2016 against 2012 R2 using w32tm /stripchart as a comparison tool. If you use two similar machines, you can compare their results and make a comprehensive analysis.

Using the performance counters, collect data for at least one week to give yourself enough reference points when accounting for various network time issues.

The more figures you have for comparison, the more confident you can be that your time accuracy is stable.
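Once the counter samples are collected, the comparison against a baseline can be made concrete. A sketch with made-up values for the Computed Time Offset counter (in milliseconds); the threshold is hypothetical:

```python
# Hypothetical samples of the "Computed Time Offset" counter, in ms.
samples = [0.4, 0.6, 0.5, 3.2, 0.5, 0.7, 0.4]

mean_offset = sum(samples) / len(samples)
worst_offset = max(samples)
print(round(mean_offset, 3), worst_offset)  # 0.9 3.2

# A previously recorded baseline (hypothetical); flag anything worse.
baseline_worst = 1.0
if worst_offset > baseline_worst:
    print("offset exceeds baseline - investigate upstream time sources")
```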

NTP Server Redundancy

A manually configured NTP server in a non-domain network should have good redundancy measures to achieve better accuracy, provided the other components are also stable.

On the other hand, if your topology is not well designed and other resources are unstable, accuracy will suffer. Take care to limit the number of time servers w32time uses to 10.

Leap Seconds

Climatic and geological activity on Earth makes its rotation period vary; typically, the accumulated difference amounts to about one second every couple of years.

When the difference between atomic time and astronomical time grows, a one-second correction, called a leap second, is applied. The correction keeps the difference below 0.9 seconds and is always announced six months in advance.

Before Windows Server 2016, the Microsoft Time Service did not account for leap seconds and relied on the external time source to handle the adjustment.

Following the changes made in Windows Server 2016, Microsoft is working on a suitable solution to handle the leap second.

Secure Time Seeding

W32time in Windows Server 2016 includes the Secure Time Seeding feature, which determines the approximate current time from outgoing Secure Sockets Layer (SSL) connections. The value helps in correcting gross errors on the local system clock.

The feature is enabled in the default configuration; you can decide not to use it and disable it instead.

If you intend to disable the feature, use the following steps:

  • Set the UtilizeSslTimeData registry value to 0 using the command below:

reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\Config /v UtilizeSslTimeData /t REG_DWORD /d 0 /f

  • If the machine does not detect the change and does not ask for a reboot, notify the W32time service about the configuration change. This will stop it from enforcing time monitoring based on data coming from SSL connections:

W32tm.exe /config /update

  • Rebooting the machine activates the settings immediately and directs the machine to stop collecting data from SSL connections.

For the above setting to take effect across the entire domain, set the UtilizeSslTimeData value to 0 in the W32time Group Policy setting and publish it to the domain.

The moment the setting is picked up by a Group Policy client, the W32time service is notified and stops enforcing and monitoring SSL time data.

If the domain has portable laptops or tablets, you can exclude them from the policy change, because when they lose battery power they will need the Secure Time Seeding feature to reacquire the current time.

Conclusion

The latest developments in Microsoft Windows Server 2016 mean that you can now get highly accurate time on your network, provided you observe certain conditions.

The Windows Time Service's (W32Time) main job is to give your machine the time, regardless of whether it is standalone or part of a network environment.

The primary use of time in a Windows Server 2016 environment is to secure Kerberos authentication.

W32Time makes replay attacks nearly impossible in an Active Directory environment or when running virtual machines on Hyper-V hosts.

Active Directory Authoritative Restore with Windows Server Backup

Overview 

In short, an authoritative restore is a Windows Server process that returns a designated deleted Active Directory object, or a container of objects, to the state it was in at the time it was backed up.

An authoritative restore process replicates the restored object across the organization's domain controllers; in addition, the restore process increases the Update Sequence Number (USN) of all attributes on the restored object.

Because the object has a much higher Update Sequence Number, it replicates across all of the organization's domain controllers and overwrites anything associated with the previous object.

In this article, our goal is to describe the procedure and walk through a test example of this process.

Procedure and Examples 

In our hypothetical scenario, we need to restore a user deleted from Active Directory Users and Computers.

The first step in the scenario is a restoration from backup. To start, reboot the domain controller into Directory Services Restore Mode (safe mode), which can be done by pressing the F8 key during startup.

Log in as the local administrator, using the username .\administrator and the Directory Services Restore Mode (DSRM) password set during domain controller installation.

After login, right-click the Start menu and choose the Command Prompt (Admin) option.

In the Command Prompt, the following command lists the available backups:

wbadmin get versions 

The following command (confirmed with the “yes” option) will start restoration based on the chosen backup version:

wbadmin start systemstaterecovery -version:(chosen version)

You will then be prompted to reboot; confirm with “Yes”.

After the reboot, start the Command Prompt (Admin) again and run the ntdsutil command for accessing and managing the Windows Active Directory (AD) database. (Ntdsutil should only be used by experienced administrators, and always from an elevated command prompt.)

At the ntdsutil prompt, enter the following commands:

 activate instance ntds 

And after that : 

authoritative restore 

At the authoritative restore prompt, enter the full path of the object you want to restore:

restore object "cn=(object name),OU=(organizational unit),DC=(domain name),DC=local"

Confirm with “yes”, and the restoration will start.

Exit the authoritative restore prompt with the command “quit”, and exit ntdsutil with the command “quit”.
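Putting the steps together, a complete ntdsutil session might look like this (the object name, OU, and domain components are hypothetical placeholders):

```
C:\> ntdsutil
ntdsutil: activate instance ntds
ntdsutil: authoritative restore
authoritative restore: restore object "CN=John Smith,OU=Sales,DC=example,DC=local"
authoritative restore: quit
ntdsutil: quit
```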

From the Command Prompt, disable the server's safe boot sequence with the command:

bcdedit /deletevalue safeboot 

After rebooting and logging in to the server, the restored object should be back in Active Directory.

Do you want to prevent unauthorized deletion of directory objects or similar problems?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

Windows Server: Clean Up Orphaned Foreign Security Principals

This article shows different ways to clean up orphaned Foreign Security Principals (FSPs). Wherever possible, clean your orphaned FSPs using the GUI method; the PowerShell methods are not recommended for users without solid console experience, due to the issues they can cause.

But first, let's see an overview – what is an FSP?

Overview 

Foreign Security Principals (FSPs) are security principals created when an object (user, computer, or group) originating from an external trusted domain is added to a group in the local domain.

An FSP is recognizable by its marking: a red curly arrow attached to the object's icon, acting as a pointer.

Active Directory creates them automatically when a security principal from another forest is added to a group in the local domain.

When the security principal an FSP points to is removed, the FSP becomes orphaned. “Orphan” is the term for an FSP whose target principal no longer exists.

If the same principal is recreated, the old FSP remains orphaned; the new FSP has a different SID (security identifier), no matter that the principal looks the same.

The outcome is that an FSP that is once orphaned stays orphaned until it is removed by an administrator.

There are two ways to identify and clean orphaned FSPs: through the GUI (Graphical User Interface), or through the PowerShell console.

Identification and clean up of orphaned FSP via GUI 

As mentioned before, this is the most recommended way of cleaning orphaned FSP’s. 

In the GUI, orphaned FSPs can be found in the Active Directory Users and Computers console when Advanced Features are enabled (if Advanced Features are not enabled, FSPs will not be visible). They are stored in the ForeignSecurityPrincipals container, and orphaned FSPs can be identified through the “Readable Name” column.

If an FSP is orphaned, its Readable Name column in the console shows up empty.

They can be cleaned by selecting them and deleting them via right-click.

Cleaning FSP via PowerShell 

For PowerShell cleaning, all FSP objects first have to be listed. 

All FSPs can be listed using the Get-ADObject cmdlet:

Get-ADObject -Filter { ObjectClass -eq 'foreignSecurityPrincipal' }

Once listed, they can be removed using the Translate method, but caution is advised: if a network connectivity issue makes an FSP appear momentarily orphaned, the PowerShell method will delete it too, which can cause problems due to the SID change.

$ForeignSecurityPrincipalList = Get-ADObject -Filter { ObjectClass -eq 'foreignSecurityPrincipal' }
foreach ($FSP in $ForeignSecurityPrincipalList)
{
    Try
    {
        # If the SID still translates to an account, the FSP is not orphaned
        $null = (New-Object System.Security.Principal.SecurityIdentifier($FSP.objectSid)).Translate([System.Security.Principal.NTAccount])
    }
    Catch
    {
        # Translation failed: the FSP is orphaned, so remove it
        Remove-ADObject -Identity $FSP
    }
}

Scheduled removal of orphaned FSP 

A task can be scheduled to make removal of orphaned FSPs automatic.

The best way to remove FSPs on a schedule is with a custom script. For example:

Suppose a fictive company has a monthly turnover of 50 employees.

A custom script can be made to delete orphaned FSPs in a time range of one month:

 

Import-Module -Name OrphanForeignSecurityPrincipals

$MyCompanyTurnover = 50
$OrphanFSPListFilePath = 'C:\temp\OFSP.txt'
$OrphanForeignSecurityPrincipalsList = Get-OrphanForeignSecurityPrincipal -TabDelimitedFile $OrphanFSPListFilePath

If ($OrphanForeignSecurityPrincipalsList)
{
    # More orphans than the expected turnover: mail the list to an
    # administrator for review instead of deleting automatically
    If ($OrphanForeignSecurityPrincipalsList.Count -gt $MyCompanyTurnover)
    {
        $MailParameters = @{
            SmtpServer  = 'mail.mycompany.com'
            From        = 'NoReply@mycompany.com'
            To          = 'Administrator@mycompany.com'
            Subject     = 'Orphan Foreign Security Principals found'
            Body        = 'Please check attached file.'
            Attachments = $OrphanFSPListFilePath
        }
        Send-MailMessage @MailParameters
    }
    else
    {
        # Expected amount: remove the orphaned FSPs listed in the file
        Remove-OrphanForeignSecurityPrincipal -TabDelimitedFile $OrphanFSPListFilePath
    }
}

Recovery of deleted FSP 

Deleted orphaned FSPs can be restored from the Recycle Bin, provided the Recycle Bin feature was activated before the deletion was made.

FSPs can be restored via PowerShell cmdlets too. Deleted objects can be found with the following cmdlet; once they are listed, the selected orphaned FSPs can be restored:

Get-ADObject -Filter 'IsDeleted -eq $TRUE' -IncludeDeletedObjects | Where-Object {$_.DistinguishedName -like "CN=S-*"} 
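As a sketch of that restore step (assuming the Active Directory Recycle Bin is enabled), the listed objects can be piped straight to Restore-ADObject:

```powershell
Get-ADObject -Filter 'IsDeleted -eq $TRUE' -IncludeDeletedObjects |
    Where-Object { $_.DistinguishedName -like "CN=S-*" } |
    Restore-ADObject
```

In practice you would filter the list down to the specific FSPs you want back before piping to Restore-ADObject.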

There is one more way of restoring orphaned Foreign Security Principals worth mentioning.

It is to repeat the same steps that created the FSP in the first place: add the foreign user/computer/group account back into the same groups it belonged to before it became orphaned. This creates an equivalent Foreign Security Principal, just with a different SID.

 
