Windows Server 2016 Datacenter brought new and improved features for building a software-defined data center (SDDC) to host virtualized workloads.
SDDC stands for Software-Defined Data Center: a data center whose virtualized infrastructure is delivered as a service. Microsoft positions the SDDC, built on Hyper-V, as a more flexible, cost-effective data center platform. It offers the possibility of moving entire operating models away from the physical data center.
Virtualized workload support in Windows Server 2016 centers on four new and improved storage features:
- Storage Spaces Direct – A new Windows Server 2016 feature that extends the existing Windows Server software-defined storage (SDS) stack. It enables building highly available (HA) storage systems from local, directly attached storage. These HA systems are highly scalable and considerably cheaper than traditional SAN or NAS arrays; they also simplify procurement and deployment and offer higher efficiency and performance.
- Storage Replica – This provides block-level replication between servers or clusters and is intended primarily for disaster recovery, such as restoring service at an alternate data center with minimal downtime or data loss, or proactively shifting services to an alternate site. It supports two types of replication: synchronous (used primarily for high-end transactional applications that need instant failover if the primary node fails) and asynchronous (which commits data to a memory- or disk-based journal and then copies it to the replication targets in near real time or at scheduled intervals).
- Storage Quality of Service (QoS) – A feature that provides central monitoring and management of storage performance for virtual machines using the Hyper-V and Scale-Out File Server roles. In Windows Server 2016, QoS can prevent a single VM from consuming all available storage resources. It also lets you monitor the performance details of all running virtual machines, and the configuration of the Scale-Out File Server cluster, from one place. In addition, it defines performance minimums and maximums for virtual machines and ensures that they are met.
- Data Deduplication – A feature that helps in reducing the impact of redundant data on storage costs. Data Deduplication optimizes free space on a volume by examining the data on the volume for duplication. Once identified, duplicated portions of the volume’s dataset are stored once and are (optionally) compressed for additional savings.
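The two Storage Replica modes described above differ only in when the write is acknowledged. This toy Python sketch (hypothetical classes, not the Windows implementation) contrasts them: synchronous replication acknowledges a write only after the partner holds it, while asynchronous replication acknowledges locally and drains a journal later.

```python
# Conceptual sketch of synchronous vs. asynchronous block replication.
# All class and method names are illustrative, not Storage Replica APIs.

class Target:
    """Stands in for the replication partner's volume."""
    def __init__(self):
        self.blocks = {}

    def apply(self, offset, data):
        self.blocks[offset] = data


class SyncReplicator:
    def __init__(self, target):
        self.local, self.target = {}, target

    def write(self, offset, data):
        self.local[offset] = data
        self.target.apply(offset, data)  # partner must have it before we ack
        return "ack"                     # hence zero data loss on failover


class AsyncReplicator:
    def __init__(self, target):
        self.local, self.target, self.journal = {}, target, []

    def write(self, offset, data):
        self.local[offset] = data
        self.journal.append((offset, data))  # commit to the journal first
        return "ack"                         # ack before replication happens

    def flush(self):
        # Runs in near real time or on a schedule, per the description above.
        while self.journal:
            self.target.apply(*self.journal.pop(0))
```

The trade-off is visible in the code: the synchronous path pays the partner round-trip on every write, while the asynchronous path risks losing whatever is still in the journal if the primary fails before `flush()` runs.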
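The Storage QoS idea of enforcing per-VM minimums and maximums can be illustrated with a simplified allocator. This is a hypothetical sketch of the policy logic, not how Windows distributes IOPS: every VM first receives its minimum, then the remaining cluster capacity is handed out up to each VM's maximum.

```python
# Illustrative min/max IOPS policy enforcement (names and logic are
# hypothetical; the real Storage QoS scheduler is far more sophisticated).

def allocate_iops(demand: dict, policies: dict, cluster_iops: int) -> dict:
    """Give every VM its policy minimum first, then share the remaining
    cluster capacity, capped at each VM's policy maximum."""
    alloc = {vm: min(demand[vm], policies[vm]["min"]) for vm in demand}
    spare = cluster_iops - sum(alloc.values())
    for vm in demand:
        want = min(demand[vm], policies[vm]["max"]) - alloc[vm]
        give = min(want, max(spare, 0))
        alloc[vm] += give
        spare -= give
    return alloc

# Example: a greedy VM ("sql") is capped at its maximum, so it cannot
# consume the whole cluster, and "web" still gets served.
policies = {"sql": {"min": 500, "max": 2000}, "web": {"min": 100, "max": 500}}
print(allocate_iops({"sql": 3000, "web": 800}, policies, 2200))
# → {'sql': 2000, 'web': 200}
```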
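The Data Deduplication process described above can be sketched in a few lines: split data into chunks, store each unique chunk once (compressed), and represent a file as a list of chunk references. This is a minimal illustration with fixed-size chunks; the actual feature uses variable-size chunking and many optimizations.

```python
import hashlib
import zlib

# Toy chunk store illustrating the dedup-then-compress idea.
# Fixed 8-byte chunks for brevity; not the Windows implementation.

class ChunkStore:
    def __init__(self, chunk_size=8):
        self.chunk_size = chunk_size
        self.chunks = {}  # content hash -> compressed chunk, stored once

    def write_file(self, data: bytes) -> list:
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:       # duplicates are stored once
                self.chunks[digest] = zlib.compress(chunk)
            refs.append(digest)
        return refs                             # the "file" is a ref list

    def read_file(self, refs: list) -> bytes:
        return b"".join(zlib.decompress(self.chunks[d]) for d in refs)

store = ChunkStore()
refs = store.write_file(b"ABCDEFGH" * 100)      # highly redundant data
print(len(store.chunks))                        # → 1 (one unique chunk)
assert store.read_file(refs) == b"ABCDEFGH" * 100
```

With fully redundant input, 800 bytes of file data collapse to a single stored chunk, which is the space saving the feature is after.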
General Purpose File Servers
- Work Folders, first introduced in Windows Server 2012 R2, allow users to synchronize folders across multiple devices. They are comparable to existing solutions such as Dropbox, with the difference that your own file server acts as the repository and no third-party service provider is involved. This model of synchronization is convenient for companies, which keep the data on their own infrastructure, and for users, who can work on their files without being tied to a corporate PC or being online.
- Offline Files and Folder Redirection are features that, when used together, redirect the path of local folders (such as the Documents folder) to a network location while caching the contents locally for increased speed and availability.
- Used separately, Folder Redirection enables users and administrators to redirect a local folder to another (network) location, making its files available from any computer on the network. Offline Files keeps those files accessible even when the computer is offline or the network is slow; when working offline, files are retrieved from the local Offline Files cache at local access speeds.
- Roaming User Profiles redirect user profiles to a file share so that users receive the same operating system and application settings on multiple computers.
- DFS Namespaces enable you to group shared folders located on different servers into one logically structured namespace, which makes it easier to manage shared folders in multiple locations from a single place.
- File Server Resource Manager (FSRM) is a feature set in the File and Storage Services server role that helps classify and manage data stored on file servers. Its features provide insight into your data by automating classification processes, apply conditional policies or actions to files based on their classification, limit the space allowed for a volume or folder, control the types of files that users can store on a file server, and provide reports on disk usage.
- iSCSI Target Server is a role service that provides block storage to other servers over the network and simplifies the related management tasks. This is useful for network (diskless) boot, for serving heterogeneous storage consumers, and for testing applications before deployment to a storage area network.
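The DFS Namespaces idea above boils down to one lookup: a logical path in a single namespace resolves to a share on whichever server actually hosts it. This toy Python sketch uses entirely hypothetical server and share names to show that mapping.

```python
# Toy model of a DFS namespace: one logical tree whose folders resolve to
# shares on different servers. All UNC paths below are made up.

namespace = {
    r"\\contoso\public\tools":    r"\\server1\tools",
    r"\\contoso\public\training": r"\\server2\training",
    r"\\contoso\public\hr":       r"\\server3\hr-docs",
}

def resolve(logical_path: str) -> str:
    """Return the physical share path a logical DFS path refers to."""
    for prefix, target in namespace.items():
        if logical_path.lower().startswith(prefix.lower()):
            return target + logical_path[len(prefix):]
    raise KeyError(logical_path)

print(resolve(r"\\contoso\public\training\module1.pptx"))
# → \\server2\training\module1.pptx
```

Users only ever see the `\\contoso\public` tree; which server a folder lives on can change without affecting them, which is the point of the namespace.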
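Two of the FSRM capabilities listed above, folder quotas and control over which file types users can store (file screening), can be sketched as a simple admission check. The limits, extensions, and function below are hypothetical illustrations, not FSRM's actual rules or API.

```python
import os

# Illustrative FSRM-style checks: a hard folder quota plus a file screen
# that blocks certain extensions. Values and logic are made up.

QUOTA_BYTES = 10 * 1024**2            # hypothetical 10 MB folder quota
BLOCKED_EXTENSIONS = {".mp3", ".exe"} # hypothetical file-screen rule

def can_store(folder_usage: int, filename: str, filesize: int):
    """Decide whether a new file may be written to the managed folder."""
    ext = os.path.splitext(filename)[1].lower()
    if ext in BLOCKED_EXTENSIONS:
        return False, f"file screen blocks {ext} files"
    if folder_usage + filesize > QUOTA_BYTES:
        return False, "quota exceeded"
    return True, "ok"

print(can_store(9 * 1024**2, "report.docx", 512 * 1024))  # allowed
print(can_store(9 * 1024**2, "song.mp3", 512 * 1024))     # screened out
```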
File Systems and Protocols
- NTFS and ReFS – NTFS is the mature, general-purpose file system; ReFS (Resilient File System) is a newer, more resilient file system that maximizes data availability, scales to large data sets, and protects the integrity of data across different workloads.
- SMB (Server Message Block) – Provides access to files or other resources at a remote server. This allows applications to read, create, and update files on the remote server. It can also communicate with any server program that is set up to receive an SMB client request.
- Storage Class Memory – Provides performance similar to computer memory, but with the data persistence of normal storage drives.
- BitLocker – Protects data and the system against offline attacks by storing data on volumes in an encrypted format. It continues to provide protection even if the computer is tampered with or the operating system is not running.
- NFS (Network File System) – Provides a file sharing solution for enterprises that have heterogeneous environments that consist of both Windows and non-Windows computers.
The SDDC represents a departure from the traditional data center, where infrastructure is defined by physical hardware and devices; instead, its components are built on network, storage, and server virtualization.