Microsoft Active Directory Permissions: Best Practices for Data Protection

In this article, we present best practices for data protection in the most widely used directory service: Microsoft Active Directory Domain Services (AD DS).

Microsoft Active Directory (AD) is a database that keeps track of all the “objects” in the system – users, computers, security groups, services, etc. AD DS provides a single central location for defining and updating all the rights a particular object has on the network.

In short, it is a vital part of any Microsoft server environment, and it deserves the highest level of security.

So let’s start with tips and best practices for securing Microsoft Active Directory as effectively as possible.

Least-Privilege User Access (LUA)  

The principle of least privilege (PoLP, also known as the principle of minimal privilege or the principle of least authority) requires that in a particular abstraction layer of a computing environment, every module (such as a process, a user, or a program, depending on the subject) must be able to access only the information and resources that are necessary for its legitimate purpose. (Wikipedia Definition) 

As part of this principle, the recommendation is to use LUA:

LUA is the reverse of granting administrative privileges to all users and then scaling back permissions as needed: every account starts with minimal rights, and permissions are added only when required. It’s one of the best ways to keep your network safe.

It is the “hard way” of granting permissions to users, personalized for each one. It is based on determining the needs of every user on the network and granting permissions for exactly those needs, no more, no less.

The process is not easy; it requires a lot of communication and takes a lot of time to configure the system that way. But in the long term, your system will operate safely and as it should.

There are variations of this plan, such as creating “section groups” with different permissions and then placing everyone from a section in the matching group. But that is not a personalized setup, and it can still grant too much or too little to an individual user.
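
A practical first step toward LUA is reviewing who currently holds broad privileges. Below is a minimal sketch using the ldap3 Python library; the server name, account, and DNs are placeholders for your environment, not real values:

```python
# A minimal sketch: list members of a privileged AD group for review.
# Assumptions: the ldap3 library is installed; the server, account, and
# DNs below are placeholders for your environment.
from getpass import getpass
from ldap3 import ALL, NTLM, SUBTREE, Connection, Server

server = Server("dc01.example.com", get_info=ALL)            # hypothetical DC
conn = Connection(server, user="EXAMPLE\\auditor",           # hypothetical account
                  password=getpass(), authentication=NTLM, auto_bind=True)

# Direct members of Domain Admins (nested membership needs the
# matching-rule-in-chain filter, OID 1.2.840.113556.1.4.1941).
conn.search("DC=example,DC=com",
            "(&(objectCategory=person)"
            "(memberOf=CN=Domain Admins,CN=Users,DC=example,DC=com))",
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "displayName"])

for entry in conn.entries:
    print(entry.sAMAccountName, "-", entry.displayName)
```

Any account on that list that does not strictly need domain-wide rights is a candidate for scaling back.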

Know Your Active Directory Security Model 

The Microsoft Active Directory security model keeps every object stored in Active Directory safe and protected.

That includes domain user and computer accounts, security groups, and group policies. 

It helps administrators determine user access to any object and provides the option to specify access for groups of users as part of security management.

Every single object in Microsoft Active Directory has a security descriptor associated with it. The security descriptor defines the permissions on the object. Among its attributes is the Access Control List (ACL), which contains numerous Access Control Entries (ACEs), each of which allows or denies specific permissions to a user or security group.

ACEs can be explicit or inherited; explicit ACEs generally override inherited ACEs. 
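
This structure can be inspected programmatically. Here is a minimal sketch using the pywin32 package; it walks the DACL of a folder (a placeholder path), since files and folders carry the same security descriptor, ACL, and ACE structure described above:

```python
# A minimal sketch: walk an object's DACL and print its ACEs.
# Assumptions: the pywin32 package is installed; the path is a placeholder.
# A folder is used for illustration; AD objects carry the same
# security descriptor -> ACL -> ACE structure described above.
import win32security

PATH = r"C:\Shares\Sales"  # hypothetical folder

sd = win32security.GetFileSecurity(PATH, win32security.DACL_SECURITY_INFORMATION)
dacl = sd.GetSecurityDescriptorDacl()  # may be None if no DACL is present

for i in range(dacl.GetAceCount()):
    (ace_type, ace_flags), access_mask, sid = dacl.GetAce(i)
    name, domain, _ = win32security.LookupAccountSid(None, sid)
    kind = "ALLOW" if ace_type == win32security.ACCESS_ALLOWED_ACE_TYPE else "DENY"
    inherited = bool(ace_flags & win32security.INHERITED_ACE)
    print(f"{kind:5} {domain}\\{name} mask=0x{access_mask:08X} inherited={inherited}")
```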

And this is just the tip of the Microsoft Active Directory security model iceberg.

The security model is not easy to learn or explain in a single article; even some experienced administrators have a hard time understanding the full model. Every system administrator is therefore advised to make it a personal goal to gather as much knowledge about it as possible.

A better understanding of the model provides better insight into how system security functions, better protection for your organization, and with that, better productivity and quality of service.

Much more about the Active Directory security model can be found at the following link:

http://www.paramountdefenses.com/active-directory-security/model.html 

Keep Your Software Up To Date and Secure 

In May 2017, many Windows-based systems were hit by the WannaCry ransomware worm. Although Microsoft had discovered the underlying vulnerability and released a patch a month before the attack took place, many systems had not applied it. They were struck by the worm, which intruded into the system, encrypted data, and demanded a ransom in Bitcoin.

The attack was stopped within a few days of its discovery due to emergency patches released by Microsoft and the discovery of a “kill switch” that prevented infected computers from spreading WannaCry further.

The attack was estimated to have affected more than 200,000 computers across 150 countries, with total damages ranging from hundreds of millions to billions of dollars.

Experts advised affected users against paying the ransom, as there were no reports of data being returned after payment, and high revenues would only encourage more such attacks. After the attack had subsided, a total of 327 payments totaling $130,634.77 (51.62396539 XBT) had been transferred.

Like every such incident, this one is a great opportunity to learn from previous errors so they are not made again.

This expensive and very real example shows the importance of updating software and applying official patches to your systems.

Software without updates applied is unreliable software. A patch or update is made for a reason; in most cases, it improves security and makes your system less vulnerable to any type of attack.

Microsoft provides excellent resources to help administrators keep their systems healthy and protected. All admins are highly recommended to monitor TechNet and the Microsoft Secure blog to keep up with system software and security updates.
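
Keeping track of what is already installed is half the job. Here is a minimal sketch that shells out to PowerShell’s Get-HotFix cmdlet to list installed updates; it assumes it runs on the Windows host being checked:

```python
# A minimal sketch: list installed updates via PowerShell's Get-HotFix.
# Assumption: this runs on the Windows host being checked.
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-HotFix | Sort-Object InstalledOn | "
     "Select-Object HotFixID, Description, InstalledOn | Format-Table -AutoSize"],
    capture_output=True, text=True, check=True)

print(result.stdout)  # compare the most recent entries against Microsoft's bulletins
```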

Patching is not only up to administrators, although their part of the job is the most important; organizations must keep their hardware up to date too, since obsolete hardware can also raise the risk of security breaches. Realizing that money spent on hardware is not thrown away, but invested in security and functionality, is the right mindset for any organization.

Usage of Built-in Active Directory Features

Many built-in Active Directory features can help administrators protect data and the system environment. None of them is a “one program solves all” solution or some “big” lifesaving fix, but used correctly, they lower the risk of potential security breaches.

Here is a list of some of the useful built-in features:

Security Descriptor Propagator (SDProp) – A background process that periodically compares the permissions on the domain’s protected user accounts and groups with those defined on the AdminSDHolder object. If it finds any mismatch, it resets the permissions (see the sketch after this list).

AdminSDHolder – A container whose permissions serve as the template enforced on protected user accounts and groups, no matter where they are located in the domain.

Privileged Identity Management – Allows the administrator to grant temporary rights and permissions to an account so it can perform required functions.

Role-Based Access Control – Lets the administrator group users and grant them access to resources in the domain according to previously defined rules.
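
Protected principals are easy to enumerate, because SDProp stamps them with the adminCount attribute. A minimal sketch, reusing the hypothetical ldap3 connection from the earlier example:

```python
# A minimal sketch: list accounts flagged as protected (adminCount=1).
# SDProp stamps this attribute on the principals AdminSDHolder protects.
# Assumption: `conn` is an authenticated ldap3 Connection as in the
# earlier example; the base DN is a placeholder.
conn.search("DC=example,DC=com",
            "(&(objectCategory=person)(adminCount=1))",
            attributes=["sAMAccountName"])

for entry in conn.entries:
    print(entry.sAMAccountName)  # each inherits AdminSDHolder's ACL
```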

Usage of Isolated Workstations for Managing DCs

If, for any reason, there is a need to log on to Active Directory with an elevated account, the operation should always be performed from a special device, preconfigured to reduce the risks associated with everyday tasks.

Such workstations should be isolated from the internet and, when used, operated according to the Least-Privilege User Access (LUA) principles described earlier.

These workstations should be fully protected by every kind of security software available (anti-malware, endpoint firewall, and application control).

DC administration workstations should be kept in their own organizational unit so a special group policy set can be applied to them (restricted local logons and other limitations).

User accounts used on isolated workstations may be Service Desk accounts that have the ability to reset passwords for most of the users in a domain, accounts that are used to administer DNS records and zones, or accounts that are used for configuration management. Secure administrative hosts should be dedicated to administrative functionality, and they should not run software such as email applications, web browsers, or any type of productivity software. 

Conclusion 

In conclusion, Microsoft Active Directory security is a huge, living topic that can be studied and elaborated on over and over. Beyond the tools and techniques described here, the best practice is continuous learning and monitoring, not only of your systems but also of Microsoft news and updates.

It is a hard job without permanent solutions. As systems develop and change, so do potential threats and malware; being a server administrator is exactly that kind of never-ending process.

Prevent Unauthorized Access to Sensitive Data!

  • No more unauthorized access to sensitive data
  • No more unclear permission assignments
  • No more unsafe data
  • No more security leaks

Get your free trial of the easiest and fastest NTFS Permission Reporter now!

Active Directory Design Guide

Companies use Active Directory Domain Services (AD DS) in a server environment to simplify work for network users and to ensure that resource sharing and management are secure and scalable and that all objects work according to their respective configurations. A well-designed AD DS can manage the entire network infrastructure, including branch offices and multi-forest environments. System administrators should develop a habit of documenting all aspects of the domain structure and security strategy, as this documentation becomes the plan for future infrastructure and possible migrations.

The Basics of Active Directory Planning  

When planning for a domain, two things come into play: domain upgrading and domain restructuring. Upgrading your domain is more than just upgrading every domain controller; it involves upgrading both the Primary Domain Controller (PDC) and the Backup Domain Controller (BDC). Restructuring involves creating a new Active Directory from scratch; it may lead to fewer but larger domains.

Develop a Migration Strategy

Having a migration strategy in place is an integral part of your overall design plan. A migration strategy involves studying the current or proposed configuration details and identifying which aspects of the domain will be migrated. A fallback system also has to be in place to counter any possible failure.

Working with a Simple Design  

Active Directory is flexible enough to give you an easy time when designing forests. Designing a domain for every department may look desirable in an organization, but do not forget the general rule: run fewer, more effective domains. An alternative to creating a domain for every department is to use organizational units, which are flexible and easy to manage.

Active Directory Domain Design  

Active Directory has four main divisions: forests, domains, sites, and organizational units. System administrators should make the most of these divisions to get the best out of any directory structure.

When creating your domains, it is recommended that domain members be located as near to each other as possible. This is best practice because the level of traffic within a domain is higher than the traffic between two different domains. Smaller domains also limit the need to invest in expensive connections to increase bandwidth. Remember to use organizational units to delegate administrative privileges within Active Directory.

The Design of Groups and Organizational Units  

Before thinking about how the groups and organizational units will work, system administrators should know in advance the role of each group or unit. The idea is to have functional organizational units and groups that simplify the Active Directory environment. This goes a long way toward simplifying management by giving you more control over Active Directory. An Active Directory without a logical design of its users leads to confusion. Here are some of the best practices when designing organizational units:

  • Maintain a simple OU structure
  • Limit OU nesting to fewer than 10 layers
  • Apply Group Policy to groups via Group Policy filtering
  • Do not use local groups for permissions in a domain environment
  • Use domain local groups to control access to resources and to group similar user groups.

You can also use hidden OUs to prevent viewing or altering in environments where network application services are shared between departments and with external customers.
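
Once the OU design is agreed on, creating the structure is a small scripting task. A minimal sketch with the ldap3 Python library; the connection and DNs are placeholders for your environment:

```python
# A minimal sketch: create a shallow OU structure with ldap3.
# Assumptions: `conn` is an authenticated ldap3 Connection with rights to
# create OUs; the DNs are placeholders for your domain.
for ou_dn in ("OU=Departments,DC=example,DC=com",
              "OU=Sales,OU=Departments,DC=example,DC=com",
              "OU=Engineering,OU=Departments,DC=example,DC=com"):
    if not conn.add(ou_dn, object_class="organizationalUnit"):
        print("failed:", ou_dn, conn.result["description"])
```

Note the shallow nesting, in line with the rule of keeping the OU structure simple.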

Use Rules for Active Directory Sites  

Active Directory sites are an important element of any Active Directory domain. A site is not bound to a single domain; it can contain any computer object within the forest, so sites cut across domains and organizational units. Sites map the physical network so traffic flows efficiently, and they regulate traffic over slower WAN links within the network, which effectively increases productivity and reduces connectivity costs.

General good practices when designing sites:

  • Sites should reflect the physical and geographical topology
  • Every site should have at least one local domain controller
  • Sites should be connected by fast links
  • Remote clients do not need a dedicated site
  • Sites are desirable when replication services are needed
  • Sites can be added, changed, or removed without affecting network operations or configurations

Active Directory Design Requirements  

Before deploying any Active Directory services, the logical structure that reflects the working environment should be in place. The AD DS logical structure defines how directory objects are organized and provides a method for managing individual accounts and shared resources. When planning the logical structure, determine the number of forests, the domain designs, the Domain Name System (DNS) infrastructure, and the organizational units.

The design of the logical structure should follow this process:

  • Identification of the technical staff in charge of deployment
  • Creation of the forest design
  • Creation of the domain design for each forest
  • Design of a DNS infrastructure to support AD DS for every forest
  • Design of organizational units for delegating administrative tasks in every forest

1. Designing the Site Topology

The site topology of the Active Directory network is a logical representation of the physical network. It holds all the information about the AD DS sites, the locations of domain controllers, and the site links that support AD DS replication between sites.

The site topology design goes through the following process:

  • Gather all network information  
  • Plan where to place the domain controllers  
  • Create the site design  
  • Create the link design  
  • Create the site link bridges

2. Planning for Domain Controller Capacity  

For efficient AD DS output, system administrators should determine the number of domain controllers for each site. Capacity planning for the domain controllers takes care of all the hardware requirements and avoids incidents of poor domain controller performance.

Domain controller capacity planning involves:

  • Collect site topology and design information  
  • Determine the number of domain controllers  
  • Create the site design  
  • Assess disk space and memory requirements  
  • Monitor domain controller performance  

Please note that some features can be added to the Domain design by raising the functional levels of the forests.  

Conclusion  

The strategies presented in this guide apply to any server operating environment. If you are not sure whether your environment meets the minimum system requirements, consult other professionals on what needs to be done to deploy AD DS.

Want efficient and accurate reports about NTFS permissions on all the folders in your Windows Server environment?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

Guide to Securing a File Server

How hardened is your organization’s file server against unauthorized access to its data? Every organization running a file server needs a way of protecting sensitive information from unauthorized access, especially from outside.

The fact that your server may be in a physically secure place does not mean the security configuration within the operating system should be ignored and the system left at the mercy of anyone with access to the building.

The file server, as the most visible and central network device holding critical information, should be secured using the approaches this guide proposes. System administrators should take a hacker’s point of view when reviewing security policies; it helps them find a better approach to server security settings.

The Guide

Securing a file server ensures that the organization enjoys the full benefits of a server in its optimum working condition. Here are some of the steps needed to enforce server security settings.

1. Configure the Operator Group
Instead of using the default Administrators group, create another group of people (user accounts) authorized to access the server at different access levels. The default Administrator account should be used only as a last resort.

The Operators group can be defined by the tasks its users are assigned with respect to modifying security settings. To set this up, open the organizational unit and remove unwanted groups, retaining the default Domain Admins. Create the Operators group and grant Full Control to both the Domain Admins and Operators groups.

2. Create a Security Template
By the time you sit down to secure the file server, you should already have security settings that can be used on the network or in a standalone environment. Security templates can be created and then imported into Group Policy to be applied across the entire network.

The security templates need to be flexible, so that when more than one is applied to the same server there is never a conflict.

Setting Account Policies
Account policies are settings related to passwords, account lockout, and Kerberos policy. These policies should be applied at the domain level. A standalone file server is accessed with local accounts, so the security settings of those accounts need to be configured.

Other settings include:

  • Increasing the minimum password length – longer passwords mean a hacker needs more time to crack local account passwords.
  • Decreasing the maximum password age – reducing a password’s maximum age forces more frequent password changes, increasing the integrity of local user accounts.
  • Account lockout duration – this setting determines how long a locked-out account waits before the password can be re-entered. A value of 0 keeps the account locked until an administrator resets it, while a value such as 30 minutes means attackers have to wait longer to key in more attempts. (A command-line sketch follows this list.)
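
The password-policy values can be applied from an elevated prompt with the built-in net accounts command, as this minimal sketch shows; the numbers are examples, not recommendations:

```python
# A minimal sketch: apply the password-policy values discussed above with
# the built-in `net accounts` command (elevated prompt; example values).
import subprocess

subprocess.run(["net", "accounts",
                "/minpwlen:12",   # minimum password length
                "/maxpwage:60"],  # force a change every 60 days
               check=True)

# Account lockout duration is configured through Group Policy or a
# security template (see the secedit sketch later in this guide).
```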

Setting the Local Policies
Under local policies, system administrators need to look at the Audit policies, user rights, and security options.

  • Audit Policy
    Turning on the Audit Policy settings indicates that files, registry keys, folders, and printers can be tracked. This gives administrators the freedom to choose which objects to log and the level of monitoring to use. In a high-security environment, knowing who is responsible for which activity is paramount, so auditing privileges is required when securing the file server. (See the sketch after this list.)
  • User Rights
    When setting user properties on a local machine, rights such as “Act as part of the operating system” should be disabled.
  • Security Options
    Some default settings under this option can be used to tighten the security level of a file server. Some of these defaults may not work with all server operating systems, so test before implementation.
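
Object-access auditing can be switched on from the command line with the built-in auditpol tool. A minimal sketch enabling the “File System” subcategory:

```python
# A minimal sketch: enable file-system audit events with the built-in
# `auditpol` tool (elevated prompt). Individual files and folders still
# need SACL entries before events are actually logged.
import subprocess

subprocess.run(["auditpol", "/set", "/subcategory:File System",
                "/success:enable", "/failure:enable"], check=True)
```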

Settings on Services
Services that are not in use should be disabled, and a person responsible for starting, stopping, and disabling them identified. Two specific services apply in specific scenarios: the Distributed File System (DFS) and the File Replication Service (NTFRS).

DFS presents local disks as shares across the network; disabling DFS means users must know the exact names of the shared resources and servers on the network. NTFRS controls the automatic replication of files across multiple servers and must run on Active Directory domain controllers.
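
Disabling an unneeded service is a two-step operation with the built-in sc tool. A minimal sketch, using DFS only because it is the example from the text; verify your own dependencies first:

```python
# A minimal sketch: stop a service and prevent it from starting again,
# using the built-in `sc` tool (elevated prompt). "Dfs" is only the
# example from the text; verify your own dependencies first.
import subprocess

SERVICE = "Dfs"  # service short name; confirm with `sc query`

subprocess.run(["sc", "stop", SERVICE], check=False)             # may already be stopped
subprocess.run(["sc", "config", SERVICE, "start=", "disabled"], check=True)
```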

3. Determine the Template Application Strategy
Security templates can be imported into Group Policy at the domain level so that they are shared across multiple workstations and refreshed as needed. When dealing with individual workstations, security policies can be applied with the secedit command, and batch files can be written to refresh the settings.
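
A minimal sketch of applying a saved template with secedit; the template and database paths are placeholders:

```python
# A minimal sketch: apply a saved security template with the built-in
# `secedit` tool. The .sdb database and .inf template paths are placeholders.
import subprocess

subprocess.run(["secedit", "/configure",
                "/db", r"C:\Security\fileserver.sdb",
                "/cfg", r"C:\Security\fileserver.inf",
                "/overwrite", "/quiet"], check=True)
```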

4. Set up Restricted Groups
Make use of the Local Group Policy to restrict group activities. Local Administrators and any group that was created can be restricted. New users can be added to the restricted groups. Restricting administrators who can change system settings prevents unauthorized access.

5. Write IPsec Policies
Setting IPsec rules that block certain or all ports, with filters that allow communication only from specific computers, greatly strengthens file server security. When left open, file servers expose information through various protocols that would-be attackers find useful.
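
The same “only these computers may reach this port” filtering can be expressed with the built-in netsh tool. The sketch below uses a scoped Windows Firewall rule as a simpler stand-in for a full IPsec policy; the subnet is a placeholder:

```python
# A minimal sketch: restrict SMB (TCP 445) to a management subnet with a
# scoped Windows Firewall rule via the built-in `netsh` tool, a simpler
# stand-in for a full IPsec policy. The subnet is a placeholder, and the
# sketch assumes no broader allow rule (e.g., File and Printer Sharing)
# already opens the port. Elevated prompt required.
import subprocess

subprocess.run(["netsh", "advfirewall", "firewall", "add", "rule",
                "name=Allow SMB from admin subnet",
                "dir=in", "action=allow", "protocol=TCP",
                "localport=445", "remoteip=10.0.10.0/24"], check=True)
```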

6. Set the Correct System Time
The correct time is critical for many reasons. Some authentication protocols need the client and server clocks to be synchronized, correlating events between computers on a network fails when their clocks differ, and when reading logs, the timestamp is essential for auditing purposes.
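
A minimal sketch that points the time service at the domain hierarchy and verifies the result, using the built-in w32tm tool:

```python
# A minimal sketch: point the time service at the domain hierarchy and
# verify the result, using the built-in `w32tm` tool (elevated prompt).
import subprocess

subprocess.run(["w32tm", "/config", "/syncfromflags:domhier", "/update"], check=True)
subprocess.run(["w32tm", "/resync"], check=True)
print(subprocess.run(["w32tm", "/query", "/status"],
                     capture_output=True, text=True).stdout)
```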

7. Set Specific Account Restrictions
Restrictions can be implemented on individual accounts by limiting logon hours, restricting which workstations a particular user can use, preventing account delegation, and so on.

8. Setting the Local Server Security Settings
Some settings cannot be automated at the domain level: the Guest user account and the Guests group, for example, have unique identifiers that make automated security settings difficult. In such scenarios, setting local policies is necessary.

9. Track Attack Indicators
Events with warnings such as ‘Logon Failure’, and a rising count of similar events, should be treated as unauthorized access attempts.

10. Using the NTFS File System
Setting an Access Control List (ACL) or a System Access Control List (SACL) on FAT volumes is not possible. File server security depends on the ability to apply security settings as file permissions, which requires NTFS.
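
On an NTFS volume, those permissions can be granted and inspected from the command line with the built-in icacls tool. A minimal sketch; the folder and group names are placeholders:

```python
# A minimal sketch: grant a group inheritable read access to a folder and
# list the resulting ACL, using the built-in `icacls` tool. The folder
# and group names are placeholders.
import subprocess

FOLDER = r"D:\Shares\Reports"  # hypothetical folder
GROUP = r"EXAMPLE\Sales"       # hypothetical group

# (OI)(CI) makes the entry inherit to files and subfolders; R = read.
subprocess.run(["icacls", FOLDER, "/grant", f"{GROUP}:(OI)(CI)R"], check=True)
print(subprocess.run(["icacls", FOLDER], capture_output=True, text=True).stdout)
```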

11. Use of Administrative Template Settings
Some security settings are not available in security template settings. They can only be set via the Administrative Templates within the Group Policy. Disabling error reporting at this stage is recommended because some error logs contain sensitive information that can be intercepted by hackers.

12. Documenting and Maintaining the Security Settings
When server security settings need to be altered, previous settings can be used as a reference point. Tools such as the Security Configuration and Analysis snap-in can be used to verify that the security settings in place are compliant.

Conclusion

The security settings above are meant to make the file server secure. When applying them, however, make sure the settings do not interfere with the server’s normal operations. Since every network setup is unique, file server security should be configured according to the needs and expectations of the organization.

Are your Windows folders secure, too?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

Prevent Data Theft with File Audit

If you are operating a business or running an organization, you know how vital it is to keep your data safe. Cybercrime is one of the most widespread threats facing businesses. According to the Identity Theft Resource Center (ITRC), in 2017 alone more than 174 million records were breached.

And that is only what was documented; if every theft were documented, the number would be even higher. The sad fact is that while discovering a data breach normally takes a long time, at times even measured in years, data exfiltration takes minutes to execute. Businesses therefore need to invest in modern means of securing their data.

Where the Data is Most Likely to be Found

Most organizations store their important data on servers. Cybercriminals are aware of this, which is why their efforts are targeted at servers, where they can obtain files with valuable data such as bank and credit card details, personal health information, personally identifiable information (PII), a corporation’s trade secrets, intellectual property, other credentials, and much more.

A company’s business processes determine where it keeps its data: in databases, office documents, or files used for data transfer operations, all of which make good targets for data theft.

A company therefore needs to launch a data security audit to avoid losing important data, and with it, money.

What Data Security Auditing is

Data security auditing is the process whereby company data is analyzed to understand how the data is used and who accesses it.

Once this is established, a strategy is created to document the data. To determine security risks, it is vital to first understand how sensitive business information moves within the company, through the company, and out of the company.

Why Your Company Needs Data Security Auditing

Every firm is at risk from lawsuits involving sensitive data such as private records, emails, and web content. If a company holds sensitive information on its computers, it is held accountable for that information if it gets lost, stolen, or misused.

Reviewing your company’s data security from time to time means performing an investigative audit of the business’s data.

The process of securing a company’s data should be watertight and leave no security loopholes. This is also one of the provisions in federal regulations, which further state that data destruction and/or recycling must also be secure.

FACTA, GLBA, HIPAA, and Sarbanes-Oxley are some of the regulations that govern the data lifecycle. They require both low- and high-profile companies to store their data securely.

Why Data Auditing is Important

The Ponemon Institute and Symantec carried out research revealing that up to 39% of data breaches were the result of negligent insiders, while 37% were carried out by hackers. The latter turned out to be among the costliest violations, thought to cost up to $222 for every document lost or stolen.

These days, many small but growing businesses are adopting information security plans to protect themselves against data breaches. In the same vein, insurance products such as cyber liability policies are being adopted by more businesses. An adequate data security plan calls for companies to perform frequent data security audits of their own.

It’s not enough to have a complete suite of defensive anti-breach programs; you also need to ensure that these programs and procedures are effective. Make sure the business technology infrastructure contains no weak spots and that the company’s software and hardware are properly configured and running as designed.

Educate your employees on how they can properly protect their client’s private data. All this can be achieved if regular data security audits are performed.

Common Open Source Security Tools You Should Try

  1. NMAP
    This security tool helps you discover the devices attached to your network. NMAP is not only a host discovery tool; it also works as a port scanner, checking services against its database of more than 2,200 popular services. It helps you understand what is on your network and whether a device is using a vulnerable port, protocol, or service. (See the sketch after this list.)
  2. Suricata
    This open-source IDS/IPS is highly scalable. It inspects multi-gigabit traffic and checks for matches against threats, policy violations, and malicious activity. Automatic protocol detection enables Suricata to scan for malware and command-and-control channels.
  3. TrueCrypt
    Initially, this free tool was used to encrypt files, partitions, or entire storage devices. In 2014 the product ceased being maintained, but its open-source code gave rise to two new security tools, CipherShed and VeraCrypt. Both are based on the original TrueCrypt code, which they have since upgraded.
  4. Wireshark
    Many security professionals and network administrators prefer Wireshark as their tool of choice. Wireshark is a free packet analyzer, a perfect tool for network troubleshooting and analysis. Users can examine live network data or a capture file. The tool is also handy when diagnosing security devices and reviewing event logs.
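
Tools like NMAP are also easy to drive from a script. A minimal sketch, assuming nmap is installed and you are authorized to scan the target; the host name is a placeholder:

```python
# A minimal sketch: run a service/version scan with nmap and print the
# report. Assumptions: nmap is installed and you are authorized to scan
# the target; the host name is a placeholder.
import subprocess

result = subprocess.run(
    ["nmap", "-sV", "--top-ports", "100", "fileserver01.example.com"],
    capture_output=True, text=True, check=True)
print(result.stdout)
```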

Do you have unclear NTFS Permissions assignments?
Do you have too many special permissions set on your fileservers?
Or blocked NTFS Permission Inheritance?

Protect yourself and your clients against security leaks and get your free trial of the easiest and fastest NTFS Permission Reporter now!

7 Best Practices in Managing NTFS Permission

Whether you’re in the planning phase or have already implemented NTFS permissions, following best practices ensures smooth administration and aids in resolving access issues. Here we list seven practices we find effective in managing NTFS permissions.


#1 Full Control on the share and specific NTFS permissions on folders

In this post, we’ve established that this is the best way of combining share permissions and NTFS permissions. This is Microsoft’s way, and there’s no one more qualified to give “best practice” advice on how to do something with a Microsoft product than Microsoft themselves. If you’re unsure about this topic, pay that post a visit and get ready to be enlightened!
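
Here is a minimal sketch of that combination using the built-in net share and icacls commands; the share name, path, and group are placeholders:

```python
# A minimal sketch of "Full Control on the share, specific NTFS
# permissions on folders", using the built-in `net share` and `icacls`
# commands. Share name, path, and group are placeholders; elevated prompt.
import subprocess

# 1) Share with Full Control for Everyone; the share itself is not the gate.
subprocess.run(["net", "share", r"Sales=D:\Shares\Sales",
                "/grant:Everyone,FULL"], check=True)

# 2) Do the real filtering with NTFS permissions on the folder (M = modify).
subprocess.run(["icacls", r"D:\Shares\Sales",
                "/grant", r"EXAMPLE\Sales:(OI)(CI)M"], check=True)
```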

#2 Share folders with Groups not with Users

The logic behind this recommendation is simple: it makes administration easier. Imagine sharing the “Sales” folder with 10 salespeople. Sounds easy? Okay, how about 100 salespeople? The task is doable, of course, but it is a lot simpler to put them all in one group (e.g., a Sales group) and share the folder with that group. The same logic applies when assigning permissions.

#3 Organize your Resources

As the saying goes, don’t put all your eggs in one basket. Keep application files and data files in their own separate folders, and consolidate folders with the same security requirements to ease administration.

For instance, if users require “Read” permission for several application folders, store those folders within a single folder. This will allow you to share that larger folder instead of sharing each application folder.

It’s also easier to manage permissions on application or data folders when they are stored separately rather than mixed with other file types. Lastly, backups become less complex, since you can choose which folders to back up without worrying about other file types being included.

#4 “Read & Execute” for Data or Application folders

When you assign permissions for working with data or application folders, assign the “Read & Execute” permission to the Users and Administrators groups. “Read & Execute” permits only viewing, accessing, and executing files, so it prevents application files from being accidentally deleted or damaged by users or viruses.

#5 Minimum permission only

Assign the minimum permission that still allows users to perform their required tasks. If users only need to read information in a folder and should never delete or create files, assign only the “Read” permission. Doing so prevents unauthorized access to critical data, making your environment more secure.

In a complex environment, however, over-privileging can happen, especially when users belong to multiple groups, leaving users with access they shouldn’t have. With tools such as FolderSecurityViewer or an effective-permissions tool, you can examine the permissions each user has and act on them.

#6 Intuitive naming convention

Use intuitive share names so that users can easily recognize and locate resources. For example, for the Application folder, use “Apps” as the share name. Basic as it may be, this will save you unnecessary calls and emails from employees asking which folder is the right one. Also, use share names that work across all client operating systems.

#7 Document everything

And I mean everything, even the slightest change. It’s always good to have something to go back to when you forget who has access to what. This not only serves as your guide but is also something you can share with the other admins in your group to make sure everyone is on the same page. Changes in the organization are inevitable, so whatever method you use to document, make sure it can easily be modified and expanded.

Useful Resources

Want to learn about NTFS permissions, share permissions, and how to use them? Get your free course HERE!


Prevent Unauthorized Access to Sensitive Windows Folders!

Get your free edition of the easiest and fastest NTFS Permission Reporter now!

Planning before implementing NTFS Permissions

If you’re a Windows administrator, you’ve probably experienced the nightmares of managing folder permissions. They are common in large and even small environments where no proper planning is done before permissions are granted. Such negligence leads to complications and exposes the environment to security risks. Here are some examples:

  • Users or groups having access to folders not intended for them (e.g., Sales Group can view Management’s folders)
  • Applications fail to run because of lack of permission (e.g., Backup Software unable to perform tasks on specific folders)
  • Folder permissions so convoluted that admins are better off redoing them from scratch.

Why Planning is a Crucial Step Before Implementing NTFS Permissions

All the above examples stem from incorrect planning (or the lack of it) before NTFS permissions are implemented. One may point out that they can also result from the incompetence of the person doing the task. That can happen too, but with proper planning, documentation, and layout, these problems can be avoided even if you let your junior admin do the task.

As part of the Planning phase, here are some of the things an Admin can do:

Design a Folder Structure

Before creating the actual folders, you must know which folders are to be created. Whether you prefer a digital or a physical board, list the shares to be created for each department or group. Work from the knowledge you already have of your current environment. There will be changes along the way (e.g., new departments or new projects), but this is a good start.

Identify who has access

After listing the shares to be created, map out the users or groups that should have access to specific folders. You may list the users or groups and draw lines connecting them to the appropriate shares. However you want this done, make sure to have fun doing it!

Plan the Permissions

This one is critical, so take your time going through the shares and groups and writing down the appropriate permissions. If you use naming conventions such as R for Read-only or F for Full Control, be consistent to avoid confusion along the way.

Proper Documentation

Good planning always has good documentation. It’s always good to have something to go back to when you forget. This not only serves as your guide but is also something you can pass down to your junior staff or even to your boss. With that said, documentation must be clear and concise. Changes in the organization are inevitable, so whatever method you use to document, make sure it can easily be modified and expanded.
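
Documentation does not need fancy tooling; even a script that writes the planned permission matrix to a CSV file you can version and review will do. A minimal sketch; every share, group, and permission code below is a hypothetical example:

```python
# A minimal sketch: write the planned permission matrix to a CSV file that
# can be versioned, reviewed, and extended. All entries are hypothetical
# examples; R = Read-only, M = Modify, F = Full Control.
import csv

plan = [
    {"share": r"\\FS01\Sales",   "group": "EXAMPLE\\Sales",    "permission": "M"},
    {"share": r"\\FS01\Sales",   "group": "EXAMPLE\\Managers", "permission": "F"},
    {"share": r"\\FS01\Reports", "group": "EXAMPLE\\AllStaff", "permission": "R"},
]

with open("ntfs_permission_plan.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["share", "group", "permission"])
    writer.writeheader()
    writer.writerows(plan)
```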

Being an admin can be stressful, but proper planning, implementation, and clear documentation smooth administration and help you focus on other areas.

A more detailed guide on Planning and Managing NTFS Permissions can be found here. Download your free course now!

Top Ten Mistakes Made While Managing Data Access on Windows Fileservers

What are the top ten mistakes made while setting up and operating Windows file servers? Here I’ll talk about them.
