DC, AD and Group Policy

In the last post, I covered setting up a new domain controller and some things to help keep your domain healthy, well organized and your IT provider happy.

In this follow-up, I will keep going. Now that we have a Domain Controller, a Domain and DNS, we should look at Group Policy.

Group Policy Walk-Thru

One of the reasons that we chose to create OUs instead of Containers in the last post/video is that group policy can be applied to OUs, but not Containers.

In going over Group Policy, I’d like to start with user folders. In a corporate environment, losing a file can be a very bad thing. For the most part, servers are backed up, but workstations are not. So, how do you protect users’ files? Server shared folders are one option, but I’ll cover a couple of others in this post – Folder Redirection and Home Folders. These let your users have more control over their files, as other users cannot normally access either one.

It is a good idea to make a dedicated drive for Data files, separate from the OS drive.

For Home folders, create a folder on the Data drive named something like “HomeFolders.”

Open the folder’s Properties, go to the Security tab, click Advanced and disable inheritance.

Remove the Users permissions – give Authenticated Users “This Folder Only” permissions to:
  • List folder / read data
  • Read attributes
  • Read extended attributes
  • Create folders / append data
  • Read permissions

The user “Creator Owner” should have “Subfolders and Files only” full control.


On the Sharing tab, use Advanced Sharing and share the folder as “Home$” – the trailing $ makes it a hidden share. Give Everyone read and Authenticated Users full control of the share.
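If you would rather script the share creation, a minimal PowerShell sketch (using the folder path from the script later in this post – adjust the path and accounts to your environment) could look like this:

# Share the HomeFolders directory as a hidden share (Home$) with the share permissions described above
New-SmbShare -Name "Home$" -Path "E:\KearanCo\HomeFolders" -ReadAccess "Everyone" -FullAccess "NT AUTHORITY\Authenticated Users"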

In Active Directory Users and Computers, open a user’s properties and, on the Profile tab, in the Home folder section, choose a drive letter and enter a UNC path ending in a folder name that matches the user’s logon name.

Clicking Apply creates the folder. If you have a lot of users and don’t want to edit every user to add the home folder, you can use PowerShell – but you will need to use PowerShell to give them permissions to the folder as well.
Below is a PowerShell script that creates the folders for existing users, gives the users permissions and sets the home folder for all users in Active Directory.

Import-Module ActiveDirectory

#Script for updating folder permissions to give the user full access to their home folder
# as long as it's named the same as their username - so, jdoe will have full access to the jdoe folder.
# - with This "Folder, Subfolders and Files" level.
#
# --- change the domain name
 $domain = "kearan"
 $hdpath = "E:\KearanCo\HomeFolders"

# --- Make Home Directories
 $users=get-aduser -filter *
  Foreach($user in $users){
  $usern=$user.samaccountname
  $nhd = $hdpath + "\" + $($usern)
  New-Item -ItemType Directory -Path $nhd
  }

# ---- change the Folder Path
 $folders = Get-ChildItem -Path $hdpath | Where-Object -FilterScript {
     $_.PSIsContainer -eq $true
 }

# --- Set the folder permissions
 foreach ($folder in $folders) 
 {
     $path = $folder.fullname
     $ACL = Get-Acl -Path $path
     $user = $folder.name
     $AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule("$domain\$user","FullControl","ContainerInherit, ObjectInherit","None","Allow")
     $AccessRule1 = New-Object System.Security.AccessControl.FileSystemAccessRule("$domain\Domain Admins","FullControl","ContainerInherit, ObjectInherit","None","Allow")
     $Account = New-Object -TypeName System.Security.Principal.NTAccount -ArgumentList "$domain\$user"
     $acl.SetOwner($Account)
     $acl.SetAccessRule($AccessRule)
     $acl.SetAccessRule($AccessRule1)
     $acl | Set-Acl $path
 }

# --- Set users Home Directory in AD ---
# --- change "FileServer" to the actual file server name 
# --- and Home$ to the actual share name. And H to your share letter.
 $users=get-aduser -filter * 
  Foreach($user in $users){
  $usern=$user.samaccountname 
  $HomeDir="\\FileServer\Home$\$($usern)"
  Set-ADUser $user -HomeDirectory $HomeDir -HomeDrive H:
  }

You can use each section of the above script as a stand-alone script in order to do one at a time. Use the below code to change an existing home drive to a new server.

# Change Home Directory
$users=get-aduser -filter {homedirectory -like '*Old_Server*'} 
 Foreach($user in $users){
 $usern=$user.samaccountname 
 $HomeDir="\\NewServer\Home\$($usern)"
 Set-ADUser $user -HomeDirectory $HomeDir -HomeDrive H:
 }

For Folder Redirection, create an AD group for all the users you want to have redirected folders – unless you are comfortable applying folder redirection to every user in an OU, of course. For more control over which accounts get folder redirection, use the AD group method.

Create a folder, like the Home Folder above, with the same permissions. Now, go into Group Policy Management and create a new Group Policy.

Edit the group policy and go to User Configuration –> Policies –> Windows Settings –> Folder Redirection

Choose the items to redirect (see the video) and set the scope of the policy to Domain Computers (or whichever computer group you want, such as RDS Servers) and the group you want to apply it to, e.g. “Folder Redirection Group.” Apply the policy to the domain, or to the target OU.

See the video for more on Group Policy and Troubleshooting.

The GPupdate code from the Video:

gpupdate

gpresult /H e:\kearanit\%username%_GPResult.htm

BONUS:
– Video only available through the blog – how to enable the AD Recycle Bin – restore accidentally deleted user accounts!
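If you can’t wait for the video, the short version in PowerShell goes something like this (the domain and the deleted account are examples; note that enabling the Recycle Bin requires a 2008 R2 forest functional level and cannot be turned back off):

# Enable the AD Recycle Bin for the forest - replace the target with your own domain
Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'kearan.local'

# Later, find an accidentally deleted user and bring it back - jdoe is an example
Get-ADObject -Filter 'samaccountname -eq "jdoe"' -IncludeDeletedObjects | Restore-ADObject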

New Domain Controller Best Practices and Troubleshooting

A lot of guides and how-to videos out there show you the basics and the bare-bones – this is real world, git’r’done right stuff!

35 Minute Video Walk-thru of a Good Domain Controller configuration

The first step to setting up a new domain, or creating a new domain controller for an existing domain, is, of course, to install the OS. We’ll assume that has already been done.

If you are adding a 2019 DC to an existing Domain, you will probably need to migrate the domain to use DFSR instead of FRS for syncing Active Directory and DNS – see the DFSR Migration section.

In the video above, I may do things in a different order, but here are the first steps:

A. Set a static IP address and Public DNS servers. The DNS servers you set here will become the DNS Forwarders of your new Domain Controller.

Well-known public DNS servers are 208.67.222.222 ; 208.67.220.220 ; 8.8.8.8 ; 8.8.4.4 and 1.1.1.1
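If you prefer to set the address from PowerShell instead of the NIC properties dialog, a sketch like this should do it (interface name, IP, gateway and DNS servers are all examples for your environment):

# Set a static IP and point DNS at public resolvers
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress "192.168.1.10" -PrefixLength 24 -DefaultGateway "192.168.1.1"
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses ("208.67.222.222","208.67.220.220")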

B. Name the new server something those who come after you will understand. This means your organization name, the OS and the server’s role should be in the name of the server. For a business named Kearan Company, our first domain controller could be Kearan-S19-DC – Server 2019, acting as a Domain Controller. -DCFS, -RDS, -APP, -SQL, -Web and -Intranet are all possibly good role names to use. Just make sure that it is not a very long name – there are limits! The NetBIOS computer name is capped at 15 characters, so stay at or under that. (Changing the name requires a reboot!) (Update:) It has come to my attention that Domain Controller in-place upgrades are easier and more reliable than they used to be, so having the OS in the name may not be a perfect idea, in case you upgrade from a 2012 to a 2019 OS. If that is something you would be comfortable with, make a naming convention instead – Kearan-DC-001 ; Kearan-RDS-002 ; Kearan-RDS2-003 ; etc.

C. Use Server Manager to install the Active Directory Domain Services role and DNS Server – see the above video for a walk-thru on that process.
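If you want to skip the clicking, the same roles can be added with one line of PowerShell (run it in an elevated window):

# Install AD DS and DNS along with the management tools
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools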

D. With the roles installed and the server rebooted, promote to a domain controller! Document the DSRM Password! (A PowerShell sketch of the promotion follows the list below.)

  • Domain name is very important – keep it short and informative. For a company named Contoso Specialty Products Supply Company, a domain such as “Contoso” or “CSPSC” would be perfect. The dot-local (Contoso.local) is preferred, as it is the default. You can use another such as .corp or .main – I have even seen .private – but stay away from the major top level domain extensions such as .com, .net and .org as these can cause a conflict between local DNS and Public DNS.
  • Document the DSRM password where it can be found in the future! Just in case.
  • Reboot and Log in to your new Domain!
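For reference, the promotion itself can also be done from PowerShell – a sketch for a brand-new forest, with the domain name as an example (you will be prompted for the DSRM password):

# Promote this server to the first DC of a new forest
# (use Install-ADDSDomainController instead when adding a DC to an existing domain)
Install-ADDSForest -DomainName "kearan.local" -InstallDns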

E. Set some important DNS settings to squash problems before they happen (a PowerShell sketch for scripting these follows the list)

  • Set Aging/Scavenging for all zones
  • Apply to existing Active Directory-integrated zones
  • Allow zone transfers to servers listed on the Name Servers tab
  • Automatically notify the servers listed on the Name Servers tab
  • Check and/or set forwarders
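A sketch of those settings using the DnsServer PowerShell module – the intervals and forwarder addresses are examples, so adjust them to your environment:

# Turn on scavenging server-wide and apply it to all existing zones
Set-DnsServerScavenging -ScavengingState $true -RefreshInterval "7.00:00:00" -NoRefreshInterval "7.00:00:00" -ScavengingInterval "7.00:00:00" -ApplyOnAllZones

# For each AD-integrated primary zone: enable aging, allow transfers to and notify the servers on the Name Servers tab
Get-DnsServerZone | Where-Object { $_.IsDsIntegrated -and $_.ZoneType -eq 'Primary' } | ForEach-Object {
    Set-DnsServerZoneAging -Name $_.ZoneName -Aging $true
    Set-DnsServerPrimaryZone -Name $_.ZoneName -SecureSecondaries TransferToZoneNameServer -Notify NotifyServers
}

# Check and/or set forwarders
Get-DnsServerForwarder
Set-DnsServerForwarder -IPAddress ("208.67.222.222","208.67.220.220")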

F. Open AD Users and Computers and create a good AD structure!

You will need to move newly added users and computers from their default “Users” and “Computers” containers into your structure, but it will make organization and group policy much easier to manage in the future.
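As an example only – the OU names below are placeholders for whatever structure you settle on – the skeleton can be scripted:

# Build a basic OU skeleton (names and domain are examples)
New-ADOrganizationalUnit -Name "_Kearan" -Path "DC=kearan,DC=local"
"Users", "Computers", "Groups", "Service Accounts" | ForEach-Object {
    New-ADOrganizationalUnit -Name $_ -Path "OU=_Kearan,DC=kearan,DC=local"
}
New-ADOrganizationalUnit -Name "Employees" -Path "OU=Users,OU=_Kearan,DC=kearan,DC=local"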

(See below for an AD User Import powershell!)

G. Copy the Administrator user and create a domain admin user based on your company – such as KearanIT. Add the new admin to the Backup Operators group. Log off of Administrator and log in with the new domain admin account. Now, DISABLE ADMINISTRATOR!
Move the new Admin account into the Service Accounts OU created as part of the good structure.
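A rough PowerShell equivalent, if you would rather script it (the account name, OU path and group list are examples – copying the Administrator account in ADUC also copies its group memberships, which this sketch adds explicitly):

# Create the company-named admin, add it to the right groups, then disable the built-in Administrator
New-ADUser -Name "KearanIT" -SamAccountName "KearanIT" -AccountPassword (Read-Host -Prompt "Password" -AsSecureString) -Enabled $true -Path "OU=Service Accounts,OU=_Kearan,DC=kearan,DC=local"
Add-ADGroupMember -Identity "Domain Admins" -Members "KearanIT"
Add-ADGroupMember -Identity "Backup Operators" -Members "KearanIT"
# Log off, log back in as the new admin, and only then:
Disable-ADAccount -Identity "Administrator"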

H. Make the Domain Controller a Reliable Time Server using the commands below:

w32tm /config /manualpeerlist:"1.pool.ntp.org 2.pool.ntp.org 3.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
w32tm /config /reliable:yes
net stop w32time && net start w32time
w32tm /query /peers

Powershell, CMD Line and Troubleshooting for Domain Controllers

DFSR Migration

First, raise the domain (and forest) functional level as high as possible – the domain functional level must be at least Windows Server 2008 before SYSVOL can be migrated to DFSR. Now get the Global State:

DFSRMig /GetGlobalState

Start the migration –

DFSRMig /SetGlobalState 1

Check on progress –

DFSRMig /GetMigrationState

Once the migration state says all domain controllers are synced, move on to SetGlobalState 2 (the Redirected state) and check GetMigrationState until that is synced, then SetGlobalState 3 (the Eliminated state) until that is synced. State 3 is the final state – you are all on DFSR now!

Powershell to Move FSMO Roles

Run the following in an Admin Powershell window on the server you want to be the new FSMO role holder:

Move-ADDirectoryServerOperationMasterRole -Identity $env:computername -OperationMasterRole 0,1,2,3,4
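# Role numbers: 0 = PDCEmulator, 1 = RIDMaster, 2 = InfrastructureMaster, 3 = SchemaMaster, 4 = DomainNamingMaster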

Don’t forget to Move the Last Two FSMO Roles using ADSIEdit.

Powershell to Import Users from a CSV file

You will need to copy the PowerShell code below into a new PowerShell ISE script (or any editor), then save it as a .ps1 and edit it for your needs. Create the CSV file using the two lines after “format of file:” as a template – the first is the header row, the second is a sample user.

# - Imports given CSV file
# format of file:
# Firstname,Lastname,SAM,OU,Password,Description,EmailAddress
# bobby,kearan,bkearan,"OU=Employees,OU=Users,OU=_Kearan,DC=kearan,DC=local",P@ssw0rd1,Awesome IT Engineer,bkearan@kearan.local
#
$csvPath = "C:\KearanCode\ADUserImport.csv" # Read-Host -Prompt 'path to the csv file'
$Servername = $env:computername # Read-Host -Prompt 'Name of the DC (servername)'
#
$Users = Import-Csv -Path $csvPath
foreach ($User in $Users)
{
$Displayname = $User.'Firstname' + " " + $User.'Lastname'
$UserFirstname = $User.'Firstname'
$UserLastname = $User.'Lastname'
$OU = $User.'OU'
$SAM = $User.'SAM'
$Description = $User.'Description'
$Password = $User.'Password'
$Email = $User.'EmailAddress'
New-ADUser -Name "$Displayname" -DisplayName "$Displayname" -samaccountname $SAM -UserPrincipalName $SAM -GivenName "$UserFirstname" -Surname "$UserLastname" -Description "$Description" -Emailaddress "$Email" -AccountPassword (ConvertTo-SecureString $Password -AsPlainText -Force) -Enabled $true -Path "$OU" -ChangePasswordAtLogon $false -PasswordNeverExpires $false -server $Servername
}

If you need to clear the “change password at first logon” requirement for all users after an import, the following two commands will do it:

Import-Module ActiveDirectory
Get-ADUser -Filter * -SearchBase "OU=_Kearan,DC=Kearan,DC=Local" | Set-ADUser -ChangePasswordAtLogon:$False

DCDiag Commands

The following does a report and saves it in a txt file on the root of C:\ (adjust to your preferred file path)

DCDiag /c /v /f:c:\dcdiag.txt

The following does a report and attempts to fix any issues it found and puts the txt file in the root of C:\

DCDiag /fix /v /f:c:\dcdiag.txt

Force a Time Zone Change

For some reason, Windows has become a bit difficult about changing the time zone. Below is a command line to see the time zone and change it. Last line outputs a list of time zone names that can be used. Open the cmd window as admin to run this.

tzutil /g
tzutil /s "Central Standard Time"
tzutil /l

— what would you like to see covered next? Comment below —

The 7 FSMO Roles

5 FSMO roles? Oh, no. There are Hidden FSMO roles that they don’t tell you about!  They don’t want you to know about these until you run into a problem!

Have you ever been unable to demote a domain controller?  It tells you that it can’t determine the fSMORoleOwner – even though a netdom query FSMO returns all 5 roles?

You may also get: “The Directory service is missing mandatory configuration information, and is unable to determine the ownership of floating single-master operation roles”

Well, there are two hidden roles: CN=Infrastructure,DC=ForestDnsZones,DC=yourdomain,DC=tld and CN=Infrastructure,DC=DomainDnsZones,DC=yourdomain,DC=tld

So, the next time you are transferring FSMO roles, you need to move these two as well – before you Decom the old Role Holder!

Run adsi edit as admin.

Right click on ADSI Edit, select Connect to the naming context “DC=DomainDnsZones,DC=yourdomain,DC=tld”

Click and expand the new “Default naming context” – click on the connection point, move to the right column and click Infrastructure:

Right click and select properties or double click to edit.

Scroll to fSMORoleOwner

Double click fSMORoleOwner to edit it, and change the server name in the value so it points to the NTDS Settings object of the DC that should now own the role.

Repeat for naming context “DC=ForestDnsZones,DC=yourdomain,DC=tld”

The fSMORoleOwner in each “Infrastructure” should match.
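If you prefer PowerShell to clicking through ADSI Edit, a sketch along these lines should set both hidden role owners – the DC name and domain DNs are examples:

# Point the two hidden fSMORoleOwner attributes at a new DC
$newDC = Get-ADDomainController -Identity "Kearan-S19-DC"
$ntds  = $newDC.NTDSSettingsObjectDN
Set-ADObject -Identity "CN=Infrastructure,DC=DomainDnsZones,DC=kearan,DC=local" -Replace @{ fSMORoleOwner = $ntds }
Set-ADObject -Identity "CN=Infrastructure,DC=ForestDnsZones,DC=kearan,DC=local" -Replace @{ fSMORoleOwner = $ntds }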

Printers Deployed via Group Policy

So, I ran into a situation the other day where some printers were added to computers they were not supposed to be on. When we went to remove them – nobody could. Access denied. Even an Enterprise Admin could not remove the printer from the computer.
Why?
Group Policy.

There are a few ways to deploy printers via group policy.
1. Click “Deploy” on your print server. Unless you want everyone and every system in the entire domain to have that printer – do not do this. You won’t know which policy it uses to deploy the printers, you won’t know where it is applied. ( probably sets a “printer” policy on the root of the domain )
2. Create a group policy using Computer Configuration –> Policies –> Windows Settings –>Printer Connections (on older DCs)  ( Don’t do this! )
3. Create a group policy using Computer Configuration –>Preferences –> Control Panel Settings –> Printers (Nobody will be able to delete these printers)
4. Create a group policy using User Configuration –> Preferences –> Control Panel Settings –> Printers (You will be able to delete these printers – and they will show back up on next reboot, unless removed from the policy)

 

O365 and your SPF Record

So, I recently discovered that Office 365 has a new trick up its sleeve – using SPF records WRONG.  Had several bouncing and rejection issues with some clients due to this new idiocy.

An SPF record is supposed to match your SENDING IP to the SPF record.  But now O365 is requiring the MX record – the Receiving IP – to be in the SPF record.  Why is this messed up?  Well, for one, many people use third-party Spam Filtering services for their MX record – to filter out spam before it gets to their inbox.  So, many MX records are spam filters – not what is sending out the email.  Basically, O365 just opened up a Huge security hole.
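For context, a healthy SPF record lists the hosts that are allowed to SEND for your domain – something like the record below, where the IP is a placeholder for your actual outbound mail server and the include covers mail sent through O365:

v=spf1 ip4:203.0.113.25 include:spf.protection.outlook.com -all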

Thanks, Microsoft.

Been Busy!

I’ve been doing a lot lately. I have stood up a Windows Server 2012 server, racked it and configured it for a client. I have updated boot code and firmware on firewalls and routers (Sonicwall, Cisco and Mikrotik). Fixed a couple “Read-only Filesystem” linux errors on Xen VMs.  Moved a few websites/email from a hosting service to our own hosting server.

Hmm, written down, it doesn’t seem like a whole lot – until you think about all the steps involved in moving a website, changing DNS in multiple places, syncing emails, doing documentation.

😀

Labtech and MySQL to Monitor Exchange Backpressure

So, we decided we needed to monitor Exchange servers for Backpressure so we can be more proactive in preventing problems.

I wrote a script in Labtech to monitor the event log for incidents that indicate potential issues.  Check out the SQL concat!  The logic checks for existing tickets, and either creates a ticket, adds a note to an existing ticket or closes the ticket if the situation no longer exists.  I didn’t include putting time into the ticket, but that would be fairly easy as well.

See the script export below:

Resend EventLogs
SET:  @[email protected] = SQLRESULT[SELECT Concat(eventlogs.TimeGen, " ", eventlogs.Message) As dEvent FROM eventlogs WHERE eventlogs.`Message` like '%resource pressure increased from Medium to High%' AND (timegen > DATE_SUB(NOW(), INTERVAL 1 HOUR)) AND ComputerID=%computerid% LIMIT 1]
IF  @[email protected]  Contains  High  THEN  Jump to :Alert
SET:  @[email protected] = SELECT COUNT(v_tickets.`TicketID`) FROM v_tickets WHERE v_tickets.`Subject` = '%locationname% / %computername% / Exchange Backpressure Detected' '
SET:  @[email protected] = SQLRESULT[SELECT COUNT(v_tickets.`TicketID`) FROM v_tickets WHERE v_tickets.`Subject` = '%locationname% / %computername% / Exchange Backpressure Detected']
IF  @[email protected]  >=  1  THEN  Jump to :ProcessTicket
GOTO :END
:Alert – Label
Note: Backpressure! – need to create a ticket!
:CheckTicket – Label
SET:  @[email protected] = SELECT COUNT(v_tickets.`TicketID`) FROM v_tickets WHERE v_tickets.`Subject` = '%locationname% / %computername% / Exchange Backpressure Detected' '
SET:  @[email protected] = SQLRESULT[SELECT COUNT(v_tickets.`TicketID`) FROM v_tickets WHERE v_tickets.`Subject` = '%locationname% / %computername% / Exchange Backpressure Detected']
IF  @[email protected]  >=  1  THEN  Jump to :UpdateTicket
:CreateTicket – Label
LOG:  Exchange Backpressure High! Creating Ticket
Create New Ticket for %clientid%\%computerid% Email:%ContactEmail% Subject:%locationname% / %computername% / Exchange Backpressure Detected
SET:  @[email protected] = SQLRESULT[SELECT v_tickets.`TicketID` FROM v_tickets WHERE v_tickets.`Subject` = '%locationname% / %computername% / Exchange Backpressure Detected']
Send Email To:[email protected] Subject:Exchange Backpressure – %clientname% – %computername%
GOTO :END
:UpdateTicket – Label
SET:  @[email protected] = SQLRESULT[SELECT v_tickets.`TicketID` FROM v_tickets WHERE v_tickets.`Subject` = '%locationname% / %computername% / Exchange Backpressure Detected']
LOG:  Exchange Backpressure High! updating Ticket
Comment Ticket @[email protected] to Admin
GOTO :END
:ProcessTicket – Label
Note: If the Ticket exists then Finish it.
SET:  @[email protected] = SQLRESULT[SELECT v_tickets.`TicketID` FROM v_tickets WHERE v_tickets.`Subject` = '%locationname% / %computername% / Exchange Backpressure Detected']
IF  @[email protected]  =  0  THEN  Exit Script
IF [SQL SELECT COUNT(ticketid) FROM tickets WHERE [email protected]@]  <  1  THEN  Exit Script
IF @[email protected] Ticket Status equals Resolved  THEN  Jump to :ClearMonitorTicketID
RUN SCRIPT:  _System Automation\Functions\Load Properties – Ticketing*
Finish Ticket @[email protected] to @[email protected]
:ClearMonitorTicketID – Label
SET:  @[email protected] = 0
SET:  [STATE @[email protected]]  =  @[email protected]  for computer @[email protected]
:EndProcessTicket – Label
:END – Label

 

What’s so good about Labtech?

Labtech.  RMM tool.  (Remote Monitoring and Management).

What is so great about it?  Well, once you learn it… once you understand it… you can do anything you want with it.

Like what?  Well, you have your normal RMM things: keeping track of the computers on the network, what OS and what programs they have, keeping up with Microsoft patching, installing software, removing software, etc.  Then you have the monitoring – you can monitor just about anything you can think of, from registry entries (including installed programs) to event logs to pretty much anything you can find in a readable file on the computer, and trigger alerts, emails or even text messages on those events.

Then you have scripting.  Some of the things I can accomplish with scripting:

Extract the backup status of a computer from the logs of the backup program and send an email if there is a failure.

Read a file version, compare to an internet site and send an alert if the two are different or off by more than two, etc.

Launch a series of powershell scripts to configure a new windows server, with variables put in when starting the script in Labtech.

With a little creativity, anything you can do via command line or powershell, can be done remotely, in the background, with Labtech.

Yes, that is very cool.

Linux Joy

My home computer has been running Ubuntu Desktop (a Debian-based Linux) for years.  It has only been recently that I have had the pleasure of working with Linux servers.  I have set up, configured and administered a few LAMP (Linux, Apache, MySQL and PHP) web servers.  For the last couple of days, I have been playing with Citrix Xen Server and HyperV to proof-of-concept virtual host NIC Bonding / Teaming and its effect on existing Virtual Machines on each platform.

I set up an environment (Xen, then HyperV) on a test server (Dell R710) and added a few Linux VMs (Got to play with Ubuntu Server 16.04 LTS and CentOS 7).  Then I did the Bonding/Teaming and watched for what happened to the existing servers and network connectivity.

Xen Server did very, very well! It had zero network disruptions as NIC Bonding was created and put into place.  Very impressive.

HyperV was less impressive.  There were multiple connectivity disruptions and panic-inducing bouncing before it settled down.

All that, to show off a screenshot:

Updating Ubuntu and CentOS