NIC Location on domain controller shows Public network

It can happen. I have seen this issue a couple of times, not only on domain controllers but also on other domain-joined computers.

The cause of this problem is the Network Location Awareness (NLA) service. This service determines the network location based on the default gateway and tries to reach an AD server through port 389. When the gateway changes, or no server connection through port 389 is available, we get a new network location – by default it is Public.

Anyway, it can happen that the NLA service starts before the AD services are started (or, on a non-DC server, before the DC is reachable). In this case we end up with the Public network profile on the DC or on domain-joined computers. If the firewall is enabled, most network services will not work, because the firewall for the Public profile is almost completely closed.
We have a few ways to solve this situation. Perhaps the simplest is to restart the server, but I may not be able to restart the server at that moment, and I don't know the original cause of the problem – maybe it will reappear. The second option is to disable and re-enable the NIC, which solves the issue in most cases. We get the same result by simply restarting the NLA service – and that is the better way.
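If you have local access, both options are one-liners (a minimal sketch; the adapter name Ethernet is just an example – check yours with Get-NetAdapter):

# Disable and re-enable the NIC in one step:
Restart-NetAdapter -Name "Ethernet"
# Or, the better way, restart the NLA service:
Restart-Service nlasvc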
In some cases you cannot log on to the computer directly. In that case, I use a PowerShell remote session to solve the problem.

Here are the steps:
Enter-PSSession ComputerName (establish a connection to the computer with the problem)
Get-NetConnectionProfile (show the current location profile – if this is the source of the problem, the location will not be Domain)
Restart-Service nlasvc (restart the NLA service; after this step you should see the Domain network profile)
Get-NetConnectionProfile (just to check that the solution worked)
Exit-PSSession (disconnect from the remote computer)
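If you prefer a one-shot command over an interactive session, the same fix can be sketched with Invoke-Command (the computer name DC01 is illustrative):

Invoke-Command -ComputerName DC01 -ScriptBlock {
    Restart-Service nlasvc   # add -Force if dependent services block the restart
    Get-NetConnectionProfile # should now report Domain
}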

Based on my experience, this solution always works. Some administrators also suggest changing the startup type of the NLA service to Automatic (Delayed Start). I am not sure this is a good solution, so be careful with it. Maybe you can do it in cases where the error is frequent (better: search for the original cause and solve that problem).
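If you decide to try the delayed start anyway, it can be set like this (a sketch; in Windows PowerShell 5.1 Set-Service does not offer the delayed startup type, so the classic sc.exe is used):

sc.exe config nlasvc start= delayed-auto   # the space after start= is required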

Windows Server 2016 may fail to boot after October update KB4041676

Some of my customers and friends had a problem: after installing KB4041676, VMs on Server 2016 would not boot. The problem was in the update itself – Microsoft released the update with a mistake and corrected it the same afternoon, but in some cases the old update remained cached on devices or WSUS servers. To be sure you have the right update, check this link and retrieve the correct delta update.
What if you are already in this situation and your VM is not booting?
To solve the issue, follow these steps:

  • Start the VM from installation media (DVD, ISO…)
  • At the installation menu, select Repair your computer, and under Advanced options select Command Prompt
  • In the command prompt, execute these commands:
    • reg load hklm\temp c:\windows\system32\config\software
    • reg delete "HKLM\temp\Microsoft\Windows\CurrentVersion\Component Based Servicing\SessionsPending" /v Exclusive
    • reg unload HKLM\temp
  • After correcting the registry, we still need to remove the update:
    • Use dism /image:c:\ /get-packages to list all installed packages and check whether the package is really installed (see the sketch after this list)
    • When you find the package, uninstall it with: dism /image:c:\ /remove-package /packagename:packageidentity /scratchdir:c:\temp (the package identity is the identity reported in the output of the previous command)
  • Reboot the server
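To make the listing easier to scan, /get-packages also accepts table output. A minimal sketch of the lookup and removal (packageidentity remains a placeholder – copy the exact identity reported by the listing):

dism /image:c:\ /get-packages /format:table
dism /image:c:\ /remove-package /packagename:packageidentity /scratchdir:c:\temp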

Hope it is helpful.

Azure File Services – first overview

Azure File Services (AFS) is a new service in Azure, currently in public preview. From my perspective, it is a service with very strong fundamentals and a promising future.
What can we do with it? What are the objectives? Well, we produce more and more data every day, we build new on-premises datacenters, and we open new corporate locations – and these are all reasons why we have problems with disk space and with syncing data around the world.
AFS is a technology dedicated to solving these problems and to helping us gain more control over our data and hardware usage. We can use AFS in various modes or combinations:

  • We can sync a server or cluster to Azure and duplicate all files from local storage to Azure – simply because we want additional safety or an additional access point (an Azure file share).
  • We can sync a server or cluster to reduce our hardware needs. Only the files we use frequently are stored locally; all other, older files exist only in Azure, so we don't need disk space for them. This is tiering: we write rules for how files move to the cloud and disappear from local storage. If we then need a file that is present only in Azure, we still see it on local storage (with a grayed icon), and the file is transferred from Azure the moment we click on it – from then on it is on local storage again and subject to the AFS rules.
  • We can sync several servers (or clusters) in different datacenters across the world, much like DFS. Sync is done through Azure services and all files exist in Azure (not necessarily on premises), so in this case Azure is the new file store. Of course, because different locations work with different files, the on-premises content can (and will) vary from server to server. We cannot expect every server to have the same files stored locally, and there is no single place where you can find all your files together except Azure storage.

Using this technology will change your environment and your way of thinking about operations that are straightforward today, so it is very important to know what will be impacted and how. The most important thing that has to change is backup. You have to be aware that the complete set of files exists only in Azure, so backup has to be done there. If you try to back up locally, there is a problem: the backup touches every file, so every tiered file gets recalled and then stays on premises as a "frequently used" file.
We have a nice short video for AFS. You can watch it here.

How do you establish which files are a good fit for AFS?
It depends on your usage, company infrastructure and, of course, file types. First, you have to identify the files or shares. In some cases you may replace DFS with AFS (your users use different files in different locations, and there is no need to have all files stored locally everywhere). Maybe you have a large number of old files (I am thinking of one of my clients – an advertising agency – they have many old projects that need to be kept in an archive but are practically never used). These are some of the cases where you can use AFS. You get a good, long retention policy in Azure, and you don't need to worry about backups, disk space and so on. This is very important and has real monetary value – also for an administrator.

Is it difficult to set up AFS?

No. I would say it is simpler than building some DFS infrastructures. In short, you just need to install the AFS agent on the server, create a storage account and the AFS service in Azure, and connect both ends. For a few servers, you will be able to do it in a few hours. But you have to know that synchronization takes time, and getting the complete infrastructure up to date and working will take longer; it depends on the amount of data and your internet bandwidth. If you try it, take your time, go slowly, wait for each step to complete, and you will be happy with the results. A rough sketch of the Azure-side preparation follows below.
I will write a post in a few days with step-by-step instructions on how to connect a server to AFS and get everything working.
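To give a feel for the Azure side, here is a minimal sketch using the AzureRM module current at the time of writing (the resource group, storage account and share names are illustrative; registering the server itself is done through the AFS agent and the portal):

# Create the resources that will back the sync:
Login-AzureRmAccount
New-AzureRmResourceGroup -Name "afs-demo-rg" -Location "West Europe"
New-AzureRmStorageAccount -ResourceGroupName "afs-demo-rg" -Name "afsdemostorage" `
    -Location "West Europe" -SkuName Standard_LRS
# Create the Azure file share that the sync will use:
$ctx = (Get-AzureRmStorageAccount -ResourceGroupName "afs-demo-rg" -Name "afsdemostorage").Context
New-AzureStorageShare -Name "afs-demo-share" -Context $ctx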

For me, this technology can be used in very small companies in one way and in large companies in another. It is very flexible, with a very broad spectrum of uses and different solutions. I am sure this approach is the best way to get a lot of implementations, success stories and satisfied customers. That is what we all want, and I am sure it is already well on the way.

PowerShell license tips

As far as I know, many users try to find their Windows key with key-viewer software. Nothing wrong with that, but such software is not always "nice" and can do more than just show you a key. Of course, with Windows 8.1 and Windows 10 the key is often stored in the firmware, so there is frequently no need to search for it.
Anyway, if you feel better having the key printed on a piece of paper, you can get it simply with one PowerShell cmdlet:

Get-WmiObject -Query 'select * from SoftwareLicensingService'

This shows you more than just the key. There is a lot of licensing information, such as the KMS server, the OS version and so on. In some cases it can be useful.
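By the way, if you only want the firmware-embedded key mentioned above, the same WMI class exposes it directly (the property is empty on machines where the OEM did not store a key):

(Get-WmiObject -Query 'select * from SoftwareLicensingService').OA3xOriginalProductKey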

Creating VM start order in Hyper-V Cluster 2016

Many times (almost always), you have to start VMs in an exactly defined order, because services on one VM depend on services on another. In the past we tried to solve this problem with start delays – or with some additional software – but there were always situations where we could not control all the factors.
In Windows Server 2016 this is better, because we can define startup groups. This means we define a group of servers that start together and another group of servers that start later, once the first group is up (or with some delay after the first group has started). Additionally, we can start just the last group of servers: because this group depends on the other groups, the system will first start the parent server groups. So you cannot start a server before all the infrastructure it depends on is running. There are new PowerShell cmdlets to define and manage these sets (you can list them with Get-Command -Noun *ClusterGroup*).


To configure this, you have to know a few concepts introduced in Windows Server 2016, because we will use them below.

  • This post explains how to create groups in a Hyper-V cluster; it will not work on non-clustered servers. If you want to set up a startup order for a single Hyper-V host, this is the post where you can find out how to do it.
  • Cluster group: represents a clustered service or application (a resource group) in a failover cluster – every HA VM has its own group. You can view cluster groups with the PowerShell cmdlet Get-ClusterGroup. You don't need to change anything here; just leave them as they are.
  • Cluster group set: a set of cluster groups (VMs) that we want to control together – VMs that host similar services and that we want to manage as one unit. Here we can control settings such as the startup delay and whether the set is global or local. The cmdlet to use at this point is Set-ClusterGroupSet (cmdlet syntax).
  • Cluster group set dependency: a dependency with which we specify which set starts, and when. To be clear: with dependencies we define the VM startup order.

How do you set up the environment?
I always start by saving the output of the cmdlet Get-ClusterGroup, because it is easier to manage the VMs when I have all their names on paper or in a text file. It is easier to review them, identify the services they offer, and later define the group sets by function.
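For example, to dump the list into a text file (the path is just an example):

Get-ClusterGroup | Select-Object Name, OwnerNode, State | Out-File C:\temp\ClusterGroups.txt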
When you have defined the group sets (VMs with similar services or dependencies put together), it is time to create the cluster group sets. This is done in a few steps:

  • Create a cluster group set: this creates an empty set for grouping VMs. Use the cmdlet New-ClusterGroupSet -Name GroupName.
  • Add cluster groups (VMs) to the sets. In this step we populate the cluster group sets with VMs – that is, we put together all VMs with similar services, or VMs that we need to start at the same time. When a cluster group set is asked to start, all VMs in it are started. There is no dependency inside a cluster group set and no way to control the start order within it; if some VMs need to start before others, we need more than one cluster group set. To add a VM to a cluster group set, use the PowerShell cmdlet Add-ClusterGroupToSet -Name GroupName -Group ClusterGroupName, replacing GroupName with your cluster group set name and ClusterGroupName with the cluster group name (the VM name from the output in the first step). The cmdlet has to be repeated for every single VM.
  • Create dependencies between cluster group sets with the cmdlet Add-ClusterGroupSetDependency -Name GroupName -Provider GroupDependsOn, where GroupName is the cluster group set we want to start and GroupDependsOn is the cluster group set that must be started first. At this point we create the startup order (the dependencies) between the sets. The start of any set can depend on the successful start of one or more other sets; if a provider set fails to start, the set that depends on it will not start either. I suggest you keep this in mind (maybe develop a script to add and remove all VMs from the sets – you will solve problems quickly if they appear). A complete sketch follows after this list.
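Putting it all together, a minimal sketch with two sets (all set and VM names are illustrative, and the parameter names follow the Server 2016 FailoverClusters module):

# "Infra" holds the infrastructure VMs, "App" the VMs that depend on them:
New-ClusterGroupSet -Name Infra
New-ClusterGroupSet -Name App
Add-ClusterGroupToSet -Name Infra -Group DC01
Add-ClusterGroupToSet -Name Infra -Group SQL01
Add-ClusterGroupToSet -Name App -Group WEB01
# App starts only after Infra has started successfully:
Add-ClusterGroupSetDependency -Name App -Provider Infra
# Review the configuration:
Get-ClusterGroupSet
Get-ClusterGroupSetDependency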

With these few steps, we have created a startup order for our environment. If everything is OK, we will never have a situation where a service does not work because a server it depends on has not started. In practice, the system will always try to start the VMs in the defined order. This also means that we have to add newly deployed VMs to these sets and remove decommissioned ones – we have to update this mechanism every single time we change our environment. Don't forget it.