PXE and notes to self

The joy of supporting PXE boot in a number of different environments has created a mental note-to-self list for operating PXE with Configuration Manager. These are my notes, and if they can be of use to anyone else – that's great.

Overview

PXE boot depends on an operational and manageable network, so a basic understanding of how it works is vital. Coretech has published a great overview explaining PXE and its relationship with DHCP.

Basic setup

Some minor tweaks are of course required to get started, even in a not-so-complex environment, and 4SysOps has a great guide that details the generic troubleshooting steps for the basic configuration.

In essence:

  • An operational Configuration Manager environment
  • A Distribution Point for the Configuration Manager site that is set up to handle PXE
  • A way to handle the broadcast requests when pressing F12 on the client

The last part (handling the broadcast requests) is normally something that is managed by a network operations team. There are of course many different ways to configure this (why would there be only one?), and the discussion normally stands between DHCP options (not supported by Microsoft) and IP helpers. TechThoughts has an overview with detailed pros and cons, but clearly states that IP helpers are the way forward.

Challenges

Once you get everything set up – here comes a list, in order of likelihood to cause you issues.

What does a client see when pressing F12?

Well, you see on the screen that you pressed F12, and potentially some sort of diagnostic information is presented to you. Is this information useful? Complete? Of debugging quality?

Most likely not! There are two tools available – one from Citrix (PXEChecker) and one from 2Pint (PowerShell) – which provide great insight into what actually shows up at your client.

Test TFTP download

Wondering if you have a responsive PXE server? Follow the previous guide on how to test the download of a file.

DNS and PXE

As PXE relies heavily on plain IP addresses, the first question that comes to mind is why DNS matters at all. Well, legacy servers (still supported for the Distribution Point role) running Windows Server 2008 R2 may exhaust the available ports if the DNS and PXE roles are hosted on the same system.

Resolve the issue by configuring Windows Deployment Services to dynamically request and allocate ports that are available.

Registry:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\WDSServer\Parameters
DWORD: UDPPortPolicy
Value: 0
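
If you prefer scripting over regedit, a minimal PowerShell sketch for the same change (run elevated on the WDS server; it assumes restarting the WDSServer service afterwards is acceptable):

# Let WDS request and allocate UDP ports dynamically
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\WDSServer\Parameters' -Name 'UDPPortPolicy' -Value 0 -Type DWord
Restart-Service -Name WDSServer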

PXE and DHCP

If the Distribution Point (with PXE enabled) and the DHCP server are configured on the same server, a few technical configurations are required. Normally these are set up properly, but it is always a good idea to double-check.

Configure WDS to not use the same ports as DHCP
Registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSPXE
DWORD: UseDHCPPorts
Value: 0

Configure the DHCP server to set option 60. Option 60 tells the client that, in addition to answering the request for an IP address, this server also operates a PXE server.

Command-line:
WDSUTIL /Set-Server /UseDHCPPorts:No /DHCPOption60:Yes
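
The WDSUTIL one-liner covers both settings. For reference, a rough PowerShell sketch of the same configuration (a sketch only – it assumes the DhcpServer module on the local DHCP server, and option 60 may already be defined in your environment):

# Keep WDS off the DHCP ports
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSPXE' -Name 'UseDHCPPorts' -Value 0 -Type DWord
# Define and set DHCP option 60 (PXEClient) at server level
Add-DhcpServerv4OptionDefinition -OptionId 60 -Name 'PXEClient' -Type String -ErrorAction SilentlyContinue
Set-DhcpServerv4OptionValue -OptionId 60 -Value 'PXEClient'
Restart-Service -Name WDSServer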

It’s slow

ConfigMgr has gradually provided improved ways to make PXE boot go faster. CCMExec wrote an article on how to tweak the configuration for better boot times (RamDiskTFTPBlockSize and RamDiskTFTPWindowSize). The sweet spot seems to be 16384 / 8.

Registry:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\DP
DWORD: RamDiskTFTPWindowSize
Value: 8
DWORD: RamDiskTFTPBlockSize
Value: 16384 (1456 allows for higher compatibility)
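
A small PowerShell sketch for setting both values on the Distribution Point (assuming the default registry location shown above; restart WDS for the change to take effect):

# Tune the TFTP window and block size used by the ConfigMgr PXE responder
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\SMS\DP' -Name 'RamDiskTFTPWindowSize' -Value 8 -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\SMS\DP' -Name 'RamDiskTFTPBlockSize' -Value 16384 -Type DWord
Restart-Service -Name WDSServer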

It starts to download but freezes (0xC0000001)

If the error is prevalent and not related to a specific device or model, it most likely means that something between the server and the client cannot handle the default block size of 1456. Windows Server 2003 used a block size of 512, and since Windows Server 2008 this was bumped to 1456. If the larger size fails, set it to 512 and verify compatibility. A value of 1432 has also been tested previously, with some confirmed success.

Registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\WDSServer\Providers\WDSTFTP
DWORD: MaximumBlockSize
Value: 512 (or 1432)
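
The same kind of PowerShell one-liner works here (a sketch; restart WDS afterwards):

# Drop the TFTP block size to 512 for maximum compatibility
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSTFTP' -Name 'MaximumBlockSize' -Value 512 -Type DWord
Restart-Service -Name WDSServer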

Windows 10 1607 – LinkedConnections

So, lots of things have to fall into place for this to take effect, but here we go….

If you have UAC enabled (in some form or another) and have users that are able to run in two different contexts (elevated and non-elevated) with the same user account – it means that they are local administrators on the device. It also means…

.. that if they map a network drive in one context it will not be visible in the other context by default.

Microsoft created the great Linked Connections feature, which automatically allows a network drive mapped in one context to be visible in both contexts.

Well, since roughly October 2016 this stopped working: if you were running Windows 10 1607 with the October 2016 patch, the user was a local administrator on the device, mapped a drive in one context and expected EnableLinkedConnections to provide a smooth user experience – it simply did not work anymore.

To add insult to injury – if you map a drive within a login script, the default context is elevated, so the non-elevated processes (such as explorer.exe) will not show the mapped drive.
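
For reference, the EnableLinkedConnections value itself is set like this (a sketch of the well-known registry value; it does not work around the 1607 regression described below, and a reboot is typically required):

# Make drives mapped in one context visible in the other (elevated/non-elevated)
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' -Name 'EnableLinkedConnections' -Value 1 -Type DWord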

As far as I can tell this is not listed as a known defect. Therefore it's not listed as a fixed defect in the March 2017 patch for Windows (so October… November… December… six months in total in a non-working state). According to Google (oh, what a reliable source for anything named Microsoft), Windows 10 1607 has been noted as Business Ready (Current Branch for Business) since 29 November 2016.

image

Ready for business, eh? Well – at least one step further….

ConfigMgr SDF-files

A Twitter discussion revisited a previous thought that I had investigated but never really bothered to dig into. As ConfigMgr 2012 introduced applications, the payload required by the client became more complex. The use of SDF files triggered some additional challenges, as there seem to be certain hard limits coded into how the ConfigMgr client connects to these quite simple SQL databases that are used in the background.

Review the CCM-installation folder and the following files can be spotted;

image

These files are databases that store certain parts of the information required for the operation of the client. The limit seems to default to 128 MB, however any developer can specify how large these files are allowed to grow.

To view these files a small tool named CompactView is required, as well as the prerequisite SQL Server Compact 4.0 to connect to the actual databases.

Opening the database is as simple as opening the file;

image

Opening the file directly from \CCM requires CompactView to run elevated, however a copy can always be made to a different location.

CCMStore.sdf seems to specifically contain only information regarding Application Items and everything that belongs to them.

Well, what did the Twitter thread discuss? It seems that if CCMStore.sdf hits 25 MB the client stops working. A quick workaround is to delete the file. As the ConfigMgr client has halted, most likely a different trigger is required to remediate the error (such as Group Policy, a scheduled task, etc.).

image

 

Great twitter-thread, huh?
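
A rough sketch of such a remediation (it assumes the default client folder under %windir%\CCM and that simply deleting the file is acceptable in your environment – test before rolling it out):

# Remove an oversized CCMStore.sdf and restart the ConfigMgr client
$store = Join-Path $env:windir 'CCM\CCMStore.sdf'
if ((Test-Path $store) -and ((Get-Item $store).Length -gt 25MB)) {
    Stop-Service -Name CcmExec -Force
    Remove-Item -Path $store -Force
    Start-Service -Name CcmExec
}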

Group Policy Preference and Scheduled Tasks

For some reason it’s always the details in the basics that are the longest hurdle to get over. This particular topic is something that always needs to be re-googled before the last details are sorted out.

Purpose

To create a scheduled task, via Group Policy Preferences, that runs either as the system account or as the interactive user.

The detail:

When resolving SYSTEM, the identity normally resolves to BUILTIN\SYSTEM. Interactive is normally not resolvable at all. This results in the following error on the client side when attempting to apply the Group Policy:

‘0x80070534 No mapping between account names and security IDs was done.’

What should be done?

Click Change User or Group, select the domain of your environment, and proceed to select the Builtin container. This will resolve both Interactive (running in the user context of the logged-on user) and SYSTEM to NT AUTHORITY.

image

End-result;

image

or

image
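
For comparison, the same SYSTEM principal in a locally created scheduled task (a PowerShell sketch, not the GPP item itself – the script path and task name are made up; the point is the NT AUTHORITY\SYSTEM account the GPP entry should resolve to):

# Register a task that runs as SYSTEM at logon
$action    = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-File C:\Scripts\SomeScript.ps1'
$trigger   = New-ScheduledTaskTrigger -AtLogOn
$principal = New-ScheduledTaskPrincipal -UserId 'NT AUTHORITY\SYSTEM' -LogonType ServiceAccount
Register-ScheduledTask -TaskName 'Example Task' -Action $action -Trigger $trigger -Principal $principal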

ConfigMgr – User collection and direct membership for Security Group

Roger Zander wrote a brilliant article on collections in Configuration Manager, with knowledge that aids in designing a collection structure that reduces the workload on the ConfigMgr hierarchy.

One thing I remember evaluating a few years back was to leverage direct membership rules for Active Directory security groups to reduce the total evaluation time for collections. After a brief discussion I noted that there wasn't any guide on how to create this manually for user collections (I found a scripted method on SCUG.BE).

Prerequisite

As a prerequisite the AD security group has to be a discovered resource. You can review the collection members of “All Users and User Groups” and see which groups are discovered – if what you are looking for isn’t there, you most likely need to tweak the AD discovery methods you are using.

Create the collection

Once the resource is located you can choose to create a new collection and set the limiting collection to “All Users and User Groups”.

image

All updates (full and incremental) can be removed to avoid any type of load. Choose to add a Direct Rule.

image

Change the default search: set Resource class to User Group Resource and Attribute name to User Group Name. Enter the value you want, search, and select the resources you are after.

image

Once the collection is created only a single resource is a member:

image
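
The same thing can be scripted. A rough sketch using the ConfigMgr cmdlets (verify cmdlet and parameter names against your console version; the site server, site code and group name below are made up, and the group must already be discovered):

# Locate the discovered AD security group and add it as a direct member of a new user collection
$group = Get-WmiObject -ComputerName 'CM01' -Namespace 'root\sms\site_P01' -Class 'SMS_R_UserGroup' -Filter "UserGroupName = 'CONTOSO\\App-PingKing-Users'"
$coll  = New-CMUserCollection -Name 'PingKing Users' -LimitingCollectionName 'All Users and User Groups'
Add-CMUserCollectionDirectMembershipRule -CollectionId $coll.CollectionID -ResourceId $group.ResourceId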

Ups and downs

The alternative mostly found when searching the web is to create a query rule, which requires the collection to be updated (either on a full schedule, incrementally, or via an external trigger). What's the difference between these methods?

Query rule

A query requires that AD discovery has updated the group memberships in the database (full or incremental – both will suffice), and once that is completed the collection has to be updated. Quite common (based on all the blog articles) is to set incremental updates for all collections that require a fast update. The limit for this is (according to the ConfigMgr 2012 documentation) roughly 200 collections, and in addition the queue will grow with each update.

A few minutes pass before the collection reflects the AD security group change, and once all the bells and whistles are done – the deployment is available for the user.

Direct Rule

A direct rule does not require the collection to be updated at all; however, if the AD security group is recreated, the collection has to be updated with a new direct rule (as the resource will have a new ID).

The user will not receive any deployments until their Kerberos ticket reflects the AD security group membership update. Most commonly this only happens during a lock/unlock or logoff/logon.

ConfigMgr site restore and WSUS Catalog version

After you restore a ConfigMgr primary site server there are some losses of information that get annoying.

Sample; the WSUS Catalog version is stored both in the registry and in the ConfigMgr database. Previously the registry alone seemed to be enough to reset the WSUS Catalog version in use, however the registry alone is not enough to restore the catalog version with ConfigMgr 1606.

Roger Zander described the behaviour and gave the right path, however some additional steps were required for ConfigMgr 1606.

Step 1. Identify the necessary catalog version that is required (see Roger Zanders previous description)

Step 2. Update the registry (see Roger Zanders previous description)

Step 3. Update the database. Locate the table dbo.Update_SyncStatus within the ConfigMgr database. Choose Edit Top 200 rows (and there – you are now unsupported by Microsoft).

image

Update the ContentVersion to match your Catalog Version

image
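
If you prefer to script step 3, the same edit is a single T-SQL update – still just as unsupported. A sketch assuming the SqlServer module, a hypothetical database name CM_P01 and catalog version 1234:

# Directly editing the site database is unsupported – use at your own risk
Invoke-Sqlcmd -ServerInstance 'CM01' -Database 'CM_P01' -Query 'UPDATE dbo.Update_SyncStatus SET ContentVersion = 1234'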

Step 4. Trigger a new “Synchronize Software Updates”

Copy a ConfigMgr Application DeploymentType

A small function inspired by Fred Bainbridge's post on how to append an OS requirement to a deployment type. The purpose of the function is to copy a deployment type within an application, but if someone feels like spending a few hours rewriting it to copy between different applications, that could possibly work as well.

 

function Copy-CMAppDT {
<#
.SYNOPSIS
Copy a single Deployment Type within an application
.DESCRIPTION
This will create a copy of a DeploymentType, with the lowest priority and the name specified
.EXAMPLE
Copy-CMAppDT -appName "PingKing 2.0.0" -DeploymentType "PingKing 2.0.0" -newDTname "PingKing Updated" -siteCode P01 -siteServer CM01
.PARAMETER appName
This is the name of the configmgr application that has the deployment type. This accepts input from pipeline.
.PARAMETER DeploymentType
This is the name of the Deployment Type that you want to copy.
.PARAMETER newDTName
This is the name of the new DeploymentType.
.PARAMETER siteCode
This the ConfigMgr site code you are working with. Defaults to LAB
.PARAMETER siteServer
This the site server you are going to working with.  WMI calls are made to this server.  It is most likely your primary site server.
#>
[CmdletBinding()]
param (
[Parameter(
Position=0,
Mandatory=$true,
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true)
]
$appName,
$DeploymentType,
$newDTname,
$siteCode = "LAB",
$siteServer = "cm01.cm.lab"
)
begin {
write-verbose "Import module"
import-module 'C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1' -force #make this work for you
write-verbose "Connect to Provider and change location"
if ((get-psdrive $sitecode -erroraction SilentlyContinue | measure).Count -ne 1) {
new-psdrive -Name $SiteCode -PSProvider "AdminUI.PS.Provider\CMSite" -Root $SiteServer
write-verbose "Connect to the default scope"
try {
$connectionManager = New-Object Microsoft.ConfigurationManagement.ManagementProvider.WqlQueryEngine.WqlConnectionManager
$connectionManager.Connect($siteServer) | Out-Null
[Microsoft.ConfigurationManagement.ApplicationManagement.NamedObject]::DefaultScope = [Microsoft.ConfigurationManagement.AdminConsole.AppManFoundation.ApplicationFactory]::GetAuthoringScope($connectionManager)
}
catch {
throw $_  # 'throw-error' is not a cmdlet; rethrow the caught exception
}
}
write-verbose "Set location $sitecode"
set-location $sitecode`:

}

process {
write-verbose "Get Application $appName"
try {
$Appdt = Get-CMApplication -Name $appName
}
catch {
throw "Unable to get $appName - $error[0]"
}

$xml = [Microsoft.ConfigurationManagement.ApplicationManagement.Serialization.SccmSerializer]::DeserializeFromString($appdt.SDMPackageXML,$True)

$numDTS = $xml.DeploymentTypes.count
write-verbose "Number of DT: $numDTS"
$dts = $xml.DeploymentTypes

foreach ($dt in $dts)
{
if ($dt.title -eq $DeploymentType ) {
write-verbose "Found DT $deploymenttype"
$newDeploymentType = $dt.Copy()
write-verbose "Set new DT name $newDTname"
$newDeploymentType.Title = $newDTname
$newDeploymentType.ChangeID()

}
}
if ($null -ne $newDeploymentType) { # avoids a null-reference error when no matching deployment type was found

write-verbose "New DT created"
$xml.DeploymentTypes.Add($newDeploymentType)

write-verbose "Commit to AppObject"
$UpdatedXML = [Microsoft.ConfigurationManagement.ApplicationManagement.Serialization.SccmSerializer]::SerializeToString($XML, $True)
$appdt.SDMPackageXML = $UpdatedXML
Set-CMApplication -InputObject $appDT
}
else {
write-error "No DeploymentType $newDTname located"
}
}

end
{
write-verbose "Return to c:"
set-location c:
}
}

Boundary Groups and Secondary Sites

After spending a few hours reading about how to configure boundaries and boundary groups in regard to secondary sites in ConfigMgr 2012, I had yet to find something that made anything explicitly clear. How does a client know that it should be communicating with the secondary site?

So far I have gathered that site assignment cannot conflict with other boundaries, but Distribution Points can be assigned all over the place.

And you can associate one or more distribution point with each boundary group. You can also add a single distribution point to multiple boundary groups. The default behavior is to choose the closest server from which to transfer the content from. And remember that ConfigMgr 2012 supports that a client is a member of multiple boundary groups for content location, but not for automatic site assignment

From <https://msandbu.wordpress.com/2012/10/05/boundaries-and-boundary-groups/>

To further understand site assignment I read the most-quoted blog all over the internet. Some things became clearer – such as the fact that the primary site should always be used as the site assignment for a boundary.

Note that none of this implies that MPs are located using Content Location Boundary Groups, just the fact that a client is within the scope of a secondary. MP retrieval in ConfigMgr 2012 is not based on client location, just site assignment. The above also does not imply that clients will fallback to a primary site if the MP in the secondary site is down; when an MP at a secondary goes down, clients within the scope of that secondary are essentially on an island unless you change the Boundary Groups and wait for their 25 hour re-evaluation cycle or the clients detect a network change.

From <http://myitforum.com/myitforumwp/2012/08/02/secondary-sites-and-boundary-groups/>

Yet another thread provided some insight into the fact that MPs are actually evaluated when they are provided as preferred management points.

  • They enable clients to find a primary site for client assignment (automatic site assignment).
  • They can provide clients with a list of available site systems that have content after you associate the distribution point and state migration point site system servers with the boundary group.
  • Beginning with System Center 2012 Configuration Manager SP2, they support management points and can provide clients with a list of preferred management points.

From <https://technet.microsoft.com/en-us/ec3bae17-9b97-42d0-9c23-f634a3665606>

This last quote made it click though… If a boundary group is used for both site assignment and for content location the Management Point (of the Secondary Site) should also be specified in the list of Distribution Points.

Here is the conclusion:

Irrespective of the option “Clients prefer to use management points specified in boundary groups” is selected or not selected, If the hierarchy contains a Secondary Site with multiple Boundary Groups associated with it for site assignment, each Boundary Group “MUST” have the Management Point of that Secondary Site is added.

From <https://blogs.technet.microsoft.com/senthilkumar/2015/08/10/configmgr2012-sp2-r2sp1-preferred-management-points-configuration-and-secondary-sites/>

Well, how does this actually look?

image

Now, this has to be a piece of historic GUI that simply got left behind. It's ugly, and no one truly gets it. In the above case, a client that is a member of the above boundary group will be communicating with the secondary site. I wonder what happens if there are conflicts between assigned MPs…?

The checkbox Use this boundary for site assignment has been recommended to be separated into its own boundary group (it gives clarity, I suppose). Secondary sites should never be used for site assignment. I can only assume (based on the last quote I posted above) that if site assignment and site system servers are separated, adding both the secondary site MP and a local DP to the site system servers part is not necessary. I haven't confirmed this, though…

In case you want to see how many clients are assigned to a specific management point, a splendid fella just posted a simple SQL query to identify this.

ConfigMgr and a backlog in distributions

Scenario

Do you have a primary site and a few secondary sites in ConfigMgr 2012+?

Do you have packages in the legacy Package format set to update on a schedule?

image

Do you have a backlog in the distribution manager?

Well, so far this is a known (by Microsoft) defect that apparently is yet to be fixed (as of 1606 – nothing confirmed beyond that).

Symptoms

If you review the database where ConfigMgr resides, you can see a constantly growing number of distribution jobs. Sample query to get an overview;

use <database>
select COUNT(*) from distributionjobs

The problem grows with the number of packages set to update on a schedule. The frequency of the schedule is not relevant; the package will end up in a forever-updating loop. Most likely the primary site will handle this efficiently, however the sending to secondary sites will cause a backlog that is not just an annoyance but causes severe problems, as the backlog will continue to grow.

Repeating this: The frequency of the schedule is not relevant. Just check the above checkbox and the issue will occur.

SQL query to locate relevant packages

use <database>
select pkg.PkgID, pkg.Manufacturer, pkg.Name, pkg.Version, pkg.Language, pkg.RefreshSchedule from SMSPackages as pkg
where datalength(pkg.RefreshSchedule) !=0

Fixit

Easy – uncheck all the checkboxes that update packages on a schedule. If you still want to update packages on a schedule, use a PowerShell script to trigger the update and use the Task Scheduler to run it.

Run the command-line;

powershell -executionpolicy bypass -file SCCM.UpdatePkg.ps1 -packageid <PACKAGEID>

Code:
(I honestly don’t know if I have stolen / copied this from somewhere – if I have give me a ping and I will remove this)

#========================================================================
# Created on: 2014-10-28 15:06
# Created by: Nicke Källén
# Organization: Applepie.se
# Filename: SCCM.UpdatePkg.ps1
#========================================================================
Param(
[Parameter(Mandatory=$True,Position=1)]
[string]$packageid
)

Function Invoke-CMPackageUpdate
{
[CmdLetBinding()]
Param(
[Parameter(Mandatory=$True,HelpMessage="Please Enter Primary Server Site code")]
$SiteCode,
[Parameter(Mandatory=$True,HelpMessage="Please Enter Primary Server Name")]
$SiteServer,
[Parameter(Mandatory=$True,HelpMessage="Please Enter Package/Application ID")]
$PackageID
)

Try{
$PackageClass = [wmiclass] "\\$($siteserver)\root\sms\site_$($sitecode):SMS_Package"
$newPackage = $PackageClass.CreateInstance()

$newPackage.PackageID = $PackageID

$newPackage.RefreshPkgSource()
}
Catch{
$_.Exception.Message
}

}

Invoke-CMPackageUpdate -SiteCode <SITECODE> -SiteServer <SERVER> -PackageID $packageid
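
A minimal sketch of registering the command line above as a daily scheduled task (task name, time, script path and package ID are all hypothetical – adjust to your environment):

# Run the package-update script once per day as SYSTEM
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy Bypass -File C:\Scripts\SCCM.UpdatePkg.ps1 -packageid P0100123'
$trigger = New-ScheduledTaskTrigger -Daily -At '03:00'
Register-ScheduledTask -TaskName 'Update ConfigMgr package source' -Action $action -Trigger $trigger -User 'SYSTEM'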

Quick test of WDS

Just a quick test of a TFTP server – to validate whether it is responsive…

These commands can be run from any client (screenshots are from Win7).

Step 1

Install the TFTP Client

image
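
Adding the client can also be scripted on Windows 8 and later (a sketch; on Win7 the GUI or dism.exe does the same job, and the feature name can be verified with Get-WindowsOptionalFeature -Online):

# Enable the built-in TFTP client (run elevated)
Enable-WindowsOptionalFeature -Online -FeatureName 'TFTP'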

Step 2

Run the command from a folder where you have permission to write.

tftp -i <servername> GET \boot\x86\wdsnbp.com

If the TFTP client is not installed, the error message below will be received.

image

If it is successful, you will have downloaded a small file.

image